Over the past few years, concerns have been raised at various times about the security of embedded software in medical devices such as insulin pumps, pacemakers and even Fitbit fitness monitors. Researchers have demonstrated that pacemakers can be hacked; patients have discovered flaws that could cause insulin pumps to accidentally discharge fatal doses; and users have been incensed after Fitbit data feeds were automatically set to publicly display information that included records of sexual behavior. With these flaws and others capturing the attention of researchers and the FDA, which in June issued a safety communication urging medical device developers to take stronger software security precautions, efforts are underway to improve security in these devices.
Tricking sensors and catching the tricks
While many researchers have focused on strengthening perimeter defenses and limiting the possibility of network breaches for connected health devices, another group has turned its eye toward figuring out other ways to bypass these defenses, InformationWeek Healthcare editor David Carr explained in a recent article. Rather than attacking software directly, researchers have figured out how to manipulate sensor readings with electromagnetic interference. The good news for now is that tests carried out on sensors implanted in an artificial cadaver found that these attacks are only effective from a distance of a couple of centimeters. Even more promising, the researchers suggested that such attacks could be defended against by building into the embedded software the capability to detect anomalous inputs and shift to a safe mode.
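The detect-and-degrade approach the researchers describe can be sketched in a few lines of firmware-style C. The sensor type, limits, and function names below are illustrative assumptions, not taken from any real device: the idea is simply that a reading outside a physically plausible range, or an implausibly large jump between samples (as interference might induce), shifts the device into a conservative safe mode.

```c
#include <stdlib.h>   /* abs */

/* Hypothetical plausibility limits for a heart-rate sensor.
 * Values are illustrative assumptions only. */
#define HR_MIN       20   /* beats per minute */
#define HR_MAX       250
#define HR_MAX_DELTA 40   /* max plausible change between samples */

typedef enum { MODE_NORMAL, MODE_SAFE } device_mode_t;

/* Decide which mode the device should run in after seeing `reading`,
 * given the previously accepted reading. Out-of-range values or
 * sudden jumps (e.g. from electromagnetic interference) trigger a
 * shift to safe mode rather than being acted on directly. */
device_mode_t check_sensor(int reading, int prev_reading)
{
    if (reading < HR_MIN || reading > HR_MAX)
        return MODE_SAFE;
    if (abs(reading - prev_reading) > HR_MAX_DELTA)
        return MODE_SAFE;
    return MODE_NORMAL;
}
```

In safe mode, a real device would fall back to conservative, pre-validated behavior (or alert a clinician) rather than trusting the suspect input; what that fallback looks like is device-specific.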
A similar process could be used to prevent fraud through the use of fitness devices such as Fitbit, according to another set of researchers. The data sent from these devices can be manipulated, an issue for health insurance companies that are beginning to offer discounts or incentives to customers who can demonstrate good exercise habits. The researchers showed it's possible to convince the Fitbit dashboard system that a user took 12 million steps while only traveling 0.2 miles, for instance. To guard against such manipulations, they recommended device manufacturers implement "sanity checks."
Using source code analysis tools, developers can quickly examine software before releasing it to ensure these kinds of automatic checks are built in. With a careful eye toward making sure embedded software operates normally even under unusual conditions, device manufacturers can reduce the likelihood of an error.
Reducing the area of exposure
Another tactic researchers are exploring for improving the safety of implanted and wearable medical devices is developing centralized control systems for the devices. Researchers at Dartmouth College and Clemson University have launched the Amulet Project, which imagines a wearable, wristwatch-like computerized hub that could securely control the connected devices on the wearer’s body. By cutting down the number of potential access points, the device could reduce the possibility of, for instance, a pacemaker hack. At the same time, creating a device that essentially has administrator rights to the wearer’s body introduces its own security challenges and calls for flawlessly constructed software.
While the era of medical device cybersecurity threats may still seem a little like science fiction paranoia, such research and the recent FDA advisory suggest that it should already be on device manufacturers’ radar. Companies should be building devices with software security and FDA compliance in mind.
Software news brought to you by Klocwork Inc., dedicated to helping software developers create better code with every keystroke.