Scientists marked the 1970s and 1990s as two distinct “AI winters,” when sunny forecasts for artificial intelligence yielded to gloomy pessimism as projects failed to live up to the hype. IBM sold its AI-based Watson Health to a private equity firm earlier this year for what analysts describe as salvage value. Could this transaction signal a third AI winter?
Artificial intelligence has been with us longer than most people realize, reaching a mass audience with Rosey the Robot in the 1960s TV show “The Jetsons.” That omniscient maid who keeps the household running is the science fiction version. In a healthcare setting, artificial intelligence is far more limited.
Healthcare AI is designed to operate in a task-specific manner, much like a computer that beats a human chess champion. Chess is built on structured data, with predefined rules for where pieces move, how they move and when the game is won. Electronic patient records, on which healthcare AI depends, do not fit the neat confines of a chessboard.
Collecting and reporting accurate patient data is the problem. MedStar Health sees sloppy electronic health records practices harming doctors, nurses and patients. The hospital system took its first steps to focus public attention on the issue in 2010, and the effort continues today. MedStar’s awareness campaign repurposes the “EHR” acronym, turning it into “errors happen regularly” to make the mission clear.
Analyzing software from leading EHR vendors, MedStar found that data entry is often unintuitive and that displays make it hard for clinicians to interpret information. Patient records software often bears little relation to how doctors and nurses actually work, prompting yet more errors.
Examples of medical data errors appear in medical journals, the media and court cases, and they range from faulty code deleting critical information to mysteriously switched patient genders. Since there is no formal reporting system, there is no definitive count of data-driven medical errors. The high probability that bad data is being dumped into artificial intelligence applications derails the technology’s potential.
Developing artificial intelligence starts with training an algorithm to detect patterns. Data is entered, and once a large enough sample is collected, the algorithm is tested to see whether it correctly identifies certain patient attributes. Despite the term “machine learning,” which implies a constantly evolving process, the technology is tested and deployed much like traditional software. If the underlying data is correct, properly trained algorithms can automate tasks and make doctors more efficient.
Take, for example, diagnosing medical conditions based on eye images. In one patient the eye is healthy; in another the eye shows signs of diabetic retinopathy. Images of both healthy and “sick” eyes are captured. When enough patient data is fed into the artificial intelligence system, the algorithm will learn to identify patients with the disease.
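The train-then-test workflow described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not how a real retinopathy system works: each “image” is reduced to a pair of hypothetical numeric features (say, a lesion score and a vessel irregularity score), and the “model” is just the average feature vector per label.

```python
# Minimal sketch of train-then-test: learn one average feature
# vector (centroid) per label, then classify a held-out record
# by whichever centroid is closest. Features are invented.

def train_centroids(records):
    """Learn the average feature vector for each label."""
    sums, counts = {}, {}
    for features, label in records:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Clinician-labeled training sample: (features, label).
training = [
    ([0.1, 0.2], "healthy"), ([0.2, 0.1], "healthy"),
    ([0.9, 0.8], "retinopathy"), ([0.8, 0.9], "retinopathy"),
]
model = train_centroids(training)

# Held-out test: does the model flag a new "sick" eye?
print(predict(model, [0.85, 0.75]))  # retinopathy
```

Note how closely this mirrors traditional software development: the model is built, checked against a test case, and then deployed as-is, with no further “learning” unless someone retrains it.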
Andrew Beam, a professor at Harvard University with private-sector experience in machine learning, has presented a troubling scenario of what could go wrong without anybody even knowing it. Using the eye example above, suppose that as more patients are seen, more eye images are fed into the system, which is now integrated into the clinical workflow as an automated process. So far so good. But suppose the images include patients already treated for diabetic retinopathy, each bearing a small scar from a laser incision. Now the algorithm learns to look for small scars rather than for the disease itself.
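Beam’s scenario can be made concrete with a toy model. In the sketch below (all feature names and values are invented for illustration), every labeled retinopathy example comes from a treated patient, so every “sick” image carries a laser scar, and treatment has also reduced the lesions. A simple learner that picks the single most discriminative feature therefore latches onto the scar, then misses an untreated patient with severe disease and no scar.

```python
# Toy illustration of shortcut learning: the training labels are
# confounded with treatment, so the "model" keys on the laser scar
# instead of the disease. Features: [lesion_score, scar_present].

def best_single_feature_rule(records):
    """Pick the feature index that best separates the two labels
    (a deliberately crude stand-in for a trained model)."""
    best = None
    for i in range(len(records[0][0])):
        correct = sum(
            1 for features, label in records
            if (label == "retinopathy") == (features[i] >= 0.5)
        )
        if best is None or correct > best[1]:
            best = (i, correct)
    return best[0]

training = [
    ([0.1, 0.0], "healthy"),
    ([0.2, 0.0], "healthy"),
    ([0.4, 1.0], "retinopathy"),  # treated: fewer lesions, scar present
    ([0.3, 1.0], "retinopathy"),  # treated: fewer lesions, scar present
]
chosen = best_single_feature_rule(training)
print("model keys on feature", chosen)  # 1 == scar_present

# An untreated patient with severe disease but no scar:
untreated_sick = [0.9, 0.0]
label = "retinopathy" if untreated_sick[chosen] >= 0.5 else "healthy"
print(label)  # healthy -- the disease is missed
```

Nothing in the deployed system flags this failure; the model performs perfectly on its training data, which is exactly why nobody would know.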
Adding to the data confusion, doctors don’t agree among themselves on what thousands of patient data points actually mean. Human intervention is required to tell the algorithm what data to look for, and those decisions are hard-coded as labels for machine reading. Other concerns include EHR software updates that can introduce errors. A hospital may also switch software vendors, resulting in what is called data shift: the incoming records no longer match the data the algorithm was trained on.
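One simple form of data shift is a change in units or format after a vendor switch. In this hypothetical sketch (field name, units and threshold are invented), a rule tuned to the old system’s glucose readings in mg/dL silently misfires once the new system reports the same values in mmol/L:

```python
# Sketch of data shift after a vendor switch: a rule tuned to the
# old system's units silently misses the same patient in the new
# system's records. All values here are hypothetical.

def flags_hyperglycemia(record):
    """Rule learned from the old vendor's data (glucose in mg/dL)."""
    return record["glucose"] > 180

old_record = {"glucose": 210}   # old vendor: mg/dL -> correctly flagged
new_record = {"glucose": 11.7}  # new vendor: same patient, in mmol/L

print(flags_hyperglycemia(old_record))  # True
print(flags_hyperglycemia(new_record))  # False -- missed after the switch
```

No error is raised in either case, which is what makes the failure dangerous: the system keeps running, it just stops matching reality until someone retrains it on the new data.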
That’s what happened at MD Anderson Cancer Center, and it was the technical reason why IBM’s first partnership ended. IBM’s then-CEO Ginni Rometty described the arrangement, announced in 2013, as the company’s healthcare “moonshot.” MD Anderson stated, in a press release, that it would use Watson Health in its mission to eradicate cancer. Two years later the partnership failed: to go forward, both parties would have had to retrain the system to understand data from the new software. It was the beginning of the end for IBM’s Watson Health.
Artificial intelligence in healthcare is only as good as the data. Precision management of patient data is not science fiction or a “moonshot,” but it is essential for AI to succeed. The alternative is a promising healthcare technology becoming frozen in time.
Photo: MF3d, Getty Images