[Author's note: The major part of this post was written before I had the actual IOM report itself. Having now read that report, available here in PDF, my opinions are unchanged.]
The Center for Public Integrity has published a story ahead of the Thursday release of the Institute of Medicine's report on health IT safety. This was a panel, by the way, that rejected my testifying, live, about my own relative's IT-related harm, despite my Medical Informatics credentials and explicit requests [see note below].
The IOM report apparently recommends that an extraordinary special accommodation be afforded to the healthcare IT industry regarding regulation of health IT software devices.
Excerpts from the Center for Public Integrity's article:
Health information technology has been touted as crucial to better health care, but a new report says an entirely new regulatory agency is needed to oversee this largely unregulated sector, which can also injure or kill patients if it’s not operating properly.
We would never have known that if not for the efforts of a small group of specialists with a conscience writing on this issue over the past decade; the industry long emphasized only the beneficence of the technology.
In pushing for a new oversight body, the respected Institute of Medicine, an independent research and advisory organization, is explicitly advising that the Food and Drug Administration (FDA) not be tasked with the job — a recommendation that is bound to be controversial. [Indeed - ed.] The eagerly anticipated report, titled “Health IT and Patient Safety: Building Safer Systems for Better Care,” will be publicly released Thursday. A copy was obtained by iWatch News. The study details nine other recommendations for how to ensure patient safety when doctors and other health care providers use health information technology, or health IT. The findings from the report were presented October 28 to the Department of Health and Human Services (HHS) and its agencies.
I do not consider the IOM's rationale for excluding the FDA to be reasonable...more below.
... the push [by the Administration] is occurring so far without any agency really ‘watch dogging’ the safety of health IT — the software, hardware and systems that record and manage patients’ health information. These expensive devices by and large have not gone through any regulatory checks for safety in the way that food, drugs and other medical technology must; most of that oversight is handled by the FDA. But at the moment, no one is required to report instances of harm caused by health information devices and no government agency currently monitors their safety.
This is a scandal of major proportions, considering the government has taken a "ready, shoot, aim" approach, putting in place penalties for non-adopters on a fantastically rushed timeline via the HITECH Act within ARRA while ignoring the risks.
“With all of that money, marketing and public outreach, most simply affirm the value of health IT as an article of faith, rather than investigate it via careful evaluation,” said Ross Koppel, adjunct professor of sociology at the University of Pennsylvania and its School of Medicine, and investigator for RAND Corporation. He is listed as one of the reviewers of the report.
"Faith" (i.e., irrational exuberance) in a technology has no place in science or medicine.
Addendum: from the report itself:
... While some studies suggest improvements in patient safety can be made, others have found no effect. Instances of health IT–associated harm have been reported. However, little published evidence could be found quantifying the magnitude of the risk.
Several reasons health IT–related safety data are lacking include the absence of measures and a central repository (or linkages among decentralized repositories) to collect, analyze, and act on information related to safety of this technology. Another impediment to gathering safety data is contractual barriers (e.g., nondisclosure, confidentiality clauses) that can prevent users from sharing information about health IT–related adverse events. These barriers limit users’ abilities to share knowledge of risk-prone user interfaces, for instance through screenshots and descriptions of potentially unsafe processes. In addition, some vendors include language in their sales contracts and escape responsibility for errors or defects in their software (i.e., “hold harmless clauses”). The committee believes these types of contractual restrictions limit transparency, which significantly contributes to the gaps in knowledge of health IT–related patient safety risks. These barriers to generating evidence pose unacceptable risks to safety.
[IOM (Institute of Medicine). 2012. Health IT and Patient Safety: Building Safer Systems for Better Care (PDF). Washington, DC: The National Academies Press, pg. S-2.]
Also in the IOM report:
… “For example, the number of patients who receive the correct medication in hospitals increases when these hospitals implement well-planned, robust computerized prescribing mechanisms and use barcoding systems. But even in these instances, the ability to generalize the results across the health care system may be limited. For other products— including electronic health records, which are being employed with more and more frequency— some studies find improvements in patient safety, while other studies find no effect.
More worrisome, some case reports suggest that poorly designed health IT can create new hazards in the already complex delivery of care. Although the magnitude of the risk associated with health IT is not known, some examples illustrate the concerns. Dosing errors, failure to detect life-threatening illnesses, and delaying treatment due to poor human–computer interactions or loss of data have led to serious injury and death.”
In other words, nobody has any real idea of the magnitude of harms, which also implies nobody knows if the magnitude of harms exceeds the magnitude of benefits. National rollout under these conditions is a horribly unethical situation on first principles.
Though a variety of studies have concluded that the use of health IT may improve patient safety, mistakes made in the systems or difficulty using the technology can lead to serious injury or death, according to the report.
Other studies actually show little or no benefit or cost savings. See this reading list for examples. So, it has been an article of faith that the technology in its present form is of benefit, and is not a risk.
An allergy might be omitted from a computer record, for example, or an incorrect medication dosage might be recorded. In Rhode Island, a Lifespan computer glitch [5] caused about 2,000 patients to receive the wrong types of medications. In another instance [6] in March 2009, an unattended patient suffered multiple seizures for hours after a computer failed to alert doctors the patient was moved from the intensive care into their ward.
And people die needlessly (a few examples are at link, link, link).
As reports of patient harm began to emerge, the federal Office of the National Coordinator (ONC) for health IT asked the Institute of Medicine (IOM) a year ago to establish a Committee on Patient Safety and Health Information Technology to make recommendations to the government about how to maximize health IT safety.
I would rephrase that to: as reports of patient harm could no longer be suppressed by the industry...
In its report, the IOM committee says the FDA would likely restrict market innovation in health IT, which could also jeopardize patient safety.
There has been little to no real "innovation" in health IT in well over a decade; if anything, usability and quality have deteriorated. Further, there is no data supporting the contention that FDA regulation of IT harms innovation. Pharma IT (regulated) is far more innovative than the IT in healthcare delivery (unregulated).
Stringent regulations “can negatively impact the development of new technology by limiting implementation choices and restricting manufacturers’ flexibility to address complex issues,” the report says.
Bull. It will keep the companies honest and "encourage" them to adhere to good software engineering and usability principles (unlike here), which will save lives. It's unfortunate such "encouragement" is needed, I note.
The FDA currently receives voluntary reports [7] of health IT-related incidents, but has no resources or protocols through which to take action; the agency has long fought a losing battle [8] with health IT vendors over trying to monitor the technology. The report also notes the agency does not have the investigative capabilities, funding or manpower to regulate devices such as electronic health records, personal health records or health information exchanges.
Then give the FDA the resources rather than develop an entirely new agency. The FDA has the talent and experience. [Note: I have no connections to FDA whatsoever - ed.]
... To adequately oversee health IT safety, the committee recommends that the secretary of health and human services create and fund a new independent watchdog agency, along the lines of the National Transportation Safety Board. Like NTSB, the new agency would conduct investigations and make recommendations for all stakeholders, including the secretary of health and human services, vendors and health care organizations. Vendors of the technology would be required to report adverse events, while reporting would be voluntary for clinicians. Like NTSB, though, the new agency would also have no enforcement power.
That is to say, it will be toothless and ignored, leaving a cavalier industry that should have gotten its act together twenty years ago to continue its nihilistic ways.
The panel also recommends that the HHS secretary publicly report on the progress of health IT safety each year, beginning in 2012. If the secretary determines at any time that adequate safety progress has not been made, only then should the FDA take the regulatory lead and be given the resources to do so, the report recommends, adding that the agency should be developing a framework now to be prepared.
This makes little sense. In fact, it's an extraordinary special accommodation to the health IT industry (or should I say lobby) relative to other healthcare sectors such as drugs and medical devices, and it is bizarre. It continues health IT as a human subjects research experiment without informed consent or the ability to opt out.
With catastrophe-inviting events like this one becoming more commonplace, just how many patients will have been maimed or will have died while the HHS Secretary's 'determination of adequate safety progress' is being made? (And what, exactly, will be deemed 'adequate'?)
Creating a new independent agency would, of course, require resources; the current budget for NTSB is set at $559 million over the 2010 to 2014 period. In the current climate of fiscal restraint, convincing Congress to appropriate that sort of cash for a new government body might be a tall order.
I note that it's a waste of taxpayer money to create a new agency to maintain/increase health IT industry profits at the expense of patients - not a wise choice IMO.
... Republican Sen. Chuck Grassley [11] of Iowa, senior member of the Senate Finance Committee, said the new report “adds more to the list of unresolved questions, including which government agency, if any, should regulate health care information technology.” Grassley, who wrote [12] HHS and health IT vendors two years ago asking what was being done to ensure the safety of the devices, said “the approach seemed to be, write checks first, solve the problems later, instead of the other way around.”
Having spoken extensively with Sen. Grassley's staff on these issues, I agree - except for the "seemed to be" disclaimer. Replace "seemed to be" with "was."
The Institute of Medicine committee does have one dissenter. Dr. Richard Cook [13] from the University of Chicago feels the FDA is indeed the proper agency to oversee health IT safety. Cook writes that health IT is considered a “Class III medical device,” that is to say, a device that performs integral medical functions, which the FDA already has the jurisdiction to regulate.
Dr. Cook is a co-author of the short 2005 paper "Hiding in plain sight: What Koppel et al. tell us about healthcare IT," which I consider seminal to understanding why health IT as it exists today is so poorly done.
In its report, the IOM panel also recommended that another study be done to quantify health IT-related deaths, serious injuries or unsafe conditions so that the safety concerns can be properly addressed. “You can only improve what you measure,” says the report.
As I wrote in Oct. 2010, that is putting the cart before the horse again. We study these issues while a national rollout under threat of penalty is underway? That's simply crazy.
Other recommendations in the report: establishing and enforcing criteria for the safety of electronic health records, funding a new Health IT Safety Council to set standards for safety, and requiring all health IT vendors to publicly register and list their products with the Office of the National Coordinator.
No issues there, except that these efforts should have begun over a decade ago, at the latest.
Finally, while I disagree with the action agenda, I do thank the IOM for bringing the risks of health IT out into the sunlight. Perhaps now these issues will start to be addressed, and patient safety and human rights ultimately safeguarded.
From the report's introduction:
... Caught in the middle are the patients—the ultimate recipients of care. Stories of patient injuries and deaths associated with health information technologies (health IT) frequently appear in the news, juxtaposed with stories of how health professionals are being provided monetary incentives to adopt the very products that may be causing harm. These stories are frightening, but they shed light on a very important problem and a realization that, as a nation, we must do better to keep patients safe ... the entire committee believes the current state of safety of health IT must not be permitted to continue.
I've been writing words like that for over a decade.
-- SS
Addendum Nov. 8, 2011:
The IOM has issued this press release:
From: National Academies News <InternetMailforONPI@nas.edu>
Date: November 8, 2011 8:25:43 AM PST
To: National Academies News <InternetMailforONPI@nas.edu>
Subject: Health Information Technology and Patient Safety - For Immediate Release
Health IT and Patient Safety: Building Safer Systems for Better Care, a new report from the Institute of Medicine, is available for IMMEDIATE RELEASE. The report examines a broad range of health information technologies and recommends actions that the government, health care providers, and technology vendors should take to improve patient safety. Contrary to some early news accounts, the report does not recommend that a new agency be established to regulate these technologies. Reporters can obtain a copy of the report by contacting the National Academies' Office of News and Public Information; tel. 202-334-2138 or e-mail news@nas.edu. In addition, members from the committee that wrote the report will discuss their recommendations and take questions at a one-hour public briefing starting at 10:30 a.m. EST Thursday, Nov. 10, in Room 100 of the National Academies’ Keck Center, 500 Fifth St. N.W., Washington, D.C. Those who cannot attend may participate through a live audio webcast accessible at http://www.nationalacademies.org.
I am not sure why they state "Contrary to some early news accounts, the report does not recommend that a new agency be established to regulate these technologies."
From the report prepub, pages 6-28 to 6-29, here is the committee's rationale and the recommendation that followed:
... To truly improve patient safety, a new approach is needed. The committee believed that the experiences of other industries such as transportation and nuclear energy in creating the NTSB and the NRC were instructive, and concluded that the development of an independent, federal entity was best suited to performing the needed above-described analytic and investigative functions for health IT–related adverse events in a transparent, nonpunitive manner. The committee envisions an entity that would be similar in structure to the NTSB or the NRC, which are both independent federal agencies created by and reporting directly to Congress.
Among other responsibilities, these entities conduct investigations, for the purpose of ensuring safety. NTSB is a nonregulatory agency that does not establish fault or liability in the legal sense but investigates incidents. The NRC is a regulatory body that has the ability to issue fines and fees. The committee considered both agencies and concluded the NTSB to be most similar to the needs of health IT–assisted care.
An independent, federal entity analogous in form and function to the NTSB is needed. This entity would not have enforcement power and would be nonpunitive. Instead, it would have the authority to conduct investigations and, upon their completion, make recommendations.
... Recommendation 8: The Secretary of HHS should recommend that Congress establish an independent federal entity for investigating patient safety deaths, serious injuries, or potentially unsafe conditions associated with health IT. This entity should also monitor and analyze data and publicly report results of these activities.
Hopefully this issue will be clarified when the final report is generally available and the public briefing occurs.
I hope it's not merely a quibble over the word "entity" vs. "agency," a distinction that in practice would probably make no difference, economically speaking.
Addendum #2, Nov. 8, 2011:
After re-reading all of this, it seems the issue in question in the National Academies press release above is "regulation" vs. "investigating and monitoring."
That apparently being the case, let me state again that I have a problem with recommendations that seem to preclude true regulation, which should start ASAP, not at the whim of the Secretary of HHS when he or she decides that "adequate safety progress has not been made."
That represents, in my opinion, a special accommodation to the healthcare IT industry as previously mentioned.
In fact, Dr. Cook had a dissenting recommendation in Appendix E of the report that I agree with:
Recommendation 9: The Secretary of Health and Human Services should direct the FDA to exercise its authority to regulate health IT, including all EHRs and associated components, and health information exchanges, as Class III medical devices. [Under the 1976 Medical Device Amendments to the Federal Food, Drug, and Cosmetic Act - ed.]
-- SS
[Note] Regarding the IOM, my request to present in person the events that led to my relative's travails was rejected on the basis of 'protocol.' Despite the remarkable circumstance of a physician/Medical Informatics specialist's relative (in fact, a specialist writing on health IT risks for over a decade) being gravely injured as a result of health IT-related disruption of care continuity, and despite support for my personal attestation by internal members of the study group, my live testimony was rejected. Considering the issues were not just relevant to the IOM study but at its heart, the rejection on procedural grounds brings to mind the simian adage "see no evil, hear no evil, speak no evil." The Institute of Medicine, on matters of life and death, acted more as an "Institute of Protocol" (or perhaps "Institute of Politics"), in my opinion, than an institute of science. I believe this lessens the credibility of their action agenda.