The report was prepared with the assistance of an organization that "researches the best approaches to improving the safety, quality, and cost-effectiveness of patient care." I've mentioned it and its bylaws on this blog in the past as a model for independent, unbiased testing and reporting of healthcare technologies.
Regarding the Patient Safety Authority:
The Pennsylvania Patient Safety Authority was established under Act 13 of 2002, the Medical Care Availability and Reduction of Error ("Mcare") Act, as an independent state agency. It operates under an 11-member Board of Directors, six appointed by the Governor and four appointed by the Senate and House leadership. The eleventh member is a physician appointed by the Governor as Board Chair. Current membership includes three physicians, three attorneys, three nurses, a pharmacist and a non-healthcare worker.
The Authority is charged with taking steps to reduce and eliminate medical errors by identifying problems and recommending solutions that promote patient safety in hospitals, ambulatory surgical facilities, birthing centers and certain abortion facilities. Under Act 13 of 2002, these facilities must report what the Act defines as "Serious Events" and "Incidents" to the Authority.
The Authority maintains a database of serious events and incidents:
Consistent with Act 13 of 2002, the Authority developed the Pennsylvania Patient Safety Reporting System (PA-PSRS, pronounced "PAY-sirs"), a confidential web-based system that both receives and analyzes reports of what the Act calls Serious Events (actual occurrences) and Incidents (so-called "near-misses").
Cutting right to the chase, the paper's summary:
As adoption of health information technology solutions like electronic health records (EHRs) has increased across the United States, increasing attention is being paid to the safety and risk profile of these technologies. However, several groups have called out a lack of available safety data as a major challenge to assessing EHR safety, and this study was performed to inform the field about the types of EHR-related errors and problems reported to the Pennsylvania Patient Safety Authority and to serve as a basis for further study. Authority analysts queried the Pennsylvania Patient Safety Reporting System for reports related to EHR technologies and performed an exploratory analysis of 3,099 reports using a previously published classification structure specific to health information technology. The majority of EHR-related reports involved errors in human data entry, such as entry of “wrong” data or the failure to enter data, and a few reports indicated technical failures on the part of the EHR system. This may reflect the clinical mindset of frontline caregivers who report events to the Authority.
... Reported events were categorized by their reporter-selected harm score (see Table 1). Of the 3,099 EHR-related events, 2,763 (89%) were reported as “event, no harm” (e.g., an error did occur but there was no adverse outcome for the patient) [a risk best avoided to start with, because luck runs out eventually - ed.], and 320 (10%) were reported as “unsafe conditions,” which did not result in a harmful event. Fifteen reports involved temporary harm to the patient due to the following: entering wrong medication data (n = 6), administering the wrong medication (n = 3), ignoring a documented allergy (n = 2), failure to enter lab tests (n = 2), and failure to document (n = 2). Only one event report, related to a failure to properly document an allergy, involved significant harm.
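As a quick sanity check on the tallies quoted above, the harm-score breakdown can be reproduced with simple arithmetic (the figures are the report's own; the code is merely illustrative):

```python
# Harm-score breakdown of the 3,099 EHR-related reports, as quoted above.
total = 3099
no_harm = 2763          # "event, no harm"
unsafe = 320            # "unsafe conditions"
temporary_harm = 15     # temporary harm to the patient
significant_harm = 1    # significant harm (allergy documentation failure)

print(f"no harm: {no_harm / total:.0%}")           # ~89%
print(f"unsafe conditions: {unsafe / total:.0%}")  # ~10%
print(f"accounted for: {no_harm + unsafe + temporary_harm + significant_harm} of {total}")
```

The four categories sum exactly to 3,099, confirming the quoted percentages.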
A significant "study limitations" section was included that addressed:
- issues regarding reporting statutes of the PA-PSRS errors database;
- lack of awareness of EHRs as a potential contributing factor to an error;
- limitations of narrative reporting affecting both the types of reports queried and the tags applied (the study used textual data mining methodologies);
- query design of the study; and
- the need for further refinement of the machine learning tool used in creating the working dataset, which may have missed relevant cases.
Some of these impediments to knowing the magnitude of extant HIT issues are also present in the 2008 Joint Commission Sentinel Events Alert on HIT, the 2010 FDA internal memorandum on HIT Safety, and the 2011 IOM report on the same topic.
(The IOM report specifically observed that the "barriers to generating evidence pose unacceptable risks to safety.")
The major obstacle to this study in my view, though, was the nature of the dataset. The database is for general reporting of medical errors, and it contains no specific fields or reminders about EHRs or the known ways in which they can contribute to, or cause, medical mistakes.
As the study acknowledges, the attempt was made to glean information about EHR-related events largely through textual analysis of narratives, in the hope that the reporter recognized the role of IT and described it using terms the search algorithms could detect. In other words, the data was not "purposed" for this type of study.
It is axiomatic that one cannot find data that is simply not present, no matter how fancy the search algorithm. Further, passive analysis of clinical IT risk/harms data in an industry where lack of knowledge of causation and misconceptions abound will produce only partial results that suggest further study is needed, without any indication of just how incomplete those results are.
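To make the limitation concrete: free-text mining of this kind amounts to matching reporter-chosen wording against a list of IT-related terms, so a report whose narrative never mentions the system simply cannot match. A minimal sketch (the term list and sample narratives here are invented for illustration, not taken from PA-PSRS or the study's actual query):

```python
# Hypothetical term list; the Authority's analysts used their own query design.
EHR_TERMS = ["ehr", "electronic health record", "cpoe",
             "order entry", "dropdown", "auto-populate"]

def looks_ehr_related(narrative: str) -> bool:
    """Flag a free-text event narrative that mentions any EHR-related term."""
    text = narrative.lower()
    return any(term in text for term in EHR_TERMS)

reports = [
    "Wrong dose selected from CPOE dropdown; caught at verification.",
    "Patient given penicillin despite documented allergy.",  # IT role unmentioned
]
flags = [looks_ehr_related(r) for r in reports]
# The second report is missed even if the EHR contributed to the error,
# because the reporter never named the system in the narrative.
```

However refined the real query or machine-learning tool, the same blind spot applies: detection depends entirely on the reporter having recognized and described the IT component.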
Thus, this cautionary statement was made in the new PA Patient Safety Authority report:
"Although the vast majority of EHR-related reports did not document actual harm to the patient, analysts believe that further study of EHR-related near misses and close calls is warranted as a proactive measure."
The report is welcome.
The most important part of the paper, I point out, is the “Limitations” section. FDA, IOM, and others have made similar observations: we don’t know the true magnitude of the problem due to systematic limitations of the available data.
Therefore, at best, what is available must be deemed risk-management-relevant case reports, a “red flag” that could represent (in the words of FDA CDRH director Jeffrey Shuren regarding HIT safety) the tip of the iceberg.
It is imperative far more work be done in post-market surveillance as this technology is deployed nationally and internationally. This is to ensure that good health IT (GHIT) prevails and bad health IT (BHIT) is either remediated or removed from the marketplace. I had defined those in other writings as follows:
Good Health IT ("GHIT") is defined as IT that provides a good user experience, enhances cognitive function, puts essential information as effortlessly as possible into the physician’s hands, keeps eHealth information secure, protects patient privacy and facilitates better practice of medicine and better outcomes.
Bad Health IT ("BHIT") is defined as IT that is ill-suited to purpose, hard to use, unreliable, loses data or provides incorrect data, causes cognitive overload, slows rather than facilitates users, lacks appropriate alerts, creates the need for hypervigilance (i.e., towards avoiding IT-related mishaps) that increases stress, is lacking in security, compromises patient privacy or otherwise demonstrates suboptimal design and/or implementation.
An additional major factor that also contributes to lack of knowledge of EHR-related adverse events is hospital reporting non-compliance. For instance, I know of cases from my own legal consulting work and personal experience that I would have expected to appear in the database, but apparently do not.
But don’t take it from me alone. Here is PA Patient Safety Authority Board Member Cliff Rieders, Esq. on this.
From “Hospitals Are Not Reporting Errors as Required by Law,” Phila. Inquirer, pg. 4: http://articles.philly.com/
... Hospitals don’t report serious events if patients have been warned of the possibility of them in consent forms, said Clifford Rieders, a trial lawyer and member of the Patient Safety Authority’s board.
He said he thought one reason many hospitals don’t want to report serious events is that the law also requires that patients be informed in writing within a week of such problems. So, if a hospital doesn’t report a problem, it doesn’t have to send the patient that letter. [Thus reducing risk of litigation, and, incidentally, potentially infringing on patients' rights to legal recourse - ed.]
Rieders says the agency has allowed hospitals to determine for themselves what constitutes a serious event and the agency has failed to come up with a solid definition in six years.
Fixing this “is not a priority,” he added.
This coincides with my own personal experience precisely. In a case where my relative was permanently injured as a result of EHR-related medication error, and then died of the injuries, I never received the required report in writing from the hospital. I also do not believe the case was reported to the Safety Authority, at least not as IT-related.
I suspect the true rates of EHR-related close calls, reversible injuries, permanent injuries and deaths are significantly higher than the limited data available suggest. That data is merely a red flag that much more education, stringent reporting requirements, templates of known causes of error, and enforcement are needed. (An April 2010 "thought experiment" on this issue I wrote about at "If The Benefits Of Healthcare IT Can Be Guesstimated, So Can And Should The Dangers" certainly suggested as much.)
Slides where I made those types of recommendations to the Patient Safety Authority, at a presentation I gave in July 2012 at their invitation, are at http://www.ischool.drexel.edu/
A major concern I have is that the HIT industry will use this new report in a manner that ignores its limitations.
(Disclosure: I was an invited reviewer of this new PPSA report.)
Addendum Dec. 13:
Also worth review is "Patient Safety Problems Associated with Healthcare Information Technology: an Analysis of Adverse Events Reported to the US Food and Drug Administration", Magrabi, Ong, Runciman, and Coiera, AMIA Annu Symp Proc. 2011.
Data here came from FDA's voluntary (i.e., also tip of the iceberg) Manufacturer and User Facility Device Experience (MAUDE) database. Ironically, the study was done in Australia using Australian grant funds.