
Tuesday, July 2, 2013

Is ONC's definition of "Significant EHR Risk" when body bags start to accumulate on the steps of the Capitol?

In a June 25, 2013 Bloomberg News article "Digital Health Records’ Risks Emerge as Deaths Blamed on Systems" by technology reporter Jordan Robertson (http://go.bloomberg.com/tech-blog/author/jrobertson40/), an EHR-harms case in which I am (unfortunately) intimately involved as substitute plaintiff is mentioned:

When Scot Silverstein’s 84-year-old mother, Betty, started mixing up her words, he worried she was having a stroke. So he rushed her to Abington Memorial Hospital in Pennsylvania.

After she was admitted, Silverstein, who is a doctor, looked at his mother’s electronic health records, which are designed to make medical care safer by providing more information on patients than paper files do. He saw that Sotalol, which controls rapid heartbeats, was correctly listed as one of her medications.

Days later, when her heart condition flared up, he re-examined her records and was stunned to see that the drug was no longer listed, he said. His mom later suffered clotting, hemorrhaged and required emergency brain surgery. She died in 2011. Silverstein blames her death on problems with the hospital’s electronic medical records.

“I had the indignity of watching them put her in a body bag and put her in a hearse in my driveway,” said Silverstein, who has filed a wrongful-death lawsuit. “If paper records had been in place, unless someone had been using disappearing ink, this would not have happened.”

How can I say that?  Because I trained in this hospital and worked as resident Admitting Officer in that very ED in the pre-computer era.  The many personnel in 2010 who were given the meds history by my mother and me directed it not to paper, where others could see it, but to /dev/null.
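For readers unfamiliar with the Unix metaphor: /dev/null is a special file that silently discards anything written to it and reads back as empty.  Below is a minimal Python sketch of that behavior, using a purely hypothetical medication entry for illustration (it is not the actual hospital record or system); every write appears to succeed, yet nothing is retained for the next clinician to read:

```python
import os

# Hypothetical medication entry, for illustration of the /dev/null metaphor only.
meds_history = ["Sotalol - controls rapid heartbeats"]

# "Recording" the history to /dev/null: every write call reports success...
with open(os.devnull, "w") as chart:
    for entry in meds_history:
        chart.write(entry + "\n")

# ...but nothing was kept.  Reading it back yields nothing at all.
with open(os.devnull, "r") as chart:
    print(repr(chart.read()))  # prints '' -- the "recorded" history is gone
```

That is the sense in which the metaphor is used above.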

Why can I say that?  Because the hospital's Motion for Prior Restraint (censorship) against me was denied outright by the presiding judge just days before the Bloomberg article was published (http://en.wikipedia.org/wiki/Prior_restraint):

Prior restraint (also referred to as prior censorship or pre-publication censorship) is censorship imposed, usually by a government, on expression before the expression actually takes place. An alternative to prior restraint is to allow the expression to take place and to take appropriate action afterward, if the expression is found to violate the law, regulations, or other rules.

Prior restraint prevents the censored material from being heard or distributed at all; other measures provide sanctions only after the offending material has been communicated, such as suits for slander or libel. In some countries (e.g., United States, Argentina) prior restraint by the government is forbidden, subject to certain exceptions, by a constitution.

Prior restraint is often considered a particularly oppressive form of censorship in Anglo-American jurisprudence because it prevents the restricted material from being heard or distributed at all. Other forms of restrictions on expression (such as actions for libel or criminal libel, slander, defamation, and contempt of court) implement criminal or civil sanctions only after the offending material has been published. While such sanctions might lead to a chilling effect, legal commentators argue that at least such actions do not directly impoverish the marketplace of ideas. Prior restraint, on the other hand, takes an idea or material completely out of the marketplace. Thus it is often considered to be the most extreme form of censorship.

The First Amendment lives.

(I wonder if it irks the hospital that they cannot perform sham peer review upon me now that the censorship motion is denied.  Sham peer review is a common reaction by hospital executives to "disruptive" physicians, but I have not worked there since 1987 and I no longer practice medicine.)

In the Bloomberg story Mr. Robertson wrote:

... “So far, the evidence we have doesn’t suggest that health information technology is a significant factor in safety events,” said Jodi Daniel (http://www.healthit.gov/newsroom/jodi-daniel-jd-mph), director of ONC’s office of policy and planning. “That said, we’re very interested in understanding where there may be a correlation and how to mitigate risks that do occur.”

In my opinion this statement represents gross negligence by a government official.  Ms. Daniel is unarguably working for a government agency pushing this technology.   She makes the claim that "so far the evidence we have doesn't suggest significant risk" while surely being aware (or having the fiduciary responsibility to be aware) of the impediments to having such evidence.

From my March 2012 post "Doctors and EHRs: Reframing the 'Modernists v. Luddites' Canard to The Accurate 'Ardent Technophiles vs. Pragmatists' Reality" at http://hcrenewal.blogspot.com/2012/03/doctors-and-ehrs-reframing-modernists-v.html  (yes, this was more than a year ago):

... The Institute of Medicine of the National Academies noted this in their late 2011 study on EHR safety:


... While some studies suggest improvements in patient safety can be made, others have found no effect. Instances of health IT–associated harm have been reported. However, little published evidence could be found quantifying the magnitude of the risk.

Several reasons health IT–related safety data are lacking include the absence of measures and a central repository (or linkages among decentralized repositories) to collect, analyze, and act on information related to safety of this technology. Another impediment to gathering safety data is contractual barriers (e.g., nondisclosure, confidentiality clauses) that can prevent users from sharing information about health IT–related adverse events. These barriers limit users’ abilities to share knowledge of risk-prone user interfaces, for instance through screenshots and descriptions of potentially unsafe processes. In addition, some vendors include language in their sales contracts and escape responsibility for errors or defects in their software (i.e., “hold harmless clauses”). The committee believes these types of contractual restrictions limit transparency, which significantly contributes to the gaps in knowledge of health IT–related patient safety risks. These barriers to generating evidence pose unacceptable risks to safety.[IOM (Institute of Medicine). 2012. Health IT and Patient Safety: Building Safer Systems for Better Care (PDF). Washington, DC: The National Academies Press, pg. S-2.]

Also in the IOM report:

… “For example, the number of patients who receive the correct medication in hospitals increases when these hospitals implement well-planned, robust computerized prescribing mechanisms and use barcoding systems. But even in these instances, the ability to generalize the results across the health care system may be limited. For other products— including electronic health records, which are being employed with more and more frequency— some studies find improvements in patient safety, while other studies find no effect.

More worrisome, some case reports suggest that poorly designed health IT can create new hazards in the already complex delivery of care. Although the magnitude of the risk associated with health IT is not known, some examples illustrate the concerns. Dosing errors, failure to detect life-threatening illnesses, and delaying treatment due to poor human–computer interactions or loss of data have led to serious injury and death.”


I also noted that the 'impediments to generating evidence' effectively rise to the level of legalized censorship, as observed by Koppel and Kreda regarding gag and hold-harmless clauses in their JAMA article "Health Care Information Technology Vendors' Hold Harmless Clause: Implications for Patients and Clinicians", JAMA 2009;301(12):1276-1278. doi: 10.1001/jama.2009.398.

FDA had similar findings about impediments to knowledge of health IT risks, see my Aug. 2010 post "Internal FDA memorandum of Feb. 23, 2010 to Jeffrey Shuren on HIT risks. Smoking gun?" at http://hcrenewal.blogspot.com/2010/08/smoking-gun-internal-fda-memorandum-of.html.

I also note this from amednews.com's coverage of the ECRI Deep Dive Study (http://hcrenewal.blogspot.com/2013/02/peering-underneath-icebergs-water-level.html):


... In spring 2012, a surgeon tried to electronically access a patient’s radiology study in the operating room but the computer would show only a blue screen. The patient’s time under anesthesia was extended while OR staff struggled to get the display to function properly. That is just one example of 171 health information technology-related problems reported [voluntarily] during a nine-week period [from 36 hospitals] to the ECRI Institute PSO, a patient safety organization in Plymouth Meeting, Pa., that works with health systems and hospital associations in Kentucky, Michigan, Ohio, Tennessee and elsewhere to analyze and prevent adverse events. Eight of the incidents reported involved patient harm, and three may have contributed to patient deaths, said the institute’s 48-page report, first made privately available to the PSO’s members and partners in December 2012. The report, shared with American Medical News in February, highlights how the health IT systems meant to make care safer and more efficient can sometimes expose patients to harm.


One wonders if Ms. Daniel's definition of "significant" is when body bags start to accumulate on the steps of the Capitol.

I also note she is not a clinician but a JD/MPH.

I am increasingly of the opinion that non-clinicians need to be removed from positions of health IT leadership at regional and national levels.

In large part many just don't seem to have the experience, insights and perhaps ethics necessary to understand the implications of their decisions.

At the very least, such people who never made it to medical school or nursing school need to be kept on a very short leash by those who did.

-- SS

Thursday, February 14, 2013

Bipartisan Policy Center's Health Innovation Initiative: Health IT Industry Officials Lying to Regulators With Impunity?

On Wednesday, February 13, 2013, the Bipartisan Policy Center's Health Innovation Initiative held a discussion on its new report: An Oversight Framework for Assuring Patient Safety in Health Information Technology.  The announcement is here:  https://bipartisanpolicy.org/news/press-releases/2013/02/bipartisan-policy-center-releases-recommendations-oversight-framework-pa

The report is here (PDF):  "An Oversight Framework for Assuring Patient Safety in Health Information Technology."

The "who's who" of the Bipartisan Policy Center's Health Innovation Initiative included these people:

  • Senator Tom Daschle, Former U.S. Senate Majority Leader; Co-founder, Bipartisan Policy Center (BPC); and Co-leader, BPC Health Project
  • Carolyn M. Clancy, M.D., Director, Agency for Healthcare Research and Quality, Department of Health and Human Services
  • Farzad Mostashari, M.D., ScM, National Coordinator for Health Information Technology, Department of Health and Human Services
  • Peter Angood, M.D., Chief Executive Officer, American College of Physician Executives
  • Russ Branzell, Chief Executive Officer, Colorado Health Medical Group, University of Colorado Health
  • John Glaser, Ph.D., Chief Executive Officer, Siemens Health Services
  • Douglas E. Henley, M.D., FAAFP, Executive Vice President and Chief Executive Officer, American Academy of Family Physicians
  • Jeffrey C. Lerner, Ph.D., President and Chief Executive Officer, ECRI Institute
  • Ed Park, Executive Vice President and Chief Operating Officer, athenahealth
  • Emad Rizk, M.D., President, McKesson Health Solutions
  • Janet Marchibroda, Moderator; Director, BPC Health Innovation Initiative 

Unfortunately, I was unable to attend.  I was at the 2013 Annual Winter Convention of the American Association for Justice (the trial lawyers' association) in Florida, as an invited speaker on health IT risk, its use in evidence tampering, and other legal issues.


[Image: "United for Justice"]



I found the following statement from the Bipartisan Policy Center's Health Innovation Initiative report remarkable as a "framework for health IT safety":

The Bipartisan Policy Center today proposed an oversight framework for assuring patient safety in health information technology. Among other guiding principles, the framework should be risk-based, flexible and assure patient safety is a shared responsibility, the authors said. “Assuring safety in clinical software in particular is a shared responsibility among developers, implementers, and users across the various stages of the health IT life cycle, which include design and development; implementation and customization; upgrades, maintenance and operations; and risk identification, mitigation and remediation,” the report states. Among other recommendations, the center said clinical software such as electronic health records and software used to inform clinical decision making should be subject to a new oversight framework, rather than traditional regulatory approaches [e.g.,  FDA - ed.] applied to medical devices given its lower risk profile.

I find it remarkable that the health IT industry and its supporters now feel they can lie to our government and regulatory agencies with impunity.  Stating that health IT has a "lower risk profile" is an example.

One cannot know what is acknowledged to be unknown.

From the Institute of Medicine in its 2012 report on health IT safety:

Institute of Medicine. 2012. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: The National Academies Press.

... While some studies suggest improvements in patient safety can be made, others have found no effect. Instances of health IT–associated harm have been reported. However, little published evidence could be found quantifying the magnitude of the risk.

Several reasons health IT–related safety data are lacking include the absence of measures and a central repository (or linkages among decentralized repositories) to collect, analyze, and act on information related to safety of this technology. Another impediment to gathering safety data is contractual barriers (e.g., nondisclosure, confidentiality clauses) that can prevent users from sharing information about health IT–related adverse events. These barriers limit users’ abilities to share knowledge of risk-prone user interfaces, for instance through screenshots and descriptions of potentially unsafe processes. In addition, some vendors include language in their sales contracts and escape responsibility for errors or defects in their software (i.e., “hold harmless clauses”). The committee believes these types of contractual restrictions limit transparency, which significantly contributes to the gaps in knowledge of health IT–related patient safety risks. These barriers to generating evidence pose unacceptable risks to safety.

... More worrisome, some case reports suggest that poorly designed health IT can create new hazards in the already complex delivery of care. Although the magnitude of the risk associated with health IT is not known, some examples illustrate the concerns. Dosing errors, failure to detect life-threatening illnesses, and delaying treatment due to poor human–computer interactions or loss of data have led to serious injury and death.” 

Even to those with particularly thick skulls, this statement seems easy to comprehend:

"The magnitude of the risk associated with health IT is not known."

I repeat once again:

One cannot know what is acknowledged to be unknown.

A statement that health IT has a "lower risk profile" than other regulated healthcare sectors such as devices or drugs, made in order to seek continued and extraordinary regulatory accommodation, is remarkable.  It is either reckless disregard of something the statement's makers should know, or should have made it their business to know - or a deliberate prevarication with forethought.

The report did attempt to sugar-coat the declarative "lower risk profile" through misdirection, citing the need to take into account "several factors" including:

"the level of risk of potential patient harm, the degree of direct clinical action on patients, the opportunity for clinician involvement, the nature and pace of its development, and the number of factors beyond the development stage that impact its level of safety in implementation and use." 

These "factors" speak to a higher level of potential risk, not lower, and are a justification for stronger regulatory oversight, not weaker.  I would opine that there is a real possibility that health IT, through which almost all transactions of care must pass (e.g., orders, results reporting, recording and review of observations, findings, diagnoses, prognoses, treatment plans, etc.), could have a higher risk profile than one-off devices or drugs.  Health IT affects every patient, not just those under a specific therapy or using a specific device or drug.

Partial taxonomies developed from limited data themselves speak to the issue of a potentially huge risk profile of health IT, e.g., the FDA Internal Memo on HIT Risks (link), the AHRQ Hazards Manager taxonomy (link), and the sometimes hair-raising voluntary defects reports (largely from one vendor) in the FDA MAUDE database (link).  Further, health IT can and does affect thousands or tens of thousands of patients en masse even due to one simple defect, such as happened in Rhode Island at Lifespan (link), or due to overall design and implementation problems such as at Contra Costa County, CA (link) and San Francisco's Dept. of Public Health (link).

We don't know the true levels of risk and harm - but we need to, and rapidly.  Industry self-policing is not the answer; it didn't work in drugs and devices, and even with regulation there are still significant problems in those sectors.  (Imagine how it would be if those sectors received the special accommodations that health IT receives, and wishes to continue to receive.)

My other issue is with the "shared responsibility" including "users."

The users' responsibility is patient care, not serving as beta testers for bug-laden or grossly defective health IT products.  Their responsibility ends at reporting problems (without fear of retaliation) and ensuring patient safety.

Their responsibility is to avoid carelessness - as it is when they drive their cars.

In other words, the inclusion of "users" in the statement is superfluous.

It is not their responsibility to be omniscient, nor to be held accountable when bad health IT promotes "use error" (the NIST definition of "use error" I will not repeat here; search the blog), as opposed to, and as distinct from, "user error" - note the final "r" - i.e., carelessness.

Bad health IT (see here):

Bad Health IT ("BHIT") is defined as IT that is ill-suited to purpose, hard to use, unreliable, loses data or provides incorrect data, causes cognitive overload, slows rather than facilitates users, lacks appropriate alerts, creates the need for hypervigilance (i.e., towards avoiding IT-related mishaps) that increases stress, is lacking in security, compromises patient privacy or otherwise demonstrates suboptimal design and/or implementation. 

One special accommodation that the health IT industry has been afforded for far too long is to be able to "blame the user."

"Blaming the victim" of bad health IT is a more appropriate description.

-- SS