Showing posts with label healthcare IT regulation.

Tuesday, October 8, 2013

Quality and Safety Implications of Emergency Department Information Systems: ED EHR Systems Pose Serious Concerns, Report Says

A report "Quality and Safety Implications of Emergency Department Information Systems"
appeared in the Oct. 2013 issue of "Annals of Emergency Medicine."  It is available full text at http://www.annemergmed.com/article/S0196-0644%2813%2900506-4/fulltext, or as a PDF via the tab, free as of this writing.

First, a preamble:  I once tried to alert a hospital where I'd trained decades before, Abington Memorial Hospital (http://www.amh.org/), to impediments to safe care I'd noted in their EHRs, predominantly their ED EHR.  They did not listen.  In fact, their response to my concerns reflected an apparent incompetence in conducting safety investigations.  For instance, I had expressed this written concern about the ED EHR in an April 2010 letter to the CEO and CMO:

... I've also had to stop administration [to my mother] of an antibiotic (Levaquin) in the recent past in the ED that she has had an adverse reaction to (torn rotator cuff), despite my having told ED intake she was allergic to it. She relates that administration of Levaquin was then almost repeated on the floor until she herself refused it during that past admission.

This was the sworn testimony in May 2013 about the "investigation" that resulted, from the hospital's VP of Risk Management, Regina Sturgis:

A:      Deborah [hospital General Counsel] asked me to investigate the Levaquin issue which I did.
Q:      Did you do that on your own or did you delegate some of the --
A:      No.  I did it on my own.
Q:      Do you know whether any of the IT folks were ever brought in to look at the -- the EMR issues referenced in this letter?
A:     No, I do not.  I know that I was asked to look at the Levaquin because of my clinical background.
Q:      Okay.  Did you come up with any conclusions?
A:      Yes.
Q:      What was your conclusion?
A:      That she had been ordered Levaquin in the ETC [Emergency Trauma Center a.k.a. ED], that it had been discontinued about a very short period of time later, under a half an hour, and that she never received it.

So, the investigation of a complaint that family, and then the patient herself, had to stop administration of a drug to which she had a known allergy (an allergy of which both the ED staff and the EHR had been informed) consisted of confirming that the medication was never given.  No problem, then; the ED EHR is safe.

(I am not joking; that is the testimony given.  Imagine such an investigation and conclusion about, say, reported aircraft flaws, or, in the industry in which I was once a safety officer, public transit vehicle defects and dangers.)

However, when competent people investigate similar issues, the findings are concerning.  From Modern Healthcare (http://www.modernhealthcare.com/), a publication for healthcare executives, on the new Annals of Emergency Medicine article:

ED EHR systems pose serious concerns, report says

By Joseph Conn
Modern Healthcare

June 24, 2013
Electronic health-record systems used in emergency departments are beset with poor data displays, loaded with so many alerts warning of potential patient-safety issues that they can lead to user alert fatigue, and may be generating incorrect physician orders, according to a report by two emergency physicians' study groups.

Meanwhile, providers wanting to address these EHR issues are hampered by a lack of research and solid evidence of the extent of the problem with these systems, and by contract provisions with EHR vendors that stymie the free flow of information about system-linked safety concerns, the report authors say.

So, EDs across the country are rolling out this technology, often taking advantage of ARRA's HITECH incentives ... yet there is a lack of research and solid evidence about the risks.  Allow me to opine - that's simply crazy.

The groups found that “poor data display is a serious problem with many of today's EDISs,” while “the sheer volume” of alerts that range from the “completely irrelevant to life threatening” [or lack of appropriate alerts to relevant, simple issues such as data input errors - ed.] can “dull the senses, leading to a failure to react to a truly important warning.” They also found that “an alarming number of clinicians are anecdotally reporting a substantial increase in the incidence of wrong order/wrong patient errors while using the computerized physician order entry component of information systems.”

The word "anecdote", as I have written, is being misused.  The reports are not "anecdotes."  They are risk management-relevant incident reports.  (See "From a Senior Clinician Down Under: Anecdotes and Medicine, We are Actually Talking About Two Different Things" at http://hcrenewal.blogspot.com/2011/08/from-senior-clinician-down-under.html.)

Two study groups from the American College of Emergency Physicians have recommended a program of systemic vigilance over electronic health-record systems used in emergency departments to improve patient safety and enhance quality of care.

ACEP workgroups on informatics and on quality improvement and patient safety published their findings in an article, “Quality and Safety Implications of Emergency Department Information Systems,” in the current issue of the Annals of Emergency Medicine.

Postmarket surveillance, a standard for decades in other healthcare sectors, has been absent from health IT due to a long-obsolete special regulatory accommodation afforded that industry.  This accommodation originated when these systems were simple and merely advisory - not the comprehensive enterprise clinical resource and clinician command-and-control systems they are today.  Now, clinician investigators of the technology, such as the authors of this study, are realizing that continuing this accommodation is a mistake.

It follows in the wake of, and references, an Institute of Medicine report from 2011, “Health IT and Patient Safety: Building Safer Systems for Better Care.” That report concluded that “current market forces are not adequately addressing the potential risks associated with the use of health IT.” It also comes eight months after the New England Journal of Medicine published “Electronic Health Records and National Patient-Safety Goals,” which warned that recent evidence “has highlighted substantial and often unexpected risks resulting from the use of EHRs and other forms of health information technology.”

I note that if you frequent this blog, you likely read material similar to the statements quoted above here first, as authored by me, dating to the founding of this blog in 2004.

... “The rush to capitalize on the huge federal investment of $30 billion for the adoption of electronic medical records led to some unfortunate and unintended consequences, particularly in the unique emergency department environment,” said Dr. Heather L. Farley, the lead author of the report, in a news release. “The irreversible drive toward EDIS implementation should be accompanied by a constant focus on improvement and hazard prevention." Farley is assistant chairwoman of the Department of Emergency Medicine at Christiana Care Health System in Newark, Del.

Ironically, I note in Dr. Farley's statement some of my own advice, given to ED staff when I was Chief Medical Informatics Officer at Christiana Care (1996-98).  In that period I had advised Charles Reese IV, MD, Chair of Emergency Medicine, not to implement a full field-based EHR but, at most, a document imaging system (since ED charts are not that long or complex), due to the "unfortunate and unintended consequences" of bad health IT in such an environment that I recognized even then.  It was only a few years ago that my advice was finally overturned.

The authors also report “(t)here are few consistent data on how commonly these errors occur, and few studies are actually focused on collecting evidence of these errors.” Meanwhile, “there is currently no mechanism in place to systematically allow, let alone encourage, users to provide feedback about ongoing safety issues or concerns” with EHRs in general, and EDISs specifically.

On its face, that is not a safety-conscious environment, and the rollout and use of such systems seems a fundamental violation of patients' rights, made worse by the fact that there is no informed consent process whatsoever for ED EHR use.

The workgroups came up with seven recommendations: appointing an emergency department “clinician champion,” creating within healthcare delivery organizations an EDIS performance improvement group and an ongoing review process, paying timely attention to EDIS-related patient-safety issues raised by the review process, disseminating to the public lessons learned from performance improvement efforts, distributing vendors' product updates in a timely manner to all EDIS users and removing the “hold harmless” and “learned intermediary clauses” from vendor contracts.

Many of these issues have been discussed on this blog.

“The learned intermediary doctrine implies that the end users (clinicians) are the medical experts and should be able to detect and overcome any fallibility or contributing factor of the product,” the authors said.

I have also pointed out the absurdity of such a "doctors are clairvoyant" attitude, e.g., at my 2011 post on basic common sense on IT adverse consequences at http://hcrenewal.blogspot.com/2011/04/common-sense-on-side-effects-lacking-in.html.

They conclude that the “lack of accountability for vendors through hold harmless clauses and the shifting of liability to the clinicians through the learned intermediary doctrine are significant and additional impairments to safety improvement. Electronic health records and EDISs are sufficiently complex that the physician and other users cannot be expected to anticipate unpredictable errors.”

That aligns with the work of Dr. Jon Patrick in Sydney, whose treatise "A study of an Enterprise Health information System" on the Cerner FirstNet ED EHR is available here: http://sydney.edu.au/engineering/it/~hitru/index.php?option=com_content&task=view&id=91&Itemid=146

Earlier this month, the Electronic Health Record Association, an EHR developers' trade group affiliated with the Chicago-based Healthcare Information and Management Systems Society, announced the launch of a voluntary “code of conduct” in which adherents would agree to drop “gag clauses” in their contracts with their provider customers.

Great.  Per the wonderful 2007 article "The Denialists' Deck of Cards: An Illustrated Taxonomy of Rhetoric Used to Frustrate Consumer Protection Efforts" by Chris Jay Hoofnagle, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=962462, free as of this writing:

... At this point [of losing the argument], the denialist must propose "self regulation" to deal with the problem that doesn't exist. The cool thing about self regulation is that it cannot be enforced, and once the non-existent problem blows over, the denialist can simply scrap it! [20]

[20] In the runup to passage of bank privacy legislation, data brokers created a group called the "Individual Reference Services Group" that promptly disappeared after the legislation passed.

("Denialism" is the use of rhetorical techniques and predictable tactics to erect barriers to debate and consideration of any type of reform, regardless of the facts.)

IMO 'self regulation' of healthcare is, on its face, a deception.  There are simply too many conflicts of interest.

On use of "integrated" big systems:

“These systems do have glitches [indeed - see http://hcrenewal.blogspot.com/search/label/glitch - ed], but it can be plain and simple bad design that can lead to clinical errors,” Cozzens said.  But ED physicians, he said, are “having the enterprise systems forced upon them. To think you can take one system and adapt it to those different environments is totally wrong. That's why you see low physician satisfaction and the productivity is going down, all for the sacrifice of having an integrated system.”

In fact, so-called "best of breed" systems can be bad health IT as well.  See the aforementioned evaluation by Dr. Patrick in Australia.

Bad Health IT ("BHIT") is defined as IT that is ill-suited to purpose, hard to use, hard to customize, unreliable, loses data or provides incorrect data, causes cognitive overload, slows rather than facilitates users, lacks appropriate alerts, creates the need for hypervigilance (i.e., towards avoiding IT-related mishaps) that increases stress, is lacking in security, compromises patient privacy or otherwise demonstrates suboptimal design and/or implementation. 


Through my own work, I've seen bad health IT result in patient harm and death.  It's just unfortunate that I got started in this line of work by being, in effect, shot out of a cannon.  That is, my own mother was a victim.

-- SS

Addendum 10/8/13:

From the article:

End-User Recommendation 4: EDIS-related patient safety concerns identified by the review process should be addressed in a timely manner by ED providers, the EDIS vendors, and hospital administration. Each of these processes should be performed in full transparency, specifically with openness, communication, and accountability. 

I'm not sure the aforementioned Levaquin near-accident "investigation" meets these standards.

-- SS

Friday, August 9, 2013

A War on Patients: Panel Says EHRs Should Not Be Vetted Before Marketing and Deployment

"First, do harm - it's a learning experience, and injured or dead patients are just a bump in the road, anyway" - the apparent creed of the healthcare computing hyperenthusiasts

Joe Conn and Modern Healthcare published the following article:

Work group says OK to some HIT safety regs (link), Joe Conn, Modern Healthcare, Aug. 7, 2013

What is important is what safety regs the Workgroup said "no" to.  It comes as no surprise:
A federally chartered special work group with representatives from three federal agencies has submitted its draft recommendations on establishing a regulatory framework for health information technology. Chief among those recommendations is that health IT should not be subjected to pre-market federal regulation, but there were a few exceptions.

The exceptions are narrow, and are likely already covered as Class III medical devices by FDA (see http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/Overview/ClassifyYourDevice/):

The exceptions under which there should be FDA regulation, according to the work group, include medical device accessories to be defined as such by the FDA; certain forms of “high risk” clinical decision support systems, such as “computer aided diagnostics,” also to be defined by the FDA; and some “higher risk software” use cases to be defined by the committee's own safety work group.

They did acknowledge the need for postmarket surveillance:
... The group also recommended: developing a federally supported, post-market surveillance system for health IT products “to ensure safety-related decision support is in place,” creating a process for gathering information on safety issues, aggregated at the federal level and establishing a public process for “customer rating of HIT to enhance transparency.”

Dr. David Bates [a professor at Harvard Medical School], chairman of the Food and Drug Administration Safety Innovation Act work group, presented the preliminary findings Wednesday at a meeting of HHS' Health Information Technology Policy Committee.

Let me translate this to plain English:  under these recommendations, the health IT systems that go in (and their upgrades and patches) would be free from pre-market regulation and regulatory vetting.  Patients are to be the guinea pigs for testing of the software.

If patients are harmed or killed, they get the honor of being named as "postmarket surveillance learning cases" who gave their all for the betterment of healthcare information technology.  

(Without their consent, but who needs consent to test experimental and unvetted devices on guinea pigs?)

Bates did express some liability concerns:

Asked during a question and answer period following his presentation whether the committee had considered the liability implications of its recommendations, Bates said, “It's not something we discussed at length, but it's something we can discuss over the next month.”

I, on the other hand, as a legal consultant on health IT-related medical errors and evidence tampering, am considering liability issues.

Unfortunately, patients would rather be whole than in lawsuits (or dead).  Also, sadly, it's physicians and nurses who will bear the brunt, if not all, of the liability for bad outcomes due to defective IT, such as that described in two recent posts here concerning vendor alerts about serious flaws in which medication and other orders were not retained.

A clarification for all those proletarians who lack Harvard educations, and for the Workgroup members as well:  allow me to point out that the above manufacturer safety alerts about life-threatening fundamental flaws (involving entered text that "disappears," apparently found in live-patient scenarios, and other "glitches" that did cause life-threatening errors, sometimes en masse involving thousands of patients, such as another apparent Siemens debacle at http://hcrenewal.blogspot.com/2011/11/lifespan-rhode-island-yet-another.html) would likely not have occurred if the systems had been vetted before being turned loose on patients.

Finally:  David and panel members, my mother and I thank you profusely. 

Oh wait...my mother can't thank you, she's dead from the toxic effects of un-premarket-vetted health IT on simple care processes at the very hospital where I performed my residency two decades ago.

She might have died several times before she actually did, thanks to other IT "glitches" that cropped up during her recovery from the first one, but I was able (in one case, by the sheer happenstance of showing up at the right time) to discover, or provide staff with information to work around, additional unvetted-health-IT flaws before those did her in.

It's taken more than a decade for critical-thinking, unconflicted writers and researchers ("iconoclasts") to force cybernetics-over-all hyperenthusiasts (see here) like Bates and his panel members to own up to the risks of health IT at all, e.g., via sites like this blog and this teaching site.  These panel members, IMO, have their heads buried in the sand.

Dr. Bates and his panel are, in my opinion, healthcare IT extremists, which in part means apparently holding the belief that computers have more rights than patients - along with the other beliefs mentioned in this post:  "Another Health IT 'Glitch' - Can Digital Disappearing Ink Kill Patients?" at http://hcrenewal.blogspot.com/2013/08/another-health-it-glitch-can.html.

-- SS

Thursday, February 28, 2013

Arguments with Pavlov's Dogs: Health IT Regulation Will "Harm Innovation"? How, exactly?


Health IT hyper-enthusiasts, when faced with the prospect of government regulation, react like Pavlov's dogs with the reflexive response that "regulation of health IT will harm innovation."

Here's a soliloquy of critical questions that need to be asked:

-----------------------

Now, Mr. (or Dr.) Hyper-Enthusiast, you state that HIT regulation will harm innovation.

What aspects of regulation, specifically, will harm innovation?

Good manufacturing processes (GMPs)?

Building a safety case for review and inspection?

Pre-market safety/fitness/quality/reliability testing?

Post-marketing surveillance?

What?

Innovations happen before regulatory evaluation, do they not?

What, exactly, are your objections to safety and quality testing of innovations?

Don't innovations need to be tested for safety and quality?

If innovations are not safe, should they not be used on live patients?

How can the industry with its conflicts of interest effectively regulate HIT?

Even if it could, again, how would additional regulatory oversight harm innovation?

----------------------- 

And perhaps this needs to be asked as well:

  • Don't you really mean regulation would harm the bottom line?

-- SS