In Conversation with...Richard P. Shannon, MD
In Conversation with... Richard P. Shannon, MD. PSNet [internet]. Rockville (MD): Agency for Healthcare Research and Quality, US Department of Health and Human Services; 2010.
Editor's note: Richard P. Shannon, MD, is the Frank Wister Thomas Professor of Medicine at the University of Pennsylvania School of Medicine and Chairman of the Department of Medicine. Although he was trained as a traditional academic cardiologist, Dr. Shannon is now best known for his pioneering work in reducing health care–associated infections, first at Allegheny General Hospital and now at the Hospital of the University of Pennsylvania. Since he is one of the first chairs of a major academic department whose career has focused on quality and safety improvement, we asked him to speak with us about safety in academic medical centers.
Dr. Robert Wachter, Editor, AHRQ WebM&M: Let's begin by hearing your perspective on trying to improve patient safety in academic environments.
Dr. Richard Shannon: At Penn, we picked four medicine units: the medical intensive care unit and three oncology units, with liquid tumor patients, solid tumor patients, and bone marrow transplant patients. We picked those four units because the incidence of hospital-acquired infections tended to be particularly high in those units. The second reason is the notion that certain biological factors in immunocompromised patients inevitably predispose them to line infections. I thought it would be interesting to challenge that notion and see whether we could address this issue in these very high-risk patients, particularly bone marrow transplant patients with chronic indwelling catheters. This involves picking an activity—in this case, placing, maintaining, and manipulating both central catheters and urinary tract catheters—and understanding what the expected outcomes are.
We do this by asking housestaff and nurses to observe the current condition. How is a catheter placed right now, and then how is it maintained and manipulated? How many times does the procedure have to be interrupted because the proceduralist doesn't have what he or she needs to do the work? What is different in academic medicine is the opportunity to engage workers in process-improvement methods at the point of care as they do their work and try to figure out how to free people's time to do that. To free residents' time, we bring on a resident to help cover the service while they engage in patient safety tutorials. Same with the nurses: We would bring an extra nurse in on the day shift to free up the nurses to do some of these observations and problem solving, and to redesign the work. That's the approach we've taken—really putting the problem in the hands of the people engaged in doing the work.
The early outcomes here are quite astonishing. In the 6 months prior to the effort with central lines, 86 patients had central line infections in those four units. In the 5 months since we started, there have been four infections. So the concept of actually putting these tools in the hands of people who do the work, much in the way that Toyota does on its assembly line, is very effective.
RW: Given the importance of training the next generation of physicians and nurses, as you were thinking about trying to get trainees to focus on these issues and learn about them in a new way, how did you think through the benefits and downsides of different methods?
RS: The concept for the housestaff was to try to create a 4-hour tutorial around placing these devices. The first part is reading guideline information and answering 20 questions on the content. Then you move on to part two, which is watching a video that we created back in Pittsburgh called "The Perfect Line Placement." Then the third part is working on simulators to demonstrate your competency as a team member. The important observation there was that we initially started out having the house officers do this, supervised by a fellow. But we soon realized that when doctors train in isolation from nurses, the team concept is lost: who does the handoffs, who provides what to whom, what you do when that catheter is placed and you're now hooking it up to your infusion pump. All those handoffs were absent. So we expanded the training to include the opportunity for trainees to participate with nurses to demonstrate that nurses knew what their job was at the same time that residents and fellows knew what their jobs were.
But I want to point out one other thing that I struggled with when I came to Penn. When I got here, the notion was, "Well, come on, Dr. Shannon, we're great. You know, this is Penn. We don't have to worry about these problems." And of course when you show people the data they're astonished, because, although we collect these data, we rarely share them in a way that is understandable to the people actually doing the work. We would toss around infection rates of 5.7 infections per thousand line-days and everyone would look at each other and say, "What does that mean? They must be talking about somebody else." But we began to decode those data and discuss how many human beings were affected, who the attending physicians were, and in what units the infections occurred. Decoding it so that people understood the human element of it, making sure they understood what the consequences of these infections were—that was key. So we said things like, "The average patient at Penn stayed 17 additional days when they got a central line infection on these units." And, "In those four units, individuals who developed line infections had 22% mortality." Those facts create a call to action that the typical epidemiological metric could not achieve. We coupled this foundation with empowering them with a new set of tools and real-time problem-solving skills that they could then apply each day as they encountered problems.
Another factor responsible for the rapid uptake at Penn is that in Pennsylvania we had a burning platform—these results are publicly reported. They're reported by institution. And now, any patient who contracts a hospital-acquired infection has to receive a letter and an explanation from his or her attending physician about what happened, and the experience is that these letters lead to malpractice claims about 20% of the time. In that context, we've been able to make rapid improvements. In addition, the state of Pennsylvania will no longer pay for components of admissions complicated by these infections. So there is a real sense of urgency. Penn likes to pride itself on being a great place, but when it's publicly reported that we had the second highest rate of these complications when risk-adjusted by peer group in the region, it's hard to claim that you're really that great.
RW: Talk about the unique issues of doing safety work in academia. By that, I mean the use of evidence and the reluctance to change without strong data, as well as the question of where funding comes from for faculty. These are all areas where academia was not particularly advanced, and some have argued that they set academia behind some community organizations when the patient safety field began.
RS: Those are important considerations here. Trying to help people understand this science of performance improvement and how it compares and contrasts with the typical science of discovery is a real challenge. In this process, which is modified from Toyota, we encourage innovation, experimentation, and testing hypotheses, but on a more granular level that typically is not subject to randomization and prospective evaluation the way a classical clinical trial would be. So a lot of doubters say, "How do you really know that the innovations we put in place are responsible for the results?" And the honest answer is that you cannot link cause and effect like you can in a transgenic mouse experiment, but when the results are as dramatic as we've experienced—going from 86 infections to 4—and they occur in the midst of these particular innovations, I think you can argue, even to the most rigorous scientists, that there certainly is an effect.
The question is how much of that effect is enduring. That's a fundamental question: When the implementation team leaves, do you sustain this? That remains to be determined here at Penn. But the 26-bed unit in Pittsburgh where we did this effort has now gone 35 months without a central line infection. Once you ingrain the process as a culture, I think it is sustainable. By the same token, engaging people in rapid process improvement and being able to share with them the daily results—every member of this team gets a daily report on whether or not there's an infection on their unit—begins to create a different type of process improvement science that I think people can at least begin to understand and appreciate. It is essential that we begin to document these things and publish them—because arguably at an academic level, where you publish this work could be an equally important consideration. But we're trying to put the methodologies together with the results and use them as an opportunity to disseminate learning in the typical way we would other scientific discoveries.
RW: People are getting energized about it through work that you're doing and then wondering what this career path will be like 10 years from now. In some ways, the faculty member who is focusing his or her career on quality and safety is placing a bet in a game that has lots of long-term uncertainties.
RS: I cannot underscore enough the importance of leadership at some senior level in the organization to the success of this. I think the burning platform that exists in Pennsylvania, Penn's desire to really try to be great in this domain, and my commitment and belief that eliminating unsafe conditions is the foundation of all quality efforts create this alignment that allows us to move this forward rapidly. We're talking about funding a national demonstration effort in several academic centers, because the point you made earlier is really true: community hospitals actually have an easier time with this than large academic centers, for many of the reasons that you highlighted. We're trying to find the basis for forming a national demonstration project in 10 academic centers, coupled with a business case arguing that there is an economic advantage to the health system in doing this work. HUP [Hospital of the University of Pennsylvania] is packed with patients, and the health system is entertaining building a new hospital. My argument would be that if we eliminated the unnecessary 17-day additional length of stay for every patient at HUP who gets a central line infection, we'd create a tremendous amount of additional capacity. The additional length of stay for a urinary tract infection is closer to 6 days, but we have 200 of those. So if we were to eliminate those, what capacity would we generate? We're trying to analyze the economic opportunity that would be coupled with this. That is best done by creating a consortium of 10 like-minded organizations that would be willing to do this work and, in parallel, perform the economic analysis around it.
RW: You've chosen in your career to focus on hospital-acquired infections. In some ways, they're unique in that they're more easily measurable. The evidence that you've made a difference is more easily obtainable, and the economic impact is tangible. Now let's say you've finished this experiment with infections, you've had great success, it's sustainable, and you're moving on to other issues in patient safety. Will the same approach work?
RS: That's where the toolkit using lessons borrowed from Toyota is so widely applicable. The concept is to be able to identify defects in any domain—to understand your capability to identify the defect and solve it. In the course of these observations, we encounter not only defects in the processes around placing, maintaining, and manipulating catheters, but also defects in medication delivery. We find defects in laboratory blood draws and labeling of tubes. We find all these defects that we can begin to codify and do the same type of real-time problem solving around them. The problem at the start is that the number of defects is overwhelming. When you begin to codify them, it's just staggering. Our notion is to start with some early successes by truly committing to an audacious goal of eliminating infections, and then build upon that success to move into areas like medication errors or timeliness of service delivery. I mean, how often when a patient is scheduled to be in a cath lab at 10:00 AM does it actually happen at 10:00 AM? That's arguably a defect. Industry wouldn't tolerate that imprecision. Looking at those systems across what we call the customer–supplier relationship between the cath lab and our inpatient medicine unit is the next opportunity to which these skills can be applied. Decoding the data, developing observations of the current condition, and then applying these principles of real-time problem solving can all be done more generally. We hope that builds a cadre of skills that allows us to tackle these more formidable issues going forward. The issues of care delivery are even more complex than the relatively easily measurable issues of hospital infections. But it's a good place to start. There's a lot of alignment that you get around hospital-acquired infections, and it's still formidable. If you can master it, it builds great confidence and institutional memory to tackle other problems.