In Conversation With… Christine Cassel, MD

June 1, 2015 

In Conversation With… Christine Cassel, MD. PSNet [internet]. Rockville (MD): Agency for Healthcare Research and Quality, US Department of Health and Human Services. 2015.

Editor's note: Christine Cassel, MD, is President and CEO of the National Quality Forum (NQF). Dr. Cassel, one of the founders of the field of geriatric medicine, is a former medical school dean and CEO of the American Board of Internal Medicine (ABIM). We spoke with her about NQF's work in developing and utilizing measures to improve safety and quality in health care.

Dr. Robert Wachter, Editor, AHRQ WebM&M: Tell us about what the NQF is and does.

Dr. Christine Cassel: The National Quality Forum is a multi-stakeholder, consensus-based organization. It was created 15 years ago in order to have all of the key stakeholders in health care around a single table to agree on the directions for quality of care in the country and, importantly, how that quality is measured. The original intent was to have the public sector (all the key government agencies) and the private sector (the insurance industry, clinicians, hospitals, long-term care providers, consumers, and purchasers of health care) come together with technical experts to agree on measures for public reporting and comparative reporting of health care—and increasingly, in the last 7 or 8 years, on measures for value-based purchasing and other payment models.

RW: It's an interesting structure and obviously complicated. Where does that structure work well and where is it particularly challenging?

CC: The structure is both challenging and uniquely advantageous. If you want to have all the people affected by a decision at the table, then you need time and a skilled staff to enable a real conversation and to bring it to consensus. The process itself can often be very cumbersome. It's one thing I've paid a lot of attention to since I came here, trying to find ways to make that process less costly and less cumbersome. But ultimately it has to allow that important conversation to go on in a transparent and public way.

The advantage is that, when you do get to consensus, you have everybody on board. So you have the ability for the government or private sector organizations that use these measures (like Leapfrog, whom you just talked to recently; NCQA; The Joint Commission; and others) to say these measures have been vetted and agreed to by the health care community and are rigorous and science based—an important combination of both political buy-in and evidence-based measurement.

RW: That's an interesting tension. One might think of your work as largely technical—the evidence is the evidence and what you're doing is trying to say that, for a particular measure, if p is less than .05, that's good enough. But there is this other dimension, in that you're operating in a very political environment, which must mean that you need to consider issues other than just the evidence.

CC: The criteria that we use for endorsement, which is the process that individual measures and composite measures go through—what you refer to as the more technical part of our work—do have evidence and rigor as a key part of them. But they also include more deliberative decisions that involve judgments, such as the importance of the measure. Lots of things could be technically accurate but not deemed very important by the group, and therefore would not be worth the time and effort to collect the data to measure and report. Another criterion is feasibility: how feasible is it to get the data, and would it be hugely expensive and very difficult for doctors and hospitals to collect what you're asking for? That is a factor as well.

RW: Let's talk about the role of NQF in safety. One of the first times I remember hearing about the NQF was when Ken Kizer put out the never events list. Is there a back-story around that, and how has that list evolved to support the field of patient safety?

CC: That list has been a very important one. It's now called the Serious Reportable Events list. We continue to update it, and it covers serious, largely preventable patient safety events: harms to patients that could have been prevented. Reporting is required in 25 states and the District of Columbia. A number of states take different approaches to these measures. But these are considered the core patient safety harms that hospitals work to reduce.

Now the back-story is that I think Ken and the NQF board at the time were very bold in referring to these as never events and saying that wrong-site surgery should never happen; medication errors should never happen. So you want to pick out those things and highlight the urgent need to focus on ways to reduce them. But the Institute of Medicine report, To Err is Human, pointed out that to err is human. We've seen notable health care systems around the country drive, for example, central line infections to zero for some period of time; then something happens and there is one. It's regrettable, and we have to work to improve it. Or a patient falls or gets what might be a preventable pressure ulcer, or a mistake occurs in a surgical procedure. What we want to do is be realistic about this and say never is probably unachievable if you look at the whole range of serious reportable events. That led to the decision to refer to them now as serious reportable events.

RW: I wonder whether Ken Kizer could have predicted in 2002, when he introduced that list, that a few years later it would be the subject of state reporting requirements in half the states, or that Medicare would choose, 6 years later, to stop paying for harms on the list. How much energy and attention do you give to how your measures will be used? Or is the feeling that your job is to put out the best measures, and the world will do what it does with them?

CC: That's a very good question. I think in the early years it was the latter. Our job was to put out the list of serious reportable events, to put out endorsed measures, and as long as they remained valid and met the other criteria—that was the end of the discussion. It was up to the health care industry, the regulatory agencies, and the various government organizations to decide how to use these measures. The expert panels that we pulled together—and we are privileged to be able to draw on the volunteer efforts of experts throughout the field, and the discussions are very rich, very intense, very high level—often ask, how will this be used? So it does get discussed at those meetings but has not been part of the report when a measure is endorsed.

In 2011, in the wake of the Affordable Care Act and a proliferation of more than 600 endorsed measures, plus additional measures used in public reports and other public programs, HHS created the Measure Applications Partnership (MAP, as we call it), and NQF manages that process.

The goal of the MAP is to ask, of all these measures, which ones are best used for which purpose? So it gets a little bit toward what you're describing. Those committees are charged with trying to decide how this will play out if we use a measure in a pay-for-performance program. They have that deliberation and then recommend which measures should be used for which purposes. As we've looked recently at our processes, we realized that, between the endorsement process and the selection process, we could be much more efficient if the endorsement process were to offer opinions about the ultimate purpose of a given measure. Would this measure be used appropriately for payment, for public reporting, or for improvement purposes? Or maybe there are measures that would be well suited for all of those. So we're doing a Kaizen [an improvement process drawn from Lean thinking], seeking to merge those processes so that we get expert opinion on all of those questions in a more streamlined way.

RW: The sepsis measure came under some scrutiny recently. How does NQF deal with the fact that new science can come along and sometimes change the suitability of a measure?

CC: We absolutely have to be nimble enough to respond to emerging science in real time. Because of that, just this year we began to have standing committees. Much of our work is funded through federal contracts. So we would get a contract for cardiovascular measures or serious reportable events, and we would launch a project on that. We would put out a report, and then it wouldn't be until we got another contract from CMS that we had the resources to revisit any new evidence in that area—very inefficient. It requires getting a new committee together every time, and that takes lead time. Now we have some standing committees and we're able to respond in real time. The committees may need to bring in a few content area experts. We had a chance to test this recently with the new information about sepsis. It worked very well.

When the ProCESS trial came out and the new data about the lack of utility of central venous monitoring in the sepsis bundle emerged, we were able to go immediately to the standing committee to reconsider the sepsis measure. The committee voted to remove that item from the bundle, and put the revised measure up for public comment, which we are required to do by the government. So that's a perfect example of a new study emerging after we had already developed and published a consensus-based agreement on a certain process and a certain set of measures. We have to be able to respond to those things in real time.
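For readers curious about the mechanics, bundle measures of this kind are typically scored all-or-none: a case counts as compliant only if every element of the bundle was performed. The sketch below uses illustrative element names and records, not the actual NQF specification, to show how removing one element from a bundle changes the reported rate.

```python
# All-or-none bundle scoring (illustrative; element names and case
# records are hypothetical, not the NQF sepsis measure specification).

BUNDLE_V1 = {"lactate_measured", "blood_cultures_drawn",
             "antibiotics_given", "cvp_monitored"}

# Revised bundle after an evidence review removes the central venous
# monitoring element.
BUNDLE_V2 = BUNDLE_V1 - {"cvp_monitored"}

def compliance_rate(cases, bundle):
    """A case is compliant only if every bundle element was done."""
    compliant = sum(1 for done in cases if bundle <= done)
    return compliant / len(cases)

cases = [
    {"lactate_measured", "blood_cultures_drawn", "antibiotics_given"},
    {"lactate_measured", "blood_cultures_drawn", "antibiotics_given",
     "cvp_monitored"},
]

print(compliance_rate(cases, BUNDLE_V1))  # 0.5
print(compliance_rate(cases, BUNDLE_V2))  # 1.0
```

Under all-or-none scoring, dropping an element can only hold or raise a compliance rate, so a revision like this materially changes what the measure reports.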

RW: You might look at a measure and say this is a good, important measure. The science is good, but it would be hard to gather these data today. Yet, we know that medicine is turning digital very quickly and the fact that it becomes an NQF measure may drive the digital industry to make it feasible to measure. How do you think about that push versus pull?

CC: We have a number of interactions with the electronic health record industry. Through our interactions with the Office of the National Coordinator for Health Information Technology, we participate in a number of their committees. We have increasingly been asked to specify many of our measures for electronic use. So this gets down into the weeds of the technical part: what do those specifications require, and what kind of expertise do you need? My personal view is we should be doing a lot more of this. We're seeking support to impanel standing committees charged with making recommendations that would allow electronic health record vendors to enable electronic reporting of measures right away.

It's unfortunate that we don't have more flexible technology in the process of collecting electronic information even now. You'd like to think in this cloud-based world that you could just pluck some data elements out of whomever's record it is, put them together, push a button on your keyboard, and send them to whomever needs them. That's the dream future state that we'd all like to get to—so that people don't have to spend extra time or hire extra staff to collect the data that need to be reported. We're not there yet. So we are eager to work with the electronic data world in order to make that happen.
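As a concrete, if simplified, illustration of that future state, the sketch below pulls one kind of data element through a standards-based (HL7 FHIR) interface. The server URL is a placeholder, and the example assumes LOINC code 2524-7 identifies serum lactate; real electronic measure reporting involves far more than a single query.

```python
# A minimal sketch of "pluck a data element out of whomever's record
# it is" against a hypothetical FHIR server. Not a real endpoint.
import requests

FHIR_BASE = "https://example-ehr.org/fhir"  # placeholder URL

def fetch_lactate_observations(patient_id: str) -> list:
    """Pull lactate lab results for one patient via a standard FHIR
    Observation search (LOINC 2524-7 assumed to be serum lactate)."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id,
                "code": "http://loinc.org|2524-7"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()  # FHIR searches return a Bundle resource
    return [entry["resource"] for entry in bundle.get("entry", [])]
```

The point of a standard interface is that the same query would work, unchanged, against any conformant record system, which is the flexibility the answer above is wishing for.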

RW: A few years ago, there was a case that got a lot of press related to conflicts of interest within the organization around safety practices. Without going into the specifics of the case, can you talk about how you manage this issue, given that you have a huge number of experts and many of them do have conflicts?

CC: I'm glad you asked about that, because that report on our audit of safe practices is another example of a way in which we've been able to respond in real time. A concern emerged about potential conflicts of interest and a report on safe practices that had been issued in 2010, which probably needed to be re-reviewed anyway. In light of that new information, we impaneled a group of experts (in full disclosure we should say that you were a member of that group). That group—a group of real experts in the field—responded very quickly. We pulled our staff together, reviewed all 34 safe practices and their reference citations, deleted obsolete references, added current references, and recommended substantive changes where the recommendations involved potential conflicts, which we especially wanted to examine, or where new evidence had emerged. Interestingly, many of the safe practices were not changed, but the evidence was updated. That was helped along by a terrific report updating the evidence on patient safety practices from the Agency for Healthcare Research and Quality that came out in 2013, and a CDC report was also helpful. So we didn't have to do extensive literature reviews. After receiving public comment, the report is now finished and has been released.

RW: Diagnostic errors are an important issue. They are tricky to measure, and given the profusion of safety measures that your organization endorses, there's a risk that the world will shift its attention away from diagnosis if there are no good measures for it. What do you think about the state of measurement, and how do we as a system deal with that asymmetry of measurement when an issue is really important?

CC: An issue like diagnostic error is one that is near and dear to my heart as an internist. As someone who spent many recent years of my career at the ABIM, a big part of what we do is evaluate internists' ability to synthesize information in order to get to the right diagnosis. But that's in a simulated model. It's not in the real world of actual patient care.

We know that many patients do get the wrong diagnosis. We know this from malpractice claims and from anecdotal literature from which some extrapolations can be made. But it's very hard to measure something that doesn't happen. The patient safety world is all about measuring when mistakes or bad things happen. Often when a diagnosis is wrong you don't know about it until much later—the patient may not report it, and there aren't good ways of capturing that information. So it's not only tricky to measure, we may need a very different kind of measurement here. I don't think that's a reason we shouldn't pursue it and try to figure out what NQF can do to contribute to greater awareness of this issue. Of course the reason to do that is to try to address it and figure out stakeholders who can best come together to try to improve that situation.

The good news is that a vibrant new society, the Society to Improve Diagnosis in Medicine, is addressing the academic and methodological issues in this area. And, with funding from multiple sources, the Institute of Medicine has launched a full-fledged IOM committee to study diagnostic error in medicine. (I am a member of that committee; we're not permitted to disclose its deliberations.) Recently, NQF and The Joint Commission recognized Dr. Mark Graber with the John Eisenberg Patient Safety and Quality Award for his individual leadership in this area. Other people in the field should be thinking about this important problem. I would urge people to continue to study it, to continue to think creatively about how to identify diagnostic errors, and, importantly, how to move toward a world in which patients can expect and demand greater attention to this important issue.

RW: What was the biggest surprise when you came to NQF? You knew the organization, but from a different vantage point.

CC: I knew the organization. I knew the amazing quality of the staff and how hard they work. I knew that the board was a multi-stakeholder board, and I knew the challenge of having multiple, different sectors communicate with each other. I was surprised in a very positive way by the collaborative nature of the board. There's something about getting everyone—with their own perspective and their own interest—around that table talking openly about these problems that affect real patients, and then having the consumers and patients at that same table. It changes the nature of the discussion. I theoretically knew that was the case, but having come from a world where it was mostly doctors around the table, I was wowed by the power of the consensus model and the fact that people have very candid disagreements. It isn't that people are being overly polite; at the end of the day, having everyone there really allows the best in everyone to come forward.
