Thursday, March 24, 2011

Evidence Based Decisions and the Art of Medicine

“The evidence clearly shows……”

How often have we heard that argument made to influence our choices or behaviors? Experts provide data that support their point of view, often in the face of conflicting information. We would like to believe that we always make rational decisions based on facts, but human nature proves otherwise. We base our next decision on how the last situation played out. Negative outcomes, even infrequent ones, influence future behavior far more than their true weight warrants. And as clinicians we are often told, “Judgment comes from experience, and experience comes from poor judgment.”

So it has been very interesting for me in my new position to work with clinicians and systems on best practices for various patient conditions. Unlike the “hard” sciences of physics, mathematics, and chemistry, the “soft” sciences of medicine and pathophysiology come in shades of grey. Even the best-designed studies are inherently ambiguous, and the literature compounds the problem through publication bias: studies that find a benefit are more likely to be published than those showing equivalence or harm. How do we sort through the data and come to a rational conclusion?

By analyzing how to analyze data.



Health services researchers have developed an elaborate grid that attempts to balance the potential benefit of an intervention against the strength of the data supporting it. These ratings subsequently form the basis for clinical guidelines and, in many cases, for whether the service will be paid for. The strongest recommendations, IA, are the most stringent and should be generally accepted. IIIA findings strongly support more harm than good. You can see where the ambiguity comes in. A IC may well be good, but there is no strong randomized data behind it. By the same token, a IIIC recommendation not to do something is less of an indictment than a IIIA. As a surgeon who has spent the last 30 years trying to do the best for my patients, often with limited available data, this is clearly a shift in thinking. We were trained to listen to and examine the patient and then determine whether they were sick and needed an operation. The specifics of the diagnosis would often be revealed only in the operating room.
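For readers who think in code, here is a minimal sketch of that grid, written in Python purely as an illustration. The one-line interpretations are my own paraphrases of the general Class I–III / Level A–C convention, not official guideline language.

# Illustrative sketch: recommendation class (I, II, III) crossed with
# level of evidence (A, B, C). The wording of each entry is a paraphrase,
# not official guideline text.

RECOMMENDATION_CLASS = {
    "I":   "benefit clearly outweighs risk -- do it",
    "II":  "benefit may outweigh risk -- reasonable to consider",
    "III": "no benefit, or more harm than good -- don't do it",
}

LEVEL_OF_EVIDENCE = {
    "A": "multiple randomized trials or meta-analyses",
    "B": "a single randomized trial or nonrandomized studies",
    "C": "consensus opinion, case series, standard of care",
}

def interpret(grade):
    """Split a grade like 'IA' or 'IIIC' into its class and evidence level."""
    level = grade[-1]      # trailing letter: A, B, or C
    cls = grade[:-1]       # leading Roman numeral: I, II, or III
    return "Class %s (%s), Level %s (%s)" % (
        cls, RECOMMENDATION_CLASS[cls], level, LEVEL_OF_EVIDENCE[level])

for grade in ("IA", "IC", "IIIA", "IIIC"):
    print(grade, "->", interpret(grade))

Run against the grades mentioned above, it makes the asymmetry plain: IA and IIIA are firm statements in opposite directions, while IC and IIIC rest on little more than informed opinion.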

So why is this new paradigm important? Increasingly, with limited resources, we may be steered to provide or defer therapies based on which evidence category the question falls into. Could “Watson,” when he’s not winning at Jeopardy, calmly crunch all the potential permutations and risks and come to a definitive plan? I suppose he could, but I hope we never come to rely on that alone.

When I was still a medical student, Arnie Rosenbaum, an internist in my hometown of Canton, took me on rounds. In each patient’s room he would sit at the bedside and take the patient’s blood pressure while gently feeling their pulse. Afterwards, I asked why he did this when the data were already on the clipboard. “Because I always touch my patients and look them in the eye. It really tells me how they’re doing. And when the time comes for me to be a patient, I hope my doctor makes the same human connection.”

He was my n=1 in this nonrandomized study of internists in Canton, Ohio, in 1979. But based on my experiences since then, that’s Level IA data in my eyes.
