Perhaps you remember Sam, the chronic inebriate whose story I shared to discuss the pitfalls of basing doctor pay on patient satisfaction surveys.
Looking at his discharge papers, I wondered who helped Sam fill his survey out, and how much their “help” affected the results.
After all, millions upon millions of dollars are already at stake for hospitals. And individual doctors’ Medicare payments are expected to be based on their satisfaction scores as early as 2015.
Surely these surveys are validated and standardized, right? Surely there is policing to prevent “helping” people fill them out? You might be surprised by the answers to those questions.
For instance, when you’re measuring something like “satisfaction,” there are some regions where patients are simply less forthcoming with praise (compare, say, a hospital’s “quiet at night” score in California versus Alabama).
These scores also lack variability. Westby Fisher, a clinical associate professor at the University of Chicago’s Pritzker School of Medicine, used Kaiser Foundation data to calculate the mean, median, and standard deviation of hospital patient satisfaction scores. Nationwide, scores vary by just two to six percent. In other words, the results barely differ from one another at all. By statistical standards, that makes it a very poor test: it can’t meaningfully distinguish one hospital from another.
Medicare’s review of these surveys showed that there is no standard, and that the answer options are often biased to produce better results. I’ve personally been given surveys that offered me only positive answer options.
There are even widely disseminated tips for doctors on how to get a “better” result, even before your pay depends on it.
No one even seems to be watching how many people fill these surveys out. As this story in support of surveys shows, a $1.6 million project funded by the Robert Wood Johnson Foundation resulted in a popular local doctor having 84 survey results. If that same doctor has 1,000 patients on his roster (which would be considered a low-to-normal panel size), that means nearly 92 percent of his patients didn’t fill out a survey, even after a huge investment of time and money. A return rate of roughly 8 percent renders a survey completely invalid. You’re supposed to throw it out.
But we don’t. Not for this one survey.
Even in our national hospital patient satisfaction results, there is no mention of how many people contributed to any given score. Eighty percent of patients, or one percent? Ten thousand people, or 50? One person, 800 times? You can’t know.
For a doctor, surveys from just ten patients are all you need to get your board recertification. But when I called and wrote the Board to explain that my patients don’t fill these out, that they have no phone and no address and are often illiterate, I was met with total silence. “Use existing surveys from your insured patients” was the first answer I finally got. When I explained that none of my patients were insured, “just find ten” was the final instruction. Clearly, I could game it any way I wanted.
Even more surprising is the reporting, even by professionals who should know better, of individual doctors’ results as “above average” without any mention of the sample size or the range of answers. Is a score of 83 actually different, in any meaningful way, from an 87? Who knows?
It is mind-boggling that we are implementing nationwide a test with less than 10 percent variability, using non-standardized surveys that more than 90 percent of people DON’T even bother to fill out, and that one person can fill out 50 times. We don’t even have a numerator or a denominator.
Statistically, that’s kind of like Lake Wobegon, where everyone is above average. Even your TV ratings are held to a much higher standard. And millions upon millions of our health dollars are already being spent on this.
The repercussions of paying people based on popularity don’t just affect doctors.
But don’t client satisfaction results tell us something important? As Kevin Pho, who blogs as KevinMD, pointed out, studies in both the Annals of Internal Medicine and the British Medical Journal did not find a strong correlation between patient satisfaction and the quality of care. As for whether tying compensation to popularity could help contain costs: it’s hard to imagine how client satisfaction scores could do anything but drive costs up, much less bring them down.
Don’t get me wrong.
I strongly believe that doctors should be listening to, and caring about, what their patients think and feel. In fact, I believe we may have brought this current insanity on ourselves.
How? At my own recent visit as a patient, my doctor never introduced herself. Although I had to strip down and wait half-naked in a gown, she never examined nor touched me. Her eyes never left the keyboard as she barked abrupt yes/no questions at me.
Talking to my doctor felt like talking to an angry, powerless court stenographer. I spent six minutes with her, and I could have done the whole thing over the phone and saved myself two hours. She left most of my care to a vague conglomeration of slightly confused mid-level providers and clerks, people who seemed to mill around in the hallways, waiting for a patient to pop out of a room so they could explain what had just happened.
With that kind of treatment, who doesn’t want doctors to be more accountable for their patients’ satisfaction? But is it the individual doctor that creates this mess, or the system? Doctors have passively floated along as more and more of our roles became system-focused, and less patient-focused. Our patients want us to stand up for ourselves, and for them.
But popularity-based payment for doctors is not the answer — certainly not for the addicted, stigmatized, and disenfranchised patients I see. We are using our tax dollars to create a new, laughably flawed, multi-million-dollar “client satisfaction” industry to distract us from the forces that systematically caused these problems in the first place. We’re letting anyone and everyone game the system. And we’re paying people based on how well they do it.
Popularity surveys don’t put the focus back on the patient. They put the focus on what’s popular.
Doctor-blogger Jan Gurley writes for ReportingonHealth, a USC Annenberg School for Communication & Journalism online community for journalists and thinkers. Her blog explores the practice of medicine on the margins of society and what we can learn from it. You can see more of her posts here.
Disclaimer: Identifiable patients mentioned in this post were not served by R. Jan Gurley in her capacity as a physician at the San Francisco Department of Public Health, nor were they encountered through her position there. The views and opinions expressed by R. Jan Gurley are her own and do not necessarily reflect the official policies of the City and County of San Francisco; nor does mention of the San Francisco Department of Public Health imply its endorsement.