"It's an honor just to be nominated," crowed sex columnist Dan Savage in response to Andrew Sullivan's listing of Savage in his poll for the "Moore Award". Sullivan, in his Daily Beast blog "The Dish", has a variety of year-end awards, and his "Moore" award (named after the lefty agitpropster filmmaker Michael Moore) is for "divisive, bitter and intemperate left-wing rhetoric". Savage garnered a nomination this year, and has thus far worn that nomination like a badge. As of this writing, with just under 10,000 votes cast, Savage is comfortably in the lead with more than 58 percent of the vote in a field of ten candidates--a sum that Mitt Romney dreams about even more than Sugar Plum Fairies as we close in on the Iowa Caucuses. (What Simon Winchester, whose entire collected works I have nearly finished over the past few years, has said or written to be included in this group is unknown to me, but it must have been a doozy, since under normal circumstances one would not describe his utterances as remotely extreme.)
Savage's nomination came as a result of a dustup between him and soon-to-be-former-candidate-for-President Representative Michele Bachmann back in September during the Republican Presidential debates. At that time, Bachmann was indulging in one of her favorite political tactics in order to separate herself from her rivals and recover some of the mojo she had lost over the summer when she had been the darling of the party and led in the polls.
That tactic would involve departing entirely from reality and making up whatever stuff she deemed suitable to rile up sufficiently nitwitted partisans, as she did earlier in the campaign when she attacked Michelle Obama for advocating breastfeeding by supporting a tax break for breast pumps, turning the tax break into a right-wing fantasy that the government was "going out to buy my breast pump". In the September debate, Bachmann had decided to stake out the anti-vaccine territory to stick it to Texas Governor Rick Perry, who was then leading in the polls. In 2007, Perry had admirably issued an executive order mandating that Texas girls receive access to the vaccine against HPV, a virus that is a major cause of cervical cancer as well as other maladies. That order was later overturned by the Texas legislature, causing Perry to quip--correctly--that the bill's supporters were effectively killing women who would needlessly die from the cancer.
Not that he said it that bluntly, but he came close: "no lost lives will occupy the confines of their conscience, sacrificed on the altar of political expediency", was his rather eloquent retort at the time. Unfortunately, he may have come to wish he had never uttered those words, as the HPV order came back to bite him rather fiercely as the primary season got more contentious and governmentophobic conservatives took a dim view of his actions. Bachmann, though, decided to go for the jugular, and took the almost-reasonable-sounding "there are limits to government" argument and pushed much deeper into the Twilight Zone. At the debate, she merely parroted the usual lines about governments forcing people to do things against their will, but the following day, in an interview with the Today show's Matt Lauer, Bachmann noted that she had been approached by a mother who claimed that her daughter had "developed mental retardation" after receiving the vaccine, and asked the viewers to draw conclusions for themselves.
In response to this perceived bit of a politician's own mental retardation, advocacy groups rose up in unison to denounce Bachmann's position. "There is absolutely no scientific validity to this statement", wrote Dr. O. Marion Burton of the American Academy of Pediatrics. Doctors blogging on the subject blasted her, including one who chided Bachmann for her "anti-vaccine porn". And a few bioethicists offered thousands of dollars for the medical records that would prove the vaccine had caused such harm. Bachmann, in the days to come, would disingenuously backtrack on the claim, noting that she herself never made a claim about the vaccine's harm, only that someone else had done so, in language and reasoning so slippery it invites comparison to any number of reptiles.
Enter Dan Savage.
Within two days of the blowup, Savage wrote a brief dispatch on the matter, noting that her "comments" on the HPV vaccine were much more accurately described as "lies". Then he let his savage pen loose, noting the following:
Bachmann and her ilk believe that woman [sic] who have sex—along with men who fail to purchase health insurance—deserve to die horrible deaths. That's why they hate the HPV vaccine, that's why they fought its introduction, that's why they tell lies about it now. Because they want women to die.
Presumably, Savage meant something more along the lines of "women who have premarital or extramarital sex". Regardless, Savage's characterization of Bachmann was unquestionably intemperate. It was obviously divisive. It was unambiguously bitter. Thus, by Andrew Sullivan's criteria, an ideal nominee for his award!
Only one matter bears mention: Savage was almost certainly correct. And for that we name one of our favorite columnists, Dan Savage, for the Billy Rubin Blog Hero Of The Year, and his public foil, soon-to-be-just-Representative Michele Bachmann, for the Billy Rubin Blog Villain Of The Year. Happy 2011, y'all.
--br
PS--In other news, we've been catching up on our reading around here and want to give a special shout-out to the following books, most of which came out in 2010, but we're almost never that up to date on our reading until someone actually pays us to write this blog. Besides, these books will have a shelf life to come, so please do consider them if you want to read excellent books on medicine:
The Emperor of All Maladies--Siddhartha Mukherjee's phenomenal "biography" of cancer. Though the subject matter may seem intimidating and depressing, Mukherjee takes the reader along on a ride that is suffused with the insight of a great clinician, the wonder of a thoughtful scientist, and the humanity of a fine writer. For his work he won the Pulitzer Prize for general nonfiction, and appropriately so. (Readers wanting to delve further into cancer literature might consider watching a Japanese film that received almost zero attention in the US, 1778 Stories of Me and My Wife, detailing the struggles of a writer and his cancer-stricken spouse. Be forewarned, however, that it makes the phrase "gut-wrenching" seem inadequate. I watched it on a flight from Europe to the US, and by the end the Dutch people around me practically had to carry my sobbing ass out on a litter.)
The Panic Virus--Seth Mnookin's book about the vaccine-causes-autism movement. An excellent primer on vaccine hysteria, narrower in scope than Arthur Allen's Vaccine but no less important or readable.
Anatomy of an Epidemic--Robert Whitaker's compelling analysis of modern psychiatry, which I've written about before here.
Thursday, December 1, 2011
Lipitor Goes Generic, and Everyone Wins...Theoretically
I like to play a game with my med students, residents, and fellows--although really the game can only be played with residents and fellows as the students don't have enough medical mileage under their belts to fare well. I ask them this question: what do they think are the five greatest drugs of all time? After all, people routinely debate the greatest baseball player--I'm partial to Willie Mays--the greatest writer in the English language, the greatest movie, and the list goes on. Why not have a discussion about what makes a drug great?
So we talk about how drugs are used and what makes them good or not. I do this exercise to get them thinking about qualities that define particular drugs or entire classes of them, and why some may be preferable to others. Such qualities include "applicability" (i.e. how many people would benefit from its use, as Tysabri™ is an incredible drug that preserves quality of life, but only does so for people with advanced multiple sclerosis, a very small group), the magnitude of benefit (a drug that saves a life is more important than one that eases wrinkles, such as Botox™), ease of use, minimal side effects, and a proven track record (drugs that are new to the market often appear miraculous; most don't last, as the brief life of Xigris™ shows) among other things.
Lively debates ensue, but what I find most interesting is that the drugs that most housestaff end up agreeing on are ones that have been around a very long time and weren't developed by recent pharmaceutical company R&D programs. And by "recent" I mean the past 30 to 40 years. Aspirin may be the greatest drug of all, and has been around in its current form since the late 19th century (and the active ingredient was found in folk remedies long before that); morphine and its narcotic siblings are likewise more than a century old; penicillin-class and sulfa antibiotics were developed before World War II; insulin was first used in the 1920's after decades of research; and beta-blockers were first developed in the 1960's.
That said, one class--relative teenagers compared to these elders--stands out, and one drug from that class stands out in particular.
The class of drugs is known as "statins", and the drug is Lipitor™, the signature product of Pfizer. Since its introduction in 1996, Lipitor has not only gone on to become a blockbuster drug--its total estimated gross is $100 billion--but has by any measure been proven to meet the definition of a wonder drug. It is reasonably safe, most people tolerate it, lots of people require it, and it saves lives. Lots of lives. Its usefulness has been proven over and over again in well-designed trials. Unlike so many other drugs, its initial promise has not begun to fade.
Lipitor and its fellow statins work by disrupting cholesterol synthesis, but we're still learning about how they work their magic: other medications that lower cholesterol in different ways, such as ezetimibe (trade name Zetia™), seem not to have the same benefit in terms of preventing heart attacks and death that the statins do. Moreover, while Lipitor wasn't the first statin to market, and there are six other members of the statin class, Lipitor has reigned supreme. This is due in part to a more profound reduction in "bad cholesterol" LDL and elevation in "good cholesterol" HDL than others in the class, but also to its "gentleness": rosuvastatin, whose trade name is Crestor™, lowers the LDL the most of any in the class, but does so at the cost of more serious and more frequent side effects. (A useful consumer review on statins from Consumer Reports can be found here.)
Pfizer has seen an enormous windfall from Lipitor, and they have deserved every penny. It is, in other words, an "honest" drug: no ridiculous shenanigans, such as those seen in the marketing of the generally unimpressive drug Neurontin™ by the very same Pfizer corporation, or the introduction of the current #1 drug by sales, Nexium™, which is nothing more than a clever repackaging of Prilosec, whose patent was due to expire and would have deprived its maker AstraZeneca of billions of dollars. But Lipitor is now open to competition: its patent expired on Wednesday, so generic atorvastatin can be made and marketed in the US, which should drop the price considerably. Thus, although I make no claims to be an economist or an intellectual property law expert, it looks like the expiration of the patent on the greatest modern medical drug was a win-win for both consumers and the shareholders who brought the drug to market.
I say "looks like" only because Pfizer, as this article explains, still intends to protect Lipitor's brand name in some ways that defeat the entire purpose of the spirit of patent law. Some of their efforts, like direct mailings of "coupons" for lower copays for Lipitor, seem free-market legit. Others, however, have that unpleasant odor so frequently associated with Big Pharma these days. In particular, Pfizer appears to be cutting deals with so-called "Prescription Benefit Managers" to elbow out the competition. PBMs serve as third-party payers for insurance companies and administer drug formularies. Pfizer's goal in negotiating with the PBMs is to give Lipitor at a discounted price in exchange for the PBM not carrying other companies' generic atorvastatin, effectively cutting them out of large markets.
It is not an illegal practice, although I fail to understand how it benefits consumers much. Nor does the CEO of Watson Pharmaceuticals, Paul Bisaro, who complained about Pfizer's tactics on CNBC's "Squawk Box"--not exactly a haven for socialist ideologues. But the story is still in motion and the deals are being made in backrooms and boardrooms away from journalists, so time will tell about atorvastatin's future. Today, however, was a good day for medicine, for business, and ultimately, for patients.
--br
PS--we also note with great enthusiasm that Gary Schwitzer's Health News Review blog has adopted a new look. Go check out the makeover! It is among the most valuable resources on medicine, and comes awfully cheap.
Wednesday, November 9, 2011
Say It Ain't So, Joe
The breathtaking arrogance of Coach Joe Paterno's statement that he would continue to coach the Nittany Lions football team can only be met with a dropped jaw. While confessing to being "absolutely devastated by the developments in this case", Paterno nevertheless states that he will soldier on as head coach until season's end. Astonishingly, he manages to shoot a specific barb at the Board of Trustees, presuming to offer advice that they "should not spend a single minute discussing my status. They have far more important matters to address."
No, they really don't, and for Paterno to even think to throw his weight around indicates, alas, his complete inability to comprehend the magnitude of his errors. At best a case can be made that Paterno acted within the legal boundaries of behavior when confronted with accusations that his longtime assistant coach, Jerry Sandusky, had forcibly sodomized a ten year-old child on Penn State University grounds. But no legitimate case can be made that Paterno behaved in any way that anyone with a moral compass would regard as humane or decent. That this man could have the nerve to think about taking the sideline against Nebraska this weekend, in light of the week's revelations about his appalling role in enabling Sandusky's predatory instincts--words cannot summon the outrage. He should be wearing sackcloth and ashes, begging anyone willing to listen for forgiveness for having allowed a monster to run amok for at least a decade. Instead, he swaddles himself in the cocoon of supporters who appear to think the Kool-Aid tastes quite fine, thanks, as he shoots off press releases without staring the disbelieving in the face.
Regardless of whether Paterno does indeed rally the Happy Valley faithful for one victory lap after having become the winningest college coach, this is an ignominious end for a fine man, one who was arguably the last of a special breed in big-time college football: the coach who saw his mission as shaping and educating the minds of young men as much as winning national titles. To distant admirers--and I count myself in that group--Paterno stood for something that I fear large universities embody less and less with each passing year, namely, a commitment to principle. When the Jim Tressel scandal at Ohio State broke this year, nobody who had been paying any attention to the corrupt state of college football could really have been surprised, except that the ensnared head coach was one who wore sweaters and projected an image of integrity.
Like the rest of big-time college football, it was only an image, a fig leaf covering a morally bankrupt system. There was more than a hint of wink-wink nudge-nudge in the bouncy collegiate career of Cam Newton, who, despite being involved in a cheating scandal at Florida, nevertheless managed to finish his career leading the Auburn Tigers to the national championship. Somehow Newton managed to play for three colleges during his NCAA eligibility despite clear evidence, to anyone willing to pay attention, that he likely deserved expulsion from the first school, and he behaved in a manner unbecoming any university in offering up his services to the highest bidder in what has since been called the "pay for play" scandal. (Two scandals for one college athlete--not bad!)
Compare this to the NCAA's position thirty years earlier on running back phenom Marcus Dupree, who had left the University of Oklahoma in 1983 in an attempt to break with head coach Barry Switzer. The NCAA ruled him ineligible for two full seasons; Dupree's awkward attempted leap to the pros never panned out, and his claim to fame is being the subject of an ESPN documentary, The Best That Never Was. Such an action today, like the so-called "death penalty" levied against Southern Methodist University, is inconceivable. Everyone is in on the joke, and most serious college football fans appear not to care terribly much. Even the Miami Hurricanes scandal, along with the shenanigans at Ohio State and the unsavory behavior of Newton, seems not to have made a blip on anyone's ethical radar screen. Yes, the players get paid indirectly. Yes, a good chunk of them don't belong in college. So what? Let's talk about the injustice of the BCS rankings instead.
All of which is lamentable, but the Paterno scandal is different, as the look-the-other-way behavior (or, in the case of two senior Penn State officials, outright perjury) didn't enable some coddled athletes but instead led to little boys being raped. Several little boys--the count stands at nine who have come forward, and it seems reasonable to suppose that these are not the only nine. According to the Grand Jury report, Paterno was told the explicit details of the rape of "Victim #2" by grad student Mike McQueary in 2002. Moreover, one thinks that the coach must have heard, at the least, rumors of some odd behavior by Sandusky in 1998 involving showering with a child. As Andrew Rosenthal notes while scratching his head, these are not the actions of a man who should be allowed to script his own exit, whatever sterling reputation he may have had previous to November 2011.
The ESPN columnist Rick Reilly argues that this story isn't really about Paterno, but I would beg to differ. Stories of pedophiles being caught, however grotesque, are not centrally important to the national news of the United States. But when powerful people in a revered institution give a free pass to a pedophile due to whatever inexplicable reasons tied to the success of a football team, that is a statement about not only the abuse of power by those people, but also the screwed-up priorities that gave such people that kind of power in the first place.
--br
Monday, October 10, 2011
PSA and the Embattled US Preventive Services Task Force
The US Preventive Services Task Force is a teeny tiny little group of researchers, physicians and epidemiologists who can claim the privilege of issuing recommendations on a variety of health-related issues such as screening, counseling, and preventive medication use. They're meant to be independent of the Department of Health and Human Services so as to be as far from the taint of Washington politics as possible, but alas, they've had a habit of getting themselves into the spotlight in the past few years, most recently this past week with some new recommendations on the blood test that screens for prostate cancer known as the PSA (for "Prostate Specific Antigen").
I'm not blaming them for stirring the pot so much, mind you--the USPSTF's job is to evaluate the evidence for a given current health practice and decide whether that practice makes any sense. While that concept sounds simple in theory, it becomes exquisitely difficult to accomplish in practice without wading into dangerous political waters. It was just about two years ago that the USPSTF issued recommendations about mammography as a screening test for breast cancer: they advised that women between the ages of 50 and 74 should have mammograms every other year (unlike the then-current annual recommendation) and that women under 50 shouldn't have mammograms at all unless they were in a particularly high-risk group. This fairly understated document generated an enormous backlash (which I've described before here) and caught members of the Task Force by surprise.
But when you look at the numbers, the actual data that formed the basis of the recs, it's not hard to see that the Task Force was if anything being generous about mammography. I don't have the time to review all the data here but one stat may suffice. One typical mathematical model was used by the Task Force to estimate the number of lives saved versus the number of women who would go on to be diagnosed with possible breast cancer based on an erroneous read from a mammogram (these are known as "false positives"). In the model, if you annually screened 1000 women starting at age 40 and did so for 30 years, you would save eight lives. This came at the cost of one hundred fifty-eight false positive diagnoses, at least some of whom, presumably, would progress all the way to mastectomy and possibly even radiation or chemotherapy. If, however, you started the annual screen at age 50, you would save seven lives instead of eight, but you'd reduce the number of false positive mammograms from 158 to 95..."only" 95. (Again, the USPSTF advised against annual screens for women 50-74, and there are data showing a similar effect in the every-other-year scenario, but I thought these numbers were revealing.)
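For readers who want the trade-off as a ratio, here is a minimal back-of-the-envelope sketch in Python using only the figures quoted above--it is not the Task Force's actual model, just arithmetic on the numbers in this post:

# Back-of-the-envelope arithmetic (not the Task Force's model), using the figures
# quoted above: per 1,000 women screened annually over 30 years.
scenarios = {
    "start screening at 40": {"lives_saved": 8, "false_positives": 158},
    "start screening at 50": {"lives_saved": 7, "false_positives": 95},
}

for name, s in scenarios.items():
    ratio = s["false_positives"] / s["lives_saved"]
    print(f"{name}: {s['lives_saved']} lives saved, {s['false_positives']} false positives "
          f"(about {ratio:.0f} false positives per life saved)")

Run it and you get roughly 20 false positives per life saved starting at 40, versus roughly 14 starting at 50--which is the whole argument in two lines of output.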
So when news came this past week of the new recommendations on the PSA screen, I wasn't completely surprised to learn that the panel--which, incidentally, is a different group of doctors than those who issued the mammography guidelines--advised against its use entirely. The evidence has been mounting for several years that PSA is a less than stellar test, and its interpretation can be especially slippery when the value of the test hovers just above the normal range. This leads to many false positive diagnoses, with precisely the same problems seen with the mammogram. Men with false positives sometimes undergo radical prostatectomy, a surgery that can leave one not only sexually debilitated but incontinent. The test works entirely differently from a mammogram, but the principle of test interpretation and the problems of overdiagnosis remain the same.
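That principle of test interpretation boils down to Bayes' rule: when the disease you're screening for is uncommon in the screened population, even a reasonably accurate test generates mostly false positives. The sketch below uses made-up round numbers for sensitivity, specificity, and prevalence--purely for illustration, not the PSA's actual operating characteristics:

# Illustrative only: the sensitivity, specificity, and prevalence below are hypothetical
# round numbers, NOT the real operating characteristics of the PSA test.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive screen reflects true disease (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(sensitivity=0.80, specificity=0.80, prevalence=0.03)
print(f"Chance a positive screen means real disease: {ppv:.0%}")  # about 11%

With those hypothetical numbers, roughly nine out of ten positive screens are false alarms--which is why a test can sound good in a slogan and still do a lot of harm in practice.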
Likewise I wasn't surprised to hear of a similar backlash against the Task Force and the exchange of academic insult followed by counterinsult, or more heated comments outside the ivory tower walls. (One advocate for the PSA, the urologist Dr. James Mohler, described the chief medical officer of the American Cancer Society, Dr. Otis Webb Brawley, a PSA skeptic, like this: "I have known Otis for over 20 years. He doesn't come off as being ignorant or stupid, but when it comes to prostate-cancer screening, he must not be as intelligent as he seems." That's about as close as one can get to saying, "hey, asshole, fuck you" in the subdued world of academia without actually doing so.) At the blog db's Medical Rants, a fairly innocuous post by db was met with at least one howl of indignation, with commenter Scott Orwig accusing db of being "irresponsible, unprofessional, and unethical". (db's follow-up post is here.)
Caveat lector: I have not yet read the Task Force report so I don't want to take sides in this post. What I can say is that slogans impress me less than an explanation of complex data, and while the data are less sexy and the slogans more emotionally comforting, a willingness to grapple with the data is usually an indicator of which argument is more likely to be right. In all the articles I've read so far, all I'm hearing from the advocates are slogans.
--br
Friday, August 19, 2011
When a Microbe "Eats" a Human
There they go again. My guess is that the science & health "editors" at the major television media outlets felt a frisson of excitement when they heard of the deaths of some teenage kids, exposed to pond or lake water, from an extremely rare amoeba known as Naegleria fowleri. Why? That's lots of eyes of worried parents zooming in to their website and passing it along to other worried parents. It's good for the news business. ABC News's piece is here; CBS's story, with a link to a piece giving tips on staying safe, is here; MSNBC's take is here. Of the majors, only CNN appears to have taken a pass at the time I write this; at Fair & Balanced, the story is buried in the "Children's Health" tab here.
That the media Bigs love a good scare story, particularly with respect to some spooky infection, isn't saying anything new (and is discussed thoroughly in Marc Siegel's great book False Alarm: The Truth About the Epidemic of Fear). Suffice it to say that, depending on how you slice the numbers, thousands of American children die every year, and the three deaths so far due to Naegleria hardly indicate that we need to take all of our children out of the lake. Indeed, about a thousand kids die annually due to drowning, but this substantially larger problem isn't grabbing headlines and isn't even being mentioned as a comparison in the Naegleria stories to give some sense of proportion. Yes, lakes can be dangerous places: but mostly because teenagers drink alcohol and do stupid things on boats, not because a microscopic beast lurks underwater.
Which is actually what the Rubin blog is preoccupied with at the moment: the description of Naegleria. "Microscopic beast" is something of a contradiction in terms, right? Naegleria is smaller than a speck of dust and almost pretty to look at under a microscope. Beasts, by contrast, are big. They look scary! They have big, giant...teeth. And with those teeth, they eat. No surprise, then, that a sensationalistic news item indulges in a little sensationalist imagery, as every one of the news stories above refers to Naegleria as a brain-eating amoeba.
But it's nonsense for the most part. Humans are, for Naegleria, what we call an accidental host: it makes its living by hanging out in the water feeding on tiny little bacteria. Yes, it does consume brain cells once it finds itself inside a human head, but to call it "brain-eating" just amps up the raise-the-hair-on-the-back-of-your-neck factor. Why not just call it "lethal", as it is almost universally so?
While we're on the subject, "flesh-eating bacteria" is--are you at all surprised?--likewise a misnomer. There is no particular species of flesh-eating bacteria, as it could be any number of bacteria. The most common bug to cause the condition of necrotizing fasciitis (the phenomenon attributed to so-called flesh-eating bacteria) is from the genus Streptococcus, which lives harmlessly in the nasal passages, mouth and gut of humans. The problem isn't the bacteria per se; the real problem is when the bacteria manage to get deep into the soft tissues of the body (usually the legs, sometimes the arms, less commonly the trunk or face). In the upper layers toward the skin, bacteria face lots of physical impediments in their way to causing infection, and by the time they've lumbered along to a new patch of tissue, the immune system usually kicks in and clears the infection. We call that cellulitis.
In rare cases, though, these bacteria can dive deep and get down to an area called the fascia. Once there, there are no physical impediments, and the bacteria can move rapidly and make people incredibly sick very quickly, and typically the only "cure" is to filet the person's limb, take out the dead tissue, and hope that they survive. Often the affected limb needs to be amputated, and there's a high mortality rate. But there's nothing special about the bacteria themselves, although you wouldn't know that from seeing the news stories put out by the august organizations noted above.
--br
Thursday, August 18, 2011
Media Overstatement on a Slow News Day
Right now one of the lead stories at the NY Times website deals with a potential new "miracle drug" called SRT-1720. With heavy emphasis on the scare-quotes. The article's title, "Drug Is Found to Extend Lives of Obese Mice," might be generating a huge buzz on the obese mouse circuit, but beyond this, I'm puzzled as to why this story is given such prominence in the Paper of Record. You could even argue that the story is barely worth running at all, even if placed deep in the science section of the website.
Bottom line is that this is a very preliminary study of an experimental drug. Studies like this are a dime a dozen, and it turns out that lots of fascinating things can be done in mice, but most of the time those fascinating things either don't work in humans, or end up having unacceptable risks compared to the benefits. I don't mean to belittle the experiment--it sounds very exciting--but I'm not sure that it's ready for primetime among laypeople just yet. Could it be part of a bigger article talking about strides that science is making in the field of aging? Sure: that's what the TV show NOVA is about, among other forms of popular science media. But there ain't no miracle drug coming down the pike that's going to extend the lives of obese people by 44 percent. So time to bury the story.
I can't wait to see what Gary Schwitzer is going to do to this story in his HealthNewsReview Blog. Go get 'em!
--br
Saturday, August 6, 2011
"Owning" a Patient & Other Quick Medical Thoughts
a. The subspecialties of medicine each have their own slightly different personalities and subcultures, and although this is a gross overgeneralization I have long thought of surgeons--that is, general surgeons and their ilk, not orthopods or, say, urologists--as the Baddest Motherfuckers in the business. These guys & gals are the toughest & most reliable hombres: they work the longest hours and rightly take enormous pride in their work. When I was in medical school and a patient was admitted to surgery, the culture of the team was that nothing, absolutely nothing, would get in the way of caring for the patient--not sleep, not food, not any kind of distraction. Although I wandered down the internal medicine pathway, I have always admired the attitude with which surgeons owned their patients.
The word "ownership" is a term we use in medicine and while it sounds rather paternalistic, I have a fondness for it, as it signifies a kind of special level of responsibility. When you "own" a patient, it means that you consider yourself to be the most important of a team of doctors & nurses, that the buck stops with you. And as I've said, during my training I never saw a group that took ownership more seriously than general surgeons and their subspecialties such as cardiothoracic, colorectal, & vascular. At the risk of redundancy, these folks are tough.
Only I've seen some weird things happening at my academic medical center as well as at my little community hospital over the past year or two, and this week while attending as a consultant I witnessed something that I found quite surprising, and I'm wondering if that cast-iron sense of ownership is eroding amongst that hardcore group. I was asked to see a patient about a pre-operative infectious issue before the patient was due to have a mitral valve replaced--should the patient be on antibiotics & if so for how long, does the surgery have to go on hold, that sort of question. When I finished the consult I had my team get on the phone to talk to the intern, whom they dutifully paged. Only the intern who answered wasn't the surgical intern; it was the medicine intern.
"Wait, this lady's on the medicine team?" I said in frank astonishment. The patient had been admitted to the hospital specifically to get a valve replacement; that's purely a surgical issue. She didn't have a lot of medical problems that required an internist to be her primary doc in the hospital. And yet, somehow, she was sitting there on a medicine team with the cardiothoracic docs serving as consultants.
It may be hard for laypeople out there to grasp why this is a bad idea, but suffice it to say that you manage patients differently based on the kind of training you've had, as well as the kind of patients you care for. Surgeons are better at taking care of patients undergoing surgery because, well, they do surgeries! And the surgical patient has a host of problems that internists don't encounter in the same way: fluid shift issues, mostly, which don't sound like much, but can be the difference between life and death if you misread the signals. This lady really did not belong on an internal medicine service, and search me as to why she was.
This isn't isolated, as I've seen pancreatitis patients, diverticulitis patients, cholecystitis patients all get turfed to medicine in the recent past. Some of these are borderline calls and could be taken care of adequately either way; some of these are what I would consider clear-cut surgical patients and I scratch my head when they are refused by the surgeon and sent to medicine (where I work, internal medicine does not have the luxury of refusing patients except in extreme circumstances).
Anyway, I'm happy to "own" such patients although I'm not sure that it's always in the patient's best interests for internal medicine doctors to be managing surgical cases. I'm also wondering if something's changed in the ethic of those surgeons whom I have held in such high esteem for so long.
b. Many months back I took my best shot at discussing a book before I had read it. The link, which can be found here, is a discussion about a book that had made a bit of a flap in the psychiatry community called Anatomy of An Epidemic by Robert Whitaker. Since I hadn't read the book, I did not venture to offer an opinion about it, but wondered about how the reviews framed what appeared to be a startling hypothesis: namely, that psychiatric drugs have, for at least a generation, made patients who suffer from psychiatric disease worse on the whole. Was the book worth reading? was my simple question, and I concluded it was and that I'd get around to it as soon as I could.
Well, I did, and my initial reaction is wow. Whitaker's book goes to the core of psychiatry and takes a sledgehammer to it, and he makes one hell of a powerful case that there's nothing behind the curtain. This is not the work of a pseudoscientific idiot who is raging against the machine; Whitaker supports his thesis by citing reputable scientific sources, and does so quite thoroughly. Unlike Celia Farber, an AIDS denialist who is short on facts and long on paranoia, Whitaker lays out his argument with the kind of precision and scientific grounding medical schools hope & pray they can impart to their students.
Whether Whitaker's contentions are completely right I cannot say; I just don't know the literature of psychiatry well enough. (A small quibble: I think he didn't portray Peter Kramer's excellent book Listening to Prozac fairly, but it's been a long time since I read that book.) But he's without doubt persuasive, and has written a book that anyone who is seriously interested in the broad sweep of modern psychiatry should read. Next up on my reading list is Dr. Dan Carlat's Unhinged; the utterly awesome Dr. Marcia Angell, the former Editor-in-Chief of The New England Journal of Medicine and author of The Truth About the Drug Companies, has a review of both books (as well as Irving Kirsch's The Emperor's New Drugs) in a recent New York Review of Books which can be found here.
Anyway, the point is simple: read this book.
c. Prior to her death, my only knowledge of Amy Winehouse was that she was a singer, and that her escapades with drug addiction were tabloid fodder. I had never listened to her music, but the comparison of her to Janis Joplin in the NYT obit, as well as the description of her as a jazz singer, caught my interest. I downloaded Frank and over the span of the next several days I heard her voice while driving to and from work, and suddenly shared in the collective frustration over a life that held such promise and exhibited such talent. I have not yet gotten around to listening to her signature album Back to Black, but I have become a fan.
Her place in music history is of course an open question, but even if I am blown away by Back to Black I think her troubles with addiction, which led to her decline and untimely death, will place her in that rank of singers whose talents we'll never really know. Joplin was the Times's point of comparison but I've spent some time thinking about Billie Holiday as I listen to her. Holiday gave more of her music to the world, surviving to 44 instead of Winehouse's 27, but she too represents the kind of talent that ventures too close to the flame. Match that against perhaps the greatest singer ever, Ella Fitzgerald, who took care of herself her entire life (she died from diabetic complications, not drugs or alcohol) and devoted it mostly to singing, and it's hard to listen to Holiday without some twinge of regret. Holiday in some way played the Charlie Parker to Fitzgerald's Dizzy Gillespie, the former dancing with demons on a nightly basis, the latter plugging away like a tortoise racing against a hare.
But Holliday, and Winehouse too, may have made the Faustian bargain of communing with the dark side in order to create their art, and it may not make sense at some level to shake our heads at their self-destructive recklessness. I love this clip of Holliday singing "My Man Don't Love Me" from a series that CBS television did called The Story of Jazz in 1957, two years before her death. (There's many things to love about this, actually: that one of the three major networks had a primetime series showcasing America's greatest and most serious artists; that Holliday is hardly the only important face in this ensemble, with a kind of Fania All Stars version of American jazz surrounding her like Coleman Hawkins, Lester Young, Gerry Mulligan and others; and that it's the last time Young & Holliday had a musical tete-a-tete before they both succumbed to their addictions.) The song is beautiful, but it's dark, jagged, and bloody; in short, it is the perfect song for Holliday at her peak. Would Winehouse have been able to step into that mix and take over for Lady Day? I think so. Would the First Lady of Song? I think not.
All of which is to say that Winehouse may have paid a price for her short-lived brilliance, but to judge that bargain as inherently wrong (or indeed, to understand it as anything other than a bargain) may be to fundamentally misunderstand her talents. I tell my med students when confonted with the peculiar vices of their patients: don't judge, just understand. Should we not do the same for Winehouse as her audience?
--br
The word "ownership" is a term we use in medicine and while it sounds rather paternalistic, I have a fondness for it, as it signifies a kind of special level of responsibility. When you "own" a patient, it means that you consider yourself to be the most important of a team of doctors & nurses, that the buck stops with you. And as I've said, during my training I never saw a group that took ownership more seriously than general surgeons and their subspecialties such as cardiothoracic, colorectal, & vascular. At the risk of redundancy, these folks are tough.
Only I've seen some weird things happening at my academic medical center as well as at my little community hospital over the past year or two, and this week while attending as a consultant I witnessed something that I found quite surprising, and I'm wondering if that cast-iron sense of ownership is eroding amongst that hardcore group. I was asked to see a patient about a pre-operative infectious issue before the patient was due to get a mitral valve--should the patient be on antibiotics & if so how long, does the surgery have to go on hold, that sort of question. When I finished the consult I had my team get on the phone to talk to the intern, whom they dutifully paged. Only the intern who answered wasn't the surgical intern, it was the medicine intern.
"Wait, this lady's on the medicine team?" I said in frank astonishment. The patient had been admitted to the hospital specifically to get a valve replacement; that's purely a surgical issue. She didn't have a lot of medical problems that required an internist to be her primary doc in the hospital. And yet, somehow, she was sitting there on a medicine team with the cardiothoracic docs serving as consultants.
It may be hard for laypeople to grasp why this is a bad idea, but suffice it to say that you manage patients differently based on the kind of training you've had, as well as the kind of patients you care for. Surgeons are better at taking care of patients undergoing surgery because, well, they do surgeries! And the surgical patient has a host of problems that internists don't encounter in the same way: fluid shifts, mostly, which don't sound like much but can be the difference between life and death if you misread the signals. This lady really did not belong on an internal medicine service, and search me as to why she was there.
This isn't an isolated case: in the recent past I've seen pancreatitis patients, diverticulitis patients, and cholecystitis patients all get turfed to medicine. Some of these are borderline calls and could be handled adequately either way; some are what I would consider clear-cut surgical patients, and I scratch my head when they are refused by the surgeon and sent to medicine (where I work, internal medicine does not have the luxury of refusing patients except in extreme circumstances).
Anyway, I'm happy to "own" such patients although I'm not sure that it's always in the patient's best interests for internal medicine doctors to be managing surgical cases. I'm also wondering if something's changed in the ethic of those surgeons whom I have held in such high esteem for so long.
b. Many months back I took my best shot at discussing a book before I had read it. The link, which can be found here, is a discussion of Anatomy of an Epidemic by Robert Whitaker, a book that had made a bit of a flap in the psychiatry community. Since I hadn't read the book, I did not venture to offer an opinion about it, but I wondered about how the reviews framed what appeared to be a startling hypothesis: namely, that psychiatric drugs have, for at least a generation, made patients who suffer from psychiatric disease worse on the whole. Was the book worth reading? was my simple question, and I concluded it was and that I'd get around to it as soon as I could.
Well, I did, and my initial reaction is wow. Whitaker's book goes to the core of psychiatry and takes a sledgehammer to it, and he makes one hell of a powerful case that there's nothing behind the curtain. This is not the work of a pseudoscientific idiot who is raging against the machine; Whitaker supports his thesis by citing reputable scientific sources, and does so quite thoroughly. Unlike Celia Farber, an AIDS denialist who is short on facts and long on paranoia, Whitaker lays out his argument with the kind of precision and scientific grounding medical schools hope & pray they can impart to their students.
Whether Whitaker's contentions are completely right I cannot say; I just don't know the psychiatric literature well enough. (A small quibble: I don't think he portrayed Peter Kramer's excellent book Listening to Prozac fairly, but it's been a long time since I read that book.) But he's without doubt persuasive, and has written a book that anyone seriously interested in the broad sweep of modern psychiatry should read. Next up on my reading list is Dr. Dan Carlat's Unhinged; the utterly awesome Dr. Marcia Angell, former editor-in-chief of The New England Journal of Medicine and author of The Truth About Drug Companies, has a review of both books (as well as Irving Kirsch's The Emperor's New Drugs) in a recent New York Review of Books, which can be found here.
Anyway, the point is simple: read this book.
c. Prior to her death, my only knowledge of Amy Winehouse was that she was a singer, and that her escapades with drug addiction were tabloid fodder. I had never listened to her music, but the comparison of her to Janis Joplin in the NYT obit, as well as the description of her as a jazz singer, caught my interest. I downloaded Frank and over the span of the next several days I heard her voice while driving to and from work, and suddenly shared in the collective frustration over a life that held such promise and exhibited such talent. I have not yet gotten around to listening to her signature album Back to Black, but I have become a fan.
Her place in music history is of course an open question, but even if I am blown away by Back to Black, I think her troubles with addiction, which led to her decline and untimely death, will place her in that rank of singers whose talents we'll never really know. Joplin was the Times's point of comparison, but I've spent some time thinking about Billie Holiday as I listen to her. Holiday gave more of her music to the world, surviving to 44 instead of Winehouse's 27, but she too represents the kind of talent that ventures too close to the flame. Match that against perhaps the greatest singer ever, Ella Fitzgerald, who took care of herself her entire life (she died of diabetic complications, not drugs or alcohol) and devoted it mostly to singing, and it's hard to listen to Holiday without some twinge of regret. Holiday in some ways played the Charlie Parker to Fitzgerald's Dizzy Gillespie, the former dancing with demons on a nightly basis, the latter plugging away like a tortoise racing against a hare.
But Holiday, and Winehouse too, may have made the Faustian bargain of communing with the dark side in order to create their art, and it may not make sense at some level to shake our heads at their self-destructive recklessness. I love this clip of Holiday singing "Fine and Mellow" ("my man don't love me...") from The Sound of Jazz, a program CBS television aired in 1957, two years before her death. (There are many things to love about this, actually: that one of the three major networks had a primetime program showcasing America's greatest and most serious artists; that Holiday is hardly the only important face in the ensemble, surrounded by a kind of Fania All Stars of American jazz--Coleman Hawkins, Lester Young, Gerry Mulligan, and others; and that it was the last time Young & Holiday had a musical tete-a-tete before they both succumbed to their addictions.) The song is beautiful, but it's dark, jagged, and bloody; in short, it is the perfect song for Holiday at her peak. Would Winehouse have been able to step into that mix and take over for Lady Day? I think so. Would the First Lady of Song? I think not.
All of which is to say that Winehouse may have paid a price for her short-lived brilliance, but to judge that bargain as inherently wrong (or, indeed, to understand it as anything other than a bargain) may be to fundamentally misunderstand her talents. I tell my med students, when they are confronted with the peculiar vices of their patients: don't judge, just understand. Should we not do the same for Winehouse as her audience?
--br
Tuesday, July 26, 2011
Free-Market Capitalism and Death Panels: The Musical
How long ago it seems. While we await the final negotiations in Washington to figure out some solution to the budgetary battles--which will likely produce a bill, endorsed by the President, that will be either "very right" or "extremely right" but will somehow be billed as "centrist"--we at the Billy Rubin Blog are feeling nostalgic tonight for those heady days of the summer of 2009, when a health care bill was slowly working its way through Congress.
Remember that? What passed for reasonable dialogue got hijacked by a very noisy rabble of Know Nothings, a group for whom the descriptions "willfully ignorant" and "anti-intellectual" are taken as praise, who screeched at town hall meetings and demanded that "government get its hands off my medicare". But the meme that took the cake at the time, the Chant Of Nitwits as it were, was that somehow the government was secretly planning to arrange "Death Panels," and the paranoia from the imbecilic mob sent the few remaining sensible Republican politicians running for cover and pandering to save their souls (as Chuck Grassley of Iowa did here).
Between the media, who largely buy into the myth of false equivalence--the notion that every story must have two equally valid sides--and thus reported on the protests without pointing out the basic stupidity of the protesters, and the politicians, too many of whom were too spineless to shout down the nonsense, the silliness carried the day. The Obama administration backpedalled in the public relations game, giving up ground to its political foes (sound familiar?), and the consequence was that the country ended up with two presents. The first was a not especially progressive healthcare bill. The second was the Tea Party.
Which brings us up to the present, more or less. So what ever happened to the Death Panels? Well, they never left, argues the massively awesome blog Health Care Renewal. HCR links to Bloomberg News reporter Peter Waldman's investigation into the for-profit hospices now littering the landscape. It makes for grisly reading. For instance, Waldman relates the story of former social worker Misty Wall, who alleges in a lawsuit against Gentiva Health Services, Inc. that she was "assigned to convince people who weren't dying that they were." (A spokesman for Gentiva said that the allegations predate Gentiva's ownership of the hospice at which Ms. Wall worked. She was fired from the hospice in 2005 for refusing to continue such practices.)
Perhaps even more troubling--and that is saying something--are the allegations that for-profit hospices gave "financial kickbacks" to "referral sources" (in English, that usually means money to doctors) and tied employee bonuses to "enrollment goals," which easily has the potential to induce some employees to move the goalposts a bit and steer patients into hospice inappropriately. How's that for a "Death Panel"?! No grim bureaucrats in Washington doling out the Number of the Beast, but rather a "Death for Dollars" scheme in which the most successful recruiters walk home with a tidy cash sum at the end of the day, with the corporation supervising it all.
One notes more than a touch of righteous indignation as HCR writes,
There has been a lot of blather from politicians in the US about "death panels" in debates about health care reform. Many such politicians seem worried that the US government has or will have death panels under the new health care reform legislation. We have criticized that legislation for not addressing many important health care problems. No one, however, has convincingly demonstrated how its provisions would convene "death panels."
I couldn't agree more.
Lest I be misunderstood: hospice has been a tremendous step forward in American medicine. It allows people to die in greater comfort and with greater dignity than before. We need hospice, which is precisely why we shouldn't sully it by making it the object of some corporation's greed. But if people don't want their government involved in their health care, the business of dying will be overseen by for-profit businesses, and the Death Panels will be convened in elegant boardrooms with oak tables, plush carpeting, and executives enjoying record salaries. Sound appealing?
--br
Saturday, July 16, 2011
The Budget "Crisis" and Medical Residencies
Among the more amusing tidbits of news to come out of the state government shutdown in Minnesota were pieces like this, in which certain voters expressed outrage over the stalemate and demanded that their elected officials just "get things done". "They could have talked more, but they get their feelings hurt a little bit, and act like a bunch of little kids," said one frustrated citizen.
Umm....no. What we witnessed in Minnesota, and what the media is playing up in Washington at this moment, is not merely a bunch of squabbling children, even if there are some childish antics involved. The problem, in Minny as in Washington, is that you have genuinely, truly divided government, with huge blocs in both parties holding irreconcilable views about the proper function and structure of government, and to expect them to arrive at agreements on operating costs totally misunderstands the ideology of these blocs. You can't simply expect them to "go and get it done" because there's no consensus on even the most basic roles of government.
Despite what some commentators have said about the current inter-party spats being just more of the same-old same-old, the different visions being fought over today really are more substantive than in any other political fight since the early 20th century. For instance, Richard Nixon and a Democratic-led Congress, while political foils, really did agree on the basics: yes, Nixon presided over some unpleasantries in Vietnam and Cambodia, and obviously sought to limit government in ways that Democrats didn't. But Nixon bought into the concept that government could play a role in preventing drag on the economy--so much so that he proposed a comprehensive health care bill not dramatically unlike the one the current President finally got passed, the hysteria over which led to the election of a House so radically different in philosophy from Nixon (Richard Nixon!) that he would have blushed. Given the budgetary concessions that President Obama has already put on the table, and his breathtaking capitulations since the arrival of the new Congress, the fact that the House "Hell No" caucus won't budge in the current debt-ceiling discussions is evidence enough that this is not just your ordinary political dust-up.
The point here is: if it really does come to pass that the US defaults because of the political stalemate, don't blame the politicians--blame the voters, who put Obama into office and then two years later not only seated an opposition party, but an opposition that would legislate against the sunrise if the President said "the sun will come out tomorrow" in his increasingly grating Annie-like naiveté. It doesn't make any sense to elect a Democrat like Obama (even one as willing to adopt right-wing talking points as he is) and then vote in Republicans further to the right than even George W. Bush could have dreamed of. At least the voters of my home state of Ohio were consistent when they seated a right-wing executive and legislature in the most recent elections, and now have a new budget crafted by people with a very particular view of the role of government in the lives of its people. Let's see how that one works out for you guys in the years to come; as Chrissie Hynde of The Pretenders once lamented, "Hey, way to go Ohio."
Anyway, this blog is geared toward commenting on such political machinations in relation to the world of medicine (with a jaundiced eye befitting its name), and needless to say the world of medicine is going to undergo serious revisions if the Tea Party really does get its way and something resembling the Ryan Medicare proposal (see here if you want to read warm fuzzies about it, and here if you prefer cold pricklies) becomes federal law. (Side note: I think there's a pretty good chance that our "Democratic" President is going to avert this "crisis" by eventually signing legislation that passes with either zero, or very few, Democratic party votes in the House.) And to that end, here are two stories (from the NYT and Boston radio station WBUR) discussing the huge impact that deep Medicare cuts are going to have on teaching hospitals.
To help lay readers understand the structure: Medicare is the government-run health insurance program for senior citizens. While you almost certainly knew this (though obviously some people are not too quick on the uptake), what you may not have realized is that Medicare is also responsible for financing postgraduate medical training in the US, which costs, give or take, a little over $6 billion per year--and a proposal that even Obama himself endorses would cut that amount by an estimated 60 percent. The feds don't pay residents directly, but rather pay the hospitals running the programs, which tend to be large academic medical centers. These places may also get hit by a shrinking NIH budget for research, as this November 2010 article on current House Whip Eric Cantor's philosophy suggests.
While places like New York and Boston are going to be hit disproportionately hard by major cuts to the residency training budget, you can bet that smaller places like Peoria, Illinois, with its 10 residencies affiliated with two local hospitals, will smart as well. In fact, since residency programs in smaller, more rural communities tend to serve as feeders of physicians who would otherwise not move to such places, the damage to residency training programs is going to reverberate well beyond the hospital parking lots, and has a good chance of doing so well into the future.
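Just to make the scale of that concrete, here's a quick back-of-the-envelope sketch in Python using nothing but the two figures cited above (the roughly $6 billion annual cost and the estimated 60 percent cut); these are approximations from the linked stories, not the proposal's own line items.

# Rough arithmetic on the figures cited in the linked stories:
# roughly $6 billion per year in Medicare funding for residency training,
# and a proposed cut of an estimated 60 percent.
gme_budget_per_year = 6.0e9    # approximate current annual funding
proposed_cut_fraction = 0.60   # estimated size of the proposed cut

cut_amount = gme_budget_per_year * proposed_cut_fraction
remaining = gme_budget_per_year - cut_amount
print(f"Annual cut:       ${cut_amount / 1e9:.1f} billion")
print(f"Annual remainder: ${remaining / 1e9:.1f} billion")

In other words, something on the order of $3.6 billion a year would disappear from teaching hospitals' budgets, leaving roughly $2.4 billion behind.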
Perhaps this will all work itself out and the changes will benefit everyone; perhaps all that extra money from the taxes that nobody seems to be paying will give taxpayers more money to afford...well, afford something. We shall see, though hope (© Senator Barack Obama, 2008) is becoming as thin as gruel. Dickens would appreciate the consistency.
Monday, July 11, 2011
Republican Primary Maneuvering and Hypocrisy About Science, Writ Large
I keep telling anyone who will listen that we had all better get used to the phrase "President Bachmann," as the Republican candidate increasingly looks like a not-so-improbable contender for the nomination. Extreme though she may be, she doesn't come with Mitt Romney's troubling baggage of being both Mormon and the former governor of a liberal state whose signature legislation was a health care law remarkably similar to what nearly all conservatives refer to as "Obamacare"; she doesn't have to resort to a two-step to explain why she worked for President Obama's administration, as Jon Huntsman does; she is a good deal more media savvy than former Alaska governor Sarah Palin; and she isn't Newt Gingrich. I'm no political expert, but in the age of Tea Party-dominated Republican primaries, I see her as having a legitimate shot, with her only substantive competition being former Minnesota Governor Tim Pawlenty, unless current Texas Governor Rick Perry gets into the mix. Maybe I'm misjudging Romney's chances, but I'd call Bachmann the favorite right now.
Representative Bachmann certainly has no fears about wading into controversy, and in doing so making herself a darling of the extreme right. Earlier in the week she became the first candidate to sign a pledge for the protection of marriage entitled "The Marriage Vow: A Declaration of Dependence on Marriage and Family," which as the NY Times noted in a blog post, puts her rivals--at least some of whom do not have such distinguished marital records themselves--into a tricky position. In a fairly short time, "The Marriage Vow" has managed to generate an uproar over whether or not it explicitly endorses banning pornography (it doesn't, as noted here); its not-so-subtle racism, as discussed here; and its religious fanaticism (see, for instance, here). In short, it highlights all of the qualities most commonly associated with the rightmost wing of the Republican Party, and quite possibly the group best positioned to put a candidate over the top for the 2012 nomination. And Bachmann got there first.
So let me pile on here and point out that one of its additional hypocrisies involves science: as part of the rationale for why the "Institution of Marriage in America is in great crisis," the Vow argues that "[the debasement of marriage continues due to an] anti-scientific bias which holds, in complete absence of empirical proof, that non-heterosexual inclinations are genetically determined, irresistible, and akin to other traits...as well as an anti-scientific bias that holds, against all empirical evidence, that homosexual behavior in particular, and sexual promiscuity in general, optimizes individual or public health." [my emphasis]
There's a lot to unpack in that pile of nonsense, but here's a start: whether "non-heterosexual inclinations" are indeed genetically determined is an open question, but to say that those who posit the theory are "anti-scientific" and are making such assertions in "complete absence of empirical proof" is patently false. The neuroanatomist Simon LeVay (author of the fascinating tome Queer Science--it proclaims its allegiance right on the cover!) pioneered studies on differences in brain structure between heterosexual and homosexual men, and while I'm skeptical of the results, or even of the meaning of the findings, there's no question that LeVay's work constitutes science--the empirical testing of hypotheses about the mechanisms of the world. He's hardly the only example, and the scientific literature is rife with tests, theories, and arguments about the origin and nature of human sexuality. Nobody's got a definitive answer, but the Vow's label of "anti-scientific" is really just tossing out a phrase to make itself seem respectable.
An additional yuck can be had from the fact that the author of The Vow, Bob Vander Plaats, is...well, typically anti-scientific in his fundamentalist Christianity! While running for the position of Lieutenant Governor of Iowa, Vander Plaats endorsed the teaching of "intelligent design" as an adjunct to evolution. As the redoubtable Tara Smith at the blog Aetiology points out, intelligent design isn't a scientific theory at all, something even some of its proponents realize. The casual disregard for critical thought appears to be part and parcel of the document, so a disregard for science shouldn't really be a surprise. Nor should it be a surprise that Bachmann immediately signed on to it.
--br
PS--today's Times has a fascinating article on how some schools (including Billy's medical alma mater, the University of Cincinnati!) have changed their admission interview strategies in the hopes of finding future doctors who are better team players than those who may have stellar grades but are arrogant & condescending (and who in being this way may foster poor communication leading to medical errors). I ran this past a senior colleague who works on a med school admissions committee and he seemed skeptical: "what you do in your life = what you say in a mini-interview; however, grades = long term commitment" was his quick response. Count me tentatively among those hopeful for the new system.
Sunday, June 5, 2011
Jack Kevorkian: Goodbye, Good Riddance
The news of Jack Kevorkian's death brought out a large number of laudatory comments in The New York Times (laudatory about the man rather than his death, natch). "I hope that one day the world will look back on the service Dr. Kevorkian provided and will be shocked and saddened to learn that he was ostracized and incarcerated for the practice of providing dignity and some control to those in the late stages of terminal illness," SteveBnh of Virginia wrote in a representative sample of the praise heaped on the crusader for physician-assisted suicide.
Count me among the ostracizers. As the warm comments from seemingly well-informed readers demonstrate, Kevorkian was widely perceived to be a fierce advocate for patients' rights, a promoter of death with dignity, and the victim of a hypocritical and vindictive profession hellbent on maintaining its Godlike power over patients. His trial, conviction, and imprisonment in 1999 for second-degree murder have the flavor of martyrdom, reinforcing the admiration of his followers and inviting comparisons to various legendary civil-rights activists.
In reality, Kevorkian was none of these things, but rather a creepy zealot obsessed with death who knew nothing about actual patient care. (I am not using the word "creepy" lightly; read on.) Although he trained at a bona fide medical school and thus was a "doctor" in the general sense of the term, his training and subsequent practice were in pathology, where his work involved autopsies and the analysis of human tissue on slides rather than actually taking care of living, breathing souls with joys and fears--making his public persona as "doctor" a bit misleading, as if he were the same as Marcus Welby, M.D. Kevorkian's nickname, "Doctor Death," didn't come from the notoriety he generated in the 1980s and '90s, but from perplexed and amused housestaff in his early days, a wry observation on his peculiar fixation on photographing patients' eyes at the precise moment of death. (Various blogs and websites supportive of Kevorkian state that this was because he wanted the profession to be able to pinpoint the moment of death so that resuscitation could be performed, or something to that effect. It's utter nonsense: even in the 1950s, which some might consider the Dark Ages by medical standards, there were EKGs, a considerably more precise tool for determining death than staring into people's eyes, which seems positively medieval. Whatever his stated justifications, his "death photography" was pure fetish.) Long before he took up physician-assisted suicide as his cause, he bounced from hospital to hospital, disturbing various medical staffs with his distinctly unconventional preoccupations.
He was praised for his compassion despite the fact that he had not taken care of living patients since his internship and had never received training of any kind in treating depression (common enough among the terminally ill), in palliative care, or in any of the diseases he claimed to treat. His choices reflected this very poor training: among the 130 or more cases in which he was the prescriber of death, several involved people who had no terminal illness and were not suffering, such as Janet Adkins, who had recently been diagnosed with Alzheimer's disease but, aside from mild memory loss, was otherwise in reasonably good health.
Even more disturbing were the reports of the death of Judith Curren, a 43-year-old woman who not only had no clear-cut underlying disorder but had reportedly been a victim of domestic violence. These are not the only such cases, but even the inclusion of these two suggests at best a sloppiness of method, and at worst a murderous instinct hidden under the guise of medical concern for suffering. ("How could I have known?" was Kevorkian's retort when confronted with the news of the messy life of the Curren family. Perhaps if his only acquaintance with them had not been through a questionnaire--if he had cared for Judith Curren in a legitimate medical practice for several years--such surprises wouldn't have popped up.)
In short, Dr. Kevorkian-the-Caring was a total media fabrication. He was a murderer, and if anything was treated gently by the justice system.
Other, far more responsible doctors have spoken out in favor of physician-assisted suicide--doctors who personally knew and ministered to their patients before taking that terrifying power into their hands and helping patients end their lives, doctors who gave such power its proper due, arriving at that moment only after slow and careful deliberation, wholly unlike Dr. Kevorkian's quickie-in-a-Volkswagen butchery. Perhaps the most famous of these doctors is Timothy Quill, a practicing doc in New York whose challenge to that state's ban on physician-assisted suicide was ultimately decided by the US Supreme Court; the court ruled 9-0 against Dr. Quill. Even Quill, as forceful an advocate for physician-assisted suicide as could be, found Kevorkian's behavior troubling, saying that he "is very much on the edges of what ordinary doctors do."
I have heard Timothy Quill speak on two occasions and found him an eloquent man whose concerns are ultimately for the health and happiness of his patients. That said, I still believe that physician-assisted suicide is a terrible idea. Ironically, the two times I attended lectures by Dr. Quill bracket a dramatic shift in my opinion on the subject: the first was before I started medical school, when I was strongly in favor of his ideas; the second was a few years ago, after I had undergone more than a decade of medical training, by which point my attitude had changed considerably.
Generally, the discussions about physician-assisted suicide revolve around two themes. The first is what bizzyblog refers to as "the euthanasia theme song," or having a life that is not worth living. The second deals with the scenario of unbearable and unremitting suffering, which supporters of physician-assisted suicide regard as the ultimate justification for the practice. This is often where the accusations of "doctors playing God" come in--docs are supposedly so invested in keeping people alive that they consider it a personal affront to allow patients to die. (In general, my experience has been the opposite, notwithstanding the rather regrettable final few days of my father's life, in which we attempted in vain for several days to have his life support removed after an episode of sudden cardiac death. Based on what I've seen, it's usually the doctors, and not the families, who see little or no value and much suffering in store for families and patients with terminal illness requiring intubation, PEG tubes and the like, and who often have difficulty explaining to families the benefits of "letting nature take its course.")
It turns out that, not unlike the public misperceptions of Dr. Kevorkian, the picture of frequent, unremitting suffering of the terminally ill is for the most part a fiction. Curiously, over the past 20 or so years attitudes about physician-assisted suicide and euthanasia haven't changed a great deal among the general public or physicians in general (those numbers are different from one another, but stable over time). However, one group in which attitudes have changed significantly is among oncologists, who have had a steep drop in approval for those practices.
Why? It's hard to say with complete certainty, but it's likely because oncologists are more aware of, and tuned into, the multiple ways in which terminally ill patients can remain pain-free and finish their lives with meaning and dignity, to paraphrase the article in the link. A telling statistic: among oncologists, surgical oncologists, who deal with the long-term care of their patients far less often, were twice as likely to support physician-assisted suicide as their medical oncologist colleagues. In other words, the further one gets from the actual practice of death and dying, the greater the fear of pain and suffering--among laypeople and physicians alike--and the greater the support for physician-assisted suicide.
As for judging whether a life is worth living, that one is much more straightforward: physicians have no business judging the worth of any of their patients' lives. That is playing God.
It is not hard to kill oneself in the US: over 30,000 people do it each year, in ways ranging from relatively peaceful to gruesome. And while there are technically laws on the books against suicide and no Supreme Court recognition of a "right to suicide," the practice is tacitly accepted: suicides are allowed to be buried with everyone else, and the state does not seize their assets. So given the ease with which people can commit suicide, the debate about physicians being involved in the taking of lives has increasingly had an odd ring to me. Why must physicians be present to sanctify the process? It has the feel of approval-seeking, and docs shouldn't be in the business of approving or disapproving anything about a patient's lifestyle, except maybe smoking. Even then: maybe.
Doctors cannot take lives; it's not our job and should never be. If we administer comfort medications to a suffering patient that may, as a side effect, hasten death, that is more than acceptable. If we withdraw tubes or machines that "artificially" keep patients alive, that's fine as well. But there's a big difference between maintaining a morphine drip and injecting a bolus of potassium chloride into a patient. The former is a drug with legitimate medical uses; the latter is never used under any conditions except to kill. Morphine is an everyday drug in hospices across the US; the potassium bolus was a "medication" unique to Dr. Kevorkian. May there never be another one like him.
--br
Thursday, May 5, 2011
Racism--The Gift That Keeps On Giving
My philosophy of bedside medicine is founded on trying to be aware of how patients view the world well before I bring that quarter-million-dollar scientific education to bear upon their problems. I've met docs who can form a longer differential diagnosis, and do it faster, than I can, but I've also observed a lot of docs of that ilk who don't have a clue about how to use their stellar clinical acumen to explain to patients what they are thinking. This inability frequently leads to all sorts of problems for patients and their families, either because they are anxious and don't understand what is being told to them, or because they don't understand instructions and are afraid to ask out of a desire not to appear stupid. Being simultaneously intimidating and clueless can have lethal consequences, even when a doc is just as smart as House.
Put yourself into their shoes, I tell my med students; think about how they're feeling when you're talking to them before you start with the technical talk. Imagine, I say to them, how it feels to be lying there, usually half-naked, while an army of White Coats stands at your bedside, looking down at you, speaking in a language that sounds vaguely like English but makes no sense at all. (See this clip here, for instance, from the British TV miniseries The Singing Detective, which illustrates this in simultaneously hilarious and exquisitely painful detail. I mean it--follow that link, team! Not only is it a fantastic, biting satire of academic medical culture, it features a much younger Michael "Dumbledore 2" Gambon as well as Imelda "Dolores Umbridge" Staunton. And though made in the 1980s, the medical language--indeed, the medications!--hasn't changed much.) One thing Billy always does when seeing his hospitalized patients is to sit in a chair at the patient's eye level, and if no chair is present, he gets down on his knees to communicate. It's symbolic, but I think it means a lot.
Being exposed and vulnerable is a universal condition of patienthood, but there are other factors that influence the physician-patient interaction as well, and race is one of the biggies. I don't believe that every interaction between black and white Americans has to "devolve" to race by necessity, but I sure as hell think that it's something to be aware of when you step into a room as a white doc with an African-American patient. Neither I nor any of my ancestors was ever involved in any overtly racist act, but that doesn't mean that I shouldn't at least be cognizant of the fact that African-Americans are often leery of white docs, and not unjustifiably so (more on this in a moment).
Race was on my mind today as I listened to a fascinating lecture about blood transfusions. Most Americans have at least a vague understanding that there are four major blood types (A, B, AB, and O), each of which can be described as "Rh positive" or "Rh negative"--thus eight blood types in total. In order to have a blood transfusion safely, these types must be matched to prevent immune responses. The blood types are distributed across all races, making "universal" transfusion a generally easy process. (Though some readers may recall an episode of M*A*S*H taking the topic of race and blood head-on, when a white GI needing surgery tells the surgeons not to give him "any of that black blood"--no doubt reflecting the attitudes of real people in the '50s, when the scene was set, as well as the early '70s, when the episode was filmed. Plus there's the cultural convention in many Asian communities, especially in Japan, that the ABO blood types correlate with personality, and Japanese are even more keenly aware of their blood types than Americans are of, say, our zodiac signs. So we haven't eliminated this brand of nonsense from humanity just yet, but we're getting there.)
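For the programmatically inclined, here is a minimal sketch, mine rather than anything from the lecture, of the textbook ABO/Rh matching rule just described: donor red cells are acceptable only if they carry no antigen the recipient's immune system would treat as foreign. The function names are invented for illustration, and as the next paragraph explains, real cross-matching can be considerably more involved.

def antigens(blood_type):
    """Return the red-cell antigens implied by a label like 'O-' or 'AB+'."""
    abo, rh = blood_type[:-1], blood_type[-1]
    ags = set() if abo == "O" else set(abo)  # type O carries neither the A nor the B antigen
    if rh == "+":
        ags.add("RhD")                       # Rh-positive cells carry the D antigen
    return ags

def compatible(donor, recipient):
    """Donor cells are acceptable only if they carry no antigen the recipient lacks."""
    return antigens(donor) <= antigens(recipient)

# O-negative is the classic "universal donor"; AB-positive the "universal recipient."
assert compatible("O-", "AB+")
assert not compatible("A+", "O-")  # the A and RhD antigens would provoke an immune response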
It turns out that the ABO and Rh+/- system is just the beginning, and the immune response to blood is a good deal more complicated than this. But in the majority of cases the model of eight blood types is sufficient to save people with the magic of transfusions. (This assumes that people have ready access to blood, which they often don't, for instance, in many northern Mexican communities, as described here, but in the US that's almost never a problem.) The exceptions, where patients have to have a host of additional cross-typing done, are frequently found among patients of African ancestry, in particular among those who suffer from that quintessentially African disease, sickle cell anemia. Whether this is due to the inherent genetic variation in Africans, or whether it's a consequence of sickle cell disease itself, is not fully clear to me, but from a clinician's standpoint it hardly makes any difference. Patients who have requirements beyond the eight common types need to be cross-matched for special blood, sometimes very rare blood indeed. One patient under discussion in the lecture today essentially had only one person in the entire world known to have blood that she could accept. And since that donor (a close relative) was still a child, that was not much help.
Anyway, the likelihood that you'll get a match for that special blood is increased if you have a large pool of donors who more closely resemble you genetically. Meaning: from your ethnic or racial group. So Africans and African-Americans--who constitute a major if not the major group of people with these rare reactions to blood transfusion--are the ones most in need of blood donors of African ancestry. And there's the rub, because African-Americans are far less likely to donate blood, at roughly 25 to 50 percent of the rate among whites. This is very much unlike the situation with thalassemias, a group of transfusion-dependent diseases that sometimes afflict people of African descent but are more often seen in Caucasians, who have a much larger pool of donors from which adequate matches can be found, and so there are far fewer transfusion crises and dilemmas.
Why is this? Well, if you don't think that American history, filled with its pernicious racism, has its fetid hands on our modern system of blood donation, then you're missing just as much as the brilliant-but-clueless docs I've described above. Leave slavery and all of its ill consequences aside for a moment--just consider the treatment of African-Americans at the hands of doctors, some of whom were employed by the Federal Government of the United States, as they were prevented from being cured of syphilis or bombarded with radiation without their awareness or consent. Or the story of Henrietta Lacks, whose ultimately fatal cancer cells became the first human cells cultured outside of the body, remaining the workhorse cells for biomedical scientists to this day and a major source of commerce in the scientific world, worth billions of dollars, while her descendants struggle to afford health insurance.
Think about that while you swallow the statistic on the poor rates of blood donation among African Americans when your next sickler needs a transfusion. Does blood donation cause syphilis? No, of course not--but would you trust a system that had treated your brothers and sisters like this for generations? As the Tuskegee Syphilis Study Legacy Committee Report wrote in 1996: "the [study] continues to cast its long shadow on the contemporary relationship between African Americans and the biomedical community."
Indeed. It has not only done that; it has cruelly deprived some members of that very community of the lifeblood they so desperately need. This is why doctors need to be as aware of history as they are of science.
--br
Wednesday, April 27, 2011
Government and Medicine: Legislative and Judicial Follies
I try--I repeat, I try--to construct eloquent blog posts as often as I write them, taking care to choose my words as I tiptoe through the minefields of cyberspastic hyperbole and vitriol. That said, I could do little more than utter "blech" at this news piece that the Massachusetts State House has voted overwhelmingly, as part of an "economic development" bill, to repeal a ban on gift-giving from pharmaceutical companies to physicians that had passed in 2008. For those who have been trapped in solid ice since, say, the mid-1930s and the heyday of the Henry Cabot Lodges and William Morgan Butlers: in Massachusetts, the House belongs to the Democratic Party. So how does a measure that seems once again to encourage the wink-wink, nudge-nudge relationship between docs and the pill-pushers--particularly when the same legislative body clamps down on labor's bargaining rights in an effort to rein in spending--get passed?
Amazingly, the answer appears to lie in...the dining & entertainment lobby. According to Garrett Bradley (D) of Hingham, the sponsor of the measure, the ban stifles business, hurting convention centers and "restaurants where companies typically hosted physician events and dinners." (NB--the quote is from the article, not a direct quote from Mr. Bradley, though I doubt he'd quibble if the line were attributed to him.) Never mind that this sector of the Massachusetts economy appears to be doing reasonably well, with an increase in overall revenue compared to last year; the measure's backers appear to be saying that it's perfectly fine if a payola-style arrangement is in place, as long as the palms continue to be greased and the filet mignon gets served with the Cabernet.
Blogger-doc Dan Carlat has already staked out the Swiftian rhetorical territory with a delightful skewering of the follies, leaving me and others to play the straight guy. So here goes my best effort: no self-respecting physician compromises the health of his or her patients by allowing him- or herself to be manipulated by claptrap. There is ample evidence, lucidly-but-luridly described in such books as The Truth About Drug Companies and White Coat, Black Hat, that gift-giving induces an attitude of reciprocation regardless of the actual quality of the product, and that drug reps know this and seize on the vanity of physicians to play them for dupes. Drug companies, however beneficial their societal effects may (or may not always) be, have a responsibility to shareholders, whose primary or sole interest is the generation of wealth. To anyone in any state of mind other than abject denial, that is an objective in direct conflict with the care of patients. Thus, doctors cannot accept gifts of any kind from those whose job it is to sell drugs.
A related theme is being played out in the judicial branch of government, as the US Supreme Court hears arguments on a Vermont law that bars the commercial use of physician prescription patterns. Based on the early returns, and noting previous Court decisions that take a fairly broad view of "free-speech" rights (at least if you are a corporation), it appears that the law is destined to be overturned. I can't claim to be a legal expert and thus won't even begin to take a crack at the wrangling over the First Amendment, other than to note a certain puzzlement at what passes for "free speech" these days among the Court's "strict constructionist" wing. Did the Founding Fathers really have the selling of a doctor's prescription habits in mind when crafting the First Amendment? I'm thinking not, but I await the peals of derision from my philosophico-legal foils (and loyal readers!) such as Ted Frank, a conservative maverick (I'm not sure "conservative" is the right word for him; I'm certain "maverick" is) who ranks as the second smartest person I have ever had the pleasure of knowing. (And in case anyone might misunderstand, I'm not implying that I occupy the top spot; I doubt I crack even the top 75, and I don't have that many friends.)
Regardless of the legal principles at stake, Chief Justice John Roberts made his contribution to the follies by appearing to frame this as an argument about "restricting the flow of information to doctors," as silly a line as can be uttered in so august a house as SCOTUS. How would withholding prescription data from drug companies prevent them from making their pitch for their drug? How does this even remotely "restrict the flow of information"? The naivete exhibited by the Chief Justice is pretty remarkable (and shared, unsurprisingly, by Justices Scalia and Kennedy). The "it's all data, it's all protected by the First Amendment" argument seems to work only when big business benefits, not the other way around. The recipe for Coke is just data, too; somehow I don't think the Coca-Cola Corporation considers that to be something your local Joe can just barge on in and demand. Pray tell, what's the difference?
A hat-tip to Carey Goldberg at WBUR's Common Health Blog, especially for her humoring me in my response of "blech"!
--br
Wednesday, March 23, 2011
Fukushima Daiichi, and the Perception of Radiation Risk
Evolutionarily speaking, we are as a species hardwired to analyze risk based on information that's directly in front of us--immediately accessible to our five senses. We're designed not to trust food that smells funny, to calculate instantly how far we should stay from large cats capable of having us for a snack, and to do a host of other things that were very useful when we were eking out a living in Olduvai Gorge.
But we live in the 21st century, and nowadays our ability to perceive and estimate risk is hampered by the fact that many of today's risks are abstract and require a reasonably sophisticated understanding of statistics. Take, for example, a recent discussion found in the Paper Of Record about income distribution in the United States. True--it's not really a round table about "risk" per se, unless you consider radically unequal wealth distribution to be a risk to democracy, as Supreme Court Justice Louis Brandeis did when he said that "we can have democracy in this country, or we can have great wealth concentrated in the hands of a few, but we can't have both." Still, the NY Times roundtable was remarkable in that all of the contributors, whether approaching the issue from the left or the right, agreed that most Americans had vastly underestimated how much wealth is held by relatively few. In particular, a study by Michael Norton and Dan Ariely found that not only do Americans believe wealth distribution to be significantly more equitable than it actually is, but they would prefer it to be even more equitable than what they (wrongly) perceive.
If this isn't a classic example of what George W. Bush would call "misunderestimation," then it's not clear what is, and moreover, it highlights the difficulties people have in making accurate estimates about things like the distribution of wealth in a hugely complex society: the information simply cannot be found by opening your eyes and looking around. In medicine, we see this all the time: people are often terrified of exotic diseases that pose little threat to them while remaining utterly blithe about the daily assaults on their bodies--frequently self-inflicted--that are much more likely to send them six feet under. To wit: drinking, smoking, eating poorly and not exercising.
The recent events at the Fukushima Daiichi nuclear power plant have been a case study in this process of risk assessment, and not altogether surprisingly, we haven't done well collectively in harmonizing our level of panic to the actual threat that the reactors pose. Despite a good number of depressing news stories, some cataloging evasive action by non-Japanese governments, it is far from clear how huge an impact the nuclear accident is going to have. While it is already comparable to the Three Mile Island accident in 1979, and is not (yet) as catastrophic as the Chernobyl accident of 1986, the question still remains: just how dangerous is it? Though the story is far from over there, with events taking dramatic swings in short periods, the short answer is something like: dangerous, but not as dangerous as you think. Not nearly as dangerous as you think.
That point is neatly illustrated, both in sound and visual format, in this news story from Adam Ragusea of WBUR (Boston), and is described as the "Dread-to-Risk ratio" by Andrew Revkin of the Dot Earth blog at NYT. Both pieces have the same useful graphic to give you some sense of the relative levels of radiation that we're talking about. Live within 50 miles of a nuclear power plant for one full year? That will give you about 0.1 microSieverts (uSv) of radiation. (What a Sievert is, is a longer discussion, but we'll just shorthand it here and say that it's some relative value of radiation, and that the higher the number, the more dangerous it gets.) One flight from New York to LA buys you about 400 times that amount (40 uSv). That's not even the round trip! A standard chest x-ray, meanwhile, is worth about 20 uSv. But a mammogram is a whopper, clocking in at 3 milliSieverts--thus about 150 "standard" x-rays and just under forty roundtrip flights from NY to LA. A CT scan can be worth almost twice the amount of the mammogram (5.8 mSv).
(In case I've lost some people on this micro/milli distinction, you need 1000 "micros" to make 1 "milli." I'm going to flip back and forth but will point it out when I do.)
So how do these numbers stack up to the nuclear disasters? If you lived within 10 miles of the Three Mile Island plant during the accident and didn't make a run for it, the total dose of radiation you received was 80 microSieverts--far less than one mammogram. By contrast, one area near the Fukushima plant recorded a total dose in one day of 3.6 milliSieverts: less than a CT but more than a mammogram, though of course we're only talking about one day's worth of radiation. With Chernobyl, the radiation levels fluctuated wildly both in time and place so making a general statement about the radiation is essentially impossible, but had you been moved by some weird spirit to take a stroll on the grounds just last year, about 25 years after the accident, you would have gotten two mammograms' worth of radiation for your troubles: 6 milliSieverts. I won't reproduce the pic here out of respect for copyright but highly recommend it to anyone with the time; Ragusea of WBUR translates this into a tone equivalent, and the radiation from TMI is a blip, while the sound for "mammogram" is substantially longer.
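For anyone who wants to check the arithmetic, here is a back-of-the-envelope sketch of my own, using only the rounded figures quoted above, so treat it as an illustration rather than authoritative dosimetry. It converts everything to microSieverts and expresses each dose as so many chest x-rays or mammograms.

# All figures are the approximate values quoted above, converted to microSieverts (uSv).
USV_PER_MSV = 1000  # 1 milliSievert = 1,000 microSieverts

doses_usv = {
    "one year living within 50 mi of a nuclear plant": 0.1,
    "one-way flight, NY to LA":                         40,
    "chest x-ray":                                      20,
    "mammogram":                                        3.0 * USV_PER_MSV,
    "CT scan":                                          5.8 * USV_PER_MSV,
    "Three Mile Island, within 10 mi (total)":          80,
    "Fukushima hot spot, one day":                      3.6 * USV_PER_MSV,
    "stroll at Chernobyl, 25 years later":              6.0 * USV_PER_MSV,
}

chest_xray = doses_usv["chest x-ray"]
mammogram = doses_usv["mammogram"]
for label, usv in doses_usv.items():
    print(f"{label:48s} {usv:7.1f} uSv  "
          f"= {usv / chest_xray:6.1f} chest x-rays "
          f"= {usv / mammogram:5.2f} mammograms")

Run it and the point of the graphic jumps out: the entire dose from living next to Three Mile Island during the accident amounts to a small fraction of a single mammogram.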
I draw two conclusions from all of this. First, while the troubles at Fukushima are by no means trivial, and for that matter aren't yet finished, I think it's a bit premature to write the obituary for nuclear power. In terms of accidents, it's not nearly as dangerous as most people suppose. The problem with nukes, in the TMI and Chernobyl age as well as today, is what to do with the radioactive waste generated by the plants rather than the risks they pose vis-à-vis accidents. Second, we should try to minimize mammograms! Do only the amount that will help save lives, and not one more after that. This was the logic behind the recently revised US Preventive Services Task Force guidelines, which recommended against routine mammograms for women under 50 and advised biennial ones for those over. Despite this entirely sensible approach--based on good research with a careful eye toward the risk/benefit ratio of the radiation, it should be noted--there were howls of indignation from people purportedly speaking for women, accusing the very bureaucrats who issued the new recs of being female-hostile, or something like that.
--br
(NB--the first draft of this version, which snuck out prematurely, posted some incorrect calculations with respect to x-rays, mammograms, and NY-LA flights. The corrected version is now present.)