We’re all doomed: that was my internal, head-shaking reaction as I read this very unsettling book.
Ben Goldacre is a British physician with a wicked sense of humour. In Bad Pharma he takes aim at the unholy and uneasy alliance of pharma companies, regulatory bodies, journal editors and even physicians who conspire to sell drugs to patients. If the drugs are efficacious and safe, so much the better. If not: we have a pill for that.
Goldacre credibly presents the strategies deployed to pass off bad results as good. The first eye-opening point for me was: why are drugs always compared against placebo, instead of against the “standard of care”? Why is the accepted criterion “This drug is better than nothing” rather than “This drug is better than what already exists”? I’m reminded of the birth control pill “Yaz”, which was at the centre of a marketing frenzy in the early 2000s. I asked a relative of mine who is a physician for her opinion of the pill, and she very wisely said she had no reason to prescribe it: it was similar to a drug already on the market and offered no obvious new benefit. She preferred to prescribe the old, off-patent birth control pills whose effects and side effects were well known.
How prescient that was. Yaz was a slick marketing job but it was never clear why this pill was good for women. As it turns out, it wasn’t: 20 young women died of blood clots that were attributed to Yaz.
Another strategy: conduct multiple trials, and only publish the ones with favourable results. Goldacre cites reboxetine (an antidepressant): three small published studies, covering 507 patients, showed that reboxetine was as good as any other drug. Goldacre himself had prescribed it. In 2010 a group of researchers unearthed unpublished data from several other trials, covering 1,657 patients, which showed that patients on reboxetine did *worse* than those on other drugs.
The results published in the academic literature looked perfectly credible; only the unpublished data revealed that patients on reboxetine were also more likely to suffer side effects.
Goldacre also talks about a problem that infects all of science, not only medicine: the case of the missing data. Journals are keen to publish the latest hot results, but what about the non-results? What about publishing things that *don’t* work? Goldacre cites the example of an anti-arrhythmic drug called lorcainide, which was tested in roughly 100 men in 1980. Nine of the 48 men on the drug died, compared with 1 of the 47 on placebo. The trial was stopped immediately, but the results were never published. In the 1980s, it became standard to prescribe anti-arrhythmic drugs to all patients who’d had heart attacks. This practice almost certainly killed people, and it might have been avoided had the lorcainide data been published rather than shelved.
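To put those mortality numbers in perspective, here is my own back-of-the-envelope check (not something from the book): a minimal sketch, assuming Python with scipy is available, running a Fisher exact test on the 9-of-48 versus 1-of-47 figures quoted above.

```python
# Hypothetical sanity check of the lorcainide mortality imbalance quoted above
# (9 of 48 deaths on the drug vs 1 of 47 on placebo), using a Fisher exact test.
# Assumes scipy is installed; the figures are the ones given in the review.
from scipy.stats import fisher_exact

deaths_drug, survivors_drug = 9, 48 - 9
deaths_placebo, survivors_placebo = 1, 47 - 1

table = [[deaths_drug, survivors_drug],
         [deaths_placebo, survivors_placebo]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio ~ {odds_ratio:.1f}, p ~ {p_value:.3f}")
# The p-value comes out well below 0.05 (roughly 0.015): an imbalance this
# large is very unlikely to be chance, which is why shelving the data mattered.
```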
I could go on and on. There is an entire chapter on marketing, with a case study on a new affliction called “female sexual dysfunction”. Pfizer, which was gearing up to launch Viagra for women, tried hard in the 1990s to make “female sexual dysfunction” happen through seemingly innocuous education campaigns and dubious surveys that concluded 43% of women suffer from this formerly unknown disease. Procter & Gamble, which was developing testosterone patches to boost female libido, planned a $100 million marketing push to raise awareness of “FSD” and even got a teaching program accredited by the American Medical Association. When P&G’s product failed to get a license, the accredited teaching program vanished with it. As Goldacre says, if the AMA believed that “FSD” was a serious medical problem affecting 43% of women, then this teaching program should have been viewed as a valuable resource worth preserving. Instead, when it was clear that there was no money to be made, the educational resources were killed along with the drug.
Journals are presented as especially complicit, as are physicians who take large sums of drug company money without disclosing their financial interests. Clinical trials are supposed to be registered in advance with pre-defined endpoints; Goldacre makes a credible argument that drug companies should not be allowed to move the goalposts mid-trial, but should be held to testing the originally defined set of outcomes. Journals have the power to enforce this, but generally decline to do so.
And it goes on, and on, for 300 pages. It’s all bad news. We’re all doomed.