• Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques which are flawed by design, in such a way that they exaggerate the benefits of treatments. (loc. 43-45)
  • Drug companies around the world have produced some of the most amazing innovations of the past fifty years, saving lives on an epic scale. But that does not allow them to hide data, mislead doctors and harm patients. (loc. 63-65)
  • Industry-funded trials are more likely to produce a positive, flattering result than independently-funded trials. (loc. 152-53)
  • 85 per cent of the industry-funded studies were positive, but only 50 per cent of the government-funded trials were. That’s a very significant difference. (loc. 158-59)
  • In fact, it is so deep-rooted that even if we fixed it today – right now, for good, forever, without any flaws or loopholes in our legislation – that still wouldn’t help, because we would still be practising medicine, cheerfully making decisions about which treatment is best, on the basis of decades of medical evidence which is – as you’ve now seen – fundamentally distorted. But there is a way ahead. (loc. 213-16)
  • In the published data, reboxetine was a safe and effective drug. In reality, it was no better than a sugar pill, and worse, it did more harm than good. As a doctor I did something which, on the balance of all the evidence, harmed my patient, simply because unflattering data was left unpublished. (loc. 240-42)
  • Evidence is the only way we can possibly know if something works – or doesn’t work – in medicine. We proceed by testing things, as cautiously as we can, in head-to-head trials, and gathering together all of the evidence. (loc. 250-51)
  • When we don’t share the results of basic research, such as a small first-in-man study, we expose people to unnecessary risks in the future. (loc. 293-94)
  • It may sound like a simple idea, but systematic reviews are extremely rare outside clinical medicine, and are quietly one of the most important and transgressive ideas of the past forty years. (loc. 350-51)
  • Francis Bacon explained in 1620 that we often mislead ourselves by only remembering the times something worked, and forgetting those when it didn’t. (loc. 443-44)
  • There was one clear thing we should do about this: start a registry of all clinical trials, demand that people register their study before they start, and insist that they publish the results at the end. That was 1986. Since then, a generation later, we have done very badly. (loc. 470-72)
  • The most current systematic review on publication bias, from 2010, from which the examples above are taken, draws together the evidence from various fields. Twelve comparable studies follow up conference presentations, and taken together they find that a study with a significant finding is 1.62 times more likely to be published. For the four studies taking lists of trials from before they started, overall, significant results were 2.4 times more likely to be published. Those are our best estimates of the scale of the problem. They are current, and they are damning. (loc. 524-28)
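To make the scale of that skew concrete, here is a minimal sketch in Python. It treats the "2.4 times more likely" figure as a simple ratio of publication probabilities and assumes a baseline publication rate for non-significant trials; both are simplifying assumptions (the review itself reports ratios of this kind as odds ratios), so the output is purely illustrative rather than a figure from the book.

```python
# Illustrative only: treats the "2.4 times more likely" figure as a simple
# ratio of publication probabilities, and assumes a baseline publication
# rate for non-significant trials. Both numbers are assumptions.

p_publish_nonsig = 0.30            # assumed publication rate for non-significant trials
ratio = 2.4                        # significant results ~2.4x more likely to be published
p_publish_sig = min(1.0, p_publish_nonsig * ratio)

sig, nonsig = 50, 50               # imagine 100 trials, half significant, half not
published_sig = sig * p_publish_sig
published_nonsig = nonsig * p_publish_nonsig

share_positive = published_sig / (published_sig + published_nonsig)
print(f"Published significant trials:     {published_sig:.0f}")
print(f"Published non-significant trials: {published_nonsig:.0f}")
print(f"Share of literature that looks positive: {share_positive:.0%}")
# With these assumptions, about 71% of the published record looks positive,
# even though only 50% of the trials actually were.
```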
  • In fact, the cost of proving that a finding was wrong is vastly greater than the cost of making it in the first place, because you need to run the experiment many more times to prove the absence of a finding, simply because of the way that the statistics of detecting weak effects work; and you also need to be absolutely certain that you’ve excluded all technical problems, to avoid getting egg on your face if your replication turns out to have been inadequate. These barriers to refutation may partly explain why it’s so easy to get away with publishing findings that ultimately turn out to be wrong. (loc. 590-95)
  • We should give more incentives to academics for publishing negative results; but we should also give them more opportunity. (loc. 603-4)
  • We will see that many of the very people and organisations we would have expected to protect patients from the harm inflicted by missing data have, instead, shirked their responsibilities; and worse than that, we will see that many of them have actively conspired in helping companies to withhold data from patients. (loc. 679-81)
  • Patients are specifically told when they sign up to participate that the data will be used to inform future decisions. If this isn’t true, and the data can be withheld at the whim of a researcher or a company, then the patients have been actively lied to. (loc. 690-91)
  • It is entirely normal for researchers and academics conducting industry-funded trials to sign contracts subjecting them to gagging clauses which forbid them to publish, discuss or analyse data from the trials they have conducted, without the permission of the funder. (loc. 695-97)
  • So that, in total, is why I regard the ICMJE, the FDA and the EU’s claims to have addressed this problem as ‘fake fixes’. In fact, they have done worse than fail: they have given false reassurance that the problem has been fixed, false reassurance that it has gone away, and they have led us to take our eyes off the ball. For half a decade now, people in medicine and academia have talked about publication bias as if it was yesterday’s problem, discovered in the 1990s and early 2000s, and swiftly fixed. (loc. 923-26)
  • The output of a regulator is often simply a crude, brief summary: almost a ‘yes’ or ‘no’ about side effects. This is the opposite of science, which is only reliable because everyone shows their working, explains how they know that something is effective or safe, shares their methods and their results, and allows others to decide if they agree with the way they processed and analysed the data. (loc. 1052-55)
  • The field of missing data is a tragic and strange one. We have tolerated the emergence of a culture in medicine where information is routinely withheld, and we have blinded ourselves to the unnecessary suffering and death that follows from this. The people we should have been able to trust to handle all this behind the scenes – the regulators, the politicians, the senior academics, the patient organisations, the professional bodies, the universities, the ethics committees – have almost all failed us. (loc. 1619-23)
  • At this moment you might be thinking: what kind of reckless maniac gives their only body over for an experiment like this? I’m inclined to agree. (loc. 1701-2)
  • Participants generally have few economic alternatives, especially in the US, and are frequently presented with lengthy and impenetrable consent forms, which are hard to navigate and understand. (loc. 1727-28)
  • This raises several serious problems, the first of which is ethical. It’s obviously wrong to put patients in a trial where half of them will be given a placebo, if there is a currently available option which is known to be effective, because you are actively depriving half of your patients of treatment for their disease. (loc. 2075-77)
  • When you come to get your drug approved to go on the market, regulators will often permit you to show proof of effectiveness only on surrogate outcomes. (loc. 2147-48)
  • For many decades, for example, the FDA’s performance was measured by how many drugs it managed to approve in each calendar year. This led to a phenomenon known as the ‘December Effect’, whereby a very large proportion of the year’s approvals were rushed through in a panic during the last few weeks around Christmas. (loc. 2163-65)
  • This personal testimony was in all likelihood a combination of the placebo effect and the natural fluctuation in symptoms that all patients experience. (loc. 2253-54)
  • Accelerated approval is not used to get urgent drugs to market for emergency use and rapid assessment. Follow-up studies are not done. These accelerated approval programmes are a smokescreen. (loc. 2271-72)
  • One thing is clear from all the stories in this book: drug companies respond rationally to incentives, and when those incentives are unhelpful, so are drug companies. (loc. 2276-77)
  • The most expensive doctors in the world don’t know any better than anyone else, since any trained person can critically read the best systematic reviews on a given drug and find out what it will do to your life expectancy; and there is no hack, no workaround, for this broken system. (loc. 2377-79)
  • Database studies give us information on what drugs do in real-world patients, under real-world conditions. (loc. 2473-74)
  • When it comes to the secrecy of regulators, it is clear that there is an important cultural issue that needs to be resolved. I’ve spent some time trying to understand the perspective of public servants who are clearly good people, but still seem to think that hiding documents from the public is desirable. The best I can manage is this: regulators believe that decisions about drugs are best made by them, behind closed doors; and that as long as they make good decisions, it is OK for these to then be communicated only in summary form to the outside world. (loc. 2516-20)
  • A regulator is deciding whether it’s in the interests of society overall that a particular drug should ever be available for use in its country, even if only in some very obscure circumstance, such as when all other drugs have failed. Doctors, meanwhile, are making a decision about whether they should use this drug right now, for the patient in front of them. Both are using the safety and efficacy data to which they have access, but they both need access to it in full, in order to make their very different decisions. (loc. 2524-28)
  • Regulators frequently approve drugs that are only vaguely effective, with serious side effects, on the off-chance that they might be useful to someone, somewhere, when other interventions aren’t an option. (loc. 2531-32)
  • Drugs are approved on weak evidence, showing no benefit over existing treatments, and sometimes no benefit at all. This gives us a market flooded with drugs that aren’t very good. We then fail to collect better evidence on them once they’re available, even when we have legislative power to force companies to do better trials, and even when they’ve promised to do so. Lastly, side-effects data is gathered in a slightly ad hoc fashion, behind closed doors, with secret documents and ‘risk management plans’ that are hidden from doctors and patients for no good reason. The results of this safety monitoring are communicated inconsistently, through mechanisms that are uninformative and are therefore used infrequently, and which are, in any case, vulnerable to spectacular delays imposed by drug companies. (loc. 2606-11)
  • Drug companies should be required to provide data showing how their new drug compares against the best currently available treatment, for every new drug, before it comes onto the market. (loc. 2622-23)
  • Regulators and healthcare funders should use their influence to force companies to produce more informative trials. (loc. 2625-26)
  • All information about safety and efficacy that passes between regulators and drug companies should be in the public domain, as should all data held by national and international bodies about adverse events on medications, unless there are significant privacy concerns on individual patient records. (loc. 2633-35)
  • We should aim to create a better market for communicating the risks and benefits of medications. The output of regulators is stuffy, legalistic and impenetrable, and reflects the interests of regulators, not patients or doctors. (loc. 2641-42)
  • We need more trials. Wherever there is true uncertainty about which treatment is best, we should simply compare them, see which is best at treating a condition, and which has worse side effects. (loc. 2645-47)
  • So, fraud: it happens, it’s not clever, it’s just criminal, and is perpetrated by bad people. But its total contribution to error in the medical literature is marginal when compared to the routine, sophisticated and – more than anything – plausibly deniable everyday methodological distortions which fill this book. (loc. 2719-21)
  • As we have seen, patients in trials are often nothing like real patients seen by doctors in everyday clinical practice. Because these ‘ideal’ patients are more likely to get better, they exaggerate the benefits of drugs, and help expensive new medicines appear to be more cost effective than they really are. (loc. 2725-27)
  • Drugs are often compared with something that’s not very good. We’ve already seen this in companies preferring to test their drugs against a dummy placebo sugar pill that contains no medicine, as this sets the bar very low. But it is also common to see trials where a new drug is compared with a competitor that is known to be pretty useless; or with a good competitor, but at a stupidly low dose, or a stupidly high dose. (loc. 2785-87)
  • Trials are often brief, as we have seen, because companies need to get results as quickly as possible, in order to make their drug look good while it is still in patent, and owned by them. This raises several problems, including ones that we have already reviewed: specifically, people using ‘surrogate outcomes’, like changes in blood tests, instead of ‘real-world outcomes’, like changes in heart attack rates, which take longer to emerge. But brief trials can also distort the benefits of a drug simply by virtue of their brevity, if the short-term effects are different to the long-term ones. (loc. 2818-22)
  • If you stop a trial early, or late, because you were peeking at the results as it went along, you increase the chances of getting a favourable result. This is because you are exploiting the random variation that exists in the data. It is a sophisticated version of the way someone can increase their chances of winning in a coin toss by using this strategy: ‘Damn! OK, best of three…Damn! Best of five?…Damn! OK, best of seven…’ (loc. 2844-47)
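A minimal Monte Carlo sketch of that coin-toss logic, assuming a two-arm trial with no true difference at all, a conventional |z| > 1.96 test, and four interim "peeks" at which the trial stops as soon as the result looks significant. The trial size and the number of looks are invented for illustration.

```python
# Minimal Monte Carlo sketch of why "peeking" inflates false positives.
# Two arms, NO true difference; we test at p<0.05 (|z| > 1.96) at several
# interim looks and stop as soon as the result looks "significant".
# All numbers (trial size, number of looks) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_per_arm = 200                     # final size of each arm
looks = [50, 100, 150, 200]         # interim analyses after this many patients per arm
n_sims = 20_000

def significant_at_any_look(rng):
    a = rng.normal(0.0, 1.0, n_per_arm)   # treatment arm, true effect = 0
    b = rng.normal(0.0, 1.0, n_per_arm)   # control arm
    for n in looks:
        diff = a[:n].mean() - b[:n].mean()
        se = np.sqrt(2.0 / n)              # known variance of 1 in both arms
        if abs(diff / se) > 1.96:
            return True                    # "stop early, declare a win"
    return False

false_positives = sum(significant_at_any_look(rng) for _ in range(n_sims))
print(f"False-positive rate with four peeks: {false_positives / n_sims:.1%}")
# A single, pre-specified analysis would give ~5%; with repeated peeking
# the rate climbs well above that (around 12-13% with these settings).
```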
  • Sometimes a trial can be prolonged for entirely valid reasons, but sometimes, prolonging a trial – or including the results from a follow-up period after it – can dilute important findings, and make them harder to see. (loc. 2890-92)
  • A small trial is fine, if your drug is consistently life-saving in a condition that is consistently fatal. But you need a large trial to detect a small difference between two treatments; and you need a very large trial to be confident that two drugs are equally effective. (loc. 2947-49)
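A rough sketch of the sample-size arithmetic behind that point, using the standard normal-approximation formula for comparing two proportions at 80 per cent power and a two-sided alpha of 0.05. The event rates are invented; the point is how quickly the required numbers grow as the difference between treatments shrinks.

```python
# Rough sample-size sketch for comparing two proportions (standard normal
# approximation, 80% power, two-sided alpha of 0.05). The event rates are
# illustrative assumptions, not figures from the book.
from math import sqrt, ceil

def n_per_arm(p1, p2, alpha_z=1.96, power_z=0.84):
    p_bar = (p1 + p2) / 2
    num = (alpha_z * sqrt(2 * p_bar * (1 - p_bar))
           + power_z * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Large effect: 20% vs 10% event rate -> a couple of hundred patients per arm
print(n_per_arm(0.20, 0.10))   # ~199
# Small effect: 12% vs 10% -> several thousand per arm
print(n_per_arm(0.12, 0.10))   # ~3,837
```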
  • Blood tests are easy to measure, and often respond very neatly to a dose of a drug; but patients care more about whether they are suffering, or dead, than they do about the numbers printed on a lab report. (loc. 2973-75)
  • Sometimes, the way you package up your outcome data can give misleading results. For example, by setting your thresholds just right, you can turn a modest benefit into an apparently dramatic one. And by bundling up lots of different outcomes, to make one big ‘composite outcome’, you can dilute harms; or allow freak results on uninteresting outcomes to make it look as if a whole group of outcomes are improved. (loc. 2987-90)
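A small sketch of the threshold trick, with invented numbers: two groups of simulated symptom scores differ by a modest average amount, but a conveniently placed "responder" cut-off turns that into a much more quotable gap in response rates.

```python
# Sketch of how a threshold can dress up a modest benefit. Two groups of
# simulated symptom scores differ by a small average amount, but defining
# "responder" with a conveniently placed cut-off makes the difference in
# responder rates look far more impressive. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
drug = rng.normal(50.5, 10, 5000)      # mean improvement 50.5, SD 10
placebo = rng.normal(48.0, 10, 5000)   # mean improvement 48.0, SD 10

print(f"Mean difference: {drug.mean() - placebo.mean():.1f} points")   # ~2.5 points

cutoff = 49.0                           # a "responder" threshold placed between the two means
print(f"Responders on drug:    {(drug > cutoff).mean():.0%}")           # ~56%
print(f"Responders on placebo: {(placebo > cutoff).mean():.0%}")        # ~46%
# A ~2.5-point average shift becomes a ~10-percentage-point gap in
# "response rates", which can then be sold as over 20% more responders.
```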
  • There is a terrifying reality revealed by this study: rumours, oversimplifications and wishful thinking can spread through the academic literature, just as easily as they do through any internet discussion forum. (loc. 3032-33)
  • Sometimes patients leave a trial altogether, often because they didn’t like the drug they were on. But when you analyse the two groups in your trial you have to make sure you analyse all the patients assigned to a treatment. Otherwise you overstate the benefits of your drug. (loc. 3035-37)
  • You’re going to use the results of a trial to inform your decision about whether to ‘give someone some tablets’, not ‘force some tablets down their throat compulsorily’. So you want the results to be from an analysis that looks at people according to what they were given by their doctor, rather than what they actually swallowed. (loc. 3049-51)
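A hedged sketch of why analysing people "as randomised" (intention to treat) matters, rather than analysing only those who stayed on treatment (per protocol). It assumes a drug with no real benefit whose worst-off patients are more likely to drop out, and it simplifies by assuming outcomes are still measured for everyone; all numbers are invented.

```python
# Sketch of intention-to-treat vs per-protocol analysis. Assume the drug has
# no real benefit, but patients doing badly on it are more likely to stop
# taking it and drop out. Analysing only the people who stayed on treatment
# flatters the drug; analysing everyone as randomised does not.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
outcome_drug = rng.normal(0, 1, n)      # true effect: none
outcome_placebo = rng.normal(0, 1, n)

# Patients with poor outcomes on the drug are more likely to drop out.
dropout = (outcome_drug < -0.5) & (rng.random(n) < 0.6)

itt = outcome_drug.mean() - outcome_placebo.mean()
per_protocol = outcome_drug[~dropout].mean() - outcome_placebo.mean()

print(f"Intention-to-treat difference: {itt:+.2f}")           # about zero
print(f"Per-protocol difference:       {per_protocol:+.2f}")  # clearly positive (~+0.26)
```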
  • If you measure a dozen outcomes in your trial, but cite an improvement in any one of them as a positive result, then your results are meaningless. Our tests for deciding if a result is statistically significant assume that you are only measuring one outcome. By measuring a dozen, you have given yourself a dozen chances of getting a positive result, rather than one, without clearly declaring that. Your study is biased by design, and is likely to find more positive results than there really are. (loc. 3062-65)
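The arithmetic behind "a dozen chances", assuming the outcomes are independent (real outcomes are usually correlated, so this is an upper-bound illustration): the chance of at least one spuriously significant result comes out at around 46 per cent.

```python
# If each of 12 independent outcomes has a 5% chance of a spuriously
# "significant" result, the chance of at least one false positive is far
# higher than 5%. (Correlated outcomes would bring this figure down.)
p_false_positive = 0.05
n_outcomes = 12
p_at_least_one = 1 - (1 - p_false_positive) ** n_outcomes
print(f"{p_at_least_one:.0%}")   # ~46%
```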
  • To be clear: if you switch your pre-specified primary outcome between the beginning and the end of your trial, without a very good explanation for why you’ve done so, then you’re simply not doing science properly. Your study is broken by design. It should be a universal requirement that all studies report their pre-specified primary outcome as the primary outcome. This should be enforced by all journals, and things should have been done this way since trials began. (loc. 3120-23)
  • If your drug didn’t win overall in your trial, you can chop up the data in lots of different ways, to try and see if it won in a subgroup: maybe it works brilliantly in Chinese men between fifty-six and seventy-one. This is as stupid as playing ‘Best of three…Best of five…’ And yet it is commonplace. (loc. 3145-47)
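The same logic as the multiple-outcomes calculation above, this time as a small simulation of post-hoc subgroup fishing: a drug with no true effect, sliced into twenty arbitrary subgroups. The number and size of the subgroups are invented.

```python
# Sketch of subgroup fishing: a drug with NO true effect, tested post hoc in
# 20 arbitrary subgroups. By chance alone, one or more subgroups will often
# come out "significant". Subgroup counts and sizes are invented.
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_subgroups, n_per_subgroup = 5_000, 20, 100

hits = 0
for _ in range(n_sims):
    for _ in range(n_subgroups):
        drug = rng.normal(0, 1, n_per_subgroup)       # no true effect
        placebo = rng.normal(0, 1, n_per_subgroup)
        z = (drug.mean() - placebo.mean()) / np.sqrt(2 / n_per_subgroup)
        if abs(z) > 1.96:
            hits += 1                                  # at least one "winning" subgroup
            break

print(f"Trials with at least one 'winning' subgroup: {hits / n_sims:.0%}")  # ~64%
```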
  • You can draw a net around a group of trials, by selectively quoting them, and make a drug seem more effective than it really is. When you do this on one use of one drug, it’s obvious what you’re doing. But you can also do it within a whole clinical research programme, and create a confusion that nobody yet feels able to contain. (loc. 3211-14)
  • Sometimes, trials aren’t really trials: they’re viral marketing projects, designed to get as many doctors prescribing the new drug as possible, with tiny numbers of participants from large numbers of clinics. (loc. 3243-44)
  • At the end of your trial, if your result is unimpressive, you can exaggerate it in the way that you present the numbers; and if you haven’t got a positive result at all, you can just spin harder. (loc. 3301-2)
  • Research has shown that if you present benefits as a relative risk reduction, people are more likely to choose an intervention. (loc. 3328-29)
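A worked example of the same (invented) result framed three ways, relative risk reduction, absolute risk reduction and number needed to treat, which is why the choice of framing matters so much:

```python
# The same (invented) trial result framed three ways. A drop from 2 events
# per 100 patients to 1 per 100 is a "50% relative risk reduction", a
# 1-percentage-point absolute reduction, and a number-needed-to-treat of 100.
control_risk = 0.02
drug_risk = 0.01

rrr = (control_risk - drug_risk) / control_risk
arr = control_risk - drug_risk
nnt = 1 / arr

print(f"Relative risk reduction: {rrr:.0%}")   # 50%
print(f"Absolute risk reduction: {arr:.1%}")   # 1.0%
print(f"Number needed to treat:  {nnt:.0f}")   # 100
```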
  • First they looked in the abstracts. These are the brief summaries of an academic paper, on the first page, and they are widely read, either because people are too busy to read the whole paper, or because they cannot get access to it without a paid subscription (a scandal in itself). (loc. 3364-66)
  • Perhaps the greatest problem is that many of those who read the medical literature implicitly assume that such precautions are taken by all journal editors. But they are wrong to assume this. There is no enforcement for any of what we have covered, everyone is free to ignore it, and so commonly – as with newspapers, politicians and quacks – uncomfortable facts are cheerfully spun away. (loc. 3388-91)
  • So, we have established that there are some very serious problems in medicine. We have badly designed trials, which suffer from all kinds of fatal flaws: they’re conducted in unrepresentative patients, they’re too brief, they measure the wrong outcomes, they go missing if the results are unflattering, they get analysed stupidly, and often they’re not done at all, simply because of expense, or lack of incentives. These problems are frighteningly common, both for the trials that are used to get a drug on the market, and for the trials that are done later, all of which guide doctors’ and patients’ treatment decisions. (loc. 3403-7)
  • It feels as if some people, perhaps, view research as a game, where the idea is to get away with as much as you can, rather than to conduct fair tests of the treatments we use. (loc. 3407-9)
  • Instead, we have occasional, small, brief trials, in unrepresentative populations, testing irrelevant comparisons, measuring irrelevant outcomes, with whole trials that go missing, avoidable design flaws, and endless reporting biases that only persist because research is conducted chaotically, for commercial gain, in spuriously expensive trials. The poor-quality evidence created by this system harms patients around the world. And if we wanted, we could fix it. (loc. 3634-38)
  • This education is expensive, and the state is unwilling to pay, so it is drug companies that pay for talks, tutorials, teaching materials, conference sessions, and whole conferences, featuring experts who they know prefer their drug. (loc. 3713-14)
  • Marketing, therefore, exists for no reason other than to pervert evidence-based decision-making in medicine. (loc. 3734-35)
  • So we pay for products, with a huge uplift in price to cover their marketing budget, and that money is then spent on distorting evidence-based practice, which in turn makes our decisions unnecessarily expensive, and less effective. (loc. 3739-41)
  • Direct-to-consumer drug advertising has been banned in almost all industrialised countries since the 1940s, for the simple reason that it works: adverts distort doctors’ prescribing behaviour – by design – and increase costs unnecessarily. The USA and New Zealand (along with Pakistan and South Korea) changed their minds in the early 1980s, and permitted a resurgence of this open marketing. (loc. 3751-54)
  • I’ll rephrase that for something that’s coming later in this chapter: a lot of people have been convinced that they’re patients. (loc. 3783-84)
  • So the evidence shows that adverts change behaviour, and they change it for the worse. (loc. 3784-85)
  • When you take a step back from pharmaceutical industry marketing, it is simply a process whereby patients pay money to drug companies, in order for them to produce biased information, which then distorts treatment decisions, making them less effective. (loc. 3798-99)
  • It’s patients and the public who pay for the industry’s expensive marketing campaigns. (loc. 3807)
  • The story of the serotonin hypothesis for depression, and its enthusiastic promotion by drug companies, is part of a wider process that has been called ‘disease-mongering’ or ‘medicalisation’, where diagnostic categories are widened, whole new diagnoses are invented, and normal variants of human experience are pathologised, so they can be treated with pills. (loc. 3918-20)
  • Patient groups perform a vital and admirable role: they bring patients together, disseminate information and support, and can help to lobby on behalf of people with the condition they represent. (loc. 4047-48)
  • The cost of manufacturing these drugs is often a tenth of the price for which they are sold; we pay high prices in part for marketing (some of which goes directly to the patient groups); and when we spend money on one thing, we can’t spend it on something else. (loc. 4104-6)
  • Repeatedly, we come back to the same circle: we pay high prices for drugs; a quarter of what we pay goes on marketing; our money is then spent on things like patient groups; who in turn insist that we should pay very high prices for these drugs, undermining the very groups, like NICE, that try to determine the best choices for patients overall. (loc. 4107-9)
  • So, drugs are used more after their advertising programmes start, and less when they stop. Doctors who recognise the advert for a drug are more likely to prescribe it. (loc. 4120-22)
  • Econometric models – as far as any mortal can follow them through – suggest that marketing has more influence on drug-usage patterns than the publication of new evidence, and so on. (loc. 4122-23)
  • Am I cherry-picking? The best current systematic review is free to read, well worth the time, and found twenty-four similar studies. Overall, it found that only 67 per cent of the claims in adverts are supported by a systematic review, a meta-analysis or a randomised controlled trial. (loc. 4147-49)
  • This is naïve arrogance. From the most current systematic review, there have been twenty-nine studies looking at the impact of drug rep visits. Seventeen of those twenty-nine studies found that doctors who see drug reps are more likely to prescribe the promoted drug (six had mixed results, the rest showed no difference, and none showed a drop in prescribing). Doctors who see drug reps also tend to have higher prescribing costs, and are less likely to follow best-practice prescribing guidelines. (loc. 4174-78)
  • Since most drug reps cover a number of doctors, and aim to see each one every three months or so, this level of monitoring and refutation is fairly easy to arrange. They also have flash-cards or iPad shows, with the company branding, key words about their drug, and misleading graphs. Sometimes these graphs will play the same games that newspapers and political pamphlets do: a vertical axis that doesn’t start at zero, for example, exaggerating a modest difference. But sometimes they will be smarter: a graph that shows a huge difference on a bar chart between people having the rep’s drug, for example, and people on another treatment, but where the ‘other treatment’, on close examination, is something rubbish. (loc. 4254-58)
  • Social scientists writing on the culture of drug reps suggest that by giving gifts, they become part of the social landscape; and also that doctors develop an unconscious sense of obligation, a debt to be repaid, especially when stronger relationships are built through social events. (loc. 4268-70)
  • Don’t see drug reps! If you’re a doctor, or a prescribing nurse, or a medical student, don’t see drug reps. The evidence shows that they will influence your practice, and that you are wrong to believe that they won’t. (loc. 4324-26)
  • Ban drug reps from your clinic or hospital. Drug reps increase costs and work against evidence-based medicine. (loc. 4327)
  • Encourage people to declare all gifts and hospitality to their patients. (loc. 4335)
  • Ban drug reps from your medical school. If you’re a medical student, and you believe, as I think I’ve shown, that drug reps are harmful, you could move to ban them from educational activities. (loc. 4339-40)
  • Train medical students and doctors about the dangerous influence drug reps can have on medical practice. To my mind, this is not a political act, but rather a legitimate part of training in evidence-based medicine. (loc. 4347-49)
  • So, the person conducting a study, analysing the data, writing a paper, steering it into the hands of a journal, and even writing your medical textbook, may not be quite who you imagine. (loc. 4507-9)
  • Is there a solution? Yes: it’s a system called ‘film credits’, where everyone’s contribution is simply described at the end of the paper: ‘X designed the study, Y wrote the first draft, Z did the statistical analysis,’ and so on. (loc. 4552-53)
  • In reality, the systems used by journals to select articles are brittle, and vulnerable to exploitation. (loc. 4593-94)
  • A quarter of the pharmaceutical industry’s revenue is spent on marketing, twice as much as it spends on research and development, and this all comes from your money, for your drugs. We pay 25 per cent more than we need to, an enormous extra mark-up in price, so that tens of billions of pounds can be spent every year producing material that actively confuses doctors, and undermines evidence-based medicine. This is a very odd state of affairs. (loc. 4708-11)
  • Journals should publish all advertising revenue from each individual drug company annually, and for each individual issue. (loc. 4712-13)
  • Editors should declare their own conflicts of interest, funding sources if they are working academics, stocks, and so on. (loc. 4716-17)
  • Doctors need to learn about new drugs all the time, but we leave them to get on with it by themselves. (loc. 4726-27)
  • It was established that the most senior doctors in the profession were receiving money to give talks that were, in effect, promotional, under the guise of educational activity; it was established that this distorted content changed prescribing behaviour; and then we just left it alone. (loc. 4786-88)
  • Doctors around the world – except in Norway – are taught which drugs are best by the drug companies themselves. The content is biased, and that’s why companies pay for it. For decades people have stood up, shown that the content is biased, written reports against it, demonstrated that weak guidelines fail to police it; and still it continues. (loc. 4857-59)
  • In general, the most common approach to conflict of interest is that it should be declared, rather than outlawed, and there are two reasons for pursuing this policy. Firstly, we hope it will allow the reader to decide whether someone is biased; and secondly, it is hoped that it might change behaviour. (loc. 4959-61)
  • Evidence in medicine is not an abstract academic preoccupation. Evidence is used to make real-world decisions, and when we are fed bad data, we make the wrong decision, inflicting unnecessary pain and suffering, and death, on people just like us. (loc. 5162-64)
  • So, if we’re to make any sense of the mess that the pharmaceutical industry – and my profession – has made of the academic literature, then we need an amnesty: we need a full and clear declaration of all the distortions, on missing data, ghostwriting, and all the other activity described in this book, to prevent the ongoing harm that they still cause. There are no two ways about this, and there is no honour in dodging the issue. (loc. 5308-11)
  • It’s hard to imagine a betrayal more elaborate, or more complete, across so many institutions and professions. This is a story of pay-offs, of course, but more than that, it’s a story of complacency, laziness, banal self-interest and people feeling impotent. You have been failed by the people at the very top of my profession, for decades now, on matters of life and death, and as with the banks, we’re suddenly discovering a terrifying reality. Nobody took responsibility, nobody was in control, but everybody knew something was wrong. (loc. 5375-79)
  • Selling ineffective sugar pills is not a meaningful policy response to the dangerous regulatory failure in the pharmaceutical industry. (loc. 5409-10)