Although Hamilton succeeded in conducting the first randomized clinical trial on the effects of bloodletting, he failed to publish his results. In fact, we know of Hamilton’s research only because his documents were rediscovered in 1987 among papers hidden in a trunk lodged with the Royal College of Physicians in Edinburgh. Failure to publish is a serious dereliction of duty for any medical researcher, because publication has two important consequences. First, it encourages others to replicate the research, which might either reveal errors in the original research or confirm the result. Second, publication is the best way to disseminate new research, so that others can apply what has been learned.
Lack of publication meant that Hamilton’s bloodletting trial had no impact on the widespread enthusiasm for the practice. Instead, it would take a few more years before other medical pioneers, such as the French doctor Pierre Louis, conducted their own trials and confirmed Hamilton’s conclusion. These results, which were properly published and disseminated, repeatedly showed that bloodletting was not a lifesaver but a potential killer. In light of these findings, it seems highly likely that bloodletting was largely responsible for the death of George Washington.
Unfortunately, because these anti‑bloodletting conclusions were contrary to the prevailing view, many doctors struggled to accept them and even tried their best to undermine them. For example, when Pierre Louis published the results of his trials in 1828, many doctors dismissed his negative conclusion about bloodletting precisely because it was based on the data gathered by analysing large numbers of patients. They slated his so‑called ‘numerical method’ because they were more interested in treating the individual patient lying in front of them than in what might happen to a large sample of patients. Louis responded by arguing that it was impossible to know whether or not a treatment might be safe and effective for the individual patient unless it had been demonstrated to be safe and effective for a large number of patients: ‘A therapeutic agent cannot be employed with any discrimination or probability of success in a given case, unless its general efficacy, in analogous cases, has been previously ascertained…without the aid of statistics nothing like real medicine is possible.’
And when the Scottish doctor Alexander MacLean advocated the use of medical trials to test treatments while he was working in India in 1818, critics argued that it was wrong to experiment with the health of patients in this way. He responded by pointing out that avoiding trials would mean that medicine would for ever be nothing more than a collection of untested treatments, which might be wholly ineffective or dangerous. He described medicine practised without any evidence as ‘a continued series of experiments upon the lives of our fellow creatures.’
Despite the invention of the clinical trial and regardless of the evidence against bloodletting, many European doctors continued to bleed their patients, so much so that France had to import 42 million leeches in 1833. But as each decade passed, rationality began to take hold among doctors, trials became more common, and dangerous and useless therapies such as bloodletting began to decline.
Prior to the clinical trial, a doctor decided on a treatment for a particular patient by relying on his own prejudices, or on what he had been taught by his peers, or on his misremembered experiences of dealing with a handful of patients with a similar condition. After the advent of the clinical trial, doctors could choose their treatment for a single patient by examining the evidence from several trials, perhaps involving thousands of patients. There was still no guarantee that a treatment that had succeeded during a set of trials would cure a particular patient, but any doctor who adopted this approach was giving his patient the best possible chance of recovery.
Lind’s invention of the clinical trial had triggered a gradual revolution that gained momentum during the course of the nineteenth century. It transformed medicine from a dangerous lottery in the eighteenth century into a rational discipline in the twentieth century. The clinical trial helped give birth to modern medicine, which has enabled us to live longer, healthier, happier lives.
Evidence‑based medicine
Because clinical trials are an important factor in determining the best treatments for patients, they have a central role within a movement known as evidence‑based medicine. Although the core principles of evidence‑based medicine would have been appreciated by James Lind back in the eighteenth century, the movement did not really take hold until the mid‑twentieth century, and the term itself did not appear in print until 1992, when it was coined by David Sackett at McMaster University, Ontario. He defined it thus: ‘Evidence‑based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.’
Evidence‑based medicine empowers doctors by providing them with the most reliable information, and therefore it benefits patients by increasing the likelihood that they will receive the most appropriate treatment. From a twenty‑first‑century perspective, it seems obvious that medical decisions should be based on evidence, typically from randomized clinical trials, but the emergence of evidence‑based medicine marks a turning point in the history of medicine.
Prior to the development of evidence‑based medicine, doctors were spectacularly ineffective. Patients who recovered from disease usually did so despite the treatments they had received, not because of them. But once the medical establishment had adopted such simple ideas as the clinical trial, progress became swift. Today the clinical trial is routine in the development of new treatments, and medical experts agree that evidence‑based medicine is the key to effective healthcare.
However, people outside the medical establishment sometimes find the concept of evidence‑based medicine cold, confusing and intimidating. If you have any sympathy with this point of view, then, once again, it is worth remembering what the world was like before the advent of the clinical trial and evidence‑based medicine: doctors were oblivious to the harm they caused by bleeding millions of people, indeed killing many of them, including George Washington. These doctors were not stupid or evil; they merely lacked the knowledge that emerges when medical trials flourish.
Recall Benjamin Rush, for example, the prolific bleeder who sued for libel and won his case on the day that Washington died. He was a brilliant, well‑educated and compassionate man, who was responsible for recognizing addiction as a medical condition and for realizing that alcoholics lose the capacity to control their drinking behaviour. He was also an advocate for women’s rights, fought to abolish slavery and campaigned against capital punishment. However, this combination of intelligence and decency was not enough to stop him from killing hundreds of patients by bleeding them to death, and encouraging many of his students to do exactly the same.
Rush was fooled by his respect for ancient ideas coupled with the ad hoc reasons that were invented to justify the use of bloodletting. For example, it would have been easy for Rush to mistake the sedation caused by bloodletting for a genuine improvement, unaware that he was draining the life out of his patients. He was also probably confused by his own memory, selectively remembering those of his patients who survived bleeding and conveniently forgetting those who died. Moreover, Rush would have been tempted to attribute any success to his treatment and to dismiss any failure as the fault of a patient who in any case was destined to die.