
But although placebos have many benefits, they are also surrounded by ethical issues.

In essence, a placebo is a sham treatment, often little more than a sugar pill. When sick patients are given one in a trial, they may miss out on vital treatment and actually get worse.

For example, between 1932 and 1972, the US Public Health Service left 399 poor black men with syphilis under the impression that they were receiving treatment – without actually providing any – just to see what would happen. Sadly, they only got worse, and the government didn’t even apologize until 1997.

Flaws in the design of scientific studies can massively affect the results.

We tend to place a lot of trust in the results of medical trials. Why shouldn’t we? As it turns out, some evidence suggests that they might not always be fair tests.

For example, some trials don’t report how participants were randomized into the treatment group or the non-treatment group.

In every medical trial there are two groups of patients with a specific disorder: one receives the treatment and the other doesn’t. This approach allows researchers to meaningfully test the effectiveness of the drug.

But not all participants are equal! For example, some participants are known as heartsinks: these patients constantly complain about nonspecific symptoms that never improve, and they are more likely to drop out or not respond to treatment.

If the next available place in the trial is in the treatment group, an experimenter who wants a positive outcome might decide that this heartsink shouldn’t participate in the trial – with the result that the treatment is tested only on those with a greater chance of recovery.

This has serious consequences: unclear randomization in patient selection can overstate the efficacy of the treatment by 30 percent or more.

For example, one study of homeopathic treatment for muscle soreness in 42 women showed positive results. However, because the study didn’t describe its method of randomization, we can’t be certain that the trial was fair.
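To see how this bias works mechanically, here is a minimal simulation with entirely invented numbers: the "drug" has zero real effect, yet steering poor-prognosis patients out of the treatment arm still manufactures an apparent benefit.

```python
import random

random.seed(0)

def recovers(prognosis):
    # Toy model: recovery depends only on the patient's prognosis;
    # the "drug" itself does nothing (true effect = 0).
    return random.random() < prognosis

def run_trial(biased):
    treated, control = [], []
    for _ in range(10_000):
        prognosis = random.uniform(0.2, 0.8)
        arm = random.choice(["treat", "control"])
        # Unclear randomization: quietly steer "heartsink" patients
        # (poor prognosis) out of the treatment arm.
        if biased and arm == "treat" and prognosis < 0.4:
            arm = "control"
        (treated if arm == "treat" else control).append(recovers(prognosis))
    # Difference in recovery rates between the two arms.
    return sum(treated) / len(treated) - sum(control) / len(control)

fair = run_trial(biased=False)
rigged = run_trial(biased=True)
print(f"fair trial effect:   {fair:+.3f}")
print(f"rigged trial effect: {rigged:+.3f}")
```

In the fair trial the measured effect hovers around zero, as it should; in the rigged one, the allocation bias alone creates a sizeable "treatment effect."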

Furthermore, patients, doctors or experimenters sometimes know which patient is getting the treatment and which is getting the placebo. To be effective, tests need something called blinding – i.e., the tester shouldn’t know which group an individual patient belongs to.

Testers can influence their results through conscious or subconscious communication with patients, just as knowing which drug you are taking can influence the way your body responds to treatment.

For example, trials conducted without proper blinding showed acupuncture to be incredibly beneficial, while tests with proper blinding found the benefit of acupuncture to be “statistically insignificant.” The difference isn’t trivial!
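A toy sketch of why blinding matters, with made-up numbers: an unblinded assessor unconsciously adds a small bonus when scoring patients they know were treated, and that bonus alone makes a useless treatment look effective.

```python
import random

random.seed(7)

def rate_improvement(assessor_knows_treated):
    # Subjective improvement score for a treatment with no real
    # effect: pure noise, plus a small unconscious bonus (invented
    # value) when the assessor knows the patient got the treatment.
    bias = 2.0 if assessor_knows_treated else 0.0
    return random.gauss(0, 5) + bias

unblinded = [rate_improvement(True) for _ in range(20_000)]
blinded = [rate_improvement(False) for _ in range(20_000)]

print(f"unblinded mean score: {sum(unblinded) / len(unblinded):+.2f}")
print(f"blinded mean score:   {sum(blinded) / len(blinded):+.2f}")
```

With blinding, the average score correctly sits near zero; without it, the assessor's expectation shows up as a spurious positive result.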

Statistics can be powerful scientific tools, but they must be used responsibly.

Nothing is certain. That’s why we use statistics – the analysis of numbers and data – to determine something’s probability, such as the effectiveness of a treatment or the likelihood that a certain crime occurred. When used correctly, they can be incredibly useful.

For example, statistics can be used in meta-analysis, in which the results of many similar studies, each with few patients, are combined into a larger and therefore more robust and accurate test of whether a treatment is effective.

For example, between 1972 and 1981, seven trials tested whether steroids reduced the rate of infant mortality in premature births, and none of them showed strong evidence for the hypothesis.

However, in 1989 the results were combined and analyzed through meta-analysis, which found very strong evidence that steroids did in fact reduce the risk of infant mortality in premature births!

So wherein lies the discrepancy? Patterns too weak to show up in any single small study sometimes become visible only when the data is aggregated.
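This pooling effect is easy to demonstrate. The sketch below uses invented trial counts (not the actual 1972–81 steroid data) and a simple two-proportion z-test: no single small trial clears the conventional significance threshold, but pooling all the patients does.

```python
import math

def z_stat(t_deaths, t_n, c_deaths, c_n):
    """Two-proportion z statistic with a pooled standard error."""
    p = (t_deaths + c_deaths) / (t_n + c_n)
    se = math.sqrt(p * (1 - p) * (1 / t_n + 1 / c_n))
    return (t_deaths / t_n - c_deaths / c_n) / se

# Illustrative numbers only: seven small trials, each as
# (treatment deaths, treatment n, control deaths, control n).
trials = [(7, 36, 11, 36), (8, 40, 12, 40), (6, 30, 9, 30),
          (9, 44, 13, 44), (5, 28, 9, 28), (10, 42, 14, 42),
          (8, 38, 11, 38)]

# Individually, every trial falls short of |z| > 1.96 ("no strong
# evidence"), even though deaths are lower in every treatment arm.
individual = [z_stat(*t) for t in trials]

# Pooling all patients into one analysis crosses the threshold.
pooled = tuple(sum(col) for col in zip(*trials))
combined = z_stat(*pooled)

print("individual z:", [round(z, 2) for z in individual])
print("pooled z:    ", round(combined, 2))
```

The same modest mortality reduction appears in every trial, but only the combined sample is large enough to distinguish it from chance.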

Yet for all their worth, statistics can be misunderstood and misused, leading to bogus evidence and even injustice.

For example, a solicitor named Sally Clark had two babies who each died suddenly in infancy. She was then charged with their murder and sent to jail, largely because of the supposed statistical improbability that two babies in the same family could die of Sudden Infant Death Syndrome (SIDS).

In fact, one key piece of evidence against her was the prosecutor’s calculation that there was only a “one in 73 million” chance that both deaths could be attributed to SIDS. However, this analysis overlooked environmental and genetic factors, which make a second SIDS death in the same family far more likely once a first has occurred.

Not only that, but a double murder was itself estimated to be about half as likely as both children dying of SIDS, which, considered alongside the rest of the evidence, meant that the statistics alone were simply not enough to convict her.
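The arithmetic of the error is worth spelling out. The “one in 73 million” figure came from squaring the 1-in-8,543 per-family SIDS estimate used at trial, which wrongly assumes the two deaths are independent; the conditional risk in the second step below is a hypothetical stand-in for the elevated familial risk, not a real estimate.

```python
# Squaring the per-family figure assumes the two deaths are
# independent events, like two coin flips.
p_first = 1 / 8_543
naive_both = p_first ** 2
print(f"naive:           1 in {1 / naive_both:,.0f}")  # ~1 in 73 million

# Shared genetic and environmental factors make a second death far
# more likely once a first has occurred. The 1-in-100 conditional
# risk here is an invented illustration, not a measured value.
p_second_given_first = 1 / 100
dependent_both = p_first * p_second_given_first
print(f"with dependence: 1 in {1 / dependent_both:,.0f}")
```

However large or small the true conditional risk, any dependence between the deaths makes the joint probability far higher than the naive square, so the headline figure dramatically overstated the improbability.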

We are prone to delusions and biases about the information we come across.

Can you remember the first time you drank coffee? Probably not. But can you remember your first kiss? I bet you can! So why is it easier to remember one event, but not the other?

This is because we have been conditioned to pick up and remember unusual events and forget everything else. Hence the way we remember and process information is necessarily biased, because we don’t treat all information equally.

But it’s not only our memory that influences our biases; other factors can lead to mistakes in our thinking and decision making.

One such flaw in our thinking is our tendency to invent relationships between events where none actually exist.

For example, improvements in a medical condition can often be attributed not to a treatment, but to the natural progression of the illness, or regression to the mean. So if you went to, say, a homeopath for treatment when your symptoms were at their peak, you would soon be getting better anyway.

We naturally assume that the visit caused the improvement, but in reality our treatment simply coincided with the natural return from extreme illness to normal health.
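Regression to the mean is easy to simulate (all parameters here are invented): patients who "seek treatment" only on an unusually bad day will, on average, score better at the follow-up visit even though nothing at all was done to them.

```python
import random

random.seed(42)

def symptom_score():
    # Daily symptom severity: a stable baseline of 50 plus
    # random day-to-day fluctuation.
    return 50 + random.gauss(0, 10)

# People consult a practitioner only on a bad day (score > 65),
# then are measured again later, with no treatment in between.
before, after = [], []
for _ in range(100_000):
    today = symptom_score()
    if today > 65:
        before.append(today)
        after.append(symptom_score())

print(f"at consultation: {sum(before) / len(before):.1f}")
print(f"at follow-up:    {sum(after) / len(after):.1f}")
```

The follow-up average drifts back toward the baseline of 50 purely because the consultation day was selected for being extreme; any remedy taken in between would wrongly get the credit.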

In addition, we are prejudiced by our prior beliefs and those of the “herd.” This was made explicit in one US study that brought together and examined people who supported the death penalty and people who opposed it. In the experiment, participants in each group were given one piece of evidence that supported their belief and one piece of evidence that challenged it.

Interestingly, every single group identified flaws in the research methods for the evidence that challenged their pre-existing beliefs, but dutifully ignored the flaws in the research that supported their view! What’s more, this experiment didn’t just study irrational buffoons: the results suggest that we all behave this way.

Now that you have the knowledge to understand what qualifies as good science, the last few blinks will explore the ways in which science is misused in the media and the drastic repercussions.

News stories about scientific research are dumbed down or sensationalized, leading to public misunderstanding of science.

You’ve probably seen “scientific” stories in the newspaper about things like the “happiest day of the year.” The media is full of fluffy stories like these which are passed off as the real deal, while stories about genuine scientific inquiry hardly ever make it into the news at all. Why is this?