The changing-American-diet argument is invariably used to support the proposition that Americans should eat more grain, less fat, and particularly less saturated fat, from red meat and dairy products. But the same food-disappearance reports used to bolster this low-fat, high-carbohydrate diet also provided trends for vegetables, fruits, dairy products, and the various fats themselves. These numbers tell a different story and might have suggested a different definition entirely of a healthy diet, if they had been taken into account. During the decades of the heart-disease “epidemic,” vegetable consumption increased dramatically, as consumption of flour and grain products decreased. Americans nearly doubled (according to these USDA data) their consumption of leafy green and yellow vegetables, tomatoes, and citrus fruit.
This change in the American diet was attributed to nutritionists’ emphasizing the need for vitamins from the fruits and green vegetables that were conspicuously lacking in our diets in the nineteenth century. “The preponderance of meat and farinaceous foods on my grandfather’s table over fresh vegetables and fruits would be most unwelcome to modern palates,” wrote the University of Kansas professor of medicine Logan Clendening in The Balanced Diet in 1936. “I doubt if he ever ate an orange. I know he never ate grapefruit, or broccoli or cantaloup or asparagus. Spinach, carrots, lettuce, tomatoes, celery, endive, mushrooms, lima beans, corn, green beans and peas—were entirely unknown, or rarities…. The staple vegetables were potatoes, cabbage, onions, radishes and the fruits—apples, pears, peaches, plums and grapes and some of the berries—in season.”
From the end of World War II, when the USDA statistics become more reliable, to the late 1960s, while coronary heart-disease mortality rates supposedly soared, per-capita consumption of whole milk dropped steadily, and the use of cream was cut by half. We ate dramatically less lard (13 pounds per person per year, compared with 7 pounds) and less butter (8.5 pounds versus 4) and more margarine (4.5 pounds versus 9 pounds), vegetable shortening (9.5 pounds versus 17 pounds), and salad and cooking oils (7 pounds versus 18 pounds). As a result, during the worst decades of the heart-disease “epidemic,” vegetable-fat consumption per capita in America doubled (from 28 pounds in the years 1947–49 to 55 pounds in 1976), while the average consumption of all animal fat (including the fat in meat, eggs, and dairy products) dropped from 84 pounds to 71. And so the increase in total fat consumption, to which Ancel Keys and others attributed the “epidemic” of heart disease, paralleled not only increased consumption of vegetables and citrus fruit, but of vegetable fats, which were considered heart-healthy, and a decreased consumption of animal fats.
In the years after World War II, when the newspapers began talking up a heart-disease epidemic, the proposition that cholesterol was responsible—the “medical villain cholesterol,” as it would be called by the Chicago cardiologist Jeremiah Stamler, one of the most outspoken proponents of the diet-heart hypothesis—was considered hypothetical at best. Cholesterol itself is a pearly-white fatty substance that can be found in all body tissues, an essential component of cell membranes and a constituent of a range of physiologic processes, including the metabolism of human sex hormones.
Cholesterol is also a primary component of atherosclerotic plaques, so it was a natural assumption that the disease might begin with the abnormal accumulation of cholesterol. Proponents of the hypothesis then envisioned the human circulatory system as a kind of plumbing system. Stamler referred to the accumulation of cholesterol in lesions on the artery walls as “biological rust” that can “spread to choke off the flow [of blood], or slow it just like rust inside a water pipe so that only a dribble comes from your faucet.” This imagery is so compelling that we still talk and read about artery-clogging fats and cholesterol, as though the fat of a greasy hamburger were transported directly from stomach to artery lining.
The evidence initially cited in support of the hypothesis came almost exclusively from animal research—particularly in rabbits. In 1913, the Russian pathologist Nikolaj Anitschkow reported that he could induce atherosclerotic-type lesions in rabbits by feeding them olive oil and cholesterol. Rabbits, though, are herbivores and would never consume such high-cholesterol diets naturally. And though the rabbits did develop cholesterol-filled lesions in their arteries, they developed them in their tendons and connective tissues, too, suggesting that theirs was a kind of storage disease; they had no way to metabolize the cholesterol they were force-fed. “The condition produced in the animal was referred to, often contemptuously, as the ‘cholesterol disease of rabbits,’” wrote the Harvard clinician Timothy Leary in 1935.
The rabbit research spawned countless experiments in which researchers tried to induce lesions and heart attacks in other animals. Stamler, for instance, took credit for first inducing atherosclerotic-type lesions in chickens, although whether chickens are any better than rabbits as a model of human disease is debatable. Humanlike atherosclerotic lesions could be induced in pigeons, for instance, fed on corn and corn oil, and atherosclerotic lesions were observed occurring naturally in wild sea lions and seals, in pigs, cats, dogs, sheep, cattle, horses, reptiles, and rats, and even in baboons on diets that were almost exclusively vegetarian. None of these studies did much to implicate either animal fat or cholesterol.
What kept the cholesterol hypothesis particularly viable through the prewar years was that any physician could measure cholesterol levels in human subjects. Correctly interpreting the measurements was more difficult. A host of phenomena will influence cholesterol levels, some of which will also influence our risk of heart disease: exercise, for instance, lowers total cholesterol. Weight gain appears to raise it; weight loss, to lower it. Cholesterol levels will fluctuate seasonally and change with body position. Stress will raise cholesterol. Male and female hormones will affect cholesterol levels, as will diuretics, sedatives, tranquilizers, and alcohol. For these reasons alone, our cholesterol levels can change by 20 to 30 percent over the course of weeks (as Eisenhower’s did in the last summer of his presidency).
Despite myriad attempts, researchers were unable to establish that patients with atherosclerosis had significantly more cholesterol in their bloodstream than those who didn’t. “Some workers claim a significant elevation in blood cholesterol level for a majority of patients with atherosclerosis,” the medical physicist John Gofman wrote in Science in 1950, “whereas others debate this finding vigorously. Certainly a tremendous number of people who suffer from the consequences of atherosclerosis show blood cholesterols in the accepted normal range.”
The condition of having very high cholesterol—say, above 300 mg/dl—is known as hypercholesterolemia. If the cholesterol hypothesis is right, then most hypercholesterolemics should get atherosclerosis and die of heart attacks. But that doesn’t seem to be the case. In the genetic disorder familial hypercholesterolemia, cholesterol is over 300 mg/dl for those who inherit one copy of the defective gene, and as high as 1,500 mg/dl for those who inherit two. One out of every two men and one out of every three women with this condition are likely to have a heart attack by age sixty, an observation that is often invoked as a cornerstone of the cholesterol hypothesis. But certain thyroid and kidney disorders will also cause hypercholesterolemia; autopsy examinations of individuals with these maladies have often revealed severe atherosclerosis, but these individuals rarely die of heart attacks.