This commonsense view goes something like this: We are all born with a fixed amount of intelligence. It’s a trait, like blue or green eyes, or long or short limbs. Intelligence shows itself in certain types of activity, especially in math and our use of words. It’s possible to measure how much intelligence we have through pencil‐and‐paper tests, and to express this as a numerical grade. That’s it.
Put as bluntly as this, I trust this definition of intelligence sounds as questionable as it is. But essentially this definition runs through much of Western culture, and a good bit of Eastern culture as well. It is at the heart of our education systems and underpins a good deal of the multibillion‐dollar testing industries that feed off public education throughout the world. It’s at the heart of the idea of academic ability, dominates college entrance examinations, underpins the hierarchy of subjects in education, and stands as the foundation for the whole idea of IQ.
This way of thinking about intelligence has a long history in Western culture and dates back at least to the days of the great Greek philosophers, Plato and Aristotle. Its most recent flowering was in the great period of intellectual advances of the seventeenth and eighteenth centuries that we know as the Enlightenment. Philosophers and scholars aimed to establish a firm basis for human knowledge and to end the superstitions and mythologies about human existence that they believed had clouded the minds of previous generations.
One of the pillars of this new movement was a firm belief in the importance of logic and critical reasoning. Philosophers argued that we should not accept as knowledge anything that could not be proved through logical reasoning, especially in words and mathematical proofs. The problem was where to begin this process without taking anything for granted that might be logically questionable. The famous conclusion of the philosopher René Descartes was that the only thing he could take for granted was his own existence, since otherwise he could not be having these thoughts in the first place. His thesis was, “I think, therefore I am.”
The other pillar of the Enlightenment was a growing belief in the importance of evidence in support of scientific ideas, evidence that one could observe through the human senses, rather than superstition or hearsay. These two pillars of reason and evidence became the foundations of an intellectual revolution that transformed the outlook and achievements of the Western world. It led to the growth of the scientific method and an avalanche of insights, analysis, and classification of ideas, objects, and phenomena that have extended the reach of human knowledge to the depths of the earth and to the far ends of the known universe. It led, too, to the spectacular advances in practical technology that gave rise to the Industrial Revolution and to the supreme domination of these forms of thought in scholarship, in politics, in commerce, and in education.
The influence of logic and evidence extended beyond the “hard” sciences. They also shaped the formative theories in the human sciences, including psychology, sociology, anthropology, and medicine. As public education grew in the nineteenth and twentieth centuries, it too was based on these newly dominant ideas about knowledge and intelligence. As mass education expanded to meet the growing demands of the Industrial Revolution, there was also a need for quick and easy forms of selection and assessment. The new science of psychology was on hand with new theories about how intelligence could be tested and measured. For the most part, intelligence was defined in terms of verbal and mathematical reasoning, and these were also the processes used to quantify the results. The most significant idea in the middle of all this was IQ.
So it is that we came to think of real intelligence in terms of logical analysis: believing that rationalist forms of thinking were superior to feeling and emotion, and that the ideas that really count can be conveyed in words or through mathematical expressions. In addition, we believed that we could quantify intelligence and rely on IQ tests and standardized tests like the SAT to identify who among us is truly intelligent and deserving of exalted treatment.
Ironically, Alfred Binet, one of the creators of the IQ test, intended the test to serve precisely the opposite function. In fact, he originally designed it (on commission from the French government) exclusively to identify children with special needs so they could get appropriate forms of schooling. He never intended it to identify degrees of intelligence or “mental worth.” In fact, Binet noted that the scale he created “does not permit the measure of intelligence, because intellectual qualities are not superposable, and therefore cannot be measured as linear surfaces are measured.”
Nor did he ever intend it to suggest that a person could not become more intelligent over time. “Some recent thinkers,” he said, “[have affirmed] that an individual’s intelligence is a fixed quantity, a quantity that cannot be increased. We must protest and react against this brutal pessimism; we must try to demonstrate that it is founded on nothing.”
Still, some educators and psychologists took—and continue to take—IQ numbers to absurd lengths. In 1916, Lewis Terman of Stanford University published a revision of Binet’s IQ test. Known as the Stanford-Binet test, now in its fifth version, it is the basis of the modern IQ test. It is interesting to note, though, that Terman had a sadly extreme view of human capacity. These are his words, from the textbook The Measurement of Intelligence: “Among laboring men and servant girls there are thousands like them feebleminded. They are the world’s ‘hewers of wood and drawers of water.’ And yet, as far as intelligence is concerned, the tests have told the truth.… No amount of school instruction will ever make them intelligent voters or capable citizens in the true sense of the word.”
Terman was an active player in one of the darker stages of education and public policy, one there is a good chance you are unaware of because most historians choose to leave it unmentioned, the way they might a crazy aunt or an unfortunate drinking incident in college. The eugenics movement sought to weed out entire sectors of the population by arguing that such traits as criminality and pauperism were hereditary, and that it was possible to identify these traits through intelligence testing. Perhaps most appalling among the movement’s claims was the notion that entire ethnic groups, including southern Europeans, Jews, Africans, and Latinos, fell into such categories. “The fact that one meets this type with such frequency among Indians, Mexicans, and Negroes suggests quite forcibly that the whole question of racial differences in mental traits will have to be taken up anew and by experimental methods,” Terman wrote.
“Children of this group should be segregated in special classes and be given instruction which is concrete and practical. They cannot master, but they can often be made efficient workers, able to look out for themselves. There is no possibility at present of convincing society that they should not be allowed to reproduce, although from a eugenic point of view they constitute a grave problem because of their unusually prolific breeding.”
The movement actually managed to succeed in lobbying for the passage of involuntary sterilization laws in thirty American states. This meant that the state could neuter people who fell below a particular IQ without their having any say in the matter. That each state eventually repealed the laws is a testament to common sense and compassion. That the laws existed in the first place is a frightening indication of how dangerously limited any standardized test is in calculating intelligence and the capacity to contribute to society.