Measuring Intelligence: Standardization and the Intelligence Quotient
The goal of most intelligence tests is to measure “g”, the general intelligence factor. Good intelligence tests are reliable, meaning that they are consistent over time, and also demonstrate validity, meaning that they actually measure intelligence rather than something else. Because intelligence is such an important part of individual differences, psychologists have invested substantial effort in creating and improving measures of intelligence, and these tests are now considered the most accurate of all psychological tests.
Intelligence changes with age. A 3-year-old who could accurately multiply 183 by 39 would certainly be intelligent, but a 25-year-old who could not do so would be seen as unintelligent. Thus understanding intelligence requires that we know the norms or standards in a given population of people at a given age. The standardization of a test involves giving it to a large number of people at different ages and computing the average score on the test at each age level.
Once the standardization has been accomplished, we have a picture of the average abilities of people at different ages and can calculate a person’s mental age, which is the age at which a person is performing intellectually. If we compare the mental age of a person to the person’s chronological age, the result is the Intelligence Quotient (IQ), a measure of intelligence that is adjusted for age. A simple way to calculate IQ is by using the following formula:
IQ = mental age ÷ chronological age × 100.
Thus a 10-year-old child who does as well as the average 10-year-old child has an IQ of 100 (10 ÷ 10 × 100), whereas an 8-year-old child who does as well as the average 10-year-old child would have an IQ of 125 (10 ÷ 8 × 100). Most modern intelligence tests are based on the relative position of a person’s score among people of the same age, rather than on the basis of this formula, but the idea of intelligence “ratio” or “quotient” provides a good description of the score’s meaning.[1]
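The ratio formula above can be sketched in a few lines of Python (the function name is ours; as noted, modern tests use relative standing rather than this ratio):

```python
# Classic ratio IQ: IQ = mental age / chronological age * 100.
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Return the classic ratio IQ, rounded to the nearest whole point."""
    return round(mental_age / chronological_age * 100)

# The two cases from the text:
print(ratio_iq(10, 10))  # a 10-year-old performing at the 10-year-old level -> 100
print(ratio_iq(10, 8))   # an 8-year-old performing at the 10-year-old level -> 125
```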
The Flynn Effect
It is important that intelligence tests be standardized on a regular basis, because the overall level of performance on them in a population may change over time. The Flynn effect refers to the observation that scores on intelligence tests worldwide have increased substantially over the past decades (Flynn, 1999). Although the increase varies somewhat from country to country, the average increase is about 3 IQ points every 10 years. There are many proposed explanations for the Flynn effect, including better nutrition, increased access to information, and greater familiarity with multiple-choice tests (Neisser, 1998). But whether people are actually getting smarter is debatable (Neisser, 1997).[2]
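A back-of-the-envelope sketch shows why periodic restandardization matters: if average performance rises about 3 points per decade, scores computed against outdated norms drift upward. (The function and constant names below are ours, for illustration only.)

```python
# If raw performance rises ~3 IQ points per decade (the Flynn effect),
# scores measured against outdated norms are inflated.
FLYNN_POINTS_PER_DECADE = 3  # average figure cited in the text

def inflated_score(true_iq: float, years_since_norming: float) -> float:
    """Score expected on norms that are `years_since_norming` years old."""
    return true_iq + FLYNN_POINTS_PER_DECADE * years_since_norming / 10

print(inflated_score(100, 20))  # 20-year-old norms shift an average score to ~106
```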
The Value of IQ Testing
IQ tests have sometimes been used to support insidious purposes, such as the eugenics movement, which sought to "improve" a human population through controlled breeding for supposedly desirable heritable characteristics. However, the real value of these tests may lie in their ability to help those in need.
The value of IQ testing is most evident in educational or clinical settings. Children who seem to be experiencing learning difficulties or severe behavioral problems can be tested to ascertain whether the child’s difficulties can be partly attributed to an IQ score that is significantly different from the mean for her age group. Without IQ testing—or another measure of intelligence—children and adults needing extra support might not be identified effectively. People also use IQ testing results to seek disability benefits from the Social Security Administration.[3]
Video reviews the definition of intelligence tests, how they have changed over time, and how they have been used and misused throughout their history.
Intelligence Tests and Those Who Created Them
Alfred Binet & Théodore Simon – Stanford-Binet Intelligence Test
In 1904–1905, the French psychologist Alfred Binet (1857–1911) and his colleague Théodore Simon (1872–1961) began working on behalf of the French government to develop a measure that would identify children who would not be successful with the regular school curriculum.
The goal was to help teachers better educate these students (Aiken, 1994).
Binet and Simon developed what most psychologists today regard as the first intelligence test, which consisted of a wide variety of questions that included the ability to name objects, define words, draw pictures, complete sentences, compare items, and construct sentences. Binet and Simon (Binet, Simon, & Town, 1915; Siegler, 1992) believed that the questions they asked the children all assessed the basic abilities to understand, reason, and make judgments.
Soon after Binet and Simon introduced their test, the American psychologist Lewis Terman (1877–1956) at Stanford University developed an American version of Binet’s test that became known as the Stanford-Binet Intelligence Test. The Stanford-Binet is a measure of general intelligence made up of a wide variety of tasks including vocabulary, memory for pictures, naming of familiar objects, repeating sentences, and following commands.[4]
Video reviews the format of the test and shows two sample items.
David Wechsler – Wechsler-Bellevue Intelligence Scale
In 1939, David Wechsler, a psychologist who spent part of his career working with World War I veterans, developed a new IQ test in the United States. Wechsler combined several subtests from other intelligence tests used between 1880 and World War I. These subtests tapped into a variety of verbal and nonverbal skills, because Wechsler believed that intelligence encompassed “the global capacity of a person to act purposefully, to think rationally, and to deal effectively with his environment” (Wechsler, 1958, p. 7). He named the test the Wechsler-Bellevue Intelligence Scale (Wechsler, 1981). This combination of subtests became one of the most extensively used intelligence tests in the history of psychology.
Today, there are three intelligence tests credited to Wechsler: the Wechsler Adult Intelligence Scale (WAIS-IV), the Wechsler Intelligence Scale for Children (WISC-V), and the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) (Wechsler, 2002). These tests are used widely in schools and communities throughout the United States, and they are periodically re-normed and standardized as a means of recalibration.[5]
Video reviews what is assessed with the WISC and the assessment’s reliability and validity. In discussing validity, the video briefly touches on Gardner’s and Sternberg’s theories of intelligence. While a more holistic view of intelligence may be beneficial, neither of these theories has been empirically validated.
Bias of IQ Testing
Intelligence tests and psychological definitions of intelligence have been heavily criticized since the 1970s for being biased in favor of Anglo-American, middle-class respondents and for being inadequate tools for measuring non-academic types of intelligence or talent. Intelligence changes with experience, and intelligence quotients or scores do not reflect that capacity to change. What is considered smart varies culturally as well, and most intelligence tests do not take this variation into account. For example, in the West, being smart is associated with being quick: the person who answers a question fastest is seen as the smartest. But in some cultures, being smart is associated with considering an idea thoroughly before giving an answer; a well-thought-out, contemplative answer is the best answer.[6]
A Spectrum of Intellectual Development
The results of studies assessing the measurement of intelligence show that IQ is distributed in the population in the form of a Normal Distribution (or bell curve), which is the pattern of scores usually observed in a variable that clusters around its average. In a normal distribution, the bulk of the scores fall toward the middle, with many fewer scores falling at the extremes. The normal distribution of intelligence shows that on IQ tests, as well as on most other measures, the majority of people cluster around the average (in this case, where IQ = 100), and fewer are either very smart or very dull (see below).[7]
The normal distribution of IQ scores in the general population shows that most people have about average intelligence, while very few have extremely high or extremely low intelligence.[8] This means that about 2% of people score above an IQ of 130, often considered the threshold for giftedness, and about the same percentage score below an IQ of 70, often being considered the threshold for an intellectual disability.[9]
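The tail percentages above can be checked with Python’s standard library, assuming the usual standardization of IQ scores to a mean of 100 with a standard deviation of 15 (the standard deviation is an assumption here; the text does not state it):

```python
from statistics import NormalDist

# IQ scores are standardized to mean 100; a standard deviation of 15 is
# typical of modern deviation-IQ tests (an assumption, not stated in the text).
iq = NormalDist(mu=100, sigma=15)

share_above_130 = 1 - iq.cdf(130)  # proportion beyond +2 SD (giftedness threshold)
share_below_70 = iq.cdf(70)        # proportion beyond -2 SD (disability threshold)

print(f"above 130: {share_above_130:.1%}")  # about 2.3%
print(f"below 70:  {share_below_70:.1%}")   # about 2.3%
```

By symmetry of the bell curve, the two tails are equal, matching the roughly 2% figure cited above.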
- Child Growth and Development by Jennifer Paris, Antoinette Ricardo, & Dawn Rymond ↵
- Introduction to Psychology - Measures of Intelligence references Psychology by OpenStax CNX, licensed under CC BY 4.0 (modified by Dawn Rymond) ↵
- Introduction to Psychology - Measures of Intelligence references Psychology by OpenStax CNX, licensed under CC BY 4.0 (initially modified by Dawn Rymond; further modified by Courtney Boise) ↵
- Introduction to Psychology - Measures of Intelligence references Psychology by OpenStax CNX, licensed under CC BY 4.0 ↵
- Child Growth and Development by Jennifer Paris, Antoinette Ricardo, & Dawn Rymond ↵
- Sociology: Brief Edition – Agents of Socialization by Steven E. Barkan is licensed under CC BY-NC-SA 3.0; Introduction to Psychology - Measures of Intelligence references Psychology by OpenStax CNX, licensed under CC BY 4.0 (sections modified by Courtney Boise) ↵
- Child Growth and Development by Jennifer Paris, Antoinette Ricardo, & Dawn Rymond ↵
- Introduction to Psychology - Measures of Intelligence references Psychology by OpenStax CNX, licensed under CC BY 4.0 (modified by Dawn Rymond) ↵
- Child Growth and Development by Jennifer Paris, Antoinette Ricardo, & Dawn Rymond ↵