Yes. But it’s not that simple.
Variation in IQ scores consists mainly of g, group factors and specific abilities, and measurement error. But these parts of an IQ score are not equally useful for predicting outcomes. The evidence suggests the predictive validity of IQ lies in its g-loading.
The easiest way to test this is Jensen’s method of correlated vectors. The gist: compile a factor matrix, typically from an IQ test, and check whether the subtests’ g-loadings correlate with some other variable, such as economic life outcomes or group differences. If they do, there is a Jensen effect. Since Jensen effects seem to occur on the predictive validity of IQ (Jensen, 1998; Reeve et al., 2013; Hu, 2013; this also seems to be the finding in the job performance literature), and generally not on programs like Head Start or on adoption gains (te Nijenhuis et al., 2014; Jensen, 1997; Melby-Lervåg and Hulme, 2013), any sort of stimulus program should be viewed with caution. If it increases specific abilities, that is great, but it may not help as much as we might think.
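Computationally, the method of correlated vectors is just a correlation between two vectors. Here is a minimal sketch in Python; the g-loadings and outcome correlations for the six subtests are entirely made up for illustration and do not come from any real test battery:

```python
# Illustrative sketch of Jensen's method of correlated vectors.
# All numbers below are hypothetical, invented purely to show the computation.
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical subtest g-loadings (from a factor analysis of an IQ battery)
g_loadings = [0.83, 0.78, 0.72, 0.65, 0.58, 0.51]
# Hypothetical correlations of each subtest with some outcome (e.g. job performance)
outcome_r = [0.41, 0.37, 0.33, 0.28, 0.22, 0.20]

# A strongly positive vector correlation is what "a Jensen effect" means here
r = pearson(g_loadings, outcome_r)
print(f"vector correlation: {r:.2f}")
```

In practice, researchers correct the vectors for subtest reliability and weight by sample size, but the core of the method is this single correlation.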
Let’s look at the most popular, and most recent, research article on the effect of an extra year of education on IQ. This meta-analysis was conducted by Ritchie and Tucker-Drob (2018), who found that an extra year of education is associated with an increase of up to 5 IQ points, and that policy changes extending schooling are also associated with IQ gains. It’s a very well-done meta-analysis. Unfortunately, the authors weren’t able to test whether the gains were on g, but they split the tests used in the meta-analysis in a way that lets us make an educated guess. The authors wrote,
|To do this, we classified every test that would likely have involved content that was directly taught at school (including reading, arithmetic, and science tests) as “achievement,” and the remaining tests, which generally involved IQ-type measures (ranging from processing speed to reasoning to vocabulary), as “other” tests.|
IQ tests are probably better measures of g than typical achievement tests like the Iowa Assessments and the SAT, as they are more likely to measure Level II abilities, which are considered more g-loaded (Jensen, 1982). In the policy-change design, there was no significant difference between achievement and other tests, and the difference that did exist was small anyway. For an extra year of education, the effect on achievement tests was twice the size of the effect on other tests. Still, there was nearly a four-point increase on other tests from an extra year of education. Even so, we should be cautious: as I said earlier, IQ does not consist purely of g, and the effects may not always last (see te Nijenhuis et al., 2014; Gwern, 2018).
A study by Ritchie et al. (2015) used structural equation modeling on a longitudinal sample to test whether the effects of education on IQ are actually on g. The first model was that extra education is associated purely with increases in g. The second was that extra education is associated with increases in g as well as in other, more specific abilities. The third was that extra education is associated with IQ only through specific abilities rather than g. The authors found the last model fit best, and further analyses confirmed the result: no matter what, the third model, in which education has no impact on g, was the best fit.
Similar results were shown by Ritchie et al. (2013). The authors took longitudinal data on education and IQ and tested whether the gains were associated with improvements on various reaction-time tests. This matters because reaction times are informative about processing speed and reasoning ability. They found no effect of education on reaction times after controlling for a number of variables. While the authors argue this does not tell us whether the education gains are on g (Ritchie et al., 2015), it is worth noting that, after controlling for other variables, the effect of education on reaction times was larger for simple reaction times than for choice reaction times, the latter being the more g-loaded test (Der and Deary, 2017).
A similar way to test this is to see whether education increases fluid intelligence. Fluid intelligence has to do with reasoning ability, whereas crystallized intelligence is the accumulation of knowledge and skills over time. One study of about 1,400 eighth graders in Boston public schools found that while schools were able to raise achievement test scores, they were not able to improve fluid-intelligence skills like working memory capacity and information processing (Finn et al., 2014). Relatedly, Ceci (1991) concluded from a review of 200 studies that the evidence that education improves the efficiency of cognitive processing is uncompelling.
Flynn (2019) argued, based on some thought experiments, that certain training gains could yield Jensen effects if the training were done properly. te Nijenhuis et al. (2019) replied to Flynn with a small meta-analysis to test this. They found only four studies on schooling, with twelve data points in total, that met their requirements, but these still yielded a large sample of n = 60,993. The correlation between g-loadings and education gains was 0.13, showing no significant Jensen effect. This suggests that education gains are not actually on g.
Other longitudinal models show that variation in g causes differences in educational achievement. These studies are pretty straightforward. Take data on IQ and achievement at two time points and run a cross-lagged panel analysis: one path runs from g at time 1 to educational achievement at time 2, and another from educational achievement at time 1 to g at time 2. Comparing the two paths supports a causal inference based on which is stronger. Both studies done on this show the path from g to educational achievement is stronger than the reverse (Watkins et al., 2007; Watkins and Styck, 2017). Indeed, the path from educational achievement to g is not even statistically significant.
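As a toy illustration of the cross-lagged logic (not the published studies’ actual models or data), we can simulate a world in which a stable g causes achievement and then compare the two cross-lagged partial correlations, as these designs do. Every parameter below is invented, so the path sizes are only illustrative:

```python
# Toy cross-lagged panel sketch: simulate a stable g that causes achievement,
# then compare the two cross-lagged paths as partial correlations.
# All parameters are made up for illustration.
import random
from math import sqrt

random.seed(42)
N = 5000

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def partial(xs, ys, zs):
    """Correlation of x and y with z partialed out."""
    rxy, rxz, ryz = pearson(xs, ys), pearson(xs, zs), pearson(ys, zs)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

g1 = [random.gauss(0, 1) for _ in range(N)]
g2 = [g + random.gauss(0, 0.3) for g in g1]        # g is highly stable over time
ach1 = [0.6 * g + random.gauss(0, 0.8) for g in g1]  # achievement is caused by g
ach2 = [0.6 * g + random.gauss(0, 0.8) for g in g2]

# Path 1: g at time 1 -> achievement at time 2, controlling achievement at time 1
p_g_to_ach = partial(g1, ach2, ach1)
# Path 2: achievement at time 1 -> g at time 2, controlling g at time 1
p_ach_to_g = partial(ach1, g2, g1)

print(f"g -> achievement path: {p_g_to_ach:.2f}")
print(f"achievement -> g path: {p_ach_to_g:.2f}")
```

Under this simulated causal structure, the g-to-achievement path comes out clearly stronger and the reverse path near zero, which is the pattern Watkins et al. report; the real studies estimate these paths within a full SEM rather than with raw partial correlations.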
Furthermore, when looking at, say, class differences and racial/ethnic differences in IQ, the education argument makes little sense. The common version is that since education is funded through property taxes, lower-income people will automatically get worse schools. This just does not pan out in the data. Funding schools with property taxes does not mean school districts distribute that money so that poor neighborhoods receive less. A simple example comes from the left-leaning Brookings Institution, which found that school districts with a greater proportion of poor students actually receive more funding, not less (Chingos, 2017).
Combining the facts that other interventions tend to be neither on g nor lasting, that education gains appear not to be on g or on cognitive processing tasks like reaction times, and that g predicts educational achievement in longitudinal studies, it seems clear that education gains are not all that impressive for the most important part of IQ: the g factor.