Is it the end of traditional cognitive testing as we know it?
For many years, it has been accepted as de facto truth in the world of assessment that, despite their potential for adverse impact, cognitive tests have the highest validity of any individual assessment method, and so are a key part of a robust hiring process, especially for early-career hires with limited work experience.
However, recent research has called this into question on a number of fronts. A recently published analysis by Stephen Woods and Fiona Patterson raises significant challenges to a status quo that has persisted for decades.
Firstly, many of the studies included in meta-analytic reviews (where multiple studies are grouped and analysed together) had statistical adjustments made for ‘restriction of range’. These corrections inflate the estimated predictive validity of an assessment, on the assumption that you are only analysing data for the best candidates, who were actually hired. However, if we apply such corrections to some assessments but not others, we are no longer comparing apples with apples, so it is perhaps unsurprising that cognitive testing came out top of the list.
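To make the effect concrete, here is a minimal sketch of the standard Thorndike Case II correction often used for restriction of range in validity studies. The figures are hypothetical, chosen only to show how much the adjustment can inflate an observed correlation.

```python
# Sketch of Thorndike's Case II range-restriction correction,
# as commonly applied in validity meta-analyses.
# All numbers below are hypothetical, for illustration only.
import math

def correct_for_range_restriction(r_restricted: float, u: float) -> float:
    """Estimate the unrestricted validity from a restricted sample.

    r_restricted: correlation observed among hired candidates only.
    u: ratio of the predictor's standard deviation in the full
       applicant pool to its standard deviation among those hired
       (u > 1 when the hired group has a restricted range).
    """
    numerator = r_restricted * u
    denominator = math.sqrt(1 - r_restricted**2 + (r_restricted**2) * u**2)
    return numerator / denominator

# Hypothetical figures: validity of 0.30 observed among hires,
# with the applicant pool's SD 1.5x that of the hired group.
observed = 0.30
corrected = correct_for_range_restriction(observed, u=1.5)
print(f"observed r = {observed}, corrected r = {corrected:.2f}")  # ~0.43
```

A modest observed validity of 0.30 becomes roughly 0.43 after correction. If this adjustment is applied to cognitive test studies but not to rival methods, the comparison is tilted before it begins.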
Additionally, many of the studies date back several decades, and the world of work has changed enormously since. Firstly, the well-documented Flynn Effect shows that, since the mid-20th century, average population scores on cognitive tests have gradually risen, due largely to improvements in education and learning at a societal level.
Secondly, the nature of work has changed significantly with the ubiquitous use of computing across the workplace. And in the latest stage of technological development, Gen AI tools have already become a go-to resource for white-collar workers across the globe.
As a result, the gap between the criticality of traditional cognitive reasoning and the availability of those capabilities in the labour market has narrowed. Conversely, new skills have come to the fore: complex thinking across multiple domains (rather than the linear reasoning cognitive tests measure), along with collaboration, creativity, innovation and adaptability.
These changes mean the assumed primacy of cognitive reasoning may no longer apply, and recent reviews of the meta-analytic data bear this out. In fact, when a range of other assessment methods is used together, the incremental value of cognitive testing is more nuanced than previously imagined.
So, what does this mean for how we assess?
Cognitive testing is still a valid tool in the assessment arsenal, but its benefits need to be balanced against the risk of adverse impact. It still has a role to play in a rounded assessment of a person, provided that risk is actively mitigated to maximise diversity, equity and inclusion.
In practical terms, large-scale use of cognitive tests on their own creates a risk of systemic adverse impact. Even when cut-offs are set within reasonable limits to avoid breaching the four-fifths rule, adverse impact is still not zero. The result is often legally permissible, but nevertheless pervasive and persistent, adverse impact.
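The four-fifths rule itself is a simple ratio check: each group's selection rate is compared against the highest group's rate, and a ratio below 0.8 flags potential adverse impact. The sketch below illustrates this with hypothetical group names and pass counts.

```python
# Minimal sketch of the four-fifths (80%) rule check used in
# US adverse-impact analysis. Group labels and pass counts are
# hypothetical, for illustration only.

def four_fifths_check(results: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's impact ratio versus the best-passing group.

    results maps group -> (number selected, number who applied).
    A ratio below 0.8 flags potential adverse impact under the rule.
    """
    rates = {group: selected / applied
             for group, (selected, applied) in results.items()}
    best_rate = max(rates.values())
    return {group: rate / best_rate for group, rate in rates.items()}

# Hypothetical pass data for a cognitive screen:
data = {"group_a": (50, 100), "group_b": (41, 100)}
ratios = four_fifths_check(data)
print(ratios)
```

Here group_b passes at 0.82 of group_a's rate: the process clears the four-fifths threshold, yet nine fewer candidates per hundred from group_b progress. That is exactly the legally permissible but persistent impact described above.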
The key to addressing this is building assessment processes that combine multiple methods within a single assessment, intentionally maximising prediction while also minimising adverse impact. This is why, since its foundation, Sova has pioneered the use of Blended Assessment to solve exactly this issue, with clear and demonstrable benefits for many enterprise organisations.
Now, the emerging research discussed above and leading industry best practice both demonstrate that, for high-volume hiring, screening with cognitive tests alone is no longer best practice. Instead, a blended approach focused on creating the fairest and most objective process possible is the new baseline all organisations can work to.
There is no longer any need to compromise on fairness, the cornerstone of any high-quality modern assessment journey.