
Candidates’ use of GPT in the hiring process: Cheat code or a power up?

AI | Digital Assessment | Future of Work
July 24, 2023
Dr. Charles Handler

These days it seems like all anyone is talking about is Large Language Models (LLMs), such as ChatGPT, and their ability to weigh in on pretty much any subject under the sun. From cooking to Bible study to Dungeons and Dragons, it seems like there is nothing LLMs can’t do. 

Creating debate, controversy, and speculation about the future is also part of LLMs’ magic touch. It comes with the territory. While LLMs are still in their infancy, their impact on life as we know it is already huge and growing rapidly.  

Think about the ramifications in terms of mobile phone technology. It took 35 years for the first mobile phones to evolve into smartphones. The technology had been advancing in the background, but the first iPhone in 2007 represented a magical and transformative leap forward. It is hard to fully comprehend what AI-based assistants will be doing 35 years from now, but rest assured it will seem like magic by today’s standards. 

One of the most interesting things about LLMs is that they have created a duality of sorts when it comes to exploring their current use cases and game-changing potential. 

This overarching duality centers on the direction in which the soul of these machines will take humanity. There is a continuum here that ranges from Luddite-like opinions of doom and gloom to a full-on embrace of LLMs as creators of a bright, shiny future. The truth most likely lies somewhere in between. 

When it comes to humans and work, some of the most commonly discussed dualities include: 

  • Are LLMs the AI version of quiet quitting, supporting laziness and selfishness? Or are they a new frontier in human productivity that will allow us to free up our time and resources to achieve great things? 
  • Will LLMs cost us our jobs? Or will they allow us the chance to excel at our current jobs and/or create entirely new jobs that will be game changers for our careers?  

Talent acquisition is not immune to LLMs’ dualities. Currently, the primary duality centers on cheating. Do LLMs promote or support cheating within the hiring process, resulting in the hiring of unqualified applicants? Or can they be used to support better, more efficient, and more accurate hiring? 

When it comes to the kerfuffle about cheating during the hiring process, history is repeating itself. Back around the turn of the century, internet-based employment testing was the single biggest change in the field’s 100-plus-year history. Academics and testing professionals hemmed and hawed about what to do about the perceived threat. But entrepreneurs did not think twice; they saw the potential and created an online testing industry worth billions. Progress waits for no one, and the many benefits created by online testing left those trying to guard its purity figuring out how to get the toothpaste back into the tube.  

We are now facing a new set of circumstances (i.e., technologies) that makes the debate of the early 2000s look like child's play. But in some sense the song remains the same: technology-based hiring has both a positive and a negative side. Exploring this duality more closely begins with a definition of cheating within the context of the recruitment and hiring process. 

As my friend Neil Morelli of Codility, an IT coding assessment provider, defines it, cheating when applying for a job is when:

“a no- or low-knowledge candidate who wouldn't be successful otherwise is now using this to basically impute knowledge that they don't have and signal that they're qualified for a job that they're not qualified for.” 

With this definition in hand, let’s take a look at the major ways candidates are using GPT to increase their odds of getting hired, and see how much these tactics contribute to structural problems in the hiring paradigm. 

Writing resumes and cover letters - This is the lowest-hanging fruit of the bunch. To begin with, technology-based or not, resumes are the poster child for self-promotional BS. While stories abound of candidates using GPT to get past automated resume screeners, the threat level here is pretty low. As I wrote in a previous blog, I actually see this as beneficial because it magnifies the value of good down-the-funnel tools such as assessments. 

Answering interview questions - Believe it or not, candidates have found ways to use GPT to provide answers during the interview itself. This level of deceit must take some pretty good logistics and creativity, which in itself may say a lot about a candidate’s street smarts. I would wager this phenomenon is not very common, but all it takes is one viral TikTok how-to video to make it mainstream. So, the threat level here is currently relatively low, but it could end up being a thing that requires countermeasures. 

Gaming pre-hire assessment tests - In an earlier blog I wrote that, thanks to their many formats, traditional assessments are at present actually quite resistant to cheating via GPT. There are several reasons for this, all centering on the fact that their typical response formats are not easy for GPT to handle: the questions are not factually based, they are often timed, and they often include visually based items.  

A recent study by researchers at Stanford found that GPT actually lost a great deal of accuracy when answering questions, including failures on visually based pattern-recognition tests, which happen to be a very common item type on pre-hire tests. This suggests that anyone who tries to use GPT to cheat in this way risks poor results, strengthening the position that using GPT to complete pre-hire tests is not going to ruin hiring anytime soon. 

But what about pre-hire knowledge and skills tests? Since these tests use factual information, is the threat of cheating via GPT something to worry about?  

Coding assessment providers seem like a major target for GPT-based cheating. GPT is great at coding, even fueling speculation that it is already costing coders their jobs. 

In a recent episode of my podcast focusing on this subject, my guest Neil Morelli of Codility explained how his firm turned lemons into lemonade. 

Neil pointed out that using outside resources, such as GitHub’s Copilot, to help with coding is an accepted and very common practice. So, instead of fearing GPT as a cheat code, it is possible to embrace it as an opportunity to evaluate how well a candidate uses LLMs to do their job better.  

And isn’t this the crux of it all? We want to hire those who have the best chances of success on the job. If GPT can help do the job better, then evaluating GPT skills is an important part of the equation. 

This also taps into an important philosophical point about GPT. Its greatest value is not in replacing what humans do, but rather in creating synergies between humans and machines that add incremental value over what each party is capable of independently. 

After looking at these examples, I feel confident in my opinion that GPT-based cheating by completely unqualified applicants is not currently a cancerous issue. While no hiring process is perfect, even a subpar one should present multiple opportunities for evaluating candidates’ suitability, so any fakery would have to hold up across multiple hurdles and evaluators.  

It would not be fair to discuss the risk GPT poses to good hiring without noting the many potential upsides it can provide to suppliers of predictive hiring tools. In a recent blog, Alan Bourne, Sova’s Chief Scientist, provided a great deal of optimism about the value GPT can add to talent assessment. The crux of this positivity centers on use cases where GPT can create efficiencies in the development of candidate evaluation and feedback frameworks.  

So, let’s not fear that GPT will produce an unqualified workforce that has taken the “fake it till you make it” approach. Instead, we should embrace LLMs as yet another way that humans and technology, working together, can create positive and meaningful results.