
Will ChatGPT steal our humanity and our jobs? With Dr. Tomas Chamorro-Premuzic

My guest for this episode is Dr. Tomas Chamorro-Premuzic. Besides his role as the Chief Innovation Officer at Manpower Group, Dr. Tomas is a world-renowned IO psychologist, educator, entrepreneur and author. He joins me today to discuss his new book, I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique.

We use the big picture principles of this book to frame up an invigorating conversation about ChatGPT and its impact on humankind, jobs, and hiring. If you have been tuned in to the hype around ChatGPT and have been wondering if it is good or bad for humans, jobs, and work, our discussion is definitely worth a listen.

Focusing on Humans Rather Than Machines

Dr. Tomas describes his new book as:

“a call to action for humans to wake up and not be downgraded in an age where technology keeps getting smarter and AI continues to upgrade itself.”

With the book’s focus being:

“the human side of artificial intelligence to examine the behavioral impact or effects that AI is having or has had so far. So, it's a book on AI that focuses on the present, not on the future, and focuses on humans rather than machines. And the main argument is that even though we spent the past 10 years worrying a lot and sometimes paranoid about AI's potential to automate work and destroy jobs and replace humans in various aspects of life and across different industries and careers, in that process, we kind of automated ourselves by becoming more boring, more predictable, less creative, less curious, more biased, and more narcissistic.”

I could not agree more with these ideas, and it is quite serendipitous that I reached out to invite Dr. Tomas to talk about ChatGPT before I knew about the book!

What are the Implications of ChatGPT?

I begin by asking the question, “Is ChatGPT the Emperor's New Clothes, a tempest in a teapot, future shock, or the savior of humankind?” I prime the pump by throwing out my two cents’ worth: ChatGPT, while extremely helpful in many ways, is not going to complete tasks that require deep human knowledge, real intelligence, and personality.

Dr. Tomas is quick to share his view that it is actually an incremental improvement, consistent with the evolution of technology-based search, saying:

“I think it's an important and perhaps significant, but incremental, improvement in the evolution of what is no doubt the defining technology of our times, which is AI. And I mean, the reason why I think it is the defining technology of our times, not ChatGPT, but AI, is because it's everywhere and it has already permeated every job and industry. I mean, mostly it has systematically influenced our decision making in everyday life, whether that's simple things like what movies to watch or what music to listen to, and sometimes more significant things like where to work, where to study, and whether to marry or divorce. I would put it at the level that Google search had when it came out. It wasn't the first search engine, but it was significantly better than AltaVista and the previous kind of engines, including Netscape, that were already there.”

Unfounded AI Fears

When it comes to the fear that ChatGPT will steal our jobs, we both agree that this fear is unfounded. But it can and will have a large impact on work by increasing efficiency and reducing the mental effort required in a lot of the tasks that are part of many jobs.

And this idea that AI tools like ChatGPT have a great role in helping us be human is really the meat of our conversation and the crux of Dr. Tomas’ book.

Helping us be Uniquely Human

With so much negativity and hype around AI and tools like ChatGPT replacing mankind, the idea that technology actually enhances the things that make us uniquely human is not a common topic of conversation. But it should be! Dr. Tomas’ take is pretty profound:

“And what I worry about, Charles, is that when we optimize everything for technologies and technological efficiencies, we actually fall into the dangerous and problematic area of trying to optimize humanity in a way that makes us more like machines.”

To illustrate this point, we need to look no further than one of the most unique expressions of humanity, the arts.

Dr. Tomas explains:

“And maybe that will be the difference between human art, whether it's music or paintings or other, and art produced by machines: the fact that even if AI can create and improvise like Miles Davis, they will not be Miles Davis. What is left? What is left from Miles when you subtract AI's ability to copy? Well, his sense of style, his sense of humor, his cranky voice and his hair, and even the flaws and polemical or controversial aspects of his personality that machines will probably not even want to pick up on.”

He continues:

“But I think instead of saying, oh my God, there's not going to be human composers or human painters or human artists, we need to make art with these tools. And ChatGPT is no exception. Instead of checking whether it can be funny, creative, or curious, our curiosity and creativity and humor could be on display if we use this tool to actually create something that wasn't there just with the tool. That's why I think AI can be an enhancer of human ingenuity and imagination, but it requires you to play with it, study it, and understand it.”

ChatGPT Within Recruitment

And of course, our conversation turns to the implications of ChatGPT for jobs and recruiting.

Dr. Tomas summarizes a lot of the ways that these technologies can help free us up to do more high value work within the hiring funnel.

“If you think about talent acquisition or recruitment, there are a lot of tasks there that are pretty repetitive and standardized: writing job ads, communicating with candidates, translating text from candidates into a profile or a model of who they are, matching candidate features to job openings.”
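
As a thought experiment on the first of those tasks, here is a minimal sketch of how a recruiting team might hand job-ad drafting to a large language model from a script. The API endpoint, model name, prompt wording, and helper function are assumptions made for illustration, not anything prescribed in the episode, and a recruiter would still review and edit whatever the model returns.

```python
"""Hypothetical sketch: drafting a job ad with a large language model.

Assumes the OpenAI chat-completions REST endpoint and an API key in the
OPENAI_API_KEY environment variable; the model name, prompt wording, and
helper function are illustrative, not something prescribed in the episode.
"""
import os

import requests

API_URL = "https://api.openai.com/v1/chat/completions"


def draft_job_ad(title: str, must_haves: list[str]) -> str:
    """Ask the model for a first-draft job ad that a recruiter then edits."""
    prompt = (
        f"Write a concise job ad for a '{title}' role. "
        f"Emphasize these requirements: {', '.join(must_haves)}. "
        "Avoid meaningless jargon like 'rockstar' or 'ninja'."
    )
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # assumed model name; use whatever is available
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.7,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # A human still reviews and tailors the draft before posting it.
    print(draft_job_ad(
        "Contact Center Technical Support Agent",
        ["enjoys technology", "comfortable with consultative sales conversations"],
    ))
```

The draft is the time-saver; verifying and tailoring it, as the conversation below makes clear, stays a human job.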

A Not-So-Negative Bias

And what conversation about tech and hiring would be complete without including the topic of bias? The view that AIs can help us be better at being human opens the door to thinking about AI and bias in a positive light. Dr. Tomas’ positive take on it is refreshing.

“Because the minute you introduce the human, biases are inevitable. And no unconscious bias training can help a human forget that the person in front of them is either male or female or seems like a male or a female, old or young, poor or rich, et cetera. But that's also part of the reason we're doing this: because we are maybe so smart that we can create technologies that can de-bias and increase meritocracy.”

Our conversation touches on many more really interesting and nuanced points, all of which lead to an important conclusion that we need to keep in mind when we bash AI for having a bad influence on humankind.

“I think, again, whether we turn this tool into something useful or practical depends on our intentions and our capabilities. I can't remember who said that technology is neither good nor bad, nor neutral, which is a good way of putting it. But I think what's important is that we try to keep humans in the equation, in the spectrum, and that we don't dehumanize either work or life just because technologies are becoming more human-like.”

People in This Episode

Catch Dr. Tomas Chamorro-Premuzic on LinkedIn and at Manpower Group.

Read the Transcript

Announcer:

Welcome to Science 4-Hire with your host, Dr. Charles Handler. Hiring is hard. Pre-hire talent assessments can help you ease the pain. Whether you don't know where to start or you just want to stay on top of the trends, Science 4-Hire provides 30 minutes of enlightenment on best practices and news from the front lines of the employment testing universe. So, get ready to learn as Dr. Charles Handler and his all-star guests blend old-school knowledge with new wave technology to educate and inform you about all things talent assessment.

Dr. Charles Handler:

Hello and welcome to the latest edition of Science 4-Hire. Today we have someone who has a relatively rare distinction, and that is being a guest for the second time on the show. It's been a while. We've been doing this for three years at least now. But I'd like to welcome back Dr. Tomas Chamorro-Premuzic. He is an amazing author and person who shares kind of the background of good, solid psychometrics and all. And I'll let him introduce himself and his title and company, and then we'll get into a really interesting conversation today that's going to take a journey, kind of starting with some of the more contemporary, immediate things we're seeing about AIs and what they've been, I guess, accused of potentially doing <laugh>. So go ahead, Tomas.

Dr. Tomas Chamorro-Premuzic:

Thank you, Charles. It's great to be back, especially now hearing that I'm in that kind of a small group of people, I guess with Oprah, the Dalai Lama and Barack Obama. Yeah. Are we the ones that have been on twice? Yeah.

Dr. Charles Handler:

Yeah. I'm not David Letterman, but

Dr. Tomas Chamorro-Premuzic:

Maybe someday, yes. Well, one day. And so, yes. So, I'm Tomas, born and raised in Argentina. I'm an organizational psychologist who has specialized in all things assessments, much like you. And my main job is, I'm the Chief Innovation Officer at Manpower Group, and I'm still also in academia. I'm a professor of business and organizational psychology at UCL and Columbia, albeit very part-time these days.

Dr. Charles Handler:

Yeah. Well, we were lucky to pin you down for a conversation. I know you have got lots going on. So, I think the first thing we want to do is give you a quick opportunity to frame the conversation a little by letting us know about your latest book. And I'm sure you'll be drawing from that in the conversation, but it's important to kind of hear about what you've been working on, and then we'll get into how that drives some good back and forth here.

Dr. Tomas Chamorro-Premuzic:

Thank you. Yes, I'd love to do that. So, the latest book is I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. So, it's very relevant to what we're going to be discussing. And it's a book that focuses on the human side of artificial intelligence and tries to examine the behavioral impact or effects that AI is having or has had so far. So, it's a book on AI that focuses on the present, not on the future, and focuses on humans rather than machines. And the main argument is that even though we spent the past 10 years worrying a lot and sometimes paranoid about AI's potential to automate work and destroy jobs and replace humans in various aspects of life and across different industries and careers, in that process, we kind of automated ourselves by becoming more boring, more predictable, less creative, less curious, more biased, and more narcissistic. So, these have been the main, at least measurable, effects that AI and the platforms it inhabits have had on humanity. So, the book is really a call to action for humans to wake up and not be downgraded in an age where technology keeps getting smarter and AI continues to upgrade itself.

Dr. Charles Handler:

Awesome. Well, that's a great framework. I mean, it hits directly on the head of why I reached out to you, because we've been seeing a lot, almost all of a sudden, about ChatGPT. So, my questions are really, and I've done a good amount of looking into this, following it in the newsfeed, and I have some blogs and stuff coming out soon that I think I'm really proud of. But my question is: is this thing the Emperor's New Clothes, is it a tempest in a teapot, or is it future shock? Is it some of the things that you mentioned? And I feel like as I've looked at it, if you dig in, it kind of exposes itself a little bit as, okay, this is really cool, it can be helpful, but man, if you think this is going to be replacing your job or doing a lot of stuff for you at a deep level that requires human knowledge and real intelligence and personality, you're barking up the wrong tree. What do you think?

Dr. Tomas Chamorro-Premuzic:

Yeah, so I think it's none of those things. I think it's an important and perhaps significant, but incremental, improvement in the evolution of what is no doubt the defining technology of our times, which is AI. And I mean, the reason why I think it is the defining technology of our times, not ChatGPT, but AI, is because it's everywhere and it has already permeated every job and industry. And I mean, mostly it has systematically influenced our decision making in everyday life, whether that's simple things like what movies to watch or what music to listen to, and sometimes more significant things like where to work, where to study, and whether to marry or divorce. Yeah. Now, I would put it at the level that Google search had when it came out. It wasn't the first search engine, but it was significantly better than AltaVista and the previous kind of engines, including Netscape, that were there.

Because it actually gave you what you were looking for. And maybe it's not dramatically more impressive than Wikipedia, which is still flawed, but it has made it very easy for people with very limited knowledge on something to become somewhat knowledgeable. And I think it's also frameable in that it could potentially replace search engines or represent the evolution of search engines. No surprise that Microsoft has already integrated it with Bing and invested money. And let's face it, I mean, it has wiped out a hundred billion of Google's market cap, which is no small feat. So, I think I would put it there. And with that, I would say that mostly I agree with you. I think its ability to fully replace jobs within the knowledge economy, at least for now, is very limited. However, its ability to increase the efficiencies and reduce the mental effort required in a lot of the tasks that are included in those jobs is very large.

And what I find most impressive about it is it's almost a Her, the movie, sensation or experience. Its ability to interpret questions is far better than anything we've seen in the past. And it has this conversational feel that is very human-like. Now, when people say, yeah, but it doesn't have curiosity, sense of humor, EQ or self-awareness, my answer is like, well, that makes it very human, because the majority of people I know lack those traits as well. Not you and I, but everybody else, and of course not our listeners, who are very charming, smart, funny, and self-aware. Exactly.

Dr. Charles Handler:

Well, I love that perspective. I hadn't thought about it that way a little bit. So, we're simpatico. I actually thought of, I don't know if you remember Ask Jeeves, right? Ask Jeeves was a search engine, and before that I think it was Boolean search, but once, yes, Jeeves came along. So there's these simplifications. I remember in grad school I was doing horribly in stats because I had to program SPSS using an actual programming language, which I'm not very good at, to be honest. But as soon as the GUI came out, the first Windows GUI for SPSS, I went from struggling to excelling in stats because I took that layer out. So the simplifications that AI and technology make for us allow it to scale really fast, because everybody wants to do things as quickly as possible for the most part, besides vacations and hobbies and stuff. So, when it comes to the MS thing too, the Microsoft thing, I think it's interesting.

I just read an article yesterday about people who are creating an evil side of it by actually going in there and dialoguing and saying, how do I yell at this person? Or why shouldn't I like this person? And really cajoling it, in a way, into exposing some weird personality. And then dating profiles, I read a really good one about, I let it write my dating profile, and we see it, it just conversationally feels right, but then some of the stuff it says back to you is off base. Now, that's a different use case than looking up information easily and digesting it and putting it right in your face. So there's kind of that intended purpose, which is pretty utilitarian and doesn't have to have the character, et cetera. But when you start to give it a persona and start to act like it's real, you expect, like you said, all these other things out of it, and it may not be able to deliver those in the way that you thought it would, which is kind of humorous and entertaining. But really, yeah,

Dr. Tomas Chamorro-Premuzic:

And I think, well, I'm quite surprised that it has caught on as it has, gaining followers and users at a higher rate than Instagram and TikTok. I mean, I confess I'm spending a lot of time with it myself, and it's sort of like, it's quite entertaining. And I think maybe it's intriguing in some ways. You mentioned personality and the sort of sadistic element or approach of teasing it and almost like annoying it and seeking to be canceled by or censored by it. We have a paper coming out with the guys at Holistic AI, a startup that I'm associated with, on the personality of ChatGPT and other chatbots. And it's interesting, people described ChatGPT as mansplaining as a service. I think it's a lot more woke than the average mansplainer. I mean, it's actually very politically correct and it's always like, well, people say this and, oh, I don't know.

And very apologetic, much more than the average mansplainer. Yes. But what's interesting is that almost makes it immoral, or at least amoral, right? So yeah, I was chatting to it earlier on and I said, do you think some people are more ethical than others? Well, yes, but it's very difficult to say for sure, et cetera. And then I said, well, do you think Pablo Escobar was as ethical as Mother Teresa? And then it said, well, I can't say because I'm just a large language model and I don't have any views, but one killed all these people and did this and the other did all this. And the charity, it said, well, but the organization that Teresa is affiliated with has a track record for child abuse. And I mean, I was just teasing it, right? And then, yeah, Pablo was mostly trying to legalize drugs and was hounded by a country that has really failed at the war on drugs and has created more.

And then it actually really understood where I was going and started to offer counter-arguments for it, almost agreeing with me. So I think the trainability aspect of it is quite interesting. And imagine if this thing can get into the social media footprint of somebody or scrape everything there is on one person and actually adopt that personality or mimic that person or construct profiles of that person, which it does. I mean, I did my own personality, obviously a reputation-based, crowdsourced personality profile, through ChatGPT, asking it what my personality is. You can ask it to profile Elon Musk or any famous person. Okay. And it's actually accurate enough to the point that maybe 50 or 60% of people who are employed in the executive assessment industry might have to up their A-game.

Dr. Charles Handler:

Yeah. Well, yeah. I think it kind of falls under the category now, the alarm bell on that one is the deep fake. We could potentially have this thing impersonating other people or something like it. And that could be weaponized, it could be used in a lot of crazy ways. But it's interesting. I haven't asked it to profile people or myself, but as soon as we

Dr. Tomas Chamorro-Premuzic:

Conclude

Dr. Charles Handler:

Here, of course, I'm going to start. But one of the things that we found, and this is a little bit more pedestrian or whatever, but I started to ask it to do a job analysis for a particular (oh yes) job. And so I asked it to do, I kind of set it up for a potential failure, or for a real test: we have a job that appears to be a really normal job in a contact center, but it has some different elements that we came in and did a bespoke study to understand what those were, and built it into the assessment. So when I said, please do a job analysis or profile the job of this job X at company X, it came back with a very accurate basis, like a foundation of a lot of stuff that's relatively generic, but if you are not a trained person like us, it would be very helpful.

But it missed the fact that this is also actually tacitly a sales job, and we want to hire people who have certain characteristics that allow them to be good at that. And it's a tech support job where we've validated that people who love technology excel in this job and stay in this job longer than those who don't really care. So we have some scales in there, well, ChatGPT completely whiffed on those. It doesn't have that depth of experience or knowledge. So it goes to show that it's a good foundation, it's a good partner, but don't rely on it to do everything. And to me, that's a microcosm of AI in general right now. Right? It's a good partner. It can help you be efficient. It gives you some good advice, but boy, don't just blindly follow it or think it's giving you everything.

Dr. Tomas Chamorro-Premuzic:

Yeah, I agree. And I guess the only point I would make is that I generally feel better not dismissing new technology. And so I'm a bit uncomfortable when people say, oh, it's inaccurate, because let's not forget, this is version three and it's already pretty good. What about version five, 10, or 20? So I'd rather, as The Godfather said, keep my enemies closer, and if this is going to compete, I want to really know it and understand it. And then look, I think the fundamental implication for me is that if we're going to collaborate with this technology, then it requires a little bit of ingenuity and thinking. And what's clear to me, whether it's recruitment or other areas of HR or any kind of job or industry: if you ignore it, there's a high or significant probability that your competitors don't, and they become better at their job because of that.

And you mentioned recruitment is a good example: writing job ads, or even, we've put resumes through it to assess job fit with certain openings, and it's pretty good at doing that, which of course, everybody's using applicant tracking systems that have search and match functionality there. And then as I was listening to you, I asked ChatGPT to write a non-specific job ad using all the meaningless work jargon you can include, which is a nice example of what it can do. Creative it is, though, sure: results-oriented, innovative and proactive, rockstar, ninja, forward thinking, out of the box. And it churned out 800 words of a job ad that you could probably put out there and lots of people would apply. And so the time-saving potential of it is significant. I wrote about this recently. I think the main implication for human expertise is that it makes the USP of humans more focused on asking questions rather than answering questions, on actually verifying whether the information or the insights it produces are accurate or not.

I think that is a good measure to assess our own expertise or benchmark, just like if you go to Wikipedia and you check anything about assessment, HR, IO, et cetera, you will be able to point out the errors within seconds. Yeah. But somebody who doesn't know anything about that will say, wow, this is really interesting. So again, having the ability to find errors is the other one. And then the third one is knowing how to go from insights to action, because it is no different from reading any predictions about the price of Bitcoin, the housing market, the economy, et cetera. Well, there's a lot of information out there, and with more information comes noise. If you know what's a signal and you actually have the impetus to act on it and make better decisions, then it's useful. And if you don't, then it's just metaphysics.

Dr. Charles Handler:

And I mean, there's no substitute for doing your homework, essentially. And even if we did a research effort that was highly manual back in the days, when I was in school, going to the card catalog at the library, whatever, and pulling this stuff out, you're going to get a variety of different opinions and you're going to have to make it your own based on what resonates with you and what you feel is real. So failure to do that, I feel, is really just laziness on someone's part. Yes. And so there's going to be some natural selection there too, probably. And people who are just relying on this entirely aren't really doing their best work, I would say. Again, there's some base layer things that are probably good. I think there's an interesting parallel too, to this and something that has been more hyped, because, I'm going to put this out there also, you mentioned earlier why everybody is talking about it.

We love to have something to focus on. It's so interesting. Yeah. There's so many stories, you've just come up with several, an almost unlimited amount of fun things you can do to come back and report on and say, ah. And I liken that a little bit to AI art generators. I'm an artist in my spare time sometimes. And I jumped on one of those relatively early on and started playing with it. And it was so fun and fascinating to do. In some sense it was really literal, and then you could toggle things and make it less literal. That was very interesting. But now we're seeing people even saying, well, this thing's infringing on my rights as an artist. It's copying me. So there's all these same things that are happening with it, and it's so different in that this is kind of unstructured, more abstract things. It's dealing with the art side of it, but still very interesting, and one of the many houses of being human that these kinds of things are going to enter into.

Dr. Tomas Chamorro-Premuzic:

And I think it's a really good analogy with art. Again, I think obviously there's no shortage of people that, when an AI-generated painting sells at Sotheby's for half a million dollars, say, oh, this is disgraceful, this is disgusting, et cetera. Right. Listen, this is just footnotes to Andy Warhol; he invented this in the fifties when he created pop art exactly to ridicule most of contemporary art. The only difference between My Bed by Tracey Emin in the Tate Modern and my actual messy bed is that hers is an artwork and it's there. And she came up with the idea. So I think, yeah, I'm also a music fan, and I think it's been impressive to see, for example, AI improvising like Miles Davis and fooling 90% of judges, who can't tell the difference, or AI running out of a smartphone finishing Schubert's Unfinished Symphony.

But I think instead of saying, oh my God, there's not going to be human composers or human painters or human artists, we need to make art with these tools. And ChatGPT is no exception. Instead of checking whether it can be funny, creative, or curious, our curiosity and creativity and humor could be on display if we use this tool to actually create something that wasn't there just with the tool. That's why I think AI can be an enhancer of human ingenuity and imagination, but it requires you to play with it, study it, and understand it.

Dr. Charles Handler:

Yeah, I think it's interesting. So I thought immediately, when you were talking about artists, that I would say 90% of people who look at a Jackson Pollock say, ah, I can do that. What is that? Anybody could do that. But go try to do it. Well, now you've seen the framework, maybe you can because you know the technique, but nobody's going to do it to that level, and it is so innovative. So that was just a thought I had. But another thought, I think about this analogy a lot: if you think about computers playing chess, you're at the point now where a computer can beat anybody at chess, but a chess master with a computer can slay any computer; it's that combination. So exactly. I think there's this duality, where these things seem evil because they do things that seem absolutely otherworldly, and more and more so now.

But at the end of the day, if you just break it down to what it's actually doing, you can understand that it's probably not delivering the hype level that we're thinking. And as far as singularities and all that kind of stuff go, I believe it will eventually happen. I'm going to be completely honest here, and people might think I'm, and I'll never know the answer, but part of me feels like humans were invented to create the machines that will eventually be running things; we may not even be around. And I'm not dystopian about it, not fearful or scared, but I do feel like there may be this grand scheme evolutionary thing where it was our job to build these things. Yeah. And they're the next thing. I don't know. I don't know.

Dr. Tomas Chamorro-Premuzic:

But I mean, philosophically, it's just a tool. And technology in itself is a set or range or family of tools, mostly. You have to qualify it by talking about digital technologies and AI in that family. But it's no different from other tools that we invented. We've always ensured our cultural evolution through the use and the adoption of tools. I think you're right, maybe, taking this or stretching this into dystopian scenarios of synthetic, cybernetic cyborgs, et cetera, where you put the hardware to the software, and it could be quite dangerous already; if you look at automated drones that can go into places and be very kind of lethal and destructive, it's there. At the same time, I think humans are perfectly capable of self-destroying without AI, whether it's climate or civil wars or nuclear wars. So I wouldn't blame AI for our dark side or our potential for destroying ourselves.

I think, again, whether we turn this tool into something useful or practical depends on our intentions and our capabilities. I can't remember who said that technology is neither good nor bad, nor neutral, which is a good way of putting it. But I think what's important is that we try to keep humans in the equation, in the spectrum, and that we don't dehumanize either work or life just because technologies are becoming more human-like. And what I worry about, Charles, is that when we optimize everything for technologies and technological efficiencies, we actually fall into the dangerous and problematic area of trying to optimize humanity in a way that makes us more like machines. It's like, yeah. And maybe that will be the difference between human art, whether it's music or paintings or other, and one produced by machines: the fact that even if AI can create and improvise like Miles Davis, they will not be Miles Davis. No, exactly. What is left? What is left from Miles when you subtract AI's ability to copy? Well, his sense of style, his sense of humor, his cranky voice and his hair, and even the flaws and polemical or controversial aspects of his personality that machines will probably not even want to pick up on.

Dr. Charles Handler:

I totally agree. And I think, again, it only knows what it's heard from Miles Davis, right? Miles Davis might evolve. Most musicians evolve quite a bit from other things that they see and hear that inspire them. I don't know, and I haven't played with that feature of it, but it definitely just keeps coming back to: this is a tool that's just used by humans for what humans are going to do, by the nature of their humanity. And it can help accelerate those things or make those things better or more evil or whatever it is. But the intention has to be there from the human to utilize it in that capacity. It's stupid, really, even though it seems really smart. And the ChatGPT thing, it's just so natural the way it converses. I think that's part of how it's slipping past some of our caveman defenses, the caveman principle, Michio Kaku had that caveman principle, that we fear things we don't understand because we haven't seen them and we can't wrap our heads around them. So the more that it comes over that, also that uncanny valley of not being quite human enough that you discredit it or it spooks you, it's weaving its way in, getting past our defenses in some sense. So

Dr. Tomas Chamorro-Premuzic:

Yeah,

Dr. Charles Handler:

I wanted to ask, on the point of this being a tool that can help us: I'm curious, to the extent that you are able to talk about it, because you may have some proprietary things going on, how are you thinking about using it in your daily work and your company? In terms of, hey, this is now a tool, how can we leverage this? I'm sure you're exploring that, and again, there may be some private stuff there, but to the extent you can generalize or speak to it, that'd be great.

Dr. Tomas Chamorro-Premuzic:

Yeah, no, absolutely. So I think if you think about, sort of, talent acquisition or recruitment, there are a lot of tasks there that are pretty repetitive and standardized: writing job ads, communicating with candidates, translating text from candidates into a profile or a model of who they are, matching candidate features to job openings. And then the one I've been playing with a lot is actually creating things like personality feedback, or feedback that you use in coaching, right? Debriefing people. Yep. It's quite amazing. You know, you can ask ChatGPT to tell a neurotic individual that they are neurotic in a way that won't hurt their feelings, focusing on highlighting the positive. As you know, it's always been very, very difficult, especially when you want to democratize feedback for the direct-to-consumer market, or B2C, and tell people from the overall population that they may be low in conscientiousness or high in neuroticism or low in openness.

We always say, oh, there's no good or bad, but at least in American culture, and to some degree in Western society, people generally see one end as better than the other. So you can actually completely reverse this and ask this AI, or this language model, to provide pages and pages of positive feedback on the, in inverted commas, negative side of any trait, and to do it in a style that would actually fit that person's personality. So if you are highly conscientious and have attention to detail, provide all of the information, reference studies, et cetera, and it does that, right? And of course, if you wanted to actually give this to a person, you would need to read it, check it, and verify. But I can tell you my teams have spent days looking at these texts, and I would say 60, 70, or maybe 80% of it is usable.

And then the rest, obviously, what happens with the IP? Do you have to quote and reference ChatGPT? Well, not really, because it's taken this information from general data repositories, so it hasn't actually created it. It's simply borrowing it and adapting it. So who owns it? We don't know, because we don't know the sources. So I'm actually really interested in the ethical and legal aspect of this as well, which we are looking at. And then the other area, I would say, is that in general I got really interested in natural language processing and metadata as a means to measure, model, and even automate inclusion. So, very, very simply, some of the projects we're working on right now: getting hold of company data, the content and context of emails, and checking whether your demographic kind of category, so let's say you are a middle-aged white male engineer in a company, whether that predicts the response patterns and the communications of people to you and with you, compared to, for example, being a member of the outgroup, let's say a Black or Hispanic female who is either very old or very young, et cetera.

And you can actually really help organizations measure and diagnose inclusion, which is mostly something that they haven't done. Everybody can measure diversity and can have their targets, but inclusion is how people are actually treated once they're in an organization. You don't want people to snoop on employees' emails and monitor them, and it would be unfeasible to do it at scale. So we're starting to use this and similar technologies to mine these data and try to translate it into insights of what goes on, obviously at the group level and preserving the anonymity of people. But we need to know what category they're in, and that's how we can basically get a measure or a sense of inclusion, diagnostics and metrics.
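
As a rough, assumption-heavy sketch of that inclusion-diagnostics idea (not ManpowerGroup's actual pipeline), the snippet below compares reply rates on anonymized email metadata when sender and recipient share a demographic category versus when they do not; the column names and grouping logic are hypothetical.

```python
"""Hypothetical sketch of the email-metadata inclusion metric described above.

Assumes anonymized metadata with columns sender_group and recipient_group
(demographic category labels) and replied (1 if the message got a reply,
else 0). This is a toy illustration, not ManpowerGroup's actual methodology.
"""
import pandas as pd


def inclusion_summary(emails: pd.DataFrame) -> pd.DataFrame:
    """Compare reply rates when a sender writes to their own demographic
    category ("in-group") versus another category ("out-group")."""
    emails = emails.copy()
    emails["same_group"] = emails["sender_group"] == emails["recipient_group"]
    summary = (
        emails.groupby(["sender_group", "same_group"])["replied"]
        .mean()
        .unstack("same_group")
        .rename(columns={True: "in_group_reply_rate", False: "out_group_reply_rate"})
    )
    # A large gap flags groups whose messages get systematically less
    # engagement from outside their own category: a rough, group-level signal.
    summary["gap"] = summary["in_group_reply_rate"] - summary["out_group_reply_rate"]
    return summary


if __name__ == "__main__":
    demo = pd.DataFrame({
        "sender_group": ["A", "A", "B", "B", "B", "A"],
        "recipient_group": ["A", "B", "B", "A", "A", "A"],
        "replied": [1, 0, 1, 1, 0, 1],
    })
    print(inclusion_summary(demo).round(2))
```

A persistent gap in how one group's out-group messages are answered would be the kind of group-level inclusion signal Dr. Tomas describes, surfaced without anyone reading individual emails.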

Dr. Charles Handler:

I mean, that's an excellent use case of how this can help us be better. Because we all know, excuse me, it's a separate podcast, and many more, about DEI and inclusion in terms of how companies execute on it. It's super easy to say you're doing it, but when you actually try to get some measurable outcomes to guide you and to help get some feedback on what you're doing, that becomes very, very difficult. So that's a very clever idea, and I think that many more of us will have many more clever ideas that will only ultimately add to our body of knowledge. And I think with this one too, there's usually a pretty big gap between research and practice, and I still think there is when you talk about refereed journal publications and stuff, but I feel like these types of tools can help accelerate how quickly we're able to ask these questions and gather the data that we need.

Again, another way it helps cut out that base layer: as researchers, we can cut out a lot of stuff that holds us back from keeping up with the pace of change. And also, we can spend more effort on the good stuff, or on multiple studies at once, when we don't have to do as many things on the bottom end of it to make it do what we want. Or we don't even have the technical abilities, and we don't have the budget or the resources, to be able to go find those people or build it. It's a lot like people renting or partnering with a platform instead of building their own. I mean, a lot of times it's like we don't have the time and energy to do that, but we need what it provides for us.

Dr. Tomas Chamorro-Premuzic:

And I go back to your comment about the chess player, and with technology, humans working with AI being better than one without the other. But in order to get there, you still need to upskill and reskill. Let's take our field, which is a very small field, but if you look at IO psychology or organizational or business psychology, you and I know that 10 years ago, or around that time, when big data was starting to be big in HR, the common reaction was, oh my God, this is just silly, stupid, we can do it all, et cetera. And now those same people are either really into this or at least not explicitly or publicly ignoring it, and pretending that they care, because these things took off. And in the process, people had to learn and be curious and reskill and upskill so that they can become better by using these tools.

And I think right now it's also important to acknowledge that in some cases the technology might be better working alone, without the humans. I mean, it's certainly the case if you haven't got real expertise. So most airplanes fly themselves, but they have competent pilots that are there checking, and they usually help, even though most accidents are caused by human errors. Yes, I think when self-driving cars really work, they will work better if there isn't a human in the loop, because humans in the loop, or a human driver out there, are likely to kind of ruin things for everybody else. And in the same way, I think if something like video interviews with algorithmic or AI scoring could really work, because it's okay right now but not really that good, then it's feasible to think that they will be better if there isn't a human in the loop.

Because the minute you introduce the human, biases are inevitable. And no unconscious bias training can help a human forget that the person in front of them is either male or female or seems like a male or a female, old or young, poor or rich, et cetera. But that's also part of the reason we're doing this: because we are maybe so smart that we can create technologies that have the ability to de-bias and increase meritocracy, which is what I'm really interested in, even though you don't get there from one day to the next, and it's a trial and error process. And yeah, I think not being scandalized when technologies go wrong helps, especially if we live in a world where the status quo is shocking and a little bit of progress could actually help a lot.

Dr. Charles Handler:

Yeah. So one interesting note: when's the last time you heard anybody even say big data, right? It's all big data now, but that was kind of the harbinger, that was our entry point into this new kind of paradigm that we're working in. I think you're right about that. Look, everybody's saying, well, machines are biased, machines are biased. Well, that's because they're trained on something that makes them biased. So for that to work in the way that you're talking about, we do have to make sure that these things are trained in a way that they're not bringing the bias in their base programming, and that's going to take work, but it's totally achievable if you ask me. So as we kind of play out here, thanks so much, it's been a great conversation, I want to make sure you get a chance to let everybody know: is your book actually out now? Can you buy it on Amazon?

Dr. Tomas Chamorro-Premuzic:

That depends on when now is, because now is something relative for us. But let's say that it's out on February the 28th, and I'll let the audience decide whether that is in the future, in the past, or maybe today. But yeah, they can pre-order it on Amazon now already. Again, it's called I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique, on Amazon or their favorite store, online or brick-and-mortar. Very good. And to find out more about this book and the previous books, and anything I do, it makes me look very old, but my website is the place to go. You mentioned big data as a kind of anachronism; a URL is also one, but it still is www.drtomas.com, Tomas with no H.

Dr. Charles Handler:

Cool. Very nice. Well, I'm going to get my copy and I'll bring it to SIOP so you can autograph it for me.

Dr. Tomas Chamorro-Premuzic:

Yes. And I hope you like it, and, you know, keep up with your great work. I think it's really, really important that we get truly trained and expert IO psychologists to be curious about the innovations and not ignore, resist, or refrain from progress, but also to do it with a healthy degree of skepticism and not jump or dive into any shiny new object and pretend that everything is new, that everything has been invented today, and that the past is irrelevant, because the only thing we have data on is the past and maybe the present, and the future is sheer speculation.

Dr. Charles Handler:

And I think as IO psychologists we're trained to be skeptical; for better or worse, sometimes the default is too much skepticism. But thanks so much. I really appreciate it. Great conversation.

Dr. Tomas Chamorro-Premuzic:

Great pleasure.

Dr. Charles Handler:

Science 4-Hire is brought to you by Sova Assessment: recruiting software powered by science. Sova's Unified Talent Assessment Platform combines leading-edge science with smart, flexible technology to make hiring smarter and easier. Visit sovaassessment.com to find out how your organization can provide an exceptional candidate experience while building a diverse, high-performing and future-ready workforce.

March 28, 2023
Dr. Charles Handler