My guest for this episode is Collin Hawkes, Senior Lead I/O Psychologist at Lumen Technologies, an internet services company whose mission is to connect people with technology. Join us today for an interesting conversation that focuses on the use of AI and machine learning to break jobs into a variety of elements, a process also known as “job analysis.” We share our experiences in this nascent but important area and use them as a bridge to other topics that relate to the use of AI in I/O psychology.
Our discussion begins with the sharing of our mutual admiration for job analysis as both an art and a science. We have both spent countless hours laboring over the tedious aspects of this essential tool of the trade.
This crucible has led us both independently to the idea that there must be a better way to tackle the critical but painful minutiae while still staying true to the art of the whole thing.
Collin notes:
I'd literally go line by line through these tasks and say, okay, this specific task statement, what does this line up to in terms of a competency? And so I have this Excel sheet with all these tasks, and I would go down and I was like, what am I doing this for? Why am I doing this every single time? And so it sort of created a thought in my mind: well dang, I could train an AI to do the same thing that I'm doing all the time, and it would probably be better than me at doing it.
We then discussed Collin’s pet project, the T-Stat, an automated tool that categorizes task statements, which are essentially the “atoms” of a job because they identify each and every task required to perform it. These tasks are then aggregated into higher-level factors such as competencies. I then shared my own experience in building an AI-based tool that takes transcripts of job analysis interviews and, when these are aggregated, identifies the competencies and traits that are most important for a job.
Through our discussion of these projects, we find common ground in the use of AI in other areas of our trade, including assessments, video interviews, and job matching tools.
The discussion is definitely worth a listen!
For anyone interested in learning more about Collin’s app, check out t-stat.com.
Catch Collin Hawkes on LinkedIn.
Announcer:
Welcome to Science 4-Hire with your host, Dr. Charles Handler. Hiring is hard. Pre-hire talent assessments can help you ease the pain. Whether you don't know where to start or you just want to stay on top of the trends, Science 4-Hire provides 30 minutes of enlightenment on best practices and news from the front lines of the employment testing universe. So, get ready to learn as Dr. Charles Handler and his all-star guests blend old-school knowledge with new wave technology to educate and inform you about all things talent assessment.
Dr. Charles Handler:
Hello and welcome to the latest edition of Science 4-Hire. My guest today is none other than Collin Hawkes, someone that I have known a little bit over the last, I'd say, year. And we've discussed how we can automate job analysis, how we can help that tedious but very important process along so that we can get the benefits but be more efficient. And so he and I have talked about that a lot. In fact, we've had so many interesting conversations that I thought it would be awesome to have him on as a guest. So, Collin, I'd love for you to introduce yourself and tell us about your role and your company as well.
Collin Hawkes:
Yeah, thanks Charles. Happy to do that. So, I'm an I/O psychologist, same as Charles, a little bit less fancy than him, but the same field. I've had a lot of different jobs. I've worked in healthcare and automotive manufacturing, and now I work in the technology space. And I would say I'm really passionate about process improvement, always improving the things that we do and the way that we do them, and working towards a model where everything we do grows our capacity to be more effective in the future. So that's what I am. And then I work for a company called Lumen Technologies right now. What we do is we connect people with technology. We own the brand CenturyLink. We're connecting people through applications. We own a lot of fiber optic cable that's in the ground. We own these huge cables that go into the ocean from one continent to the other. So it's a pretty cool thing. And as an organization, we're really working towards a similar theme that I'm passionate about myself: driving this innovation and trying to do things in a new way.
Dr. Charles Handler:
Cool. That's awesome. Well, that's why we're talking today, and so we again came together just talking about some efficiencies and some applications for job analysis. For a long time now, and my background is, I don't know, 25 years, job analysis has been a constant. And I really love it. I love being out in the world, watching people do jobs and learning about jobs. And I've learned so much about just about every sector of the economy. You also gain a real appreciation for the people that are coming to work every day. We do a lot of blue-collar, a lot of hourly jobs, and you just gain a real appreciation for the backbone of our workforce globally, really. It's just a great thing. And you get out of the dang office, you get to walk around and do some stuff. So we've all had some fun, but it can be tedious.
And I think you, Collin, can share your experience. The better you get at this, you want to line up a lot of interviews and a lot of touch points, because it's important and also, for compliance, you're making sure that you've got a good sampling. But we always joke, myself and my team, that after the fifth, sixth, whatever interview, we can pretty much nail it. There's still things you learn. So I guess the point being is that there's a lot of work there, and maybe we can do five or six interviews and then kind of automate the rest and see if there's some cohesiveness there. I've had those ideas a lot. I saw a presentation at SIOP about automating job analysis, but it turned out to be very technical: okay, what are the machines you have to be able to operate, and that kind of stuff. So that's all in a book somewhere in a description, a little bit soulless. I have my own experience with this, and it's not nearly as detailed as yours, so I'd love for you to tell us a little bit about the project you've embarked on, what you did and found, and I'm going to listen intently.
Collin Hawkes:
Okay, yeah, sounds good. I think, to your point, we know that the basis of almost all the work we do in I/O psychology is coming from job analysis, and a thorough job analysis is critical, because if we don't understand the role, we can't possibly use it in the selection space, which is mostly what I'm into and what you're into. And then I think its benefit also is building stakeholder buy-in, right? If we were to just go fully on our own, do a job analysis and never talk to anybody, they wouldn't so much believe it. And I think, exactly as in the discussion that you and I had, Charles, we kind of came up with pretty much the same idea right around the same time: hey, wait a minute. And so I'll sort of walk through how I came to be like, well, we should consider doing this, right?
In that process improvement mindset, what this originally stemmed from: I've worked for several different large organizations in my career, and we usually have some sort of job catalog. My company now has about 6,000 different job codes. And part of my job leading the selection assessment space here is understanding how those specific jobs align to different assessments that we have in play and making sure we're doing that. And it's always changing. Every week you have 10 new jobs created and some archived. And so it's an ongoing process of making sure you're keeping your job analysis up to date and you're understanding the new roles that you have coming in. And so my thought process in working towards making this automated tool was that I was spending all this time reviewing job documentation. And when I think about job analysis, I think about it in sort of three main buckets.
One is you're reviewing the job descriptions, or whatever documentation people have about the job. The second one is you're holding these focus groups and understanding more about what the job is. And then the third one is, a lot of times we'll use survey data to understand, maybe from a larger population, what competencies are really important. And in our organization, we use a couple different assessment selection tools. One of the ones we use is HireVue. This is an automated video interview platform that measures job-related competencies, and we use it in an automated way. So it ends up delivering us how well people line up on these different competencies. And many other companies now have used a lot of technology to automate processes by training it on what human experts have done.
And so, after spending time understanding their product quite a bit and spending all this time sitting there looking at job descriptions, I'd literally go line by line through these tasks and say, okay, this specific task statement, what does this line up to in terms of a competency? And so I have this Excel sheet with all these tasks, and I would go down and I was like, what am I doing this for? Why am I doing this every single time? And so it sort of created a thought in my mind: well dang, I could train an AI to do the same thing that I'm doing all the time, and it would probably be better than me at doing it. And so we partnered with this company called Graber and some of their coders to build this tool.
Essentially what we did is we took a fair amount of task statements and we connected them to these specific competencies. And we did this a bunch of times, and the tool learned from that. And so now you can take new task statements, put them through this tool, and it'll connect them to a specific competency. And I sort of described this a little bit before: I was using a screwdriver, and maybe now I'm using a drill. And it's not always the right tool if I'm doing a different part of my project; you can't always use a drill to do everything. You still need to do, yeah, of course, the other parts of the job analysis. But it can be helpful in automating a piece of the process.
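In broad strokes, the approach Collin describes (label a set of task statements with competencies, train a text model on those pairs, then let it categorize new statements) can be sketched like this. The competency names, example statements, and model choice below are illustrative assumptions, not the actual T-Stat implementation:

```python
# Minimal sketch: task-statement -> competency text classifier.
# Labels and examples are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled set of (task statement, competency) pairs
training = [
    ("Presents project updates to senior leadership", "Communication"),
    ("Writes weekly status reports for stakeholders", "Communication"),
    ("Explains technical concepts to non-technical audiences", "Communication"),
    ("Drafts customer-facing email announcements", "Communication"),
    ("Diagnoses faults in fiber optic transmission equipment", "Technical Troubleshooting"),
    ("Repairs network routers and switches in the field", "Technical Troubleshooting"),
    ("Configures test equipment to isolate signal failures", "Technical Troubleshooting"),
    ("Inspects cable splices for degradation and damage", "Technical Troubleshooting"),
]
texts, labels = zip(*training)

# TF-IDF features plus logistic regression: simple, transparent, easy to retrain
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

def categorize(task_statement):
    """Return the best-matching competency for a new task statement."""
    return model.predict([task_statement])[0]
```

In practice a tool like this would be trained on far more statements per competency, but as Collin notes later, even a handful of examples per label is enough to get a rough first version.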
Dr. Charles Handler:
Yeah, I tell you, that brought something back. Interesting. I am always happy to share my memories when they're relevant here, and sometimes when they're not relevant. But task-based job analysis, I think it's pretty uncommon in my world to be doing it at that level. But I worked in the public sector, and there we had to do everything. So, my story: I did police and fire work, and we were under consent decree. I mean, we had a book this thick, for those of you who are listening, let's say it's two, three inches thick, of task statements, hundreds and hundreds. I would fall asleep in the meeting when we were going over every single one of 400 task statements. I ended up drooling all over the notebook. So that's what I always think of. So exactly, could we have automated that? Yeah, somehow. Now, of course, you want the expert.
That is one of the things: if we look at how AI can help us, and that's the key, help us. It's not going to replace us. So, what are those tedious aspects that we can automate? And my lens was really a recent project where I think we had to do 80 interviews across two countries. And my thought was, well, hey, we typically will do the interviews, or focus groups slash interviews, and then we'll go back, and we'll manually go through all of our notes and start pulling out themes, words that we hear a lot. That's where the artistry comes in. You're looking at all those words, you're identifying the themes, and then you start to build a model. And a lot of times we'll even do this intermediary step where we'll make our own little survey tool.
And as we read through each thing, we'll start populating that and we'll aggregate the data. So my thought was, well, it's not that we don't want to do any of these interviews, but what if we could do some of 'em, get a transcript, do some NLP, and then be able to confirm our model, take a lot of that middle work out, where the most common words or clusters of words come together in one spot. And you could say, all right, we have pretty good confidence that what we found in the first 10 or 15 of these is really going to be stable over everything, and here's kind of our model that we could move forward with into a survey or something else. So to me it was that volume reduction savings. There's a little bit of similarity there.
I'll tell you from our experience, and we've talked about this, this was something that we'd really never done before. My team and I were learning on the fly, and where we hit the wall was, even after we pulled the noise out, we had hundreds of different terms but no lexicon to show how those terms were related to one another. So that's where we are still right now working through it. And if anybody in the audience can help us with that, that's great, we'd appreciate it; get in touch with me. But we don't have that lexicon. We don't have a training data set we can use. So we're left with, okay, do we have to manually go through and code these hundreds and hundreds of words into clusters of words? We looked at doing a word cloud or something to see which ones occur most frequently, but that didn't really quite get to the point. So we're still playing around with it. We've got other data scientists from our company involved now too. So it's a slow-moving kind of side project, but we hit the wall there. We have not been able to save the world on our project yet. So what kind of limitations or things have you run into?
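The "middle work" Charles describes (surfacing frequent terms across interview transcripts and grouping related ones without a hand-built lexicon) can be roughed out as below. The transcripts and terms are invented, and a real pipeline would add stemming, stopword removal, and a proper clustering or embedding step; this is only a minimal sketch of the co-occurrence idea:

```python
# Sketch: count term frequency across transcripts and group terms that
# tend to co-occur in the same interview. All text is made-up toy data.
from collections import Counter
from itertools import combinations

transcripts = [
    "coaches team members resolves conflicts mentors juniors",
    "mentors new hires coaches staff on safety",
    "tracks budgets forecasts spend reviews invoices",
    "reviews invoices forecasts quarterly spend",
]

# 1. Frequency: in how many interviews does each term come up?
docs = [set(t.split()) for t in transcripts]
term_counts = Counter(term for doc in docs for term in doc)

# 2. Co-occurrence: how often does each pair of terms share a transcript?
pair_counts = Counter()
for doc in docs:
    for a, b in combinations(sorted(doc), 2):
        pair_counts[(a, b)] += 1

def related_terms(term, min_shared=2):
    """Terms that co-occur with `term` in at least `min_shared` transcripts."""
    out = set()
    for (a, b), n in pair_counts.items():
        if n >= min_shared and term in (a, b):
            out.add(b if a == term else a)
    return out
```

This kind of co-occurrence table is roughly the "clusters of words in one spot" Charles is after; the unsolved part he mentions, mapping those clusters to competencies, is where a labeled lexicon or training set would come in.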
Collin Hawkes:
Yeah, no, that's a good point. I think you and I talked about that challenge before. And so the tool that we've been developing right now, I'm calling it the T-Stat: the task statement auto-categorization tool. And the challenge has been, I would describe the T-Stat today as sort of a middle schooler: it knows what it's doing sometimes, but it's not always right. And our plan on this is to have people go into the tool, use it, and then correct it and tell it what's right. Because a lot of times now what I'll do is I'll have it go through and rate a bunch of task statements, then I'll have myself or another I/O psychologist go through and rate them again, and just blind match and see how close it is, right? And so the update that we just were able to make this past week was changing the tool so that someone goes in, they upload all these task statements, they run the tool, and it gives a recommendation.
And then you can go in and essentially choose a new competency that's better aligned to that task statement than the one it originally gave out. And we'll be able to use that data to train it over time to improve it. And so that's one of the ways that we're trying to work towards refining it for the future. I mean, even if you consider the perfect outcome: like, Charles, if you were to go through and rate 20 of these task statements and I was to do the same 20, we wouldn't always be a hundred percent exactly the same. Of course, because a lot of our task statements are just written by managers. They don't think a lot about, or have a lot of training in, how to write a task statement: one person doing one thing at one time to one object. And so a lot of times you'll have these task statements where multiple pieces are happening in one of those areas, and it's a little bit hard to decipher what you should prioritize in that one task statement.
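A minimal sketch of the correction loop Collin describes: the tool recommends a competency, a reviewer accepts or overrides it, and every reviewed pair is folded back into the training data for the next refit. All names, examples, and the model choice are illustrative assumptions, not the T-Stat's actual internals:

```python
# Sketch: human-in-the-loop correction and retraining for a
# task-statement classifier. Examples and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled = [
    ("Presents findings to leadership", "Communication"),
    ("Writes stakeholder status reports", "Communication"),
    ("Repairs faulty network hardware", "Technical Troubleshooting"),
    ("Diagnoses signal failures in the field", "Technical Troubleshooting"),
]

def train(pairs):
    """Fit a fresh model on the current labeled set."""
    texts, labels = zip(*pairs)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model

model = train(labeled)

def review(task_statement, corrected_label=None):
    """Recommend a competency; if the reviewer overrides it, record the
    correction so the next retrain learns from it."""
    recommendation = model.predict([task_statement])[0]
    final = corrected_label or recommendation
    labeled.append((task_statement, final))  # reviewed pair joins the data
    return recommendation, final

# A reviewer overrides a wrong call, then the model is refit on the larger set
review("Coordinates cross-team launch schedules", corrected_label="Planning")
model = train(labeled)
```

The design choice here is deliberately simple: rather than online learning, the whole model is refit periodically on the accumulated reviewed pairs, which keeps the loop easy to audit.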
Dr. Charles Handler:
Yeah. So there's some other applications I'm starting to think of. So yours is definitely filling a need of, again, reducing tedium. Are those highly technical task statements? Do they relate to soft-skill competencies, or are they more like "fires up router with optical cable optimization" or something, versus "presents to audiences of different levels with great ease" or something like that? Soft versus hard skills.
Collin Hawkes:
Yeah. Right now I would say there's a mix between soft and hard skills in what's incoming. We've trained the algorithm on just competencies, right? It's really the output. Actually, it's funny, because one of them had Excel; it was one of the outputs I had to put in there, because I need it to know how to use Excel, right? But I would say the output today is those soft skills. And I had thought about, and this might be in the future for us too, spending a lot more time on a lot larger amount of these task statements and connecting them to O*NET. So I even originally thought about trying to use all of O*NET's data and saying, I'm just going to use everything that O*NET says and link those task statements to the competencies. But they have a more complex model; they have skills. There's a couple different areas that they're getting into there.
Dr. Charles Handler:
Yeah, there's a lot in there.
Collin Hawkes:
So I ended up not going that route initially, and maybe we will in the future, because honestly I could just steal all their stuff and run it through the tool, and then it would essentially automatically be able to tell you what the output was. But that'd require a lot more training. I think one of the realizations for me in the technical space of it: I was imagining I was going to need a couple hundred task statements for one competency to get it trained. And the coder guys, my brother, who runs this company called Greer, he was like, it doesn't take that many. And so we ended up starting with only 10 task statements for one competency, and that got us our middle schooler. And so we're trying to grow it and get better and use more data to be more accurate. I would say generally right now the tool is probably accurate about 70% of the time, in terms of if we had another person rate it that was knowledgeable in the area.
And then it also has an output that sort of tells you the level of accuracy the tool believes it's achieving. And so that really helps too, I think, to say, look, what you could do now is put your task statements in, see what comes back, and then the ones that have a high level of confidence, you could use those, and the other ones you could go back through and rate yourself again. So again, to your point, it's not automating the whole process. We're just making something a little bit easier, a little bit less tedious. And for me, working in larger organizations a lot, it becomes a complexity because we have so many different types of jobs, and we need to ensure that we're categorizing them. We have the job family and job function, things like that, and that's always a good place to start in terms of how we match these roles to each other. But then you get into these really detailed discussions about, well, is that job exactly the same? This Engineer I, is that the same as an Engineer II? Is it really the same, a hundred percent? And so really spending that time to try to differentiate these things from each other, and doing that in a way that has a little bit of process around it, where it's not just, if I do this one day and I do it five days later, am I going to get the same result again? I don't know, maybe.
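The confidence output Collin mentions can be approximated with any probabilistic classifier: report the top class and its probability, auto-accept calls above a threshold, and route the rest to human review. Everything below (examples, threshold value, model) is an illustrative assumption:

```python
# Sketch: confidence-scored triage of classifier recommendations.
# Training examples and the 0.6 threshold are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training = [
    ("Presents updates to leadership", "Communication"),
    ("Writes reports for stakeholders", "Communication"),
    ("Repairs network hardware faults", "Technical Troubleshooting"),
    ("Diagnoses fiber signal failures", "Technical Troubleshooting"),
]
texts, labels = zip(*training)
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

def triage(statements, threshold=0.6):
    """Split statements into auto-accepted calls and ones needing review.
    Each result is (statement, predicted competency, confidence)."""
    probs = model.predict_proba(statements)
    classes = model.classes_
    accepted, needs_review = [], []
    for stmt, row in zip(statements, probs):
        best = row.argmax()
        bucket = accepted if row[best] >= threshold else needs_review
        bucket.append((stmt, classes[best], round(float(row[best]), 2)))
    return accepted, needs_review
```

This mirrors the workflow described in the conversation: use the high-confidence bucket as-is, and spend reviewer time only on the low-confidence remainder.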
Dr. Charles Handler:
That's some of the hardest work, when I look at a project and say, oh my god, there's pipe fitter level one through eight, and now we need to figure out how those are different. And then, well, how does that pipe fitter cluster in all these other job families? It's a lot of work to have to work through that. And there is some subjectivity to it and some objectivity to it. It's an interesting blend. So I've thought of a million different things; I always make notes here. I'm just going to riff off some interesting things that came to mind for me that maybe we'll have fun talking about. So I know there's also automated item writing stuff. Wouldn't it be interesting: it'll write new versions of similar items for knowledge items. Sometimes it'll write different items that are pretty much the same item, but they're worded differently, or it'll swap in number values and stuff. But there are, I believe, applications that'll actually take a body of knowledge and write you items. It'd be so interesting to see if you could go from what you're doing, like task statements, to AI building a test plan, to AI item writing for knowledge-related stuff, all
the way through. But that stuff just works so much better for the hard skills; the soft skills is where the real psychology comes in. And obviously I've been thinking a lot about ChatGPT lately and have been feeding it job analysis: hey, computer machine here, what would you say if I asked you to conduct a job analysis, or what would the results of a job analysis be for job X or Y or Z? And it's really interesting what you get back. It's accurate, but it lacks a lot of detail; it's very generic in some sense. So it doesn't give you an amazing feel for the job. And I tested it with some jobs that we have, assessments that I've built for, and they have a couple of very specific things that are differentiators. And so when I said, hey, ChatGPT, what would the job analysis be, or what are the KSAs, the competencies?
However I worded it, for the job of X at company X, it spat out something that was pretty relevant, but it missed those two things. It's a sales job ultimately, and it doesn't know that. So there's subtleties that it misses. And I would say if you were to do the human versus machine comparison, ChatGPT versus a human, on how accurately they could dial it in, there would be no question, assuming it's a person that's well trained: a trained I/O psychologist with experience. So it's not going to replace us anytime soon. And imagine if you went in and said, oh, instead of doing a job analysis for this job, we're just going to ask this AI what it thinks, and we're going to take what comes out and just build a test on it.
Because we also asked it, well, what kind of assessments would you hang on there? And it put out the stuff you would expect. So if you're not very knowledgeable in this field, it's at least a good starting point to educate yourself. But I guess what I'm getting at is there's no fantasy that you can just tell the machine to analyze a job and it's going to come out with the same kind of thing that we need to do our jobs correctly as scientists. So have you played around with that at all? What would you think if you fed one of your task statements into it,
Collin Hawkes:
what it would do, or asked it what competency a task statement would align to? Yeah, it's interesting. I've gotten in there a few times. A lot of times when I go in there, it says it's too busy and can't help me. So that's a limitation, certainly, right?
Dr. Charles Handler:
I had the same thing too,
Collin Hawkes:
I think in either case, right, it's a tool that we can use, and it's a starting point that can help us gather information. I don't really have a deep understanding of how that tool works, but I imagine it's sourcing all the information that's on the internet, so it's a quick lit review of everything that's available today. I could see it really bringing some ideas forth or being a good starting point in a lot of these areas. Certainly it would take a large amount of training and data to be able to get it to conduct a job analysis thoroughly. And there's so many instances where, I mean, just between different organizations, the language that we use about our jobs is always different. What is a specialist here versus something else there? It can be totally different. The leveling is all unique.
And so I think there's always this organization-specific flavor that's put on, whether it's a specific job code or a job title. It's a little bit different across organizations. So I see that consistently being true. I listened to a podcast you had recently, and someone asked a question: well, how would ChatGPT do if it tried to answer a behavior-based interview question? I think it'd do pretty well, because there's a lot of good examples out there of good responses to those questions. I think it'd be top tier; it'd be high percentile if I had to guess.
Dr. Charles Handler:
That thing would pass its job interviews.
Collin Hawkes:
I think it would. Listening to that brought something to my mind, a little bit of a shift of gears, but: how are we considering that as we're assessing our talent in the future? I mean, we talk about making sure that people are putting their best foot forward through the interview process. Is it really their best foot forward? We've had some instances, if you talk about the asynchronous video interview space, where you have to ask: how are you making sure this is the person you're really going to get, right? And I think it's been a topic in some areas since people have been working remotely more often than they had over the past couple years.
Dr. Charles Handler:
At least you get to see people. My favorite, and this is, I think it's been 20 years, but I still have a coffee mug with this on there, my favorite New Yorker cartoon of all time: it's a dog sitting in front of a computer, and it says, on the internet, nobody knows I'm a dog. Of course, if you had a video interview, you could probably see that it was a dog. But I just think it gets to the point that we don't really know who's out there. Some of the other, just thinking of other applications: one of the other things you see AI doing, and this is outside of our space, but a lot more in the recruitment process automation space, is saying, okay, well, we'll just look at a job description or we'll look at a resume, and we can pull the skills out and then match the skills.
Oh, by the way, we have a database of a hundred thousand skills. So we have every skill in the world all cataloged, and then we can automatically benchmark your talent pipeline and the recruitment funnel, or even internally, versus the skills that you need, based on our AI. And so first of all, I'm like, well, "skill" these days is especially a broad catch-all term in my mind for knowledge, skill, ability, competency, behavior. It means different things to different people. But there's not a hundred thousand skills out there. I mean, if you think about it, really, how many different ways are there to describe things, unless you get into these super esoteric technical things, which I don't think is part of it. So if you rely on that AI in that sense to just parse apart a job description or some information, and then parse apart a person and do the match, that's approximating the same kind of stuff that we do. And I can't imagine that stuff is very accurate.
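At its crudest, the resume-to-job skill matching Charles is skeptical of reduces to set overlap: extract lexicon terms that literally appear in each text and score the intersection. The tiny lexicon and texts below are invented; the sketch mainly shows how much nuance (values, adaptability, team fit) a literal term match ignores:

```python
# Sketch: naive lexicon-based skill matching with a Jaccard overlap score.
# Skill lexicon and texts are made up for illustration.
def extract_skills(text, skill_lexicon):
    """Naive extraction: keep lexicon terms that literally appear in the text."""
    lowered = text.lower()
    return {skill for skill in skill_lexicon if skill in lowered}

def jaccard_match(job_text, resume_text, skill_lexicon):
    """Overlap of extracted skills, 0.0 (none shared) to 1.0 (identical)."""
    job = extract_skills(job_text, skill_lexicon)
    resume = extract_skills(resume_text, skill_lexicon)
    if not (job | resume):
        return 0.0
    return len(job & resume) / len(job | resume)

lexicon = {"java", "sql", "python", "excel", "project management"}
job = "Seeking a Java developer with SQL and project management experience"
resume = "Five years of Java and SQL work; strong Excel skills"
score = jaccard_match(job, resume, lexicon)
```

Even this toy version surfaces the failure modes Charles raises: substring matching conflates terms (here, "java" would also match "javascript"), and nothing in the score captures how well the person would actually fit the role.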
Collin Hawkes:
Yeah, that's interesting. Our organization uses a tool called Phenom People, and they have that capability, and we're sort of getting into understanding how that works. And I can see its benefit: if we talk about a large volume of applicants, I can see that being advantageous in those areas. You think about people needing special skills with certain software, whatever it is; I can see that being helpful there. Again, it comes back to the same job analysis question: did we do a thorough analysis of what it takes to do the job? Did the recruiter or somebody spend the time to do that on the front end, to say this is what it takes to do the job? And then the second part comes back to the candidate: did the candidate spend enough time really telling us the skills that they have? Because obviously, if either of those things is lacking, we're going to have a mismatch.
And I think that really brings to my mind a lot of legislation that is coming out. New York City and Chicago and some of these larger locations are now legislating, I would say moderately to heavily, in this area. And it's been interesting watching how they came out with legislation initially and then are changing it over time. And I would say it's become moderately to highly evident that they don't have a deep understanding of how organizations are using tools like this today. And so we're working to follow the laws and also help them understand how their laws impact organizations, because with tools like the one you just talked about that do the skills matching, we might have to be doing adverse impact analysis at those levels too, making sure that it's not creating unfair environments for certain groups of people.
And so that's another interesting question. I haven't seen a lot of data yet from any of those tools, if you talk about your traditional validation of saying, look, here's what we're measuring and here's the outcome on the job. Do we really see that? I'll be interested; I imagine our organization will study that, and others will in the future, to see how good those job matchings are. I think one of the things that's interesting for me in that space, and it's the same with video interviewing, is that everything someone does is a function of their motivation. They could be a highly skilled interviewer, a highly skilled person that really has these competencies, but if they're not motivated to put their best foot forward through the process, are they really demonstrating their best capabilities? And so I would say that could throw things off; that's a sort of variable for us.
If you talk about that AI job matching space too: did that person spend time really updating their resume all the way? And then the question becomes, well, do you want to only have people that are really motivated to get your job? Or do you want to try to find people that are so popular and so wanted that they don't even have to spend time doing this kind of thing? And so that's sort of a little bit of a dichotomy to me: how are we measuring people, and are we considering these factors? Because it gets pretty complex in terms of the different tools that we're using to match what they've presented to us with the jobs that we have available today. I mean, for me, I read this book a while ago, Recruiting in the Age of Globalization, and it really just talks about the industrial revolution and all these different parts of human progress. And our organization talks about the fourth industrial revolution now, and that's where we are, which is really using data to change how we work.
And I think this is coming into the space of I/O psychology, coming into the HR space. And I think it's interesting because, in a lot of ways, some government agencies and people don't recognize that in the employee selection assessment world, we've been fairly regulated for quite a while. And so showing that the things we're doing are fair and related to the job is nothing new for us. And I think, in a lot of ways, people who don't have a deep understanding of this have sort of been like, well, this is a little bit scary. What's going on over here? Should this be happening? And so I think we're seeing that balance now of some regulation from different government organizations, with companies trying to do the right thing, I think most of the time, and trying to use tools to be more objective in our evaluation of talent. I was giving a talk the other week and I had the same thing: I imagine a world, this sort of imaginary utopia, where everyone is in a job that they love and they're amazing at doing.
Dr. Charles Handler:
Yeah, that's what we want. What gets me up in the morning? Yeah, the legislation stuff. Well, there is a whole podcast coming on that I've recorded, so you'll hear my opinions in detail there. But it's reshaping things that we do. The interesting thing is still these skills questions. If you think about it in the world of IT, or the type of stuff you're talking about with more technical jobs, there are very objective skills. Even going back to earlier in the conversation, automated job analysis for very objective things that are solidly known and right on the surface, that's not nearly as challenging as saying, okay, let's look at someone's resume and say how good a team player they are, how influential they are, how adaptable they are. Those kinds of things, which are very, very important. Even your values: do you value structure versus autonomy?
Do you value money versus social governance? I mean, there are these trade-offs, and then there are these kinds of soft things that define people in their work environment, and I don't think a lot of these tools that are hoovering up resumes and matching people capture them. Now, that being said, if you've got 5,000 applicants for a Java programmer job, which apparently, if you read the news, isn't the case because there's a dearth of candidates for all these jobs, but if you have an overload of people and you need to screen down to just who might actually have the skill you really need, the one that's critical for the job, it's very valuable, there's no doubt about it. But I think as you start to get into who this person really is, how well they fit in your environment, whether they can grow with you and all that stuff, these things cannot answer
Collin Hawkes:
Those questions. Yeah, I agree with that.
Dr. Charles Handler:
And those questions are also critical, because the foundation of any legal compliance is job relevance and then fairness. And those things go together, because, technically, you can ask anybody anything if you can show it's job relevant. Of course, we know there are things that would be unfair to ask, or just things you want to avoid, so you don't always do that. But to me, that's why we love job analysis: it can show us what is job relevant, and that will help a lot of people. Hopefully these tools, if they work correctly, do that more efficiently. But to me, on the soft skills side, it's just not there as far as being
Collin Hawkes:
Able to. Yeah, yeah. From a job matching standpoint, I don't see how you'd really do it. Theoretically, I don't see the matching you're describing. Yeah, maybe somehow you could figure it out in the data, but it seems like it'd be pretty hard. It's face validity at best if you can find something: oh, people who are more teamwork-oriented use these certain words in their resume, maybe. Yeah, maybe that's there. I haven't seen any of it yet. But I think that's why I would go back to using objective measurement, measuring behavior. There's another sort of assessment where you're really measuring that, further down the line, because it's hard to do. You say, oh, I'm good at Adobe, I have some certification in Adobe, or something like that, right? Okay, I see that person has those skills. But if we're talking about those competencies, the softer skills, that would be something that's really very hard to measure in that sort of AI job matching way.
Dr. Charles Handler:
And back to the motivation part of it, we've all had jobs where we were qualified, we had the basic abilities or skills required to do them. But man, if we're not motivated to come to work every day, you can't do your job if you're not at work or if you're checked out. So that piece of it is important as well. But again, there we design hiring processes. That's another big important thing that I learned, and I always preach it: hiring is typically not the result of one test, right? It's the result of a whole combination of things, even outside the funnel, how you're finding people, and then there are interviews and multiple people looking you over. And so to say that one test is the reason, or even when we validate tests, it's important to do that, but who validates the entire selection system? I wish we had the chance to do that. It doesn't happen that much. So there are a lot of these bigger things we can't necessarily fully control at this point. It's the nature of the beast, but you're building the blueprint of who you want on the job and measuring people against that blueprint. If that's not accurate or thorough, the random chance factor gets dialed up.
Collin Hawkes:
It definitely increases, for sure.
Dr. Charles Handler:
Yeah. So that kind of brings it back to why it's so important to be doing the job analysis stuff, and in my mind, why it's so important to start thinking about how the modern era that we're in can assist us there. And you said your tool's a middle schooler; ours is a kindergartner, though maybe it's going to leap a couple grades once we figure some stuff out. But does anybody really have a middle schooler, a high schooler, or a college student? I mean, you might say, well, ChatGPT is pretty smart, maybe that's a college student, but no, it's not. So yeah, we don't have that higher ed yet. We'll talk again in five or 10 years; it might be a different story.
Collin Hawkes:
Yeah, I think the other thing that comes to my mind, Charles, is just the impact of this. In the growing world of work, we talk about so many jobs coming soon that we don't even know exist today, especially in my organization. We talk about how there are going to be so many new jobs across the world that we don't even know about. And so the importance of us as I/O psychologists being able to understand those roles, understand what it takes to do them well, and quickly adapt and change how we measure them in our objective measurement and assessment world is so important. And so that's why I continue to see tools like what we're both working on as important tools in our tool belt as I/Os, to say, well, what can we use to make sure we're always keeping up and understanding these new jobs and what's happening?
So I think that's encouraging to me. And the other part that's encouraging to me, too, the reason why tools that improve the way we do job analysis are critical, is that we have so many different improvements in our assessment and selection space. There are so many vendors now coming out with just amazing tools for how we can measure people. And I think in a lot of ways it's sort of like your little tagline says, right: something we know is based in the literature, and it's ages old validation-wise, and it just has a new flair of technology. And I think both of the things we're trying to do are the same way. It's not like no one has ever measured a task statement before and seen if it aligns to a competency. We're just trying to do it in a new way that's a little bit quicker and maybe sometimes even better than it was before.
Dr. Charles Handler:
So cool. As we wind it down, I would also say to anybody listening out there, if you've done similar work to this, if you've got a part of the story to share, reach out to us. We're excited to hear it. We'll have our ear to the ground at SIOP, obviously, to look for this kind of stuff, but maybe someday we'll even have enough going on that we can put a session together. Collin, yeah, thanks so much for your time. It's been an awesome conversation. Anything you want to leave the audience with, how they might be able to follow you, or anything exciting that you've got going on? This is your opportunity.
Collin Hawkes:
Yeah, no, I think that's good. I mean, for most people on your podcast, you can find me on LinkedIn, right, Collin T. Hawkes. My last name is H A W K E S, Collin has two L's, and my middle name starts with a T. And then for anybody who wants to use this tool, originally we thought about looking to patent it or something like that, but I think what we're just doing is making it available for people to use. So if you go to, it will be T dash s t a t dot org, t-stat.org, that's where people will be able to go. I think that'll be right. Use the tool and, I mean, help us grow it, right? We've got to teach our middle schooler; yeah, he needs some more education. And so that would be awesome. I think people will be able to use it to help themselves in their own job analysis process, but also to continue to grow our database of understanding and make a more accurate automated tool like this.
Dr. Charles Handler:
And we'll put information in the show notes that go along with the posting too. So good stuff. Well, thanks so much. Maybe we'll see you soon over at the SIOP show. Thanks for your time today.
Collin Hawkes:
Yeah, thanks so much, Charles. Great chatting with you.
Dr. Charles Handler:
Science 4-Hire is brought to you by Sova Assessment: recruiting software powered by science. Sova's Unified Talent Assessment Platform combines leading-edge science with smart, flexible technology to make hiring smarter and easier. Visit sovaassessment.com to find out how your organization can provide an exceptional candidate experience while building a diverse, high-performing, and future-ready workforce.