Hosts Richard Landers and Tara Behrend welcome Dr. Marcus Crede, professor of psychology at Iowa State University, for an engaging conversation about meta-analysis methods and research in industrial-organizational psychology. Crede shares his unconventional path from math major to methodologist, and discusses his influential work on transformational leadership, grit, and cognitive ability. The discussion explores the complexities of conducting meta-analyses, including publication bias, moderator analysis challenges, and the importance of pre-registration. Crede reveals surprising findings from his cross-cultural leadership research and reflects on the Sackett et al. reanalysis of cognitive ability-job performance relationships. The conversation also addresses contemporary concerns about sample diversity and whether IO psychology has lost touch with blue-collar occupations.
Key Takeaways:
- Transformational leadership shows weaker relationships with performance in Western countries than previously believed
- Pre-registration in meta-analysis helps address publication bias and questionable research practices
- The cognitive ability-job performance relationship may be weaker and more complex than traditionally taught
- Meta-analysts should carefully consider cultural and contextual moderators in their analyses
- Contemporary workplace psychology research increasingly focuses on office workers and student samples rather than blue-collar occupations
- Meta-analysis requires balancing comprehensiveness with practical constraints
- Wolfgang Viechtbauer provides excellent educational resources for learning meta-analysis techniques (a minimal pooling sketch follows this list)
- Simpson’s paradox considerations matter when combining data from multiple populations
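
For readers curious what the pooling step behind these takeaways actually looks like, here is a minimal sketch (not from the episode) of a random-effects meta-analysis of correlations in plain Python. The correlations and sample sizes are invented for illustration, and the DerSimonian-Laird estimator is used only because it is short to write; dedicated tools such as Viechtbauer's metafor package offer more sophisticated estimators (e.g., REML).

```python
import numpy as np

# Invented example data: correlations and sample sizes from k = 6 studies.
r = np.array([0.12, 0.25, 0.31, 0.08, 0.22, 0.18])
n = np.array([120, 85, 240, 60, 150, 310])

# Fisher z-transform; the sampling variance of z is 1 / (n - 3).
z = np.arctanh(r)
v = 1.0 / (n - 3)

# Fixed-effect weights and Cochran's Q (heterogeneity statistic).
w = 1.0 / v
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)

# DerSimonian-Laird estimate of between-study variance tau^2.
k = len(r)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooling: add tau^2 to each study's sampling variance.
w_re = 1.0 / (v + tau2)
z_re = np.sum(w_re * z) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

# Back-transform the pooled estimate and its 95% CI to the r metric.
lo, hi = np.tanh(z_re - 1.96 * se_re), np.tanh(z_re + 1.96 * se_re)
print(f"pooled r = {np.tanh(z_re):.3f}, 95% CI [{lo:.3f}, {hi:.3f}], tau^2 = {tau2:.4f}")
```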
Website: https://thegig.online/
Follow us on LinkedIn: https://www.linkedin.com/company/great-io/
Join our Discord here: https://discord.gg/WTzmBqvpyt
Join The GIG Email List: https://docs.google.com/forms/d/e/1FAIpQLSfVQ4hyF8MA4G9W-ERwVL8_e91a-MUMuhNvxhXmgkSFUDFatg/viewform?embedded=true
Transcript
[Richard Landers] (0:00 - 0:31) Welcome to The Great IO Get Together, on tonight's show, quips and queries about the world of work as IO psychology comes alive. Now please welcome our hosts, Richard and Tara. Welcome everyone to Great IO Get Together number 39. My name is Richard. This is my co-host, Tara. Today we are exploring chapter 13 of our textbook, Research Methods for IO Psychology, and this chapter is all about meta-analysis. So to help us understand the cutting edge of meta-analysis, on the show today we have Dr. Marcus Crede, professor of psychology at Iowa State University. Welcome to the show.

[Marcus Crede] (0:31 - 0:33) Thank you for having me. It's a pleasure to see you both.

[Tara Behrend] (0:35 - 0:51) Well, we're very excited to talk to you today about all things meta-analysis. But to get started, we usually like to ask our guests to tell us something that the audience might not know. Maybe the story of how you got interested in the field of psychology or something else that might give us insight into how you think.

[Marcus Crede] (0:51 - 2:42) I was lucky enough to spend my sort of formative youth in South Africa. So I got my undergraduate degree there. I started out as a math major. I came out of high school thinking I was quite good at math and thought the intersection of math and business was interesting. And I found out in my sophomore year that I was not remotely good enough at math to be a math major. But I had a friend who took an IO class and he suggested that I check it out. And so I did. And that seemed a lot easier than applied mathematics. And so I ended up kind of taking some undergraduate classes. And then I stumbled across a book by Arlie Hochschild, a sociologist, I believe, by training, called The Managed Heart, which was about debt collectors and flight attendants and how they have to kind of manage their emotions in their daily work. And I thought that was just incredibly interesting. And then I also read a few chapters from Studs Terkel's book Working, which was kind of these excerpts about how people experience their jobs. And that really grabbed me as well. And so I was lucky enough then to do my master's degree at the University of Cape Town, but then got a position in a PhD program at the University of Illinois to actually work with a decision-making researcher, Janet Sniezek. And that experience was interesting as well because I arrived at Illinois and she unfortunately fell very ill just a few weeks after I got there and passed away in my second year in grad school. So I think I had maybe one meeting with her. And I didn't really have an advisor until my fourth year, when a young assistant professor joined our program, Nathan Kuncel. And I was at his door probably on his first day on the job and asked him whether I could work with him. And that's how I got introduced to meta-analysis. It's a weird tale, but I got there eventually.

[Tara Behrend] (2:43 - 3:05) It's so funny to me how so many people who end up incredibly successful in the field of methods have the same story, which is essentially that it was an accident and a series of lucky conversations or a mentor sort of tapping them on the shoulder. I also love that Studs Terkel book. I think I have two copies because it's just too good to lend.

[Marcus Crede] (3:05 - 3:10) Somebody should do an update of that. I feel like it would be useful to get the modern perspective on different jobs.

[Tara Behrend] (3:10 - 3:25) Yeah, I think so.
I've actually been thinking about something like that. But anyway, that's a great story. And I guess you and Richard are then connected by Nathan in that way too. So Richard, did Nathan teach you meta-analysis too?

[Richard Landers] (3:26 - 3:27) No, Denise did.

[Marcus Crede] (3:29 - 3:31) Well, so you got it second hand from Denise.

[Richard Landers] (3:33 - 3:33) That's great.

[Tara Behrend] (3:35 - 3:49) Marcus, you've published many meta-analyses, and I think the majority of them have become centrally important to the field. And so thinking about your body of work, I think my first question is, do you have a favorite of all the meta-analyses you've published?

[Marcus Crede] (3:50 - 5:36) No, that's a good question. I think the one that I'm in retrospect sort of most proud of is one that we probably had the hardest time publishing. So it's not the one that maybe most people think of, which might be that meta-analysis on grit or some of the stuff we've done on leadership with Peter. So it is a meta-analysis on leadership, but it's one in which we looked at whether transformational leadership, which is taught in every business school and which most leadership researchers are acquainted with, whether the relationship between transformational leadership and subordinate job performance really generalizes well across countries and cultures. So it did not end up in a journal that most management and IO people would consider to be a stellar journal. We still like it, but I think the findings are super interesting. So we managed to scrape together data from, I think it was around 40 different countries. And we were really able to show that despite the fact that this thing is taught as the essence of effective leadership, there's very little evidence that it's actually related to job performance in most, especially Western, countries. The relationship is the weakest in kind of Central and Northern Europe. It's really only in sub-Saharan Africa, sort of the Middle East, Latin America, perhaps, where the relationship is probably at the level where most transformational leadership researchers anticipated it to be. So to me, that was kind of a big deal. It was a hard sell, perhaps, because of those findings, that a lot of people who teach this stuff didn't want to hear that, hey, maybe in the US or in Germany or in the UK, maybe subordinates don't really want this. So that, to me, looking back, is probably my favorite one, even though it was a real pain to get through to journals. We suffered many, many rejections.

[Tara Behrend] (5:36 - 5:56) It's so important as a lesson in persistence, too, when you know that what you're doing is important and that people might not want to hear it. You're right, it's so much easier to get a paper published when it's telling a lovely story that everybody knows already. And really challenging the way people think is harder, but it ends up being so much more important.

[Marcus Crede] (5:57 - 6:24) And, I think, I'm not certain of this, but I feel like we were the first to kind of pioneer this method of using meta-analysis and kind of importing cultural data from Project GLOBE, that huge study of cross-cultural psychology and leadership. And I've seen a number of papers since then kind of follow that approach, some better than others. So I'm glad that at least the approach and the methodology has taken off a little bit.

[Tara Behrend] (6:25 - 6:50) Yeah, I love the idea.
I love the idea of using this as a tool to remind people about the assumptions that we can't see in the world, right? We forget how particular the American relationship to work is and how abnormal it is. And so why would anything that we learn here apply to other people? I mean, it's a really important reminder.

[Marcus Crede] (6:50 - 7:34) It doesn't even necessarily apply that well here, right? Even in North America, the relationship is not that strong, especially once you control for some very basic methodological artifacts or characteristics. So it turns out a lot of the research in North America is same source. So we ask subordinates, hey, what do you think of your leader and how well do you do your job? And there the relationship is really strong. But when you separate out those two sources, when you ask supervisors how well their subordinates do and the subordinates how well the supervisors are doing in terms of leading them, then the relationship kind of falls apart. So I think a lot of the research in this particular area has really been prone to these methodological artifacts, which we were also able to tease apart.

[Tara Behrend] (7:34 - 8:10) Yeah, and it's hard because once an idea is out in the world, even if you debunk it, people keep citing it, especially students who have no way of knowing that the field has moved on and that they shouldn't be relying on those papers anymore. But we don't have a great way of letting people know about that. That's a really important point. I wanted to ask you, you have several really long-standing and productive collaborations, for example, with Peter Harms. I'm wondering if you have any words of wisdom for how to both establish and maintain those good collaborations over a long time.

[Marcus Crede] (8:11 - 9:37) I'm not sure if I have any good recommendations. I can tell you the story of that collaboration. So Peter was, I think, a year behind me in grad school. He was originally admitted as, I think, a personality psychology student working with Brent Roberts, but he sort of drifted over to IO psychology over the years because he has an interest in personality and leadership, which is a very natural fit with where the field is. And yeah, we just started hanging out. So we became friends in grad school. We played poker and we played golf together, and perhaps the biggest bonding experience is that we are often irritated by the same things, which is important in all sorts of relationships, academic and sometimes even romantic relationships. It's a good bonding experience. We hardly ever see each other. Maybe once every five years at a conference, we'll run into each other, but we've stayed in contact through email. And I think the reason why the relationship has worked is that we both bring something to the relationship that the other person maybe doesn't have or doesn't want to do. And then I think we trust each other. We try to be as honest with each other as possible. And so we know that the other person's work doesn't have to be quadruple checked for errors, because we've learned over the years that we speak up. I sometimes tell Peter that I think an idea is maybe not all that great, and he does the same to me. And I think that's an important part of any working relationship.

[Tara Behrend] (9:38 - 10:00) Yeah. One of these days, Richard will learn to check my work more carefully because it's riddled with errors, but so far he has not learned that.
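The same-source artifact Crede describes is, in meta-analytic terms, a categorical moderator. As a minimal sketch of the idea (the correlations and sample sizes below are invented, and simple fixed-effect pooling is used just to keep the example short), one can pool the same-source and split-source studies separately and compare:

```python
import numpy as np

def pooled_r(r, n):
    """Inverse-variance pooling of correlations on the Fisher z scale."""
    r, n = np.asarray(r), np.asarray(n)
    z, w = np.arctanh(r), n - 3.0   # weight = 1 / var(z) = n - 3
    return np.tanh(np.sum(w * z) / np.sum(w))

# Invented correlations between leadership ratings and job performance.
# Same-source: subordinates rate both the leader and their own performance.
same_r, same_n = [0.45, 0.52, 0.38, 0.49], [90, 140, 75, 200]
# Split-source: subordinates rate the leader, supervisors rate performance.
split_r, split_n = [0.10, 0.05, 0.14, 0.02], [110, 95, 180, 130]

print(f"same-source pooled r:  {pooled_r(same_r, same_n):.2f}")
print(f"split-source pooled r: {pooled_r(split_r, split_n):.2f}")
# A large gap between the subgroup estimates is the signature of the
# common-method inflation Crede describes in this exchange.
```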
Well, sort of continuing along the theme of just how you go about your work, I was also curious about how you choose a topic for what to meta-analyze. Like, when do you know that the field needs a meta-analysis on some topic?

[Marcus Crede] (10:01 - 12:01) Great question. Often we have an idea, Peter and I, or I individually, and we look at the field, and often the answer is simply there's not enough, or sometimes the state of the field is so poor that you really shouldn't do a meta-analysis. So a good example of that is my last graduate student, Lukas Sotola, who's now at Pace University as a social psychologist. He came to me in his first year and he had a really strong interest in something called system justification theory. I had no idea what this was. So I started to read some papers with him about it, and we soon realized that that whole area is just kind of hopeless, right? Methodologically, it's a lot of experimental work, but often with these tiny, tiny sample sizes, and the p-values all in this kind of weird area of just below 0.05. So a lot of 0.048, 0.041, 0.047. And I just talked to him, I said, listen, this doesn't seem like something, A, you should be working on at all, but it's also not enough to do a meta-analysis, because we can't really trust what's being done here. So we ended up doing an entirely different kind of paper, which was kind of a z-curve analysis, which we can talk about if you want to, to really highlight some of the methodological problems in this entire field. But yeah, so often it's just a topic that we feel is an important topic. There's been a lot of research. There's some dispute about what the effect sizes are or whether there are moderators. And then we have to ask ourselves, do we want to spend a year or two working on this? Are we interested enough to really dive into this? Because it's a lot of time. And there's always the risk of somebody else coming along and doing it quicker or sometimes better than you. And that's a chance that's easier to take when you have tenure. I think we were often quite nervous doing that as assistant professors, because it does happen that you spend a year on a topic in meta-analysis and somebody else comes along and publishes that same topic before you.

[Tara Behrend] (12:01 - 12:35) Right. I mean, I think the biggest mistake is that people say meta-analysis is quick and easy and that no matter what goes into it, the findings are useful somehow. But your point is really great that if you are summing together a lot of questionable research, you don't come out with truth on the other end, right? You come out with the average of a lot of noise, and it's harder than ever to make sense of what's going on. Do you think that's the biggest mistake that people make when they start doing meta-analysis, just sort of underestimating what it can do? Or are there other kind of classic rookie mistakes?

[Marcus Crede] (12:35 - 19:09) I mean, I think there's a lot. I have a whole kind of list of things that I wish people would do better or consider. But I'll start off with maybe a critique of the paper that I'm perhaps best known for, which is the meta-analysis on grit. And sometimes when I think back to it, I'm still very glad we wrote that paper, and I think it's a good paper. But one of the main problems with the literature on grit is the incredibly poor measurement of it.
And I don't want to get too far into the weeds on this, but most of the researchers who study grit use what's called the Short Grit Scale, which is eight items to measure the two supposed facets of grit. And the four items that measure the perseverance facet are completely misaligned with the theoretical content or nature of perseverance. So it's supposed to be whether you persist despite a setback. So you try something new and it doesn't work. Do you just give up or do you keep going at it? So when you're learning a musical instrument for the first year or two, you suck. And some people give up; others know that that's natural and keep going. So theoretically, I think that idea is really interesting. But when you look at the items, they don't really match up to that, right? They're largely kind of conscientiousness items. So I sometimes look back at that meta-analysis and say, well, is it really a meta-analysis of grit as a construct, or is it a meta-analysis of a really terrible scale that people just happen to be using? And so that's something I battle with. So that's one recommendation I would make, and something that I think researchers sometimes make mistakes in: the constructs are either not well-defined or they're really badly measured in the field. And whether it's useful to then do a meta-analysis on that, I'm in two minds about it. There are a couple other things that tend to irritate me or that I have concerns about when I do reviews of meta-analyses. The first one is I think often there's just an insufficient amount of effort in the search process. A lot of researchers think that going to PsycINFO or whatever the main database is in their field and doing a keyword search of whatever their topic is is going to be enough. And you're obviously going to find some papers that are relevant, but you're not going to get everything. So really do detailed searches: searching abstracts rather than keywords and rather than titles, doing searches of reference lists, going to Google Scholar. A lot of the things that you talk about in your chapter, I would really echo. You really have to do a deep dive, especially if you want to examine moderators. If you have a specific moderator in mind, you think, man, I really think this matters here. And there's a great example, a paper that came out towards the end of last year in Group and Organization Management where they looked at, I think it was abusive supervision or abusive leadership, and how it relates to the job satisfaction of subordinates. Interesting topic. A lot of people find that stuff interesting. And what they were especially interested in was how this generalizes across countries, across cultures, very similar to the transformational leadership paper that we had written. And I read it and I was like, oh, they only found data on, I think it was, 12 countries. That was really weird. I thought, this is a widely studied topic. I would have expected far more. So I literally spent two or three hours that afternoon just doing some Google Scholar searches and some PsycINFO searches. I found data for 13 additional countries that they hadn't captured. And so I was able to double the number of countries that they had data for in an afternoon. And then I basically re-did their entire meta-analysis, and all of their results changed completely.
Their findings get flipped on their heads, because you can't really do a cross-cultural generalizability study based on data from 12 countries, most of which were Western or Southeast Asian. And so that kind of lack of care in the search process, I think, really undermined that paper, and I see it fairly often. So especially if you're looking at moderators, you need variance in those moderators, and you need a decent number of papers that really examine those moderators as well. The next one to me is kind of related to the measurement issue I talked about. I think sometimes people don't understand the constructs very well. There was a big back and forth in the emotional intelligence literature on whether emotional intelligence is related to job performance and leadership behaviors and all sorts of things. And some of the meta-analyses published in that field lumped a whole lot of things into the job performance bin that really had no place being there. So for example, academic performance was classified that way. So GPA suddenly becomes job performance. How supportive your parishioners are if you're a church leader, that suddenly becomes part of job performance. How well you adjust to being posted overseas becomes part of job performance. So a lack of understanding about what job performance really is leads to these kinds of mixed results that then can't be replicated by other researchers. The last two I'll quickly mention: a lot of researchers don't assess potential publication bias enough. I think there are numerous literatures where the publication bias evidence is large enough that I don't think a meta-analysis is really warranted. Again, my former student Lukas Sotola did a great job of looking at meta-analyses that had been published in Psychological Bulletin, where they had published all their data and all their coding with it, and he went through like 30 different meta-analyses. And there was a whole bunch of them, maybe five or six, where there was such clear evidence of p-hacking that you basically had to throw out the entire meta-analysis. So I think if the researchers had done that earlier and looked at that as a potential problem, those meta-analyses maybe wouldn't have entered the public discourse. And then the last thing, which is perhaps a more modern concern, is we can pre-register meta-analyses. We can think very carefully about how we are going to analyze the data, what moderators we are going to look at, how we are going to look for publication bias. And these days I think it's become more and more important for us to do that, to really put down our markers ahead of time, before we ever do the coding and before we do the analysis, of how we actually are going to do all those steps.

[Tara Behrend] (19:09 - 19:59) Those are all really great recommendations, and if I can connect a common thread between them, I think the bottom line is that you have to care about what you're doing, and you have to believe that finding out the truth is the goal, right? Which is worth questioning, whether everyone shares that goal. And so I completely agree with you that if you care about what you're doing, then that means putting a lot more thought into what you're doing and a lot more thought into what you've really discovered when you do this meta-analysis. What is the construct? What is job performance? That's a great point. I always get excited about meta-analyses that look at measurement differences as a moderator, for example, right?
Well, people who measure it this way found one thing, and people who measured it another way found a different thing. I mean, that's really useful information for the field too, but I don't really see those very often.

[Marcus Crede] (20:00 - 20:07) We don't really get, I think as a field, we don't get rewarded for doing measurement work really carefully.

[Tara Behrend] (20:07 - 20:08) Right.

[Marcus Crede] (20:08 - 20:52) It's very hard to land that sweet business school job if you're a measurement person, right, who really says, here's a construct we think is important, let's really pay attention to how we measure it and whether there's differential item functioning and whether we can translate the scale. Those kinds of questions don't get the recognition that they perhaps deserve, but downstream it leads to all sorts of problems when it comes to meta-analyses and any other type of work that you do with those measures. So part of me wishes that we would sometimes abandon some of our really fancy statistics, and even meta-analyses perhaps, and return to saying, let's figure out what constructs we think are important and how we best measure these things before we run away with all the fancy modeling that we like to do.

[Tara Behrend] (20:52 - 21:17) I hear more people making that plea, or that argument: fancy statistics is not going to save a bad research question. It's not going to save a bad design. And statistics is easy. Thinking is hard, right? Asking the right question is hard. So related to that and related to what we've been talking about, do you think AI could do a meta-analysis that's good and useful?

[Marcus Crede] (21:18 - 22:08) Oh, you know, I am not an AI person. I try not to use it. So I don't know enough about it to really offer a very competent answer. But there was, I think, yesterday or the day before, a paper published by, I think, some researchers at Apple. Did you hear about this? It seems to show that a lot of the AI programs that are out there really struggle with some relatively basic reasoning tasks that children do really well. And as long as that's an issue, and there are so many, I think, critical decisions that have to be made when you're doing any kind of study, including a meta-analysis, I certainly wouldn't trust an AI program to do a meta-analysis. It may be able to produce something that looks like one and reads like one. But right now, count me as very skeptical.

[Tara Behrend] (22:09 - 22:27) I mean, when you think about all the judgment calls that go into that, right, and whether an AI could even articulate what the judgment calls are that it made, whether they're right or wrong is a separate question entirely. But can it even report back to you which decisions it was thinking about in doing its analysis?

[Marcus Crede] (22:28 - 22:34) Can an AI program respond to reviewer comments? That may be the most useful part of large language models.

[Tara Behrend] (22:35 - 23:08) That was a great question. That was wonderful. Yeah, but I think about my former students, Nikki and John, who you know, who were working on a meta-analysis that involved going physically to the headquarters of the International Communication Association and sitting there going through all the abstracts, because they weren't online anywhere, right, to make sure that they had done a complete search.
And that kind of thinking about where might I look for information that might be unexpected or surprising or that is not online is obviously beyond the capability of any software we have now.

[Marcus Crede] (23:09 - 23:15) Whatever happened to metaBUS? Did that ever take off? I never used it, but I don't know what happened to it.

[Tara Behrend] (23:15 - 24:04) Yeah, so Frank and Piers and Krista, I think, got initial funding. They got it going and set it all up, but then they didn't have that funding renewed. So what's there is still available, but I don't think they're updating it anymore because they don't have funding. But I guess that could change at any time, like if they get another grant. Interesting. Yeah, it was a great initiative. We have a few more questions I did want to get your thoughts on. So some people, not so much in IO psychology, but maybe in other fields of psychology, criticize meta-analysis as being subject to a kind of levels-of-analysis fallacy, Simpson's paradox. Do those critiques ever have merit? And what would you tell researchers to think about to make sure they're not falling into that trap?

[Marcus Crede] (24:04 - 25:34) Yeah, I saw that question of yours, and I hadn't read that critique, although I can see the argument. So I hadn't thought about it that much. I guess I'm not overly concerned unless people are combining data from multiple samples into a single-level kind of analysis. Then I would be concerned. And the recent paper, which I'm sure you both are aware of, the Sackett et al. paper in which they reanalyze the cognitive ability-job performance relationship, they make a point of talking about that, to say, listen, we're going to be very careful about separating out any kind of data where multiple samples are combined into a single analysis, because you end up with the possibility of a kind of Simpson's paradox effect. So I guess it could happen, but that would be the same problem with any individual paper. So I'm not overly concerned about it. I have a colleague here in the counseling department, and she's done some really good work, and she's definitely avoiding this, so I don't want to criticize her. But she's done some great work actually getting the individual-level data from individual studies to do what she, I think, calls an individual-level meta-analysis. So she's able to combine them, but also look at nonlinear effects within each of the data sets. There, I guess, if she had naively put everything together into a single data file and just calculated a correlation, then I could see that being an issue. But she obviously would never do that. And I haven't seen anybody do that kind of work.

[Tara Behrend] (25:34 - 26:13) Yeah, I haven't either, because I think what it would involve is capturing not just the correlations, but the means, like the levels. So, you know, if high-performing samples, like, you know, elite military teams look one way, and then your regular office workers look another way, like sort of capturing that, and then somehow standardizing that onto some meaningful scale. I mean, it's actually quite challenging, and another reason to be thinking about the theory of, like, what are you measuring, and do you have any reason to suspect that that would be the case? Most of the time, we know enough about these variables that we would have a good reason to expect or not expect that.
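To make the pooling problem discussed here concrete, below is a small simulated illustration (entirely hypothetical data, not from any study mentioned in the episode): each of three subpopulations shows a negative within-group correlation, yet naively combining the raw records into one sample yields a strongly positive one. This is the Simpson's paradox effect that analyzing samples separately, as in the Sackett et al. reanalysis, is designed to avoid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical subpopulations (e.g., three countries in one panel).
# Within each group the x-y relationship is negative, but the group means
# rise together, so pooling reverses the sign (Simpson's paradox).
groups = []
for mu in (0.0, 5.0, 10.0):  # group means increase jointly on x and y
    x = rng.normal(mu, 1.0, size=300)
    y = mu - 0.5 * (x - mu) + rng.normal(0.0, 0.5, size=300)  # negative slope
    groups.append((x, y))

for i, (x, y) in enumerate(groups):
    print(f"group {i}: r = {np.corrcoef(x, y)[0, 1]:+.2f}")  # each is negative

x_all = np.concatenate([g[0] for g in groups])
y_all = np.concatenate([g[1] for g in groups])
print(f"pooled:  r = {np.corrcoef(x_all, y_all)[0, 1]:+.2f}")  # strongly positive
```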
[Marcus Crede] (26:14 - 26:58) But it is a concern I have with studies that rely too much on sort of online panel surveys, right, because there you can end up with multiple populations that you then aggregate into a single sample. You're hoping that there aren't any kind of weird Simpson's paradox effects going on. But I could see, you know, some of your data coming from the U.S. and some of your data coming from India and some of it coming from the Philippines, and you smush it all together, and you end up with effects that are present in none of the three samples individually, because of these meaningful differences in some variable that you might be interested in. So, if anything, I think it's a bigger issue for those kinds of samples and all the research that we see there. Something to be aware of.

[Tara Behrend] (26:59 - 27:23) Yeah, it's another argument for context, right, for caring about context and thinking about what are the components of context that are relevant to your situation. So, the last question we wanted to ask you today is just some recommendations for our audience, maybe something you've read in the past year that really got you thinking. Any authors in particular that you think students should read that they might not come across?

[Marcus Crede] (27:23 - 31:17) Right, yeah. So, I have one recommendation for somebody who I think is really good at explaining and teaching meta-analysis, as a resource for people who want to do this, and that's actually my former statistics TA from Illinois, Wolfgang Viechtbauer, who is just the nicest person imaginable, and he has this incredible set of workshops and resources online. So, Google him. You'll easily find his website, and, you know, I think he writes very accessibly at various levels of technical expertise. So, as your understanding advances, I think you can dive more and more into some of his technical stuff. You had asked, yeah, if I'd read something recently that made me think, and I think that Sackett et al. paper that I just mentioned really did make me think, right? It kind of made me reassess one of the core assumptions that we have in IO psychology that we at least used to teach all of our students, right? That cognitive ability is a really excellent predictor of job performance, and that the relationship gets stronger for more complex jobs. And that paper seems to really question that. So, to me, that really made me think about not only that relationship, but also, reading it more closely, about some of the research that we're doing. So, I was struck. I think that their meta-analysis covers the period 2000 to 2021, roughly, and maybe going a little bit further, on to 2022 or so. And I think they only found a little over 100 studies, which I found remarkable, right? That over a roughly 21-year period, we're only publishing about five unique samples a year (it wasn't even five papers a year) that looked at this relationship. And a lot of it had to be requested: I think they had to contact some researchers about data that was unpublished and consulting companies about research that they may have done. So, the fact that we're not doing that much work in that area was something that I found interesting. Also, it made me think a little bit about, is this relationship that we emphasize so much in our training, is it necessarily a linear relationship?
And so, one of the things that I've been thinking about, and we've written a paper or two about it, is the question of whether a variable can be a necessary but not sufficient condition for some sort of outcome that we care about. And I'm wondering whether there are jobs where cognitive ability is necessary but not sufficient. And I think that Sackett et al. paper kind of hints at that a little bit by saying the jobs of today are more and more social, right? We have to be able to interact effectively with team members and with customers and people in general; we're not just at our little station in a factory anymore. So the cognitive ability correlation may no longer be as strong as we thought it was, but cognitive ability may still have this necessary-but-not-sufficient kind of characteristic. We need it, but we need other things in place as well. So, some of my thinking has shifted towards that kind of possibility. And the other thing that I was struck by in that meta-analysis is the relative lack of what I would classify as sort of blue-collar occupations. And so, reading that paper actually made me start, we're kind of doing a big review of who we are actually studying anymore in our field. It used to be, if you go back to the 70s and 80s, I think you had a lot of blue-collar occupations in our samples. If you read the Journal of Applied Psychology, it was a lot more of that. These days, it's a lot of office jobs, which is understandable because more and more people are working there, and it's a lot of student samples, and it's a lot of online panel studies. Are we losing track of people who are entrepreneurs, who are blue-collar? So, it's not related to the meta-analysis itself, but it got me thinking about whether we've lost track of some of our roots a little bit.

[Tara Behrend] (31:18 - 32:44) Yeah, I certainly share that concern, and I think there are more and more people who see a study about, say, drivers or construction workers and say, well, this is not generalizable, but they never apply that critique to a study of office workers. So, you really have to question the value system and the assumptions that are underlying that kind of statement. And I think, related to the second paper, the one other thing that has changed is people's attitudes about testing and attitudes about cognitive ability and the value of cognitive ability. I mean, in the 90s, it was prioritized, right? Like, McKinsey was sort of famous for saying, we want the smartest people. Well, they said the smartest guys, but you know. And people don't talk like that anymore. We want ethical behavior, right? We want leadership, we want compassion, we want good team members. And by prioritizing other things in the measurement of job performance, we actually change that relationship. It's not like the relationship exists without human intervention. Like, we've always had a role in defining what job performance is. Well, I guess this is a really great conversation. I have a ton to think about, a ton to read. We'll have to collect all of your suggestions and put them in the episode notes for this one. But we so appreciate you spending time with us today and sharing your great insights. Thank you so much. It's been great to have you.

[Marcus Crede] (32:44 - 32:54) Well, thank you for the invitation. It's always nice to chat with IO psychologists. I'm the only one within about 200 miles here, so it's a rare treat for me.

[Richard Landers] (33:05 - 33:08) See you next time for another great IO Get Together.
