Join hosts Richard Landers and Tara Behrend on the latest episode of the Great IO Get-Together as they discuss research strategy with experts Dr. Enrica Ruggs and Dr. Larry Martinez in this interactive psychology podcast. Learn why experimental methods remain powerful but underutilized in workplace psychology, when mixed methods approaches are most effective, and how to balance simplicity with complexity in research design. Both guests share insights from their current projects on diversity signals, allyship training, and psychological safety in the workplace, demonstrating how different methodological approaches can address similar questions in organizational research.
Website: https://thegig.online/
Follow us on LinkedIn: https://www.linkedin.com/company/great-io/
Join our Discord here: https://discord.gg/WTzmBqvpyt
Join The GIG Email List: https://docs.google.com/forms/d/e/1FAIpQLSfVQ4hyF8MA4G9W-ERwVL8_e91a-MUMuhNvxhXmgkSFUDFatg/viewform?embedded=true
Transcript
[Richard Landers] (0:00 – 0:39) Welcome to the Great IO Get-Together. On tonight’s show, quips and queries about the world of work as IO psychology comes alive. Now please welcome our hosts Richard and Tara. Welcome everyone to Great IO Get-Together number 29. My name is Richard, this is my co-host Tara. We are exploring chapter three of our textbook today, Research Methods for IO Psychology, and this chapter is all about research strategy. So to help us understand how to make good research strategy decisions, on the show today we have Dr. Enrica Ruggs, Associate Professor of Management at the University of Houston, and Dr. Larry Martinez, the A. Dale Thompson Endowed Chair of Leadership in the Psychology Department at UT Arlington. Welcome to the show. [Enrica Ruggs] (0:40 – 0:41) Thank you. [Larry Martinez] (0:42 – 0:43) Yeah, thank you for having us. [Tara Behrend] (0:44 – 0:54) Well, we’re very excited to hear from you both today about this topic. Thanks so much for joining us. So maybe to get us started, can you tell us about what you’ve been working on lately? [Enrica Ruggs] (0:57 – 0:58) Larry, you want to go first? [Larry Martinez] (0:58 – 0:59) You go first. [Enrica Ruggs] (1:01 – 4:41) I love that. Okay, so many exciting things going on right now. One of the things that I think I’m really excited about is actually a theory paper, so I’m not sure how exciting that is for this book, but it’s looking at, it’s an integrative conceptual review looking at diversity signals and all of these different signals that organizations send that are sometimes viewed as performative by people. And why aren’t these signals viewed the way that people want them to be? So we created this very cool model, I think, integrating signaling theory and attribution theory to sort of lay out the boundary conditions that influence when information asymmetry is reduced. 
So this idea that an organization sends a signal and people actually are like, oh, okay, they do care about diversity in the way that we want them to care about diversity. So we like it, we’re happy, all of these things, versus they send a signal and people are like, we don’t really see that as in line with what you do. So maybe they see it as a misalignment of values and actions, or they see it as performative or inauthentic. And so I think that that’s a really cool piece that I’m hopeful has both theoretical value for scholars, so being able to ask better questions around how we make recommendations about, hey, these diversity practices are the ones that you should do because they’re effective. When are they effective versus when are they maybe less effective? We can ask better questions, and we’ll talk more a little bit later about different designs within that. But also I think it has practical value for organizations to really be able to know like, hey, we’re doing X, Y, and Z, why is it working versus why is it not? And so sort of related to that, so that’s a theory piece, but I have a paper right now with some of my amazing graduate students looking at this from a more empirical perspective. So looking at when organizations say they care about diversity and they send these different statements, when do people see those as performative or less authentic versus when do they see them as more authentic? So we did a couple of, well, we did one experiment where we actually tested this experimentally. We manipulated the diversity statements to have these sort of different attribution criteria to see, does that influence people’s perceptions of performativity? And it does. Then we followed that up with a cool field study. So something happened at an organization where there was an event and then we went in and the organization sent out all these statements and did all of these different practices. 
And we actually measured people’s perceptions of performativity. And we also measured different attitudes about the organization and sort of intentions like, would you recommend this organization to other people? And then we followed that up with a survey to employees at different organizations. So using Prolific, so looking at how perceptions of performativity may be related to job attitudes and other things. So using multiple time points early on. So those are a couple of things that I’m working on that I am finding interesting right now. [Tara Behrend] (4:41 – 4:43) Awesome. Thank you. How about you, Larry? What are you up to? [Larry Martinez] (4:44 – 6:53) I guess I should talk about my work on allyship. That’s what I’ve been doing the most of recently. We did a lot of basic studies just to understand what allyship looks like in a workplace context, sort of building up the taxonomy of what it is, how it’s done, when it’s successful, when it’s not successful. And that turned into a training, an intervention that’s sort of a workshop that addresses some of the do’s and don’ts from the research that we did before. So we can train people to be better allies for one another. And what we did most recently was a randomized controlled trial where we had one group of people take the allyship training. Another group took a control training, which in this case was actually just a traditional implicit bias training. So not a true control condition, but another sort of treatment. And the idea is that the allyship training should have a better impact on trainee reactions than the implicit bias training. And we’re still analyzing those data, but that seems to be the case with what we’ve seen so far. So that’s good. A lot of the other stuff that we’ve been doing, we have a meta-analysis on confronting prejudice. It’s kind of related to the allyship stuff. It’s not all workplace-specific confrontations, but that’s the nature of that project. 
And then a lot of the other stuff that we’ve been doing has been really just understanding the experiences of different types of employee populations that don’t get a lot of research attention. So we’ve done a lot of work with farm workers in Oregon, mostly Latino, mostly immigrant populations. Those interviews and focus groups we actually did in Spanish. So that was an interesting thing that we had to sort of figure out how to do for the first time. And then a lot of my students are really interested in neurodiversity. So sort of looking at how that identity and those characteristics related to neurodiverse identities play out in the workplace from lots of different angles. So those are kind of the main highlights of what we’re up to lately. [Tara Behrend] (6:53 – 7:13) One of the things you mentioned is that when you’re designing an experiment, it’s always tricky to choose the right kinds of comparisons and controls, right? Because it changes so much about how you interpret what you find. Is there anything else you can tell us about how you approach that task of deciding what kinds of comparisons make sense? [Larry Martinez] (7:14 – 9:50) Yeah, I kind of struggled with this quite a bit, actually. And I think when I… This was supported by a grant. And when I wrote the grant, I think what I wrote into the grant was that the control condition would actually be some other type of thing, right? Like a leadership training or Excel training or something like that. But one of the things that’s kind of difficult to do and kind of throws a wrench in a lot of the research that I know Enrica and I both do is when we’re working with organizations and with community partners, you don’t always get to do exactly what you want. So these data were collected as part of a program called EcoCAR, which is run through the Department of Energy and sponsored by GM. And basically, universities can apply to be a part of the program. 
And if they’re selected, they’re given a car, like a traditional gas-guzzling gasoline car. And then they take four years, it’s a four-year competition, to convert that car and make it more eco-friendly. And there are emissions tests throughout. So that’s sort of the population. And we got access where we could come to their quarterly workshops and do the trainings because DEI is a big initiative with a lot of these government organizations right now. So that’s what we did. And we didn’t feel like having an Excel training or some other type of non-DEI training was appropriate. And that’s not what they were interested in. They wanted everybody to get better at DEI. So I was lucky. I got my co-worker, my colleague at PSU, Tessa Dover, to construct sort of a dummy training… well, it’s well-constructed. She’s an expert in diversity training and backlash to diversity training in particular. But we constructed an implicit bias training that we thought also would be good. And the results seem like both are good, but we are seeing a small incremental difference. So it’s a big risk because it’s like, well, these both should work. And this is a grant and I’m kind of staking the grant on this new training being better than whatever the comparison is going to be. So it would have been a lot easier just to say, yeah, it’s better than this other training that has nothing to do with DEI. But at the end of the day, based on the situational constraints and, I don’t know, I guess just my personality, we wanted to go for it and see if we can get incremental effects over and above what most people think is the gold standard. Let’s try for that. But it’s a difficult decision. [Tara Behrend] (9:51 – 10:21) So much of being a good researcher is about keeping your eyes open and adapting and following those interesting threads when they present themselves to you. 
Enrica, how does this process potentially look different though when you’re working on federally funded research and you’ve submitted a proposal in advance that says I’m going to do X, Y, Z? Does that limit your ability to do that or are there still ways that you can sort of adapt once that plan is in place? [Enrica Ruggs] (10:22 – 13:17) Yeah, yes and no. I mean, I think that the sort of beauty of having federally funded work is that it really forces you to think up front about the big picture of the research that you’re trying to do and in some ways thinking a little bit more carefully about like the design. But it does get tricky if you have multiple studies over time that you’re trying to build on one another. And so, like Larry was saying, if you have a study where maybe you’re trying to do that… I have a federally funded grant right now where we are doing some interviews to look at what are things that managers and leaders can do to help foster psychological safety in the workplace, particularly for women of color. And the idea is that we’re going to develop an intervention based on the data from the interview studies. So we’re interviewing employees and managers. The intervention study, we could not design that going into the federally funded grant because in part it hinges upon what we’re doing with the first study and what we find in the first study. And so I think that there is room in federally funded grants to write, here’s what we’re planning to do, but not design every single aspect of it. But if you do have pieces that are designed, you generally want to try to stick to it. 
If you can’t for a variety of different reasons, so all sorts of things happen, maybe you can’t get access to the population that you’re trying to get access to, or the equipment that you need, or something happens, something else happens where it just doesn’t make sense, then it’s always important and good to reach out to the program officer and say, here’s what we’re thinking, here’s why we’re thinking we might need to change this aspect of the study and working with that person to see what makes sense and how can you change it, or if you need to direct funds in other areas, thinking about how you can do that where it’s still in line with the grant that you have, or perhaps they might say like, nope, you absolutely can’t do that. And then you have to think about what that looks like. And maybe it’s that the new aspect that you’re trying to do, you can’t do it within the realm of the grant. Maybe you can do it on the side with other funds that you have or think about another study later on. So I think it depends on what the situation is. There is some room for flexibility, but there’s also a little bit more constraint around that going in, which is not a bad thing. [Tara Behrend] (13:18 – 13:52) Well, I think every student working on a dissertation who runs into a roadblock is going to be really comforted by the news that that happens no matter where you are in your career, that things sort of come up that are unexpected and the best research sort of turns that unexpected surprise into something valuable. When you’re reading a literature and you’re reading an exploratory paper, what are the signs of quality that you’re looking for? Like what sets a good exploratory paper apart from a sort of mess? [Larry Martinez] (13:52 – 14:46) I think one thing that sticks out for me is a lack of theory. 
So like I’ll read papers that are exploratory in nature, and that’s in and of itself just fine, but there’s not that conversation with, and here’s what this means, and here’s how this impacts what we know about this phenomenon generally in the literature. So without that, it kind of feels like a book report. It’s like, we found all this stuff, and here’s a list of all the stuff we found. But without that conversation of what it means and what we should take away from that and how we should maybe think differently about this phenomenon, I usually push papers that I’m reviewing back and say like, this is not really science, in the sense that we’re not learning anything. You need to sort of contextualize this in a way that makes sense given the body of knowledge that we already have. [Enrica Ruggs] (14:46 – 15:42) Yeah, I would absolutely agree with that. I think, you know, thinking about what is the contribution, and contribution can look different. So I think being in a management program and looking at some of our top organizational science journals, we talk a lot about making a theoretical contribution, which may or may not come from an exploratory paper, and I think it’s fine if you’re not making a theoretical contribution, but still being able to speak to the literature in terms of like, what is that phenomenological contribution that you’re making, or even, as IO psychologists doing applied work, if there are more practical contributions, still being able to speak to that contribution in a way that ties it together into the literature to help us understand what are the research questions that we can now ask from this work, I think is really important. [Tara Behrend] (15:43 – 16:12) Well, it sounds like you’re saying that you have to be an expert first in order to know what kind of exploration makes that meaningful contribution to a literature, that just setting out and describing the world isn’t really going to get you there. 
So are you thinking about exploratory and qualitative in the same breath or can a quantitative paper be exploratory? Are there any differences between what makes a good exploratory quantitative and qualitative paper? [Enrica Ruggs] (16:13 – 18:04) Yeah, I don’t necessarily equate exploratory with qualitative research, right? I think that you should look at your research question and your research question should really drive your design and your method. And so there are some things that are exploratory, you know, having qualitative work really can help with exploratory questions because we’re trying to understand a phenomenon. But if you dig into the literature, exploratory doesn’t mean just go and start asking people questions. You still should be conducting a literature review to understand what are the things, what are the constructs or the ideas or the theories that may be connected to this phenomenon of interest. Even if this exact phenomenon hasn’t been studied, what can the literature tell us that we can ground our study in? And you may find that there might be some type of quantitative exploratory study that you might want to do. So that could be a survey, for instance, right, that sort of measures people’s perceptions or attitudes or whatever the phenomenon of interest is in ways that allow you to look at relationships or things like that, that then you could follow up with other types of work. So I don’t necessarily think that exploratory has to mean qualitative work, but really your design should be driven by your research question, number one, and then taking a look at the literature to help you understand how do I couch what I’m doing in what is known and a good solid foundation. [Tara Behrend] (18:05 – 19:04) Yeah, let’s underline that, that time lags don’t magically demonstrate causality. 
And if you are measuring two things that are both stable across two time points, it doesn’t actually matter how long the time lag is, right, that A can still cause B or B can still cause A or a third variable can still cause both of them. You really have to think about why you expect these variables to change over time, which is what you’re saying, right? Like sometimes it’s because there’s an event or something, or sometimes it’s because you think it’s a sort of a non-stable characteristic of a person, but you have to have that explanation and time lags won’t save you from a bad design or bad thinking. Yeah, I think that’s terrific. Well, this is secretly a pet peeves hour. So do you have any other pet peeves or things that you see sort of an over-reliance on in the literature right now for any reason that you’d love to see people use less of? [Larry Martinez] (19:05 – 20:30) I think one thing that’s worth mentioning is another thing that we inherited, I think, from our advisors. So speaking of experimental designs, it’s really common to see hypothetical types of situations where you create resumes or you create people that are applicants and people will rate them, so there are a lot of stimuli that get created. What people sometimes do that is kind of a fatal flaw is they’ll create one resume for each condition and not account for the fact that there could be some weird idiosyncratic thing about that one resume that is not related to the thing you’re trying to manipulate, right? So if it’s formatted differently or there’s something else going on, or if you’re using pictures of people, it could be that one’s wearing glasses and the other one’s not and somehow that matters, or the color of their hair, something like that. So using multiple stimuli within the same condition is good. 
So then you want to be able to say like, okay, we have four people that are all the same type of person, but they’re all slightly different so that we can show statistically that these four aren’t different from one another, but when we collapse them together, they’re different from this other group of four people. So being able to rule out that idiosyncratic like, oh, I just had one example of this; without that, I think it’s hard to get that published. [Enrica Ruggs] (20:31 – 21:46) Well, you’ve got to do some pre-testing on stimuli too. Not seeing enough of that. Put it in your online supplement, all the things. Another sort of, I guess, thing that I’m seeing that’s a little bit tough is people will have operationalizations of constructs that don’t actually match the construct that they say that they’re talking about. So I’ll read the front end of a paper and I’ll get really excited. I’m like, oh, they’re testing this like really cool hypothesis that makes a lot of sense to me. And then I go to the method and I’m like, I have no idea what you’re doing. Like this proxy variable means nothing. Like that is not what this independent variable is at all. So really making sure, paying attention to how you’re operationalizing your variables, I think is such an important thing and doing that well and thinking about that on the front end. Another pet peeve of mine that I’m seeing a lot of right now is over-complicated designs and models of things that don’t seem real to me, like in my mind. [Tara Behrend] (21:46 – 22:47) Yeah. I want to come back to the over-complicated models in a second, but I also want to just emphasize something else that you both, I think, argued really well, which is that in survey designs, for example, there’s a pretty clear set of rules for evaluating survey quality, right? Like we know what to look for and there’s not an equivalent set of rules for experiments because it’s going to be different in every case. 
Like the kinds of external variables that might affect your conclusions are harder to list out and say, look for the name you’re using on the resume or look for the instructions you’re giving in the description, because they are idiosyncratic. So my observation over time is that experiments are way harder to do. I think they’re my favorite and I wish everyone would do more of them, but they’re also really hard to get right, which is potentially why people have become more skeptical of them over time, because there are so many bad experiments out there. [Enrica Ruggs] (22:47 – 23:59) Yeah. I think, like I said, you really have to pre-test, pre-test, pre-test your stimuli, and sometimes that means you pre-test it and you have to go and make changes to the stimuli based on the result, and then that doesn’t mean just go out and run your experiment. Now that you’ve made changes, you need to pre-test those again to make sure, like, am I capturing or manipulating the thing that I’m trying to actually manipulate? Am I manipulating that thing and something else? Or am I manipulating a completely different thing? Am I accidentally influencing an attitude or perception that I wasn’t intending to? So adding valence to a stimulus that I wasn’t intending to. So like Larry said, maybe red hair, like, triggers something for people that we weren’t intending to trigger. So really thinking about, when I say pre-test, it’s not just like a one-time thing. It’s really looking at those stimuli, thinking about it and going back maybe a couple of times until you get it right. [Tara Behrend] (24:00 – 24:28) Yeah, that’s really great advice. Well, so now let me ask you again about this challenge of the ever-increasing complexity of the models that we’re seeing reported in papers, with many moderators and mediators and mediated moderators and moderated mediators and so forth. Where is the line between sort of under-complexity and over-complexity for you? 
Like, when do you know that you’ve gone too far? [Enrica Ruggs] (24:30 – 26:04) I think if you get past the three-way and you don’t have a really good reason, that’s a tough model for me. Like, I don’t really know that a four-way interaction, I haven’t seen very many four-way interactions explained well, maybe one. So I think, you know, really thinking about, do you need a moderated, moderated serial mediation model? Like, what does that mean exactly? And what’s the question, really going back to, what’s the question that you’re trying to answer? And what is the most parsimonious way to do that? And then if you need to build up, so this is something we see a lot in social psychology, we don’t necessarily see it as much in the organizational science literature. And there are pros and cons to doing research in different types of ways, but I think that you see more of a building block approach of starting with that good two-by-two, right? And then maybe adding to that or changing out the IVs and seeing, okay, does this IV now influence the dependent variable? And really doing a building block approach. I don’t think every paper has to be that way. And there are some papers where, yes, your moderated, moderated mediated model makes sense, but I think that there are many fewer where that makes sense than we’re seeing. [Larry Martinez] (26:05 – 27:18) Yeah. And I think a lot of those really good social psych papers that build, what they’re doing is they still have this main through line. So like Enrica was saying, it’s like, what’s the most parsimonious story? And, you know, at the end of the day, if we can’t communicate what we’ve found clearly to people who don’t speak our language, we’re not doing a good job. So having this highfalutin, very conditional, well, it depends on this and it’s only for these people under these circumstances at this time period, that doesn’t help people understand. 
So if we’re not good storytellers, then that’s kind of the main thing. And a lot of these papers that build, they have that main story, but then study two is, okay, well, study one was on students, so we’re going to do it with workers. And then study three is, well, those were hypothetical situations, so now we’re going to do a field experiment where we go into a mall or something and demonstrate the same thing. So you’ve got the same story, but you’re ruling out these alternative possibilities. So I think it’s like, maintain the storyline and then get rid of doubt raisers, basically. [Tara Behrend] (27:19 – 27:29) Why do you think people have become so enthralled by these giant models or attempted to, to put them together? What do you think is driving that? [Larry Martinez] (27:31 – 27:32) I have a cynical view on this. [Tara Behrend] (27:33 – 27:35) Well, let’s hear it. [Larry Martinez] (27:35 – 28:34) Sometimes I’ll read these models and it’s like, they’ve thrown everything in, they’ve measured tons of different things. And then they found these correlations and created a model. And I think a lot of times it’s kind of like p-hacking. It’s like, okay, well, this is what was significant, so therefore this is what the story is. And this is a controversial sort of thing. I think that sometimes we should allow ourselves to have an exploratory, open-minded sort of frame of mind where if we see something that we weren’t expecting, that is a correlation, and then you sort of pursue that, that’s okay. And I know not everybody kind of agrees with that, but I think if you’re just throwing everything in because it’s significant and not thinking about the theory and not thinking about what it means and not thinking about the new insights that that might generate, it seems like you’re just capitalizing on the fact that you got good correlations and, again, you’re losing the narrative of what this actually means. 
[Enrica Ruggs] (28:34 – 30:32) I think that happens. But I also think that publishing work in high quality journals is becoming harder and more difficult and there is an underlying quest for novelty. And so I think that there is an undervaluing of a simple story now in the field. We want everybody to have the super novel sort of perspective. And I think we’re trying to right the ship a bit. So I don’t want to say the field is awful, but we are starting to see greater emphasis on replication than we had before. And some of these things, but I think, you know, people are trying to get jobs and keep jobs. And so they’re like, Hey, if I have to do something novel, I can’t do this two by two. That’s not novel. Let me like show you how I can do a handstand and collect all this data and do all these really crazy, fantastic things that should get me into the top journal. So I think that happens. And I also think it’s, it seems easier anyway, like it’s physically easier to analyze some of these things. Like I am a fan of process. I think the macro is great, but I also think it makes it much easier for people to say like, Hey, I can do this really complex model. And now they have like the little drop down thing. And like, you just drag and drop the variables and, and just say like, I’m going to do model 57, boom. And it spits something out, which is a lot easier than if you had to do this in other programs back in the day, which some of us may or may not remember. [Tara Behrend] (30:34 – 31:41) Well, you’re identifying a lot of really important incentives in the field that can drive, you know, behavior that doesn’t move the field forward in a way that we want to see if we want to trustworthy and robust science. I absolutely agree that if you are scratching your head, how to make a contribution, the easiest thing to do is take some model that someone else published and add a new moderator to it somewhere and say like, well, it depends on my new moderator. 
That’s very important and special. It’s way harder to ask an important question. And it’s very easy to answer any question that you can ask, but asking the question is the hard part, right? So, um, I, I very much appreciate your perspective on that. I, by the way, I did not know that you were both Mikki’s students. That explains a lot, a lot, all the pain. That’s great. Um, well, so maybe just to wrap us up as a last question, um, what are some of the go-to reading recommendations that you like to offer to your own students or the ones that you consult for yourselves, um, when you’re, when you’re designing a study and you’re thinking about what the best design might be? [Larry Martinez] (31:42 – 33:53) So it really, really depends because in our work, we start with like, okay, like you said, like, what’s the question we want to ask? What’s the phenomenon we want to understand better? And that can take you in any number of different ways with respect to how you then answer that question or even ask the question. And one example that comes to mind, we were trying to do, um, a study on, um, gender nonconformity, uh, gender identity. So we had, um, we had stimuli that ranged from really gender non-conforming to really gender conforming. So we had pictures of people who actually had undergone, you know, a gender transition in real life, um, with different pictures along that timeline. And sometimes they looked more masculine and sometimes they looked more feminine. And in between those two sort of end points, there’s the middle, right? The in-between time. So what our hypothesis was, was that to the extent that you, um, were further away from these traditional gender norms in terms of presentation, you would experience more discrimination. But to do that, we had to come up with a metric of gender norm, um, violation. 
And measuring that turned out to be a really difficult thing because it’s sort of, you’re measuring the extent to which you’re different from these two extreme end points. So it was something that I’d never thought of before, something that I’d never encountered before. And what it turned into was just me and my friends kind of like on our back window, with dry erase markers, writing out equations, thinking about like, well, if we did this, and then if we took the absolute value of that and divided it by this and reversed this, and we figured it out eventually. But that’s kind of an example of like, okay, here’s a phenomenon: the extent to which you differ from these two end points matters. And calculating that, and then operationalizing that into a measure, was something that we had to like really think about. So. [Enrica Ruggs] (33:53 – 34:01) Sounds like Larry is going to be sending us some moderated, moderated, multiple mediations coming soon. [Larry Martinez] (34:02 – 34:13) No, it’s still a simple story, but the metric was, the metric was like, okay, the extent to which you don’t adhere to either masculine or feminine norms. [Enrica Ruggs] (34:14 – 34:15) I love that. [Tara Behrend] (34:15 – 34:42) Well, you know, we always say that, that there is no perfect method, because what we’re trying to do is accumulate science across different studies. And I think both of you are such phenomenal examples of using different methods depending on the question and building on the insights that you generate. I’m really just delighted to have you both here. This is a fun conversation and thank you so much for making the time. Thank you. [Richard Landers] (34:43 – 35:00) Yeah. Thank you again. That’s it for another GIG. To stay in touch, subscribe on YouTube, check out our website at thegig.online, join our LinkedIn group, sign up for our email notification list and join our Discord. So many ways to connect. 
Thanks for joining us and see you next time for another Great IO Get-Together.
