Designing Better Research: Franki Kung on Quantitative Methods in Industrial-Organizational Psychology

The Great IO Get-Together (The GIG)

Dr. Franki Kung, Associate Professor at Purdue University, discusses research design in quantitative methods, blending insights from social psychology and industrial-organizational psychology. Franki shares how to prioritize research questions over methods, advocates for mixed-methods approaches, and addresses the challenges of creating culturally sensitive measurement tools. The conversation explores creative field experiments, self-fulfilling prophecies in employment interviews, and the importance of diverse research designs in advancing workplace psychology.

Key Takeaways:

  • Start with research questions rather than methods when designing studies
  • Mixed methods can honor lived experiences while creating measurable impact
  • Consider operationalization, confounds, and resource constraints systematically
  • Challenge dominant research design norms that may crowd out innovative work
  • Cultural context matters for valid research conclusions
  • Creative field experiments can demonstrate complex phenomena like self-fulfilling prophecy
  • Peer review processes need meta-science approaches to improve validity and accessibility
  • Qualitative methods deserve equal consideration in IO psychology research
  • Research design choices should match the questions being asked
  • Innovation comes from questioning established methodological norms

Reviewer Zero: https://www.reviewerzero.net/
Aly, M., Colunga, E., Crockett, M. J., Goldrick, M., Gomez, P., Kung, F. Y. H., McKee, P. C., Pérez, M., Stilwell, S. M., & Diekman, A. B. (2023). Changing the culture of peer review for a more inclusive and equitable psychological science. Journal of Experimental Psychology: General, 152(12), 3546–3565. https://doi.org/10.1037/xge0001461

Website: https://thegig.online/
Follow us on LinkedIn: https://www.linkedin.com/company/great-io/
Join our Discord here: https://discord.gg/JcTcMu335K
Join The GIG Email List: https://docs.google.com/forms/d/e/1FAIpQLSfVQ4hyF8MA4G9W-ERwVL8_e91a-MUMuhNvxhXmgkSFUDFatg/viewform?embedded=true

Transcript

[Richard Landers] (0:00 - 0:33)
Welcome to The Great IO Get-Together. On tonight's show, quips and queries about the world of work as IO psychology comes alive. Now please welcome our hosts, Richard and Tara. Welcome, everyone, to The Great IO Get-Together number 34.

My name is Richard. This is my co-host, Tara. Today we are exploring Chapter 8 of our textbook, Research Methods for IO Psychology.

And this chapter is all about research design in quantitative methods. So to help us design better quant studies, on the show today we have Dr. Franki Kung, Associate Professor of Psychological Sciences at Purdue University. Thanks for having me here.

[Tara Behrend] (0:33 - 0:55)
We are super excited to have you, Franki. I'm really looking forward to hearing how you think about this issue, given how many different designs you use and how clearly you are a leading thinker in this area. So maybe you can start by just telling us how you approach the design process in your work.

Like when you've got a question, is there a particular design that you gravitate to? How does that thought process work?

[Franki Kung] (0:55 - 2:20)
I would preface by saying that my PhD training was in both IO and social psychology. And one of the fun parts about being in this interdisciplinary space is learning different methods and applying different theories, which drive different research designs. When I start doing a study, I think the most important thing for me and my students to think about is: what question are we asking?

Always starting with the question instead of the method. And I know some people like to think about the method first and then see what questions they can answer, which I think is also another viable approach. But the autonomy of having a job in academia often allows us to think about a question first.

So usually that's the process, how I think about research design. And when I have a question, then I think about operationalization. Like what are some choices I can make to find a valid way to measure what I want to measure?

And then I start to think about potential confounds and alternative explanations that I need to rule out. And at the same time, resource constraints and time constraints, especially before tenure, or for a graduate student who has limited time to finish the program. Like, I can't propose a 10-year longitudinal study; who knows where we'll be by then. So usually that's the sequence of how I think about research design.

[Tara Behrend] (2:21 - 2:59)
That's super helpful. So I appreciate you laying out those different factors, and I want to come back to each of them. We'll come back to the ideas of constraints and confounds.

First, I want to talk more about your mixed background in social psychology. You know, when I think of social psychology cliches, it's like doing a lot of little experiments, highly constrained environments, not a lot of realism. Maybe that stereotype is not fair.

Maybe the field is getting past that. But that's definitely the first place that I go to think about social psychology research. So when you try to blend those backgrounds that you have, how are you balancing the control versus the richness and the realism that you also bring out in your work?

[Franki Kung] (3:00 - 5:16)
Social psychology is quite diverse in the kinds of methods people use. People who have a background leaning more toward the cognitive kind of psychology tend to do more contrived lab experiments. And there are also a lot of social psychologists who do field studies.

I started in grad school in IO psychology, and I love the practical aspects. I love doing research that can make a real impact in the world.

I love thinking about doing research that makes our work life better, because we spend so much time at work. At the same time, I was drawn by some of the really creative theory and design that we can borrow from social psychology for testing some of the important questions that IO colleagues are asking. So I would say one of the things that I've been enjoying in blending the two areas is increasingly using mixed methods in my own research.

So I'll just give you an example of that. Recently, my student Rick Young, who started this fall as a faculty member at Baruch College, and I did a study trying to come up with a way to measure Asian-American workers' dehumanization experiences.

Using mixed methods actually allowed us to first do an in-depth interview study, talking to people from different backgrounds to see what their work experience is like, before we even started creating and thinking about the measurement and subsequent studies. And that was such a rewarding process using this mixed-methods design, because later on, when we created that workplace dehumanization scale, a lot of the scale items were a version of particular participants' own lived experiences translated into the measure. And it's just so fun to see how, even in thinking about research design, we can honor and turn the diverse, real-world lived experiences of humans into measurable impact.

And that's one way I try to bridge the two worlds.

[Tara Behrend] (5:17 - 5:46)
I really love that example. Yeah, it's such a nice way of showcasing how you think about the unique contribution of each kind of design. As you were talking, I was wondering, as AI methods become more popular and people are more tempted to skip that hard work of talking to individual human people, are you worried that we're going to be missing a category of information that only really comes from those individual conversations?

[Franki Kung] (5:46 - 6:38)
Yeah, what a good question. I mean, I would ask you the same question, because that's your research expertise. And I have found that some survey companies have now created AI participants.

I don't know if you've heard of it. I haven't tried it. It did scare me a little bit to think that now we don't even need humans as participants in our studies, and AI can be a participant.

And when we use AI participants, whose responses are they representing? What knowledge are we learning, and can we generalize to broad and diverse populations? I think your chapter in the textbook talks a lot about the importance of generalizability. So I think using AI doesn't escape the foundational principles: when we think about validity and generalizability, those would be important concerns here as well.

[Tara Behrend] (6:39 - 8:01)
Yeah, I mean, I agree with you for what it's worth. I think that there are lots of kinds of knowledge that we know are not represented on the internet. Like some kinds of knowledge don't get written down.

I think we also see that the personas that AI tries to create, they can't really keep it together very long. They can't faithfully recreate a person with defined characteristics for very long, which means you're always regressing to the mean. You get the average of all scenarios, which is exactly the opposite of what you're talking about, which is about appreciating individuality and the things that a survey would miss, essentially, because they come from a human being.

The bottom line argument, I think, that we try to make in this book is that every question has an appropriate design, but there isn't a design that goes with all questions, and you just need to make sure that you line those up properly. All right. So earlier you mentioned constraints, and you were talking about pre-tenure.

Maybe you don't have a lot of resources. I know this is surprising to all the people listening who think that professors are living large, but in fact, that is not the case. And so maybe the constraints are limited resources, maybe it's access to data, maybe the data were collected by someone else and they're messy.

Can you think about an example where the constraint actually pushed you to think creatively in a way that led to a more interesting or creative design?

[Franki Kung] (8:02 - 10:57)
Yeah, what a fun question to think about. I mean, some of the constraints that I can think of are no different from constraints in life, in work and study outside of the research context, right? Time and energy and cost, those are all limited.

I think in research, one factor that I need to constantly remind myself of, and I don't typically think of it as a constraint necessarily but as a responsibility, is the ethical aspect: when we do the design, we need to think about doing no harm and about the impact on the people participating in the study. When I was in graduate school, I was interested in studying performance feedback, and I think being able to deliver good-quality feedback is very important. In order to study that, sometimes you have to manipulate the content of the performance feedback, including sometimes manipulating how well or how poorly the person performed, which is not true to the objective performance of the person.

And the question then is, can we do that in real organizations? Like going to a company and randomly assigning employees to receive different kinds of feedback, which could potentially affect their merit increases and salaries; beyond just ethical concerns, there are legal and many other concerns. So even if the organization allows you to do that, I don't think researchers should be doing it without carefully considering some of these issues. That pushed me and my team to think about, well, how do we manipulate performance feedback in a way that is still real to the participant and allows us to test the research question and get evidence for causality? In the end, we collaborated with an English writing center on campus at a university.

So students would go there to learn about writing anyway. In the process of that, we recruited some participants to randomly receive different kinds of feedback, and we measured their responses. So the context itself is real, and yet there is some room for the researchers to come in and manipulate the feedback in an ethical and responsible way to get at the feedback receiver's reaction.

And then we get a chance to debrief and tell them, this is what we were trying to do, and to apologize directly to the person and say, I'm sorry that there was deception involved, right? To really treat the participant with dignity and respect, while also doing the research that would inform the literature. So that's one example that came to mind, but there are numerous situations like this that applied IO psychologists, especially, might need to think about every day.

[Tara Behrend] (10:58 - 11:37)
That's a good example. And yeah, feedback is a perfect example of the importance of controls because you need to think about who's delivering the feedback and what the person's actual level of ability is, right? So if you just looked at the feedback without experimentally controlling it, like it would be confounded by how good the person was.

So it's an especially tricky thing, I think, to study, and I love the approach that you described. When you work with your students, I mean, is this how the conversations look in your lab when you're designing your new study together? Like, how do you get them thinking about choosing quant designs or like noticing what those confounds might be?

[Franki Kung] (11:38 - 15:43)
I think there are several ways to think about that. One, relying on the general principles that we already know about what makes good design good design. So in your chapter, you talk about validity, right?

Like, no matter what design researchers choose to use, it is supposed to be valid. That is not something that changes across the designs that we choose. Another aspect that we just talked about is the ethical, responsible approach, right?

Like, no matter what design we choose, we're trying to encourage students to think about the impact of that design. And there are also some general psychology principles that we can leverage: using what we know from psychology in the design of psychology experiments to enhance the research experience and achieve a more robust effect of a manipulation.

For example, thinking about the study from the participant's perspective, almost like user experience research in some way, right? Like, if you were a participant going through the study, how can we make it more intuitive? How can we make it more clear?

How can we reduce distraction? Sometimes even the smallest thing, I think, can distract participants. For example, I got my PhD in Canada, where certain words are spelled differently compared to how researchers in the U.S. spell them. So when doing studies in a different country, we have to be very careful about spelling, so that the different spellings don't distract participants into thinking, ooh, why is this happening? Or thinking about social distance, right? Like, maybe the researcher is coming from a different background than I am. These kinds of subtle things.

I would also say, beyond leveraging psychology, leverage randomness. Randomness can be our enemy, but randomness in design is actually so fun to think about. Random assignment helps us find better evidence for causality; counterbalancing the orders of the questions and randomizing the different versions that participants receive ensure that the superficial features of the quantitative design have no room to affect the conclusions that you're drawing.
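
To make those randomization ideas concrete, here is a minimal Python sketch of random assignment to conditions plus counterbalancing of question-block order. The condition and block names are illustrative placeholders, not from any study discussed in the episode.

```python
import random
from itertools import permutations

N_PARTICIPANTS = 8
conditions = ["positive_feedback", "negative_feedback"]
question_blocks = ["A", "B", "C"]

# Full counterbalancing: enumerate every ordering of the question blocks,
# then cycle through them so each ordering is used equally often.
orders = list(permutations(question_blocks))  # 6 orderings for 3 blocks

random.seed(42)  # reproducible assignment for this illustration
for pid in range(N_PARTICIPANTS):
    condition = random.choice(conditions)    # simple random assignment
    block_order = orders[pid % len(orders)]  # counterbalanced block order
    print(f"Participant {pid}: {condition}, blocks {'-'.join(block_order)}")
```

With counterbalancing like this, any effect of block order averages out across conditions instead of masquerading as a treatment effect.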

Those are some of the things I can think about. If I were to answer your question, Tara, in a more personal way: I have this acronym that, when I mention it, my students sometimes roll their eyes because I've talked about it so often. When we do research, we think about this acronym, FMRI.

I'm not a neuroscientist; it just happened that way. F is for fun. A lot of people forget, I think, especially students, that in doing a creative study design or using a creative or new quantitative method, there's a cost to learning it.

So it's not like I choose that method, then I immediately know how to do it. Like doing multi-level modeling for the first time, you have to study multi-level modeling first, right? So how to allow students to choose a method that is fun for them to learn and factor that into the training.

And M is for meaningful: choosing a method that is personally or theoretically relevant to what they're trying to study. And that sometimes might even change the direction of the research or the level of analysis that we're looking at. R is for relational.

Is there a way we can leverage the research design to serve a community that we want to have a long-term relationship with, or have them participate in the design? Is there a team that we want to build, perhaps with other organizations that can collaborate on that work? So that's something that we think a lot about.

And finally, I is for impact, which I think is the million-dollar question: whose lives are we trying to use the knowledge to improve, and who are the people that we're trying to empower? That's something we think about a lot in research design as well.

[Tara Behrend] (15:43 - 16:48)
I love that. I'm stealing it for sure. And what I appreciate about your approach is that I think a lot of researchers put a survey together and then after the fact go back and say, well, I've got some insufficient effort responding or like these people weren't engaged.

But you're approaching it from a much more humane perspective, which is: design an experience that they would want to pay attention to, that they would feel engaged with. What a better approach, to respect their time and create an experience that is maybe valuable for them too. And it doesn't take that much more effort to do it the right way.

So why not? I really like it. I also really like what you said before about experimental control is really important and when you can't do that, just throw some chaos in there and some randomness and I think that's a really good piece of advice.

I really appreciate that. But I wanted to shift gears a little bit and ask you to talk a little bit about your work with Reviewer Zero. I think this is a cool initiative that you're involved with.

Maybe you can tell us a little bit more about the project and how it works.

[Franki Kung] (16:48 - 20:00)
I'm so glad you asked, because to me, my involvement with Reviewer Zero is so much about, again, using psychology to improve the system of psychology and the science of psychology. Reviewer Zero was started in 2020 by a group of psychology and neuroscience faculty and graduate students, with the goal of reimagining the peer review system to increase representation, both in the kinds of scholars in the field and in the kinds of scholarship that we publish, so that we can more comprehensively understand the psychology of every human being, and so that it's relevant to everyone. The reason why it's called Reviewer Zero is that, when we think about the peer review process, so for those who haven't had peer review experience yet, the way scientists make sure, or try to make sure, that things in the literature are rigorous is by relying on other peer scientists to review and critique your work and improve your work so that it is publishable, so to speak.

In this process, often, as Richard and Tara both have experienced, peer review can be very tough and negative, and it's never nice to receive negative feedback in a way that feels personal. So we have this kind of saying that Reviewer Two is always the most critical reviewer. In reimagining peer review, if we can have a more developmental, formative experience that makes the peer review process more about improving the science and the scientists doing the work, maybe we can be the Reviewer Zero: be the person who can offer support, training, and coaching, especially for students and trainees who might not have support in navigating the peer review process before they submit their papers for the first time. So that's how the name came about, and it's really pushing for a reset of how we imagine peer review culture could be. And we do a lot of workshops with journal editors and reviewers.

We recently did some with new editorial boards for social psych journals and also for publishing groups. We did a workshop with Nature Publishing not too long ago, and we also do trainee workshops revealing some of the hidden curriculum of what peer review is like. From the surveys and feedback that we got, a lot of people sometimes don't understand even things that, you know, researchers who have published multiple times take for granted.

Students might not even understand that getting an R&R is actually a good thing, because often a revise-and-resubmit is written in a way that is so critical. A first-time submitter to a journal might not understand that that's the best possible outcome other than a straight, right-away acceptance, which almost never happens. So we're trying to talk about this behind-the-scenes hidden curriculum with graduate students and undergraduate students to prepare them to go through the peer review process.

[Tara Behrend] (20:01 - 20:30)
It's an amazing initiative. So can anybody, can any student, sign up? They should just go to the website; we'll link it somewhere.

I mean, in the context of what we're talking about today, then, how much do issues of research design come up as something that the Reviewer Zero team is trying to get people to talk about differently? Like, is it about rejecting the sort of gut instinct to say, I don't know this method. It's not my method.

And so it must be terrible. Like, how does that, how does all that look?

[Franki Kung] (20:31 - 23:52)
So I think there are two ways to think about how research design is relevant in the peer review context. One is: what are the norms and the dominant research designs that we see in the field, and how are they being upheld as the gold standard? In some ways that could be a good practice, right? Because we want to know best practices. But in other ways, having dominant research methods actually crowds out some of the really exciting, creative, meaningful work that might not fit the box and so doesn't get published.

In IO psychology, for example, I think it's safe to say that because of the tradition of using psychometrics and quantitative methods, often when you submit a qualitative paper, you might not get reviewers who have qualitative design training. So A, they might not be asking the right questions in evaluating your research, or B, they might just not appreciate that work as much. Because, if we use psychology to understand it: if the reviewers themselves enjoyed or appreciated qualitative methods, they probably would have learned them in grad school or in some other way. The fact that they haven't learned them means chances are they're less likely to appreciate them as much, right?

That leads to the second part of how I think about research design in relation to the peer review process: there is such an important meta-science opportunity for us to use what we have learned in research design to think about the evaluative process in the peer review system, and how we can make it more valid, more generalizable, more ethical, more responsible, more open, and less costly, or at least make sure the cost is not disproportionately distributed so that certain researchers or certain groups pay more of a premium in order to publish.

In that context, right, we can think about how we can make our peer review process more globally accessible. Does it have to be the case that we need employees from Fortune 500 companies to participate in order to publish in a high-tier journal? Can a researcher who has a meaningful dataset from a local, regional, but important company in their country be seen as just as important to publish, right?

So that's one way of thinking about it. But Tara also mentioned the idea about research methods. Depending on the cultural context and the research questions people are asking, sometimes different research designs are better at answering different questions, too, right?

So there's the indigenous approach to thinking about psychology methods: being able to have researchers who have the cultural knowledge and the background to understand the when, the who, the how, and the why, which I think your textbook talks about as well. If you have an outsider who might not understand it, the way researchers understand these questions could be off and not, you know, contribute to valid statistical or research conclusions.

[Tara Behrend] (23:53 - 24:42)
It's such a good point. And it's so easy as a reviewer to, I guess, commit that, you know, ecological fallacy and think that the way things are is the way they should be. But that's almost never the way to get innovation, right? Assuming that everything's fine the way it is. So I appreciate that you're doing all that work.

So with that in mind, with the idea in mind that, you know, there are forces at play that maybe keep people from being really creative in choosing designs, I have a sort of two-part question. First: is there a particular paper, or a person maybe, who you really think is doing this right and consistently producing interesting and creative designs?

And then second, are there designs that you wish you saw more of that are just not as present that you think would be useful to move the field forward?

[Franki Kung] (24:42 - 29:40)
I can think of many people that I admire. I'll quickly mention a few, and maybe give a little bit more context about one particular design that I thought was so genius that I come back to it often. I teach a survey of organizational psychology at Purdue University, and we talked about designs by Drs. Mikki Hebl and Eden King. In one of their papers, researchers wore prosthetics to appear pregnant and went to different stores, with hidden cameras, to see how they were treated.

I'm like, how cool is that that you get to do that? Like, I want to, well, I can't really sign up as a researcher in a study because people wouldn't believe that I'm pregnant. But how cool?

There is also Dr. Sonia Kang at the University of Toronto. She and others have very creatively generated bogus resumes with equivalent qualifications and sent them to companies to see how demographic characteristics affect callbacks and interview opportunities. It's sometimes so hard to find evidence to look at these things in an ecologically valid way, but there are research designs, and researchers, who do that.

In social psychology, Betsy Levy Paluck at Princeton University often does creative field studies. One study design that I love: she collaborated with a radio station and randomly assigned people from different regions of a country that had recently experienced genocide to hear educational messages as the intervention, and, I believe, a soap opera about something irrelevant as the control condition, to see if a certain educational program could increase people's likelihood of conflict resolution and peace moving forward. Like, how awesome is it that you get to do an experiment that, you know, literally changes a society as you're doing it?

I think these are very aspirational designs, in some ways, for undergraduate and graduate students to think about, because of the constraints on cost and resources. Wouldn't it be nice if a government wanted to collaborate with a researcher to do these things? But one design, and I admit that I'm biased because I graduated from the University of Waterloo: Dr. Mark Zanna, who is unfortunately not with us anymore, along with other researchers, including one of his proteges, Dr. Christine Logel, who is a professor now, used this design to understand self-fulfilling prophecy in contexts that affect workplace outcomes. I thought it is so hard to unpack, because how do you statistically, or in research, demonstrate that how one person perceives their world and treats their world affects the experience that the other person receives, and in turn affects that person's own outcome?

That is just so challenging, I think, as a question to answer. But he and his team created this method. For example, in one study, they were trying to look at interview biases against black interviewees for jobs. They recruited white laypeople to come to the lab as interviewers, and had confederates, white and black job interviewees who were actually researchers, and recorded the nonverbal behavior, the kinds of questions the interviewers might ask, the physical distance they kept from the interviewee, and a lot of this kind of social, nonverbal information, and found that there was a difference for these lay interviewers depending on the racial identity of the interviewee. And this is not the end of it. They did a follow-up study that had professional actors trained on how the interviewers, on average, would treat white interviewees and black interviewees differently, to create two conditions.

So there was the white-treatment condition and the black-treatment condition, and they invited white participants to come to the interview, randomly assigned to one of these two groups, and found that it is not the racial identity of the interviewee that affects interview performance and outcomes, but really how they're being treated. And this is a long way of saying that that is how they demonstrated self-fulfilling prophecy in employment interviewing, in a way that is generalizable to other contexts as well. They also recruited, for example, sexist interviewers and found that when the interviewer is sexist, an interviewee who was a woman would perform worse, which is, you know, not too surprising, but the fact that they could demonstrate it so reliably, in such a creative way.

I thought it's just a design that I think a lot of people can borrow.

[Tara Behrend] (29:40 - 29:56)
Well, Franki, it's so clear, you know, the care and thoughtfulness and humanity that you bring to your work. I think it's really inspiring, and I really, truly appreciate you coming to share some of your wisdom with us today. It's been a great conversation, so thank you.

[Richard Landers] (29:56 - 30:10)
Thank you. Join our LinkedIn group. Sign up for our email notification list and join our Discord.

Thanks for joining us, and see you next time for another Great IO Get-Together.