
Hosts Richard Landers and Tara Behrend welcome Dr. Tianjun Sun, Assistant Professor of Psychological Sciences at Rice University, to explore the foundations of psychometric measurement. Dr. Sun shares her unconventional journey from counseling psychology to becoming a quantitative methodologist, driven by a desire to quantify abstract concepts like personality and individual differences. The conversation covers the critical role of measurement in industrial-organizational psychology, the intersection of statistics and psychology, and practical applications in organizational settings. Dr. Sun emphasizes that solid measurement is the foundation of good research and offers insights on reliability, validity, and communicating complex psychometric concepts to non-technical audiences.
Key Takeaways:
- Psychometric measurement requires both psychological understanding and statistical rigor
- Reliability places a ceiling on validity in both research and professional relationships
- Individual differences can be quantified through careful measurement development
- Effective communication of measurement concepts requires translating technical language for practitioners
- Solid methodological foundations are essential for trustworthy research findings
- Every organizational problem involving people requires psychometric consideration
- Measurement validation is about ensuring numbers mean what you think they mean
- Career paths in IO psychology often emerge from combining complementary skill sets
Website: https://thegig.online/
Follow us on LinkedIn: https://www.linkedin.com/company/great-io/
Join our Discord here: https://discord.gg/WTzmBqvpyt
Join The GIG Email List: https://docs.google.com/forms/d/e/1FAIpQLSfVQ4hyF8MA4G9W-ERwVL8_e91a-MUMuhNvxhXmgkSFUDFatg/viewform?embedded=true
Transcript
[Richard Landers] (0:00 – 0:15)
Welcome to The Great IO Get Together on tonight’s show, quips and queries about the world of work as IO psychology comes alive. Please welcome our hosts, Richard and Tara. Welcome everyone to The Great IO Get Together number 33.
My name is Richard. This is my co-host Tara.
[Richard Landers] (0:16 – 0:31)
Today we are exploring Chapter 7 of our textbook, Research Methods for IO Psychology. This chapter is all about psychometric measurement. So to help us make better measures on the show today, we have Dr. Tianjun Sun, Assistant Professor of Psychological Sciences at Rice University. Welcome to the show.
[Tianjun Sun] (0:32 – 0:35)
Thank you. Thank you. We’re excited to be here.
[Richard Landers] (0:37 – 0:51)
So you are building a career really centered around innovative psychometrics and quantitative methods in general. So I’m curious what kind of first led you to that. So when did you decide this would be your niche?
What was your path into this?
[Tianjun Sun] (0:51 – 6:01)
Well, first of all, thank you for saying that I’m building a career. That means I am young. I am.
And I’m very excited about building this career that is innovative and sort of integrating psychometrics into IO psychology. To really think about how I started this, I think, like if you ask a lot of IO psychologists, they will say that it probably started when they were in counseling psychology. Right.
So I too started exploring more about psychology from a counseling and clinical perspective. I think that’s really how most people know about psychology. Right.
Being a therapist, sort of helping people address problems, their problems. So I was in counseling for a few years during undergrad, taking clinical classes, counseling classes, doing counseling practicum. Like paraprofessional programs.
And I soon realized that I don’t actually care that much about people’s problems. And I mean, that’s kind of like a joke. But I feel like I care about their problems, but I cannot really help them actually address them.
I think part of the rule of being a therapist is that you cannot really tell them what to do. And a lot of times, like, I’m not built for that. So that was quickly out.
But I have always really wanted to understand, like, why people make certain decisions, why people behave certain ways. And, like, in the classes and in the experiences of sort of doing these counseling programs or paraprofessional programs, I learned something that’s called individual differences. Right.
So a lot of times these are like personality, interests, all of those life experiences, those sorts of things. And I think my major issue is, and it’ll sound like I have a lot of issues with things, but I think that’s kind of funny, but my issue is with how many people talk about personality or talk about experiences in that context of, you know, what experiences affect people’s positions or downstream performance, those sorts of things.
It’s like there are not a lot of numbers. A lot of feelings, not a lot of numbers. And I really wanted something that can help me quantify all of these seemingly abstract ideas. So personality, interests, values, you know, social environments, you know, culture growing up, all of these.
These are like seemingly abstract ideas, but I really wanted a way to quantify them. Right. And then I was also majoring in statistics in undergrad.
That was another story. So I was looking for kind of lab experiences that could give me opportunities to kind of do both, or combine statistics and psychology. And I saw that Fritz’s lab was hiring undergrad RAs.
That’s how I first got started. And my first project in that lab was doing a meta-analysis on something leadership-related. And I was like, well, I don’t want to do that.
As a side project in that lab, I was helping with sort of item development for what later became a comprehensive personality inventory. And I was helping with validating and coming up with items to describe, you know, thoughts, behaviors, all those things. And I thought that was very interesting because you get to create things and you get to really think about, like, what the underlying thing is, that is personality, and what personality is as, kind of like, what we now know as latent constructs, right?
How personality as a latent construct actually influences people’s behaviors that later become these indicators that we use, right? In a latent variable approach. And I was like, that sounds fun.
And apparently there are so many things that you can do with measurements and with assessments, different ways of measuring constructs, different intrusive and non-intrusive ways of measuring things. And there are different kinds of, like, sources of variance to think about when we get data from assessments that essentially reflect people’s individual differences, and how stats can help address all of that. So I guess that was kind of like the long-winded answer of how I really wanted to dig deeper into psychometrics to really help purify our understanding, I guess, of individual differences, and really have a good way of accurately representing people’s individual differences, so that we really have a good number for it.
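For readers who want to see the latent construct and indicator idea written down, here is a minimal sketch of the classical reflective measurement model. The notation is standard textbook notation, not something quoted from the episode.

```latex
% Minimal reflective (factor-analytic) measurement model:
% each observed indicator x_i is treated as a noisy reflection of the
% latent construct theta (e.g., a personality trait).
x_i = \lambda_i \theta + \varepsilon_i, \qquad i = 1, \dots, p
% \lambda_i : loading of indicator i on the latent construct
% \theta    : the latent construct, commonly scaled so \theta \sim N(0, 1)
% \varepsilon_i : unique variance / measurement error, assumed uncorrelated with \theta
```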
[Richard Landers] (6:01 – 6:27)
A couple of things that you mentioned there that I want to follow up on. So one is stats. So you, I think, have something that a lot of grad students are recommended to do, which is, like, get extra stats training before you go into a doctoral program.
But it’s pretty uncommon. How would you say that that training and that perspective changes the way you approach psychometrics compared to maybe other folks that are pure psych people?
[Tianjun Sun] (6:27 – 11:54)
I think that’s a very interesting question. And I think that touches on a little bit with my life story. So my mom is a psychologist.
My dad is a mathematician. Being the pleasing child that I am, like how can I make both of them happy? Right.
So that was really, really the reason why I had both psychology and statistics. In the beginning, I was like, I’m going to do something entirely different from my parents. I think every child has a phase of that.
But I really wanted to do more with understanding human behaviors. So my mom, I think, in the American system, would be categorized as a child psychologist, a developmental psychologist. Well, I’m not going to work with children.
Why didn’t I say it like that? But anyway, so I’m not going to work with children. And my dad is a mathematician, works a lot with graph theory and stuff like that.
But I also don’t want to work with graph stuff. So I’m like, how can I do something that’s very different from my parents, but also make both of them happy? Right.
So I wanted to do individual differences, but I wanted to do it in a quantitative way. Right. And that seemed to be a very complementary major to get, to support the type of things I was interested in doing.
So that’s kind of like the side track of why I actually had those two majors in undergrad. But to kind of come back to the question of how psych and stats training helped me sort of develop into the way that I am today, I think they are really complementary to each other.
Right. And I do think at least in my personal experience, I do think there is this bi-directional relationship of how psychology is helping me learn stats better. And how stats training is helping me do psychology better.
And also, it’s kind of funny. Right. For the general public, if we, I don’t know, sample everyone and then just use some sample statistics on it.
I would think the general public would think psychology is more interesting than statistics. Right. So.
All right. So like in social situations, right. If I want to end a conversation really fast, I just tell people I do stats and they leave me alone.
But if I want to talk more about whatever I do, right, I would say I’m a psychologist. And then I’ll explain that by saying I’m not that type of psychologist. Right.
Not the type that you’re thinking of. I don’t read minds, even though I do think, like, if I get enough data, I can probably, like, predict what you do, stuff like that. But in a way, when I explain what I do as a psychologist, I say a lot of things about how I build statistical models to help explain behaviors, help predict kind of like outcomes and those sorts of things.
When I said that there is this bi-directional relationship, one helps the other become better, I meant that, you know, psychology, or studying psychology, creates a lot of these empirical and practical considerations that I think purely studying stats typically wouldn’t really expose you to.
For example, like the sample size considerations, the missing data, the time constraints of, like, how you can get information for all of those psychological attributes. I don’t think pure statistics training thinks a lot about those. I think some people work in those areas of addressing those issues.
But I tend to approach these from, like, if I have this psychometric issue, how can I use stats to better solve this issue, or correct for this issue, or mitigate this issue, so that I can actually study the underlying things better. Right. And a lot of times, I think that’s training, or just the general knowledge about what’s available out there, by staying on top of the methods.
Just knowing what’s available out there really helps me think more about what bolder things I can do with psychometrics, right? What are some bigger things that I can help address, or bigger questions I can help answer, with solid and sound methodological design and statistical considerations. So I think that’s really beneficial.
But I do I do know that a lot of people when they choose to go into psychology. One of the reasons they might say is they don’t like math. They don’t like stats.
But if you go into advanced studies, it’s really all you do. I don’t think you can really go that far without touching stats in psychology. And I consider myself a pretty quantitative person, at least within IO.
And I would recommend everyone to at least get some prior exposure to stats and quantitative training, or at least quantitative thinking, before they choose sort of graduate pursuits.
[Richard Landers] (11:55 – 12:13)
I’m intrigued by your characterization of the relationship between stats and psychology. So stats definitely inform psych. Does psych inform stats?
Do you still keep a foot in the hardcore stats area and see what you can bring in? How do you manage that at this point?
[Tianjun Sun] (12:14 – 14:19)
I wouldn’t call myself a hardcore statistician. Those statisticians would be very mad if I said that. But I do think psych informs what I focus on in stats.
So I do do some, like, statistical model development, but specifically to solve psychometric issues. So, for example, I focus a lot on kind of like mixture models. And under item response theory, I do a lot of work with ideal point models.
And I do ideal point model based, or more flexible, ways of modeling responses and data, which I would consider a more statistically focused area. And I wouldn’t have picked that aspect of stats if it was not for the psychometric focus or psychology focus. In more recent lines that I focus on, right now I do a lot of work with integrating psychometric models with AI models, very large language models.
And if we work with language models, specifically with embedding-based things, it’s kind of like high-dimensional matrices. And how do we massage the matrices so that psychometric models or principles can actually be inserted within and then operated on with psychometric principles?
I also consider that a statistical issue, or a statistics-focused issue. And that, in a way, prompts me to read more about high-dimensional statistics and latent variable models and try to borrow existing ideas from statistics that I can sort of adapt or modify to solve these issues. But I do think that if we’re talking about a bi-directional relationship, stats informs how I do psychology way more than how I do psychology informs stats, because I’m in a psychology department.
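As a rough illustration of the ideal point idea mentioned here, the sketch below contrasts a dominance-style (2PL) item response function with a simplified single-peaked unfolding function. The function names and parameter values are made up for illustration, and the unfolding curve is a toy squared-distance kernel rather than the actual models Dr. Sun works with.

```python
import numpy as np

def p_dominance_2pl(theta, a=1.5, b=0.0):
    """Dominance (2PL) model: endorsement probability rises monotonically
    with the latent trait theta. a = discrimination, b = difficulty/location."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def p_ideal_point(theta, delta=0.0, width=1.0):
    """A toy single-peaked (ideal point / unfolding) response function:
    endorsement is highest when theta is close to the item location delta
    and falls off as the person-item distance grows. This is a simplified
    squared-distance kernel, not the GGUM or any published model."""
    return np.exp(-((theta - delta) ** 2) / (2.0 * width ** 2))

theta_grid = np.linspace(-3, 3, 7)
print("theta  dominance  ideal-point")
for th in theta_grid:
    print(f"{th:5.1f}  {p_dominance_2pl(th):9.3f}  {p_ideal_point(th):11.3f}")
```

The contrast is the point: under a dominance model, higher standing on the trait always means a higher endorsement probability, while under an ideal point model, endorsement peaks where the person and item locations match.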
[Richard Landers] (14:21 – 15:12)
So part of what I’m thinking about is, I mean, you talked a little bit about moving into these sorts of NLP or AI based approaches. It seems like there are more opportunities for psych to influence those sorts of practices. And whether you call that more stats versus comp-sci-flavored stats or whatever, that’s kind of a weird question.
But it seems like there’s more opportunities for us to contribute in that space, not just in our own use of AI to, you know, improve psychometrics exactly, but also to inform the way that AI researchers approach these same sorts of processes. And that’s kind of just me standing up a different perspective. I don’t know.
Well, how do you view these sorts of AI shifts? Like, you’re obviously doing work in it. Is this the future of psychometrics? Where are we going?
[Tianjun Sun] (15:14 – 18:41)
I don’t know where we’re going, but I know where I kind of am going. So I do think AI is staying. Right.
I think AI is developing. There is no denying that. And AI is being integrated into basically everything at this point.
In terms of psychometrics, I think we shouldn’t really deviate from psychometric principles. So the validity consideration, reliability consideration, all the psychological principles, psychometric principles that we know from psychometrics, they’re not really changing. When we’re caring about fairness, bias, ethics, all of these, when we’re concerned with hiring practices, all of that underneath is really a psychometric consideration of what information are we using.
What inference are we drawing? And this process, this outcome, is it fair for everyone? So regardless of whether we’re using AI to do it, or we’re using some other technology or some other tools or systems, I think the principles pretty much stay the same.
And of course, we modernize it more with more tools and systems and new models. But where AI psychometrics is going, I think based on what I focus on right now, is I know that AI tools are being used a lot in assessments, in personnel management pipelines. What I try to do is instill psychometric thinking and psychometric models or principles into the process, so that even though the AI application front is soaring, going very fast, there are spaces for us to still explain it, validate it, and then defend it if needed.
So do I think the future of psychometrics cannot be away from AI? I think to some extent, yes, there is part of the psychometrics space as a field that would need to integrate with AI or AI-based psychometrics. But I think there is still a lot of value in just focusing on psychometrics itself as kind of like a discipline, and to really study how we better represent the constructs, measure the constructs, and then adapt models to better fit the data situations that we’re dealing with.
Because if we think about the data situations with AI and with other technology, we now have multimodal data, we now have much more complicated data than we used to have as a field, and psychometric models would need to adapt to that as well, to represent all the constructs we’re trying to study. And I think there is a lot of value in building, adapting, and examining psychometric models for those things, and then integrating that with AI.
[Richard Landers] (18:42 – 19:30)
Something I’ve noticed, I’m curious if you noticed this also, is that in the psychometric space, there seems to be, or the AI psychometric space, there seems to be a kind of split in two camps. And then there’s one that is like, how do we use AI to improve existing, traditional psychometrics practices? Like, how do we use it for item generation?
How do we use it for fairness evaluation? How do we use it to prevent redundancy between items? Like, very traditional kind of steps in measure development.
And then there’s another camp who is saying, we’re going to have open-ended conversations and we’re going to measure stuff out of them using AI now. And those seem to be different groups of people, for the most part. I’m curious if you resonate with that view at all, and which side, or both sides, do you feel like you identify with?
[Tianjun Sun] (19:30 – 23:54)
I think I recognize both groups. And I think, in a way, I’m in between, or I dabble in both. I don’t know if there is that clear split between the two, but I do know a lot of people focus on one versus the other.
But I think there is a clear connection between the two. For using AI to help with the psychometric process, item development, item validation, you know, and so on, all of those things, it’s operating with not entirely different methods, right? So it’s still language processing.
A lot of them are embedding-based and sort of, you know, some type of large language model tricks, right? It’s really just, once you use these models to represent these items or these constructs, then the downstream thing, for one camp, becomes how can we automate a lot of this process of item evaluation or pretesting, or, like, synthetic validation aspects of things. And for the other camp, it becomes, we use language models to represent human narratives.
And then this representation is then studied or modeled for representing individual differences or put into measurement models and stuff like those. So, like I said, I kind of work among both groups. I do some work in embedding-based psychometrics, basically advocating for relying on AI models or tools to supplement human judgments, right?
So there are certain processes in scale development or measurement work where AI and large language models can help us kind of evaluate item quality before we actually administer these items, right? So the prescreening or pretesting situation. And we can represent item characteristics through, like, pseudo-parameters, right? Pseudo-discrimination parameters, pseudo-difficulty parameters, and, you know, desirability ratings, all of those things, as kind of like quintessential psychometric considerations before we put these items into bigger and more complicated models, right?
So language models can certainly help with that, but I also have some work showing that language models cannot replace the human aspects of it, cannot replace the human expert judgment of evaluating item performance, cannot entirely replace the participants, right? Because they don’t represent the nuances very well, right? So that is one aspect where I think we can use AI, for that particular aspect of psychometrics.
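To make the pseudo-parameter idea concrete, here is a small, hypothetical prescreening sketch in Python. It assumes you already have embedding vectors for your draft items and for a construct definition (from whatever language model you use); the function names, the cosine-similarity proxy for discrimination, and the 0.9 redundancy cutoff are all illustrative choices, not a published procedure.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def prescreen_items(item_embeddings, construct_embedding, redundancy_cutoff=0.9):
    """Toy pre-administration screen: treat each item's similarity to the
    construct definition as a rough 'pseudo-discrimination' signal, and flag
    item pairs whose mutual similarity suggests redundancy."""
    pseudo_discrimination = [cosine(e, construct_embedding) for e in item_embeddings]
    redundant_pairs = []
    for i in range(len(item_embeddings)):
        for j in range(i + 1, len(item_embeddings)):
            if cosine(item_embeddings[i], item_embeddings[j]) > redundancy_cutoff:
                redundant_pairs.append((i, j))
    return pseudo_discrimination, redundant_pairs

# Usage with made-up 4-dimensional "embeddings", just to show the shapes:
rng = np.random.default_rng(0)
items = [rng.normal(size=4) for _ in range(5)]
construct = rng.normal(size=4)
disc, redundant = prescreen_items(items, construct)
print(disc, redundant)
```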
On the other hand, I do also do a lot of work with sort of narrative identity, right? So I think a lot of work I do is with AI chatbots in interviewing people for their life narratives, life stories, for personality structure interviews, for kind of cognitive performances, for like, you know, interests, all of these sort of things, and stress narratives more recently. All of that is on this sort of premise that when we talk about our experiences, when we describe our behaviors or, you know, thinking all of these things, it carries a lot more nuances than sort of what items that we respond to.
And before, when we dealt with these narrative data or qualitative data, we kind of had to rely on manual coding, which was, you know, honestly not fun. Well, for some people, it’s very fun. But now with the language models, you can sort of create this human-AI collaboration to help process information and then represent these nuances, and then model these nuances to perhaps uncover new knowledge about the constructs we’re trying to study, and then to measure the individual differences and try to, you know, understand different contextualized behaviors and thoughts and feelings, all of those.
So on that, on that camp, I also support that.
[Tara Behrend] (23:55 – 24:13)
How would you say that your approach there is different from earlier models that were using trace data of people, say, their social media or their experience in a game? I mean, these are also attempting to extract traits from natural behavior. Like, how do you see that that has evolved in what you do?
[Tianjun Sun] (24:13 – 26:00)
That’s a very valid question. I haven’t actually done much with social media based approaches to assessing these things. I think personally, because, like, if people studied my social media, I would not be happy.
Anyway, so I don’t really do a lot of work with social media based approaches to assessing, you know, personality, individual differences, but I do see the parallel here, right? It’s volunteered information that is multimodal, that is, like, a lot of text. And I actually know a lot of people using this approach of getting text information through interviews or through other approaches and then processing it using technology to draw insights. But I do think my approach is different, in a way, in that there is some type of prompt, right?
At least in my approach, it’s kind of under this structured interview umbrella. Even though it is, you know, an AI chatbot doing the interview and you’re sort of interacting with an AI interviewer, it is a structured approach. And for downstream applications of, you know, hiring related assessments, and for a lot of clinical and health related applications, structured interviews are typically more useful in that way, right? You get more targeted information, and, you know, predictability-wise, all of those things.
So I do think that focusing on this structured interview approach, or this umbrella, is more helpful in sort of the application aspects that we’re thinking of.
[Richard Landers] (26:00 – 26:36)
So you’ve made some sort of sideways references to specific stuff you’ve worked on. I want to ask a more directed question, then. Is there a specific project that you think highlights your approach most directly, or your philosophy toward this kind of measurement?
You know, I know you’ve done work in kind of like faking and equity and fairness dimensions, but a lot of innovation in kind of blending approaches and trying to figure out how to make the most of these tools. Like, is there a paper you point people to that you’d say, like, this is, I’m super proud of this. Where would you look?
[Tianjun Sun] (26:36 – 28:42)
So a lot of the work that I’m super proud of is still in the review process, which, you know, where do I file this complaint to make the process faster? I do have some work that I’m particularly proud of, that is in the review process, that I think represents what I talk about a lot, really. So one example is I’m right now working on this embedding-based item response theory framework to study differential embedding functioning, or differential embedding dimension functioning, DDF, which is basically drawn from the differential item functioning framework.
But with some modification of hierarchical models to adapt to, you know, the high-dimensional continuous matrices that embedding representations typically are in. And then using this differential item functioning framework to study whether embedding models represent information the same way across demographic groups. And if we sort of propagate this entire process into kind of like the hiring considerations, then there are measurement bias things to examine and to think about, and also predictive bias things to think about.
And these types of biases, or these types of pipelines, in a way, they’re less transparent if we’re just relying on AI models or language processing tools to yield outcomes, because things can be masked within these language models. So I’m particularly proud of that line of work. And that is also the line of work that is currently being funded.
So if you give me money, I’d be even more proud of it. So that is some work that I think is combining a lot of psychometric knowledge and principles, you know, considerations, with AI technology. And all I hope for is for the process to go faster and things to come out.
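Since the DDF work itself is still under review, the sketch below only shows the general flavor of a regression-based DIF-style check applied to a single embedding dimension: after conditioning on a matching variable, does group membership still predict the dimension score? This is a simplified stand-in written for illustration, not Dr. Sun's actual framework, and the variable names and simulated data are made up.

```python
import numpy as np

def dif_style_check(dim_scores, group, matching_score):
    """Simplified analogue of a regression-based DIF check for one embedding
    dimension: after controlling for a matching variable (e.g., an overall
    trait estimate), does a 0/1 group indicator still predict the dimension
    score? A large group coefficient would be a red flag worth a closer look."""
    X = np.column_stack([
        np.ones_like(matching_score),   # intercept
        matching_score,                 # matching / conditioning variable
        group.astype(float),            # 0/1 group indicator
    ])
    beta, *_ = np.linalg.lstsq(X, dim_scores, rcond=None)
    return {"intercept": beta[0], "matching": beta[1], "group_effect": beta[2]}

# Usage with simulated data: the group effect should be near zero here.
rng = np.random.default_rng(1)
n = 500
group = rng.integers(0, 2, size=n)
matching = rng.normal(size=n)
dim = 0.8 * matching + rng.normal(scale=0.5, size=n)
print(dif_style_check(dim, group, matching))
```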
[Tara Behrend] (28:42 – 29:00)
Well, we can certainly forward your complaint to the management and see if we can. I’m speaking to the management right now. I feel like you’re among the management circle.
I’m afraid not. No, but we can definitely, you know, pull on our inner Karens and complain to the manager as needed.
[Richard Landers] (29:01 – 30:02)
So, a lot of the concepts that we’re talking about. So I think you hit on it at the beginning, that by having quant training, you kind of open up a world of options and analytic approaches that you previously wouldn’t be able to use. But for a lot of folks, especially folks, I think, with, you know, master’s degrees who are graduating from IO programs.
They’ve had maybe one or two stats or measurement courses. Maybe they then have to come into organizations and convince people that they should care about measurement. And some of the stuff we’re talking about here, I think, is probably already way above CEO comprehension level, right?
So how do you recommend approaching this kind of problem? Like, if you wanted to arm a fresh graduate with the tools they needed to explain why they should care about psychometrics, why organizations should care about psychometrics. I don’t know.
What kind of thing would you tell them? What’s the point of all this?
[Tara Behrend] (30:02 – 30:06)
Wait a second. Are you saying that CEOs don’t like high dimension embeddings?
[Richard Landers] (30:08 – 30:09)
That’s not all of them.
[Tianjun Sun] (30:10 – 32:46)
They’re high-dimensional beings. I think this touches on sort of the translational science aspects of a lot of methodological work. I don’t know if I’m doing a very good job at that.
But I tend to tell students that I think psychometrics is really the foundation or measurement is really the foundation of many things, especially organizational decision making specifically related to, you know, personnel type of things. If we think about the hiring process, the promotion process, change management, all of those, right? You have to think about like, what are the metrics of these things, right?
And when people say what are the metrics of these things, they actually mean how do we measure this, right? And I think organizations, they care about, you know, hitting the numbers, hitting the targets, right? Where did the targets come from?
Where did the numbers come from? What does it mean to hit the number? And do you have a good number to represent the thing you want to hit?
All of these are measurement considerations, right? So I think to establish the understanding or really the consensus that measurement is important is probably the first step. But beyond that, I also tend to, in a way, joke that everything is kind of regression, right?
Everything’s kind of regression. If you understand correlation, you basically understand the majority of, like, whatever measurement things we’re talking about here. So, like, when we’re thinking about all the validity challenges, all the validity things, if you really simplify it to the basic level, it’s really looking at whether things are correlated.
Whether one thing that you are using is correlated with something else that you care about downstream, and whether this job-relevant something is really a good consideration for different people, right? For different groups, right? All of that can be understood as some type of correlation, right?
A lot of conditional correlations, but it can be understood as relationships between variables, between sort of constructs, between things we think about. And there is a way of conceptually understanding psychometrics without really looking at equations, even though I really like to show equations, because I think it’s straightforward, but I don’t think a lot of people agree with that. Whenever I show equations, I’m like, so clear.
They’re like, no. I’m like, okay.
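For readers who like the "everything is kind of regression" framing spelled out, here is a tiny illustration: a criterion-related validity coefficient computed as a plain correlation, plus the classical correction for attenuation that connects it back to reliability. The data are simulated and the reliability values (0.8 and 0.7) are arbitrary.

```python
import numpy as np

def validity_as_correlation(predictor, criterion):
    """The 'everything is kind of regression' point in one line: a criterion-
    related validity coefficient is just the correlation between the predictor
    score and the outcome you care about."""
    return float(np.corrcoef(predictor, criterion)[0, 1])

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Classical disattenuation formula: the correlation between true scores is
    the observed correlation divided by the square root of the two
    reliabilities, which is also why low reliability caps observed validity."""
    return r_xy / np.sqrt(rel_x * rel_y)

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 0.4 * x + rng.normal(scale=0.9, size=200)
r = validity_as_correlation(x, y)
print(round(r, 3), round(correct_for_attenuation(r, 0.8, 0.7), 3))
```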
[Richard Landers] (32:46 – 32:48)
That’s the degree coming out.
[Tianjun Sun] (32:49 – 33:57)
One simple equation here, like, everything’s right there, instead of, like, these three paragraphs. Anyway, so for that, I think there is this storytelling nature or aspect of communicating psychometrics, right? To embed that in specific examples, right?
So, like, employee listening and engagement, all of those, right? You have to have data somewhere, like, where do you get this data? What questions do you ask?
How do you summarize all of these things? And then how do you communicate that with, you know, the bosses, the CEOs who don’t look at the raw data, right? And all of that is kind of like the measurement process, the psychometric consideration.
And to really hone in on the questions that organizations are interested in solving, I think anything related to people in organizations, they’re kind of like IO problems. And for anything related to addressing the IO problems in organizations, I don’t think you can, you know, bypass thinking about psychometrics.
[Richard Landers] (33:57 – 34:19)
I feel like there’s a mindset shift that occurred after grad school where I realized that instead of talking about construct validity, I think the framing you’re using is very similar to what I use, which is like, do the numbers mean what you think they mean? Which is much more accessible than talking about validation evidence and all the other things.
[Tianjun Sun] (34:19 – 34:23)
Are we swapping ideas here?
[Tara Behrend] (34:23 – 34:27)
I don’t care what the numbers mean, I just care if they go up. That’s it.
[Tianjun Sun] (34:28 – 34:36)
And then there are ways to make numbers go up when the underlying things are not going up. And that’s also a measurement consideration.
[Richard Landers] (34:36 – 35:11)
So we’re in the last few minutes here. I would love to hear about, you know, you talked a lot about your journey being really interesting and I think relatable in a lot of ways to a lot of folks in IO. I’m wondering that along this path, especially with your stats angles, if there’s any specific or surprising lessons that you’ve gotten out of that that you could share with us.
And I don’t even necessarily mean about like psychometrics itself, but, you know, about people or work or you. What kind of lessons have you pulled out of psychometrics?
[Tianjun Sun] (35:11 – 39:05)
I think this is a really interesting question. I kind of think that, you know, by studying psychometrics or measurement, I’ve become a much more critical person, but also accepting. In a way, I tend to think about many things as a variance decomposition problem, right?
Like, why do we observe these phenomena, right? Why is the world, you know, in this state, like, what is contributing to these? So, parse out the variances, parse out, like, the factors influencing things.
And then, in a way, to think about things, you know, from a measurement standpoint, to think about, like, what are the indicators for the underlying things these are trying to represent? And by parsing out what may be the causes or what may be the reasons, I think I’ve become more accepting, right? In a way, like, I think I understand why certain things are certain things.
I understand why, you know, some people are certain ways, which I, you know, attribute that to, I don’t know, a measurement standpoint, a measurement perspective. And I don’t, I don’t think I would necessarily think from that standpoint without really sort of a kind of like psychometric, I don’t know, priming. Another, I think, joke is, I think I have a mantra that is really psychometrics inspired.
Well, I think it’s really just psychometrics, right? So reliability places the ceiling on validity, right? We know, we know that.
And I think that says a lot about being a person, right? Like, if a person is not reliable, like, regardless of their maximal behavior, like how good they can be, you don’t want to give them stuff to work on, or you don’t want to work a lot with that person, because reliability places a ceiling on validity, right? Like, if the person is not reliable, they’re not going to be valid at anything.
Anyway, so that’s kind of like a, kind of like a joke thing of me looking at or judging people, right? Is this a reliable person for this type of things? And along kind of a similar line.
So I trained under Fritz Drasgow, and I was exposed to sort of psychometric thinking or quantitative thinking along my professional developmental process. And Fritz used to say, just do good work, and good work will be rewarded, and good things will come, right? If you do good work.
And I’ve always kind of thought that solid measurement is really a perfect example of good work. You know, measurement is not the most interesting thing. Well, it’s very interesting to me.
But I think, like, out there, measurement is not the most flashy area. Like, people sometimes would say that psychometrics is a niche field. I’m like, is it really?
But, you know, it’s kind of like slow work; it is careful, principled work. But if we do solid measurement, if we do solid methodology, we kind of build, you know, trust, right, in the findings we get, and in the type of, I don’t know, answers we find in addressing questions. So I wouldn’t necessarily say that’s a surprising lesson, but I do think it kind of consolidates that you’ve got to do solid work, right, you’ve got to approach things in a solid way.
Again, reliability places a ceiling on validity, right? If, you know, your foundation is not solid, nothing else will really be solid later on.
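Written out, the mantra corresponds to the classic attenuation inequality; the symbols below are standard (the observed correlation between x and y, and the two measures' reliabilities), not notation from the episode.

```latex
% Reliability places a ceiling on validity:
% the observed correlation between a predictor x and a criterion y cannot
% exceed the square root of the product of their reliabilities.
r_{xy} \;\le\; \sqrt{r_{xx'}\, r_{yy'}} \;\le\; \sqrt{r_{xx'}}
```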
[Richard Landers] (39:05 – 39:19)
I think there was a famous statistician, whose name now escapes me, unless it was Box. Now I don’t know. But, basically, without good measurement, we have nothing.
And I think that’s the, certainly the approach that you’re arguing for here.
[Tara Behrend] (39:20 – 39:37)
I wish we had heard that story before we wrote the book, because that would be a great thing to put in the advice section, you know, where we talk about building a reputation and building a team. I think that’s great, and it’s very true. We wanted to know, as a listener of the podcast, what’s your favorite episode?
[Tianjun Sun] (39:38 – 40:22)
I think among my favorite recent episodes, I really liked the Fred and Deanna episode, where they kind of looked at IO psychology from a lifespan perspective. And I really enjoyed listening to their perspectives and the intellectual sparkles from that conversation. I also particularly enjoyed the shorts of Denisons, right?
Those are some hot takes. And speaking of hot takes, I was bummed that I was not a part of the spice eating episode, because that would have been so great.
[Tara Behrend] (40:22 – 40:49)
That was probably one of my favorites. We should do a repeat, I think, because there’s no shortage of hot takes. So, yeah, maybe we’ll plan that for a future episode to do a season two hot takes.
But have you come up with new games that you can run? I’ve already ruined every game show I can think of. Like, I can’t think of any other game shows.
But there’s no rules about repeating game shows, so we could perhaps revisit some of our favorites.
[Tianjun Sun] (40:49 – 41:08)
But if you have any ideas, we’re always open. Yeah, I think we can, you know, expand the hot takes game into like sauce based, food based, you know, ingredient, all of those, right? And then you just have more people spitting hot takes and crying on camera.
[Tara Behrend] (41:09 – 41:12)
Spoken like someone who truly loves to decompose variance. I love it.
[Richard Landers] (41:12 – 41:25)
All right, well, again, thank you so much for doing this. This has been really great. I think we’re gonna inspire some future psychometricians for sure.
And yeah, it’s just been great to have you on.
[Tianjun Sun] (41:25 – 41:27)
Yeah, thank you for having me.
