SEMINAR
“Model comparison in item response theory modeling”
by Ezgi Aytürk
Date: Monday, 22 February 2021
Time: 12:30-13:30
Place: Zoom Meeting
Abstract
Item response theory (IRT) is a set of latent variable techniques for modeling responses to psychometric tools such as tests and questionnaires. Every IRT model makes a set of assumptions about how item properties (e.g., difficulty, discrimination, response scale) and the latent variable of interest interact to determine item responses. Consequently, IRT models vary in their complexity, and the scoring of respondents on the latent trait depends on the specific IRT model chosen. In IRT model comparison studies, researchers fit several IRT models and try to find the one that best balances model-data fit and generalizability. In these studies, researchers usually rely on goodness-of-fit indices. In this talk, I will present my recent work showing that current goodness-of-fit indices do not properly account for differences in the complexities of IRT models and that they systematically favor more complex models regardless of the data. I will discuss the implications of these findings and future research directions.
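As a rough illustration of the kind of comparison the abstract describes, the sketch below computes two common complexity-penalized fit indices (AIC and BIC) for three nested IRT models. The log-likelihoods and parameter counts here are hypothetical, invented for the example (a 20-item test, 500 respondents, with the usual 1PL/2PL/3PL parameter structure); they are not results from the talk.

```python
import math

def aic(loglik, k):
    # Akaike information criterion: penalizes each free parameter by 2.
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # Bayesian information criterion: the per-parameter penalty
    # grows with sample size n, so BIC punishes complexity harder.
    return k * math.log(n) - 2 * loglik

# Hypothetical fit results for three IRT models on the same data
# (20 items, n = 500 respondents). Parameter counts per item:
# 1PL: difficulty b only; 2PL: a + b; 3PL: a + b + c.
fits = {
    "1PL": {"loglik": -5230.4, "k": 20},
    "2PL": {"loglik": -5198.7, "k": 40},
    "3PL": {"loglik": -5190.2, "k": 60},
}
n = 500

for name, fit in fits.items():
    print(f"{name}: AIC = {aic(fit['loglik'], fit['k']):.1f}, "
          f"BIC = {bic(fit['loglik'], fit['k'], n):.1f}")
```

With these made-up numbers, AIC prefers the 2PL while BIC's heavier penalty prefers the 1PL, showing how the choice of index, not just the data, can drive which model "wins"; the talk's argument is that even such penalized indices can under-correct for IRT model complexity.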