About the project
To make sense of comparisons, a meaningful baseline is needed. This principle also holds in modeling practice. A frequently used baseline is provided by so-called null models. Such null models can be specified in many ways, yet a characterization in which all observed variables are assumed to be uncorrelated seems to have become the default. In this project, we focus on two different applications of such a null model.
A1: Model fit evaluation with the Comparative Fit Index (CFI)
The null model is possibly best known for its role in model fit evaluation with incremental fit indices in Structural Equation Modeling (SEM). Here, the null model serves as a baseline for model assessment: the fit of a model of interest is compared to the fit of the null model. Different rules-of-thumb have been proposed for determining whether the fit of the model of interest is acceptable. These rules seem to be universally applied even though the literature has shown that fit indices and their rules-of-thumb are sensitive to different data and model characteristics. In this part, we put these results in context and clarify the meaning and behavior of the CFI, one of the most widely used incremental fit indices, as a function of the null baseline model.
-
van Laar, S., & Braeken, J. (2021). Understanding the Comparative Fit Index: It's all about the base! Practical Assessment, Research & Evaluation, 26, Article 26. https://doi.org/10.7275/23663996
-
van Laar, S., & Braeken, J. (2022). Caught off Base: A Note on the Interpretation of Incremental Fit Indices. Structural Equation Modeling, 29(6), 935-943. https://doi.org/10.1080/10705511.2022.2050730
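For readers who want the mechanics: the CFI compares the excess misfit (chi-square beyond degrees of freedom) of the fitted model against that of the null baseline model. A minimal sketch in Python, using the standard CFI formula (function, argument names, and the example numbers are illustrative, not taken from the papers above):

```python
def cfi(chisq_model, df_model, chisq_null, df_null):
    """Comparative Fit Index (Bentler, 1990).

    Compares the noncentrality (chi-square in excess of degrees of
    freedom) of the fitted model with that of the null baseline model
    in which all observed variables are uncorrelated.
    """
    d_model = max(chisq_model - df_model, 0.0)  # noncentrality, fitted model
    d_null = max(chisq_null - df_null, 0.0)     # noncentrality, null model
    if max(d_model, d_null) == 0.0:
        return 1.0  # both models fit at least as well as expected by chance
    return 1.0 - d_model / max(d_model, d_null)

# Hypothetical numbers: a fitted model with chi-square 45 on 24 df,
# against a null baseline model with chi-square 500 on 36 df.
print(round(cfi(45, 24, 500, 36), 3))  # → 0.955
```

The conventional cutoff of about .95 is applied to this ratio, which makes plain that the value of the CFI depends directly on how poorly the null baseline model fits the data.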
A2: Characterization of random respondents in the TIMSS 2015 student questionnaire
International large-scale educational assessments are widely used for research and educational policy. Yet, it has been argued that these low-stakes assessments are vulnerable to invalid response behavior. Depending on its severity, such invalid response behavior can lead to problems with the use and interpretation of the assessment results. Within this application, we focused specifically on random responders. To identify those students likely to engage in random response behavior, we adopted a mixture IRT approach that incorporates a null baseline model for the random responders.
-
van Laar, S., & Braeken, J. (2022). Random Responders in the TIMSS 2015 Student Questionnaire: A Threat to Validity? Journal of Educational Measurement, 59(4), 470-501. https://doi.org/10.1111/jedm.12317
-
van Laar, S., & Braeken, J. (2023). Prevalence of random responders as a function of scale position and questionnaire length in the TIMSS 2015 eighth-grade student questionnaire. International Journal of Testing. Advance online publication. https://doi.org/10.1080/15305058.2023.2263206
-
Chen, J., van Laar, S., & Braeken, J. (2023). Who are those random responders on your survey? The case of the TIMSS 2015 student questionnaire. Large-scale Assessments in Education, 11, Article 37. https://doi.org/10.1186/s40536-023-00184-6
-
van Laar, S., Chen, J., & Braeken, J. (in press). How Randomly are Students Random Responding to your Questionnaire? Within-Person Variability in Random Responding across Scales in the TIMSS 2015 eighth-grade Student Questionnaire. Measurement: Interdisciplinary Research and Perspectives.
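In such a mixture approach, the null baseline model for the random-responder class amounts to assuming that a random responder picks each response category with equal probability, independently across items. A minimal sketch of the resulting class-membership computation for a two-class mixture (all function names and numbers below are illustrative, not estimates from the studies above):

```python
import math

def posterior_prob_random(loglik_regular, n_items, n_categories, pi_random):
    """Posterior probability that a response pattern stems from the
    random-responder class in a two-class mixture.

    loglik_regular: log-likelihood of the pattern under the regular
        (substantive) measurement model, taken as given here.
    The random-responder class is the null baseline model: uniform
    category probabilities, so every pattern has likelihood (1/k)^n.
    """
    loglik_random = n_items * math.log(1.0 / n_categories)
    num = pi_random * math.exp(loglik_random)
    den = num + (1.0 - pi_random) * math.exp(loglik_regular)
    return num / den

# Hypothetical 5-item scale with 4 response categories and 10% random
# responders: a pattern the regular model explains well is unlikely to
# be classified as random...
print(posterior_prob_random(-3.0, 5, 4, 0.10) < 0.5)   # → True
# ...while a pattern the regular model explains very poorly is flagged.
print(posterior_prob_random(-15.0, 5, 4, 0.10) > 0.5)  # → True
```

The baseline class thus absorbs response patterns that the substantive measurement model cannot account for, which is what makes the uniform null model a natural characterization of random responding.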
Background
The project runs from November 2017 until mid-August 2022.
Parent project
- Parent project (Latent Variable Mixture models to track Longitudinal Differentiation Patterns)
Financing
The project is funded by the Research Council of Norway.