Null Baseline Modeling Approaches with Applications in International Large-Scale Educational Assessments (PhD project)

Baseline comparisons starting from "null". Photo: Colourbox

About the project

To make sense of comparisons, a meaningful baseline is needed. This principle also holds in modeling practice. A frequently used baseline is provided by so-called null models. Such null models can be specified in many ways, yet a characterization in which all observed variables are assumed to be uncorrelated has become the de facto default. In this project, we focus on two different applications of such a null model.
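To make the default null model concrete: under the independence characterization, the model-implied covariance matrix keeps the observed variances but fixes every covariance to zero. A minimal sketch (the function name is illustrative, not from the project):

```python
import numpy as np

def null_model_implied_cov(sample_cov):
    """Implied covariance matrix of the independence null model:
    observed variances are retained, all covariances are fixed to zero."""
    s = np.asarray(sample_cov, dtype=float)
    return np.diag(np.diag(s))

# For a 2-variable example, only the diagonal survives:
null_model_implied_cov([[2.0, 1.0], [1.0, 3.0]])
# → [[2., 0.], [0., 3.]]
```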

A1: Model fit evaluation with the Comparative Fit Index (CFI)
The null model is possibly best known for its role in model fit evaluation with incremental fit indices in Structural Equation Modeling (SEM). Here, the null model serves as a baseline for model assessment: the fit of a model of interest is compared to the fit of the null model. Different rules-of-thumb have been proposed for determining whether the fit of the model of interest is acceptable. These rules seem to be universally applied even though the literature has shown that fit indices and their rules-of-thumb are sensitive to different data and model characteristics. In this part, we put these results in context and clarify the meaning and behavior of the CFI, one of the most widely used incremental fit indices, as a function of the null baseline model.

  • van Laar, S. & Braeken, J. (2021). Understanding the Comparative Fit Index: It's all about the base! Practical Assessment, Research & Evaluation, 26(26), p. 1–25. https://doi.org/10.7275/23663996.

  • van Laar, S. & Braeken, J. (2022). Caught off Base: A Note on the Interpretation of Incremental Fit Indices. Structural Equation Modeling. Advance online publication. https://doi.org/10.1080/10705511.2022.2050730.
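The dependence of the CFI on the baseline is visible directly in its standard (Bentler, 1990) formula, which compares the noncentrality of the model of interest to that of the null model. A minimal sketch:

```python
def cfi(chisq_m, df_m, chisq_0, df_0):
    """Comparative Fit Index from the chi-square statistics and degrees of
    freedom of the model of interest (m) and the null baseline model (0)."""
    # Noncentrality estimates, floored at zero
    d_m = max(chisq_m - df_m, 0.0)
    d_0 = max(chisq_0 - df_0, 0.0)
    if max(d_m, d_0) == 0.0:
        return 1.0  # both models fit at least as well as expected by chance
    # CFI = 1 - (model noncentrality) / (largest noncentrality)
    return 1.0 - d_m / max(d_m, d_0)

cfi(85.0, 40, 900.0, 45)  # → 1 - 45/855 ≈ 0.947
```

Note how the same model of interest yields a higher CFI when the null model fits worse (larger χ²₀ − df₀), which is exactly why the choice of baseline matters.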

A2: Characterization of random responders in the TIMSS 2015 student questionnaire
International large-scale educational assessments are widely used for research and educational policy. Yet, it has been argued that these types of low-stakes assessments are vulnerable to invalid response behavior. Depending on its severity, such behavior can lead to problems with the use and interpretation of the assessment results. Within this application, we specifically focused on random responders. To identify students likely engaging in random response behavior, we adopted a mixture IRT approach that incorporates a null baseline model for the random responders.

  • van Laar, S. & Braeken, J. (2022). Random Responders in the TIMSS 2015 Student Questionnaire: A Threat to Validity? Journal of Educational Measurement. Advance online publication. https://doi.org/10.1111/jedm.12317.
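The core idea of such a mixture can be sketched in a few lines: a respondent belongs either to a substantive IRT class or to a null baseline class in which every response category is equally likely and items are independent. The sketch below is illustrative only; `loglik_irt` is a hypothetical stand-in for the log-likelihood under whatever substantive IRT model is fitted, and is not the specific model used in the paper.

```python
import math

def posterior_random(n_items, n_categories, loglik_irt, pi_random):
    """Posterior probability of the random-responder class in a two-class
    mixture, where that class follows a null baseline model: each of the
    response categories is equally likely, independently across items."""
    # Log-likelihood of the response pattern under the uniform null class
    loglik_null = n_items * math.log(1.0 / n_categories)
    # Bayes' rule, computed in log-space for numerical stability
    log_rand = math.log(pi_random) + loglik_null
    log_subst = math.log(1.0 - pi_random) + loglik_irt
    m = max(log_rand, log_subst)
    return math.exp(log_rand - m) / (math.exp(log_rand - m) + math.exp(log_subst - m))
```

Respondents whose patterns are much more probable under the uniform class than under the substantive IRT class receive a high posterior probability of being random responders.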

Background

The project runs from November 2017 until mid-August 2022.

Parent-project

  • Parent project (Latent Variable Mixture models to track Longitudinal Differentiation Patterns)

Financing

The project is funded by the Research Council of Norway.

Published Mar. 23, 2018 11:40 AM - Last modified Apr. 29, 2022 12:38 PM

Contact

Project leader

Saskia van Laar