Psychology (Functional Area: Quantitative Methods)
Browsing Psychology (Functional Area: Quantitative Methods) by Subject "Educational tests & measurements"
Item (Open Access): A Differential Response Functioning Framework for Understanding Item, Bundle, and Test Bias (2017-07-27)
Chalmers, Robert Philip Sidney; Flora, David B.

This dissertation extends the parametric sampling method and area-based statistics for differential test functioning (DTF) proposed by Chalmers, Counsell, and Flora (2016). Measures for differential item and bundle functioning are first introduced as special cases of the DTF statistics. Next, these extensions are presented alongside the original DTF measures as a unified framework for quantifying differential response functioning (DRF) of items, bundles, and tests. To evaluate the utility of the new family of measures, the DRF framework is compared to the previously established simultaneous item bias test (SIBTEST) and differential functioning of items and tests (DFIT) frameworks. A series of Monte Carlo simulation conditions was designed to estimate the power to detect compensatory and non-compensatory differential effects, as well as to evaluate Type I error control. Benefits inherent to the DRF framework are discussed, extensions are suggested, and alternative methods for generating composite-level sampling variability are presented. Finally, it is argued that the area-based measures in the DRF framework provide an intuitive and meaningful quantification of marginal and conditional response bias over and above what has been offered by the previously established statistical frameworks. (An illustrative code sketch of the area-based statistics appears after these listings.)

Item (Open Access): Evaluating Equivalence Testing Methods for Measurement Invariance (2018-03-01)
Counsell, Alyssa Leigh; Cribbie, Robert A.

Establishing measurement invariance (MI) is important for making valid group comparisons on psychological constructs of interest. MI involves a multi-stage process of determining whether the factor structure and model parameters are similar across multiple groups. The statistical approach most researchers use to test MI is to fit multiple-group confirmatory factor analysis models, in which a statistically nonsignificant chi-square difference test or a small change in goodness-of-fit indices (GOFs) such as the CFI or RMSEA is used to conclude invariance. Yuan and Chan (2016) proposed replacing these approaches with an equivalence-test analogue of the chi-square difference test (EQ). While they outline the EQ approach for MI, they recommend an adjusted RMSEA version (EQ-A) for increased power. The current study evaluated the Type I error and power rates of the EQ and EQ-A and compared their performance to traditional chi-square difference tests and GOFs. Results demonstrate that the EQ for nested models was the only procedure that maintained empirical Type I error rates below the nominal level. Results also highlight that the EQ requires larger sample sizes, or equivalence bounds based on RMSEA values larger than the conventional .05, to ensure adequate power at later MI stages. Because the EQ-A test did not maintain accurate error rates, I do not recommend Yuan and Chan's proposed adjustment. (An illustrative code sketch of the EQ decision rule appears after these listings.)
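The area-based DTF statistics described in the first item quantify the gap between two groups' test response functions, averaged over the latent-trait distribution: a signed (compensatory) version that lets positive and negative differences cancel, and an unsigned (non-compensatory) version based on absolute differences. The following is a minimal sketch of that idea for a hypothetical two-group 2PL model with made-up item parameters; it illustrates only the area computation, not Chalmers, Counsell, and Flora's (2016) full estimator, which also derives sampling variability by parametric resampling of the item parameters.

```python
# Minimal sketch of signed/unsigned area-based DTF statistics for a
# hypothetical two-group 2PL model. All item parameters below are made up
# for illustration; this is not the authors' exact estimator.
import numpy as np
from scipy.stats import norm

def irf_2pl(theta, a, b):
    """2PL item response function: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def test_response(theta, a, b):
    """Test response function: expected total score at each theta value."""
    # theta: (Q,), a and b: (n_items,) -> returns (Q,)
    return irf_2pl(theta[:, None], a[None, :], b[None, :]).sum(axis=1)

# Hypothetical item parameters for a 5-item test in reference/focal groups;
# items 2 and 4 show uniform DIF (shifted difficulty) in the focal group.
a_ref = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b_ref = np.array([-1.0, 0.0, 0.5, 1.0, -0.5])
a_foc = a_ref.copy()
b_foc = b_ref + np.array([0.0, 0.3, 0.0, 0.3, 0.0])

theta = np.linspace(-6, 6, 481)       # dense quadrature grid
w = norm.pdf(theta)                   # standard normal weights
w /= w.sum()                          # normalize so weights sum to 1

diff = test_response(theta, a_ref, b_ref) - test_response(theta, a_foc, b_foc)
signed_dtf = np.sum(w * diff)             # compensatory (signed) area
unsigned_dtf = np.sum(w * np.abs(diff))   # non-compensatory (unsigned) area
print(f"signed DTF = {signed_dtf:.3f}, unsigned DTF = {unsigned_dtf:.3f}")
```

Restricting the same computation to a single item or a subset of items gives the corresponding item- and bundle-level (DIF/DBF) versions of the statistics.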
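The EQ procedure in the second item reverses the usual testing logic: rather than failing to reject exact invariance, it tests against a hypothesis of non-trivial misfit and concludes invariance only when the chi-square difference statistic is small enough. Below is a minimal sketch of that decision rule, assuming the noncentrality parameter for the equivalence bound is taken as ncp = (N - 1) * df * eps0**2 (the conventional RMSEA parameterization); Yuan and Chan's (2016) exact multi-group formulation and the EQ-A adjustment involve details not reproduced here, and all numbers in the example are hypothetical.

```python
# Minimal sketch of an equivalence test for a chi-square difference statistic.
# Assumption: ncp = (N - 1) * df * eps0**2, the conventional RMSEA-based
# noncentrality; Yuan and Chan's exact formulation may differ in detail.
from scipy.stats import ncx2

def eq_test_chisq_diff(t_diff, df_diff, n_total, eps0=0.05, alpha=0.05):
    """Equivalence test for a nested-model chi-square difference.

    Conclude 'equivalent (invariant within eps0)' if t_diff falls below the
    alpha-quantile of a noncentral chi-square distribution with df_diff
    degrees of freedom and noncentrality implied by the RMSEA bound eps0.
    """
    ncp = (n_total - 1) * df_diff * eps0 ** 2
    crit = ncx2.ppf(alpha, df_diff, ncp)
    return t_diff < crit, crit

# Hypothetical numbers: chi-square difference of 18.3 on 10 df, total N = 500
equivalent, crit = eq_test_chisq_diff(t_diff=18.3, df_diff=10, n_total=500)
print(f"critical value = {crit:.2f}, conclude equivalence: {equivalent}")
```

Because the critical value shrinks as eps0 shrinks, tight equivalence bounds such as .05 demand large samples to achieve adequate power, which is consistent with the abstract's conclusion about later MI stages.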