Contributor: Cribbie, Robert A.
Dates: 2018-03-01; 2018-03-01; 2017-07-25; 2018-03-01
URI: http://hdl.handle.net/10315/34338

Abstract: Establishing measurement invariance (MI) is important for making valid group comparisons on psychological constructs of interest. MI involves a multi-stage process of determining whether the factor structure and model parameters are similar across multiple groups. Most researchers test MI by fitting multiple-group confirmatory factor analysis models, concluding invariance from a statistically nonsignificant chi-square difference test or a small change in goodness-of-fit indices (GOFs) such as the CFI or RMSEA. Yuan and Chan (2016) proposed replacing these approaches with an equivalence test analogue of the chi-square difference test (EQ). While they outline the EQ approach for MI, they recommend using an adjusted RMSEA version (EQ-A) for increased power. The current study evaluated the Type I error and power rates of the EQ and EQ-A and compared their performance to that of traditional chi-square difference tests and GOFs. Results demonstrate that the EQ for nested models was the only procedure that maintained empirical error rates below the nominal level. Results also highlight that the EQ requires larger sample sizes, or equivalence bounds based on RMSEA values larger than the conventional .05, to ensure adequate power at later MI stages. Because the EQ-A test did not maintain accurate error rates, I do not recommend Yuan and Chan's proposed adjustment.

Language: en
Rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
Subject: Educational tests & measurements
Title: Evaluating Equivalence Testing Methods for Measurement Invariance
Type: Electronic Thesis or Dissertation
Date: 2018-03-01
Keywords: Equivalence testing; Measurement invariance; Group comparisons; Confirmatory factor analysis; Structural equation modeling; Latent variables; Scale validation; Factor structure