Regularization in Mediation Models: A Monte Carlo Simulation Comparing Different Regularization Penalties in Multiple Mediation Models
dc.contributor.advisor | Choi, Ji Yeh
dc.contributor.author | Singh, Arjunvir
dc.date.accessioned | 2022-12-14T16:37:06Z
dc.date.available | 2022-12-14T16:37:06Z
dc.date.copyright | 2022-08-17
dc.date.issued | 2022-12-14
dc.date.updated | 2022-12-14T16:37:06Z
dc.degree.discipline | Psychology (Functional Area: Quantitative Methods)
dc.degree.level | Master's
dc.degree.name | MA - Master of Arts
dc.description.abstract | The two fundamental goals in statistical learning are establishing prediction accuracy and identifying the correct set of predictors to ensure model specificity. Although the field of variable selection has made significant strides over the past decades, these methods have yet to be fully adapted to mediation models. Regularization methods that use the l1 penalty, such as the Lasso and the adaptive Lasso, introduce a small amount of controlled bias into the ordinary least squares estimates to improve their generalizability by substantially reducing their variance across samples. In addition, the Lasso can perform variable selection and help achieve model selection consistency, or sparsistency. Recent literature has proposed methods that introduce regularization into mediation models, including regularized structural equation modelling (RegSEM). The current research compares the performance of several regularization penalties, namely the Lasso, the adaptive Lasso, MCP, and SCAD, in the context of mediation models. No single regularization penalty performed optimally across all simulation conditions. We also observed disproportionate selection rates for the Lasso and SCAD penalties across alternating mediators, indicative of disproportionate shrinkage of the a and b pathways. However, the absolute bias induced in the a and b pathways was equivalent across all samples for each penalty term. This highlights the perils of shrinking individual regression pathways rather than indirect effects as a whole. Overall, the choice of regularization penalty depends on the particularities of the research question.
dc.identifier.uri | http://hdl.handle.net/10315/40732
dc.language | en
dc.rights | Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
dc.subject | Psychology
dc.subject | Statistics
dc.subject | Quantitative psychology
dc.subject.keywords | l1-penalty
dc.subject.keywords | Regularization
dc.subject.keywords | Bias-variance trade-off
dc.subject.keywords | Sparsity
dc.subject.keywords | Variable selection
dc.subject.keywords | Mediation
dc.subject.keywords | Lasso
dc.subject.keywords | SCAD
dc.subject.keywords | MCP
dc.subject.keywords | Adaptive lasso
dc.title | Regularization in Mediation Models: A Monte Carlo Simulation Comparing Different Regularization Penalties in Multiple Mediation Models
dc.type | Electronic Thesis or Dissertation |
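To make the abstract's description concrete, the following is a minimal LaTeX sketch of the kind of Lasso-penalized mediation criterion the abstract refers to, written for a single mediator. The notation (a, b, c', lambda, F_ML) and the single-mediator setup are illustrative assumptions only, not the thesis's exact multiple-mediator specification or the estimator implemented in RegSEM.

% Illustrative single-mediator model with a Lasso-type (l1) penalty on the a and b pathways.
% The symbols a, b, c', \lambda and the ML fit function F_{\mathrm{ML}} are assumed for illustration only.
\begin{align*}
  M &= a X + e_M, \\
  Y &= c' X + b M + e_Y, \\
  \hat{\theta} &= \arg\min_{\theta} \; F_{\mathrm{ML}}(\theta)
      + \lambda \left( \lvert a \rvert + \lvert b \rvert \right).
\end{align*}

Written this way, the penalty shrinks a and b separately rather than the indirect effect ab as a whole, which is the behaviour the abstract flags as a potential peril of pathway-level shrinkage.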
Files
Original bundle
- Name: _Singh_Arjunvir_2022_Masters.pdf
- Size: 1 MB
- Format: Adobe Portable Document Format