Regularization in Mediation Models: A Monte Carlo Simulation Comparing Different Regularization Penalties in Multiple Mediation Models
Abstract
The two fundamental goals in statistical learning are achieving prediction accuracy and discovering the correct set of predictors to ensure model specificity. Although the field of variable selection has made significant strides over the past decades, these methods have yet to be fully adapted to mediation models. Regularization methods that use the l1 penalty, such as the Lasso and the adaptive Lasso, introduce a small amount of controlled bias into the ordinary least squares estimates to improve their generalizability by substantially reducing their variance across samples. Additionally, the Lasso can perform variable selection and help achieve model selection consistency, or sparsistency. Recent literature has introduced regularization to mediation models, most notably through regularized structural equation modeling (RegSEM). The current research compares the performance of several regularization penalties, namely the Lasso, the adaptive Lasso, MCP, and SCAD, in the context of multiple mediation models. No single regularization penalty performed optimally across all simulation conditions. Additionally, we observed disproportionate selection rates for the Lasso and SCAD penalties with alternating mediators, which indicated disproportionate shrinkage of the a and b pathways. However, the absolute bias induced in the a and b pathways was equivalent across all samples for each penalty term. This highlights the perils of shrinking individual regression pathways rather than indirect effects as a whole. Overall, the choice of regularization penalty depends on the particularities of the research question.
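As a concrete illustration of the shrinkage issue the abstract raises, the sketch below simulates a simple multiple-mediation model X → M_j → Y and estimates the a and b pathways with separately penalized Lasso regressions, then forms indirect effects by the product-of-coefficients rule. This is a hypothetical minimal example using scikit-learn's `Lasso`, not the authors' simulation code or the RegSEM implementation; the sample size, penalty strengths, and true pathway values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical multiple-mediation model X -> M_j -> Y with p candidate
# mediators; only the first two mediators carry a true indirect effect.
rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.normal(size=(n, 1))
a_true = np.array([0.5, 0.5, 0.0, 0.0, 0.0])   # true a-pathways
M = X @ a_true[None, :] + rng.normal(size=(n, p))
b_true = np.array([0.4, 0.4, 0.0, 0.0, 0.0])   # true b-pathways
Y = M @ b_true + 0.2 * X[:, 0] + rng.normal(size=n)

# a-pathways: one penalized regression of each mediator on X.
a_hat = np.array([Lasso(alpha=0.01).fit(X, M[:, j]).coef_[0] for j in range(p)])

# b-pathways: one penalized regression of Y on all mediators plus X.
design = np.column_stack([M, X])
b_hat = Lasso(alpha=0.05).fit(design, Y).coef_[:p]

# Indirect effects via the product-of-coefficients rule. Because a and b are
# shrunk separately, the product a*b is shrunk twice -- the pitfall of
# penalizing individual pathways instead of the indirect effect as a whole.
indirect = a_hat * b_hat
print(np.round(indirect, 3))
```

In this setup the estimated indirect effects for the two active mediators remain clearly larger than those of the inactive ones, but each is biased toward zero by two rounds of shrinkage, mirroring the abstract's point about shrinking pathways rather than indirect effects.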