Title: Securing Multi-Layer Federated Learning: Detecting and Mitigating Adversarial Attacks
Contributors: Wang, Ping; Gouge, Justin
Type: Electronic Thesis or Dissertation
Dates: 2025-02-13; 2025-04-10
URI: https://hdl.handle.net/10315/42890
Keywords: Federated learning; VAE; Anomaly detection; Malicious client removal; Machine learning; Algorithm; AI; Artificial intelligence; ML
Rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.

Abstract: In federated learning (FL), adversarial entities can poison models, slowing or derailing the training process, so attack prevention and mitigation are crucial. Real-world deployments may also require additional separation or abstraction between clients and servers: multi-layer FL systems introduce a layer of edge servers, and these structural differences call for new strategies against adversaries. Existing work primarily addresses attack prevention and mitigation in conventional two-layer FL systems, while research on multi-layer FL systems remains limited. This thesis addresses that gap by investigating defense strategies for multi-layer FL systems. We propose new methods for detecting anomalies and removing adversarial clients from training. First, we train a variational autoencoder (VAE) on model updates collected from the edge servers, enabling the VAE to discern benign from adversarial model updates. We then deploy the VAE to detect which edge servers, at the cohort level, contain malicious clients. Next, we devise two malicious-client exclusion strategies: a scoring-based method, which assigns each client a score based on its appearances in cohorts labeled benign or malicious, and a Bayesian-based method, which uses Bayesian inference to predict whether a specific client is malicious based on the statistical performance of the autoencoder. Both approaches aim to mitigate the harm caused by malicious clients during model training. Experimental results demonstrate the superiority of the proposed methods over previous traditional-FL mitigation approaches under a variety of scenarios.
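The abstract's first step, a VAE trained on model updates and used as an anomaly detector, can be illustrated with a minimal sketch. The Python code below assumes PyTorch and treats each flattened model update as one input vector; the architecture, latent size, training loop, and reconstruction-error scoring rule are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical sketch: a small VAE whose reconstruction error is used to flag
# anomalous (potentially poisoned) model updates. Dimensions, hyperparameters,
# and function names are assumptions for illustration only.
import torch
import torch.nn as nn

class UpdateVAE(nn.Module):
    def __init__(self, dim, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent)
        self.logvar = nn.Linear(256, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    rec = ((recon - x) ** 2).sum(dim=1).mean()
    kld = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
    return rec + kld

def train_vae(model, updates, epochs=50, lr=1e-3):
    # Fit the VAE on updates gathered at the edge servers; after training,
    # benign updates should reconstruct well while poisoned ones should not.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon, mu, logvar = model(updates)
        loss = vae_loss(updates, recon, mu, logvar)
        loss.backward()
        opt.step()
    return model

def reconstruction_scores(model, updates):
    # Higher reconstruction error -> more likely an adversarial update.
    model.eval()
    with torch.no_grad():
        recon, _, _ = model(updates)
        return ((recon - updates) ** 2).sum(dim=1)
```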
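The two client-exclusion strategies described in the abstract can likewise be sketched. The following Python code is a hypothetical rendering of the scoring-based and Bayesian-based ideas, not the thesis's implementation; the detector rates (tpr, fpr), the prior, the thresholds, and the helper names (score_based, bayes_based) are assumptions made here for illustration.

```python
# Hypothetical sketch of the two exclusion strategies summarized in the abstract:
# (a) a running score per client from its appearances in cohorts flagged benign
# vs. malicious, and (b) a Bayesian posterior on "client is malicious" that
# treats each cohort flag as noisy evidence from the VAE detector.
from collections import defaultdict

def score_based(cohort_members, cohort_flags, threshold=0.5):
    """cohort_members: list of client-id lists; cohort_flags: parallel list of
    booleans (True = cohort flagged malicious). Returns clients to exclude."""
    hits, seen = defaultdict(int), defaultdict(int)
    for members, flagged in zip(cohort_members, cohort_flags):
        for c in members:
            seen[c] += 1
            hits[c] += int(flagged)
    # Exclude clients that appear in flagged cohorts more often than the threshold.
    return {c for c in seen if hits[c] / seen[c] > threshold}

def bayes_based(cohort_members, cohort_flags, tpr=0.9, fpr=0.1,
                prior=0.2, threshold=0.9):
    """Update P(client is malicious) with Bayes' rule, using assumed detector
    true/false positive rates (tpr, fpr) as the likelihood of each flag."""
    post = defaultdict(lambda: prior)
    for members, flagged in zip(cohort_members, cohort_flags):
        for c in members:
            p = post[c]
            like_m = tpr if flagged else (1 - tpr)   # P(evidence | malicious)
            like_b = fpr if flagged else (1 - fpr)   # P(evidence | benign)
            post[c] = like_m * p / (like_m * p + like_b * (1 - p))
    return {c for c, p in post.items() if p > threshold}
```

In this sketch the scoring rule is a simple flagged-appearance ratio, while the Bayesian rule sharpens or relaxes its belief about each client as evidence accumulates across rounds; both are stand-ins for the strategies the abstract names rather than their exact formulations.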