Securing Multi-Layer Federated Learning: Detecting and Mitigating Adversarial Attacks

dc.contributor.advisor Wang, Ping
dc.contributor.author Gouge, Justin
dc.date.accessioned 2025-04-10T10:59:46Z
dc.date.available 2025-04-10T10:59:46Z
dc.date.copyright 2025-02-13
dc.date.issued 2025-04-10
dc.date.updated 2025-04-10T10:59:46Z
dc.degree.discipline Computer Science
dc.degree.level Master's
dc.degree.name MSc - Master of Science
dc.description.abstract Within the realm of federated learning (FL), adversarial entities can poison models, slowing down or destroying the FL training process. Attack prevention and mitigation are therefore crucial for FL. Real-world scenarios may also necessitate additional separation or abstraction between clients and servers. In multi-layer FL systems, which contain edge server layers, these structural differences warrant new strategies for handling adversaries. While existing works primarily address attack prevention and mitigation in conventional two-layer FL systems, research on these problems in multi-layer FL systems remains limited. This thesis addresses this gap by investigating defense strategies for a multi-layer FL system. We propose new methods for anomaly detection and for removing attackers/adversarial entities from training in a multi-layer FL system. First, we train a variational autoencoder (VAE) on the model updates collected from the edge servers, enabling it to discern between benign and adversarial model updates. We then deploy the VAE to detect, at the cohort level, which edge servers contain malicious clients. Subsequently, we devise two malicious-client exclusion strategies: a scoring-based method, which assigns each client a score based on how often it appears in cohorts labeled benign or malicious, and a Bayesian-based method, which uses Bayesian inference to predict whether a specific client is malicious based on the statistical performance of the autoencoder. Both approaches aim to mitigate the harm caused by malicious clients during model training. Experimental results demonstrate that the proposed methods outperform previous traditional FL mitigation approaches under a variety of scenarios.
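
A minimal, illustrative sketch of the two core ideas described in the abstract follows: scoring edge-server model updates by VAE reconstruction error, and a simple scoring-based rule for excluding clients that repeatedly appear in cohorts flagged as malicious. This is not the thesis implementation; the class and function names (UpdateVAE, anomaly_score, update_client_scores), the network sizes, and the use of PyTorch are assumptions made here for illustration only.

# Sketch (assumption-laden, not the thesis code): a small VAE is trained on
# flattened benign model updates; high reconstruction error marks an
# edge-server update as suspicious, and a per-client score tracks how often
# each client appears in benign vs. malicious cohorts.
import torch
import torch.nn as nn

class UpdateVAE(nn.Module):
    """Small VAE over flattened model-update vectors."""
    def __init__(self, dim, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent)
        self.logvar = nn.Linear(128, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    rec = ((recon - x) ** 2).sum(dim=1)
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
    return (rec + kld).mean()

def anomaly_score(vae, update):
    # Reconstruction error of one edge-server update; higher means more suspicious.
    with torch.no_grad():
        recon, _, _ = vae(update.unsqueeze(0))
        return ((recon - update) ** 2).mean().item()

def update_client_scores(scores, cohort_clients, is_malicious, step=1.0):
    # Scoring-based exclusion (illustrative): clients in a cohort flagged as
    # malicious lose score, clients in benign cohorts gain score; clients whose
    # score drops below a chosen threshold would be excluded from later rounds.
    for cid in cohort_clients:
        scores[cid] = scores.get(cid, 0.0) + (-step if is_malicious else step)
    return scores

if __name__ == "__main__":
    dim = 64
    vae = UpdateVAE(dim)
    opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
    benign = torch.randn(256, dim)            # stand-in for benign edge-server updates
    for _ in range(200):                      # short training loop on benign updates only
        recon, mu, logvar = vae(benign)
        loss = vae_loss(recon, benign, mu, logvar)
        opt.zero_grad(); loss.backward(); opt.step()
    shifted = anomaly_score(vae, torch.randn(dim) * 5)   # out-of-distribution update
    typical = anomaly_score(vae, torch.randn(dim))
    scores = update_client_scores({}, ["client_a", "client_b"], is_malicious=shifted > typical)
    print(f"typical={typical:.2f} shifted={shifted:.2f} scores={scores}")

The Bayesian-based variant described in the abstract would replace update_client_scores with a posterior update of each client's probability of being malicious, conditioned on the VAE's detection performance; it is omitted here for brevity.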
dc.identifier.uri https://hdl.handle.net/10315/42890
dc.language en
dc.rights Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
dc.subject.keywords Federated learning
dc.subject.keywords VAE
dc.subject.keywords Anomaly detection
dc.subject.keywords Malicious client removal
dc.subject.keywords Machine learning
dc.subject.keywords Algorithm
dc.subject.keywords AI
dc.subject.keywords Artificial intelligence
dc.subject.keywords ML
dc.title Securing Multi-Layer Federated Learning: Detecting and Mitigating Adversarial Attacks
dc.type Electronic Thesis or Dissertation

Files

Original bundle
Gouge_Justin_2025_MSc.pdf (751.21 KB, Adobe Portable Document Format)

License bundle
license.txt (1.87 KB, Plain Text)
YorkU_ETDlicense.txt (3.39 KB, Plain Text)

Collections