Title: VADViT: Vision Transformer-Driven Memory Forensics for Malicious Process Detection and Explainable Threat Attribution
Author: Dehfouli, Yasin
Supervisor: Arash Habibi Lashkari
Type: Electronic Thesis or Dissertation
Dates: 2025-05-02; 2025-07-23
URI: https://hdl.handle.net/10315/43048

Abstract: Modern malware's increasing complexity limits traditional signature- and heuristic-based detection, necessitating advanced memory forensic techniques. Machine learning offers potential but struggles with outdated feature sets, the handling of large memory data, and forensic explainability. To address these challenges, we propose VADViT, a vision-based transformer model that detects malicious processes by analyzing Virtual Address Descriptor (VAD) memory regions. VADViT converts these structures into Markov, entropy, and intensity-based images and classifies them with a Vision Transformer (ViT), using self-attention to enhance detection accuracy. We also introduce BCCC-MalMem-SnapLog-2025, a dataset that logs process identifiers (PIDs) for precise VAD extraction without dynamic analysis. Experimental results show 99% accuracy in binary classification and a 93% macro-average F1 score in multi-class detection. Additionally, attention-based sorting improves forensic analysis by ranking the most relevant malicious VAD regions, narrowing the search space for forensic investigators.

Rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.

Subjects: Artificial intelligence; Computer science; Computer engineering
Keywords: Malware Detection; Memory Forensics; Virtual Address Descriptors; Process Memory Internals; Vision Transformers; Attention Visualization
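The abstract describes converting VAD memory regions into Markov- and entropy-based images before classification. A minimal sketch of what two such transforms could look like, assuming raw region bytes as input (the function names, image sizes, and normalization choices here are illustrative assumptions, not the thesis's actual implementation):

```python
import numpy as np

def markov_image(data: bytes) -> np.ndarray:
    """Hypothetical Markov image: a 256x256 map of byte-pair transition
    probabilities, scaled to 0-255 grayscale."""
    counts = np.zeros((256, 256), dtype=np.float64)
    buf = np.frombuffer(data, dtype=np.uint8)
    if buf.size >= 2:
        # Count transitions from each byte value to its successor.
        np.add.at(counts, (buf[:-1], buf[1:]), 1.0)
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums,
                      out=np.zeros_like(counts), where=row_sums > 0)
    return (probs * 255).astype(np.uint8)

def entropy_image(data: bytes, block: int = 256, side: int = 64) -> np.ndarray:
    """Hypothetical entropy image: per-block Shannon entropy (0-8 bits)
    reshaped into a side x side grayscale image."""
    buf = np.frombuffer(data, dtype=np.uint8)
    ent = []
    for i in range(side * side):
        chunk = buf[i * block:(i + 1) * block]
        if chunk.size == 0:
            ent.append(0.0)  # region shorter than the image grid
            continue
        p = np.bincount(chunk, minlength=256) / chunk.size
        p = p[p > 0]
        ent.append(float(-(p * np.log2(p)).sum()))
    return (np.array(ent).reshape(side, side) / 8.0 * 255).astype(np.uint8)
```

Both transforms yield fixed-size grayscale arrays regardless of region length, which is what makes variable-sized VAD regions consumable by a patch-based Vision Transformer.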