VADViT: Vision Transformer-Driven Memory Forensics for Malicious Process Detection and Explainable Threat Attribution

Date

2025-07-23

Authors

Dehfouli, Yasin

Journal Title

Journal ISSN

Volume Title

Publisher

Abstract

Modern malware's increasing complexity limits traditional signature- and heuristic-based detection, necessitating advanced memory forensic techniques. Machine learning offers a promising alternative but struggles with outdated feature sets, the handling of large volumes of memory data, and forensic explainability. To address these challenges, we propose VADViT, a vision-based transformer model that detects malicious processes by analyzing Virtual Address Descriptor (VAD) memory regions. VADViT converts these structures into Markov, entropy, and intensity-based images and classifies them using a Vision Transformer (ViT), whose self-attention mechanism enhances detection accuracy. We also introduce BCCC-MalMem-SnapLog-2025, a dataset that logs process identifiers (PIDs) to enable precise VAD extraction without dynamic analysis. Experimental results show 99% accuracy in binary classification and a 93% macro-average F1 score in multi-class detection. Additionally, attention-based sorting improves forensic analysis by ranking the most relevant malicious VAD regions, narrowing the search space for forensic investigators.
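
As a rough illustration of the image-construction step described in the abstract, the sketch below converts a raw memory buffer (e.g., the bytes of one VAD region) into a byte-transition (Markov) image and a block-entropy image. It is a minimal sketch only: the buffer source, image dimensions, block size, and function names are assumptions for illustration and are not taken from the thesis's exact pipeline.

import numpy as np

def markov_image(data: bytes, size: int = 256) -> np.ndarray:
    # Build a size x size byte-transition (Markov) image: pixel (i, j) holds the
    # normalized frequency of byte value i being followed by byte value j.
    buf = np.frombuffer(data, dtype=np.uint8)
    img = np.zeros((size, size), dtype=np.float64)
    if buf.size < 2:
        return img
    # Count transitions between consecutive bytes.
    np.add.at(img, (buf[:-1], buf[1:]), 1.0)
    # Row-normalize so each row becomes a transition probability distribution.
    row_sums = img.sum(axis=1, keepdims=True)
    np.divide(img, row_sums, out=img, where=row_sums > 0)
    return img

def entropy_image(data: bytes, block: int = 256, width: int = 256) -> np.ndarray:
    # Compute Shannon entropy per fixed-size block and reshape into a 2D image.
    buf = np.frombuffer(data, dtype=np.uint8)
    n_blocks = max(len(buf) // block, 1)
    ent = np.zeros(n_blocks, dtype=np.float64)
    for i in range(n_blocks):
        chunk = buf[i * block:(i + 1) * block]
        counts = np.bincount(chunk, minlength=256).astype(np.float64)
        p = counts[counts > 0] / counts.sum()
        ent[i] = -(p * np.log2(p)).sum()
    # Pad to a rectangle and scale entropy (0..8 bits per byte) to the 0..1 range.
    rows = -(-n_blocks // width)
    padded = np.zeros(rows * width)
    padded[:n_blocks] = ent / 8.0
    return padded.reshape(rows, width)

Images produced this way (together with a plain intensity rendering of the bytes) can then be fed to a standard ViT classifier; the specific model configuration and attention-based region ranking follow the thesis itself.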

Description

Keywords

Artificial intelligence, Computer science, Computer engineering

Citation