StegResNet: A Residual Learning-Based Lightweight Framework for Robust Spatial Image Steganalysis
Abstract
Steganalysis, the process of detecting hidden information within digital images, remains challenging because modern steganographic algorithms embed payloads that are both imperceptible and content-adaptive. To address these challenges, this study introduces StegResNet, a customized deep residual learning model designed for spatial-domain image steganalysis. Built upon the pretrained ResNet18 architecture, StegResNet employs transfer learning to leverage generalized visual representations from ImageNet while fine-tuning the deeper layers to learn embedding-specific residual features. The architecture incorporates residual connections, batch normalization, and a lightweight binary classification head, enabling effective preservation of the low-level noise features essential for detecting subtle embedding distortions. Experimental evaluation was conducted on a composite dataset formed by merging BOSSBase v1.01 and BOWS2, with stego images generated by five spatial-domain content-adaptive steganographic algorithms (HUGO, HILL, WOW, S-UNIWARD, and MiPOD) at payloads of 0.2 bpp and 0.4 bpp. The proposed model achieved an overall detection accuracy of 91.27%, outperforming several state-of-the-art CNN-based steganalyzers such as YeNet, Yedroudj-Net, ZhuNet, SRNet, and GBRAS-Net. Analysis of the confusion matrix and performance metrics confirmed high precision, recall, and F1-scores, demonstrating strong generalization and robustness. The results substantiate that the proposed StegResNet framework effectively bridges residual learning and steganalysis, offering an efficient, interpretable, and high-performing solution for detecting hidden payloads in grayscale images.