International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 87
Year of Publication: 2026
Authors: Nagaraj Shet Manjappa Narahari, Jagadeesha Ramegowda
DOI: 10.5120/ijca2026926519
Nagaraj Shet Manjappa Narahari, Jagadeesha Ramegowda. Adaptive Cyclic Reconstruction Regularized Bidirectional Feed Forward Network for Robust Malware Classification. International Journal of Computer Applications. 187, 87 (Mar 2026), 50-56. DOI=10.5120/ijca2026926519
Malware classification has become increasingly difficult as malware families evolve rapidly, producing variants with high structural similarity and only subtle visual patterns in their binary representations. Despite the promising performance of deep learning-based image malware classification systems, most current methods rely on fixed loss functions that do not respond to changing learning dynamics over the course of training, resulting in unstable feature representations and poor generalization. To overcome these issues, this paper presents an Adaptive Cyclic Reconstruction-Regularized Bidirectional Feed Forward neural network (A-CRCL-BFFNN) for robust malware family classification. The proposed framework is built on a bidirectional contrastive autoencoder combined with a dynamic loss-regulation scheme that adaptively balances the reconstruction loss, cyclic consistency loss, and contractive regularization according to training behavior. This adaptation allows the model to prioritize structural preservation in the early training stages and to progressively enforce robustness and latent-space smoothness as training proceeds. Malware binaries are converted to color-mapped image representations and normalized to maximize discriminative feature learning. Extensive experiments on the Malimg and BIG2015 benchmark datasets show that the proposed A-CRCL-BFFNN achieves the highest classification accuracy, recall, and F1-score, with only incidental computational overhead.
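The abstract's dynamic loss-regulation scheme, which shifts emphasis from reconstruction early in training toward cyclic consistency and contractive regularization later, could be sketched as follows. This is an illustrative schedule only: the function name, the linear weight ramp, and the 0.5 coefficients are assumptions, since the paper's exact weighting rule is not given in the abstract.

```python
def adaptive_loss(recon_loss: float, cyclic_loss: float,
                  contractive_loss: float, epoch: int, total_epochs: int) -> float:
    """Blend the three loss terms with epoch-dependent weights.

    Early epochs emphasize reconstruction (structural preservation);
    later epochs ramp up cyclic consistency and contractive
    regularization (robustness and latent-space smoothness).
    """
    # Training progress in [0, 1]; guard against division by zero.
    progress = epoch / max(total_epochs - 1, 1)
    w_recon = 1.0 - 0.5 * progress   # dominant early, decays to 0.5
    w_cyclic = 0.5 * progress        # grows from 0 to 0.5
    w_contr = 0.5 * progress         # grows from 0 to 0.5
    return (w_recon * recon_loss
            + w_cyclic * cyclic_loss
            + w_contr * contractive_loss)
```

In a real training loop the three scalar inputs would be the per-batch loss tensors of the autoencoder, and the returned combination would be backpropagated as usual.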
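The binary-to-image step mentioned above could be sketched as below. The abstract only states that binaries become color-mapped, normalized images; the row width, padding, and the toy three-channel colormap here are assumptions for illustration.

```python
def bytes_to_color_image(data: bytes, width: int = 64):
    """Map a malware binary to a color image (list of rows of RGB tuples).

    Each byte indexes a simple three-channel gradient, and pixel values
    are normalized to [0, 1], mirroring the normalization described in
    the abstract.
    """
    # Pad with zero bytes so the stream fills complete rows.
    padded = data + b"\x00" * (-len(data) % width)
    rows = []
    for r in range(0, len(padded), width):
        row = []
        for b in padded[r:r + width]:
            v = b / 255.0  # normalize byte value to [0, 1]
            # Toy colormap: red tracks the byte value, green its inverse,
            # blue a faster-cycling component for local texture.
            row.append((v, 1.0 - v, (b % 64) / 63.0))
        rows.append(row)
    return rows
```

The resulting 2-D grid can then be resized and fed to the classification network like any other image input.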