International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 77
Year of Publication: 2026
Authors: B. Sivakumar Reddy, S.K. Harish, Jinka Ranganayakulu, M. Krishna
DOI: 10.5120/ijca2026926266
B. Sivakumar Reddy, S.K. Harish, Jinka Ranganayakulu, M. Krishna. Deep Adaptive Learning for Robust and Scalable Swarm Coordination in Dynamic Environments. International Journal of Computer Applications. 187, 77 (Jan 2026), 54-62. DOI=10.5120/ijca2026926266
Swarm coordination enables large groups of autonomous agents, such as mobile robots or drones, to work together on complex tasks in unpredictable and dynamic environments. Traditional rule-based and reinforcement-learning approaches, however, are frequently limited in flexibility, scalability, and communication efficiency. To improve the robustness and scalability of swarm coordination, this paper proposes a Deep Adaptive Learning (DAL) framework that combines attention-based communication, multi-agent reinforcement learning, and meta-adaptive learning. Each agent uses a deep neural policy network with a dynamic attention mechanism to selectively process relevant neighbour information, reducing communication overhead and increasing coordination efficiency. In addition, an environment-change detection module, combined with meta-learning, enables rapid policy adaptation to environmental changes without complete retraining. Experimental results on dynamic area coverage, target tracking, and formation-switching tasks show that, compared with existing methods, DAL achieves faster convergence, higher cumulative rewards, and superior resilience to agent loss and communication noise, offering a scalable solution for intelligent swarm systems.
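The paper's exact architecture is not reproduced here, but the abstract's idea of an agent selectively weighting neighbour information can be sketched as scaled dot-product attention over neighbour states. This is a minimal illustrative sketch, not the authors' implementation: the function name, projection matrices `W_q`/`W_k`, and state dimension are all assumptions for the example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend_to_neighbours(own_state, neighbour_states, W_q, W_k):
    """Aggregate neighbour states with attention weights derived from
    a scaled dot product between the agent's query and neighbour keys.
    (Illustrative sketch; all parameter names are assumptions.)"""
    q = W_q @ own_state                     # query from the agent's own state
    keys = neighbour_states @ W_k.T         # one key per neighbour
    scores = keys @ q / np.sqrt(len(q))     # scaled dot-product scores
    weights = softmax(scores)               # non-negative, sum to 1
    aggregated = weights @ neighbour_states # attention-weighted neighbour info
    return aggregated, weights

# Toy usage with random states (dimensions are illustrative).
rng = np.random.default_rng(0)
d = 4
own = rng.standard_normal(d)
neigh = rng.standard_normal((3, d))         # three neighbouring agents
W_q = rng.standard_normal((d, d))
W_k = rng.standard_normal((d, d))
agg, w = attend_to_neighbours(own, neigh, W_q, W_k)
```

Neighbours whose keys align with the agent's query receive higher weights, so distant or irrelevant neighbours contribute little to the aggregated message, which is one way the communication-overhead reduction described in the abstract can arise.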