Research Article

Optimizing Solar Microgrid Efficiency via Reinforcement Learning: An Empirical Study Using Real-Time Energy Flow and Weather Forecasts

by Isha Das, Md. Jisan Ahmed, Abhay Shukla
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 13
Year of Publication: 2025
Authors: Isha Das, Md. Jisan Ahmed, Abhay Shukla
10.5120/ijca2025925190

Isha Das, Md. Jisan Ahmed, Abhay Shukla. Optimizing Solar Microgrid Efficiency via Reinforcement Learning: An Empirical Study Using Real-Time Energy Flow and Weather Forecasts. International Journal of Computer Applications. 187, 13 (Jun 2025), 33-38. DOI=10.5120/ijca2025925190

@article{ 10.5120/ijca2025925190,
author = { Isha Das, Md. Jisan Ahmed, Abhay Shukla },
title = { Optimizing Solar Microgrid Efficiency via Reinforcement Learning: An Empirical Study Using Real-Time Energy Flow and Weather Forecasts },
journal = { International Journal of Computer Applications },
issue_date = { Jun 2025 },
volume = { 187 },
number = { 13 },
month = { Jun },
year = { 2025 },
issn = { 0975-8887 },
pages = { 33-38 },
numpages = { 6 },
url = { https://ijcaonline.org/archives/volume187/number13/optimizing-solar-microgrid-efficiency-via-reinforcement-learning-an-empirical-study-using-real-time-energy-flow-and-weather-forecasts/ },
doi = { 10.5120/ijca2025925190 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Isha Das
%A Md. Jisan Ahmed
%A Abhay Shukla
%T Optimizing Solar Microgrid Efficiency via Reinforcement Learning: An Empirical Study Using Real-Time Energy Flow and Weather Forecasts
%J International Journal of Computer Applications
%@ 0975-8887
%V 187
%N 13
%P 33-38
%D 2025
%I Foundation of Computer Science (FCS), NY, USA
Abstract

This paper investigates the use of deep reinforcement learning (DRL) to optimize the energy efficiency of a solar-powered microgrid using real-time energy-flow data and weather forecasts. The research generates a fully synthetic dataset simulating a solar microgrid’s hourly photovoltaic (PV) generation, battery state, load demand, and weather-based solar irradiance forecasts. Four RL algorithms are applied and compared: Deep Q-Network (DQN), Proximal Policy Optimization (PPO), Advantage Actor-Critic (A2C), and Deep Deterministic Policy Gradient (DDPG). Each agent learns to control battery charging/discharging actions to balance supply and demand, incorporating solar forecasts to handle uncertainty. Methodology details include dataset generation, environment formulation, and RL training procedures. This paper presents performance metrics (e.g., reward curves, energy utilization) and graphical analyses. Empirically, PPO and DDPG achieve the highest efficiency under clear conditions, while A2C adapts best to sudden changes; DQN performs robustly but converges more slowly. All DRL agents significantly outperform a rule-based baseline. The study demonstrates that DRL can adaptively manage real-time microgrid operations under weather variability, improving renewable utilization and resilience. This work provides a comprehensive evaluation of modern RL methods for smart-energy systems.
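The abstract describes an environment over hourly PV generation, battery state of charge, load demand, and an irradiance forecast, with battery charge/discharge actions and a rule-based baseline. A minimal sketch of that kind of setup is below; the `SolarMicrogridEnv` class, its profiles, reward, and action space are illustrative assumptions, not the authors' actual formulation or data.

```python
import math
import random

class SolarMicrogridEnv:
    """Toy 24-hour solar-microgrid environment (illustrative sketch).

    State: (hour, PV kW, load kW, battery SoC kWh, next-hour PV forecast).
    Action: -1 discharge, 0 idle, +1 charge, at a fixed kW rate.
    """

    def __init__(self, battery_kwh=10.0, rate_kw=2.0, seed=0):
        self.rng = random.Random(seed)
        self.battery_kwh = battery_kwh
        self.rate_kw = rate_kw

    def reset(self):
        self.t = 0
        self.soc = 0.5 * self.battery_kwh
        # Precompute one synthetic day so observations and dynamics agree:
        # PV follows a daytime bell curve with random cloud noise,
        # load is a base level with an evening peak.
        self.pv = [5.0 * max(0.0, math.sin(math.pi * (h - 6) / 12))
                   * self.rng.uniform(0.7, 1.0) for h in range(24)]
        self.load = [2.0 + 1.5 * math.exp(-((h - 19) ** 2) / 8.0)
                     for h in range(24)]
        return self._obs()

    def _obs(self):
        h = self.t % 24
        return (h, self.pv[h], self.load[h], self.soc, self.pv[(h + 1) % 24])

    def step(self, action):
        assert action in (-1, 0, 1)
        h = self.t % 24
        # Clamp the commanded battery flow to what the SoC allows.
        flow = max(-self.soc,
                   min(action * self.rate_kw, self.battery_kwh - self.soc))
        self.soc += flow
        net = self.pv[h] - self.load[h] - flow  # + curtailed, - unmet load
        reward = -abs(net)                      # penalize any imbalance
        self.t += 1
        return self._obs(), reward, self.t == 24


def rule_based_episode(env):
    """Greedy baseline: charge on PV surplus, discharge on deficit."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        _, pv, load, _, _ = obs
        obs, r, done = env.step(1 if pv > load else -1)
        total += r
    return total
```

An RL agent would replace the greedy rule with a learned policy over the same observation tuple; because the forecast term is part of the state, a trained policy can pre-charge ahead of predicted cloud cover, which is the adaptivity the abstract attributes to the DRL agents.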

References
  1. N. Xu et al., “Reinforcement Learning for Optimizing Renewable Energy Utilization in Buildings: A Review on Applications and Innovations,” Energies, vol. 18, no. 7, 2023.
  2. A. Shojaeighadikolaei, X. Zhang, A. Aghaei, et al., “Weather-Aware Data-Driven Microgrid Energy Management Using Deep Reinforcement Learning,” IEEE Power Energy Soc. Gen. Meeting, 2022.
  3. Y. Yin, X. Li, Z. Yang, and Y. Wang, “Reinforcement Learning Based Microgrid Energy Management System,” Journal of Xi’an Shiyou University, 2024.
  4. B. C. Phan, M. Lee, and Y. Lai, “Intelligent Deep-Q-Network-Based Energy Management for an Isolated Microgrid,” Appl. Sci., vol. 12, no. 17, 2022.
  5. S. Upadhyay, I. Ahmed, and L. Mihet-Popa, “Energy Management System for an Industrial Microgrid Using Optimization Algorithms-Based Reinforcement Learning Technique,” Energies, vol. 17, no. 16, 2024.
  6. G. Jones, X. Li, and Y. Sun, “Robust Energy Management Policies for Solar Microgrids via Reinforcement Learning,” Energies, vol. 17, 2024.
  7. Y. Liu, Q. Lu, Z. Yu, Y. Chen, and Y. Yang, “Reinforcement Learning-Enhanced Adaptive Scheduling of Battery Energy Storage Systems in Energy Markets,” Energies, vol. 17, no. 21, 2024.
  8. S. Chen, J. Liu, Z. Cui, and W. Xiao, “A Deep Reinforcement Learning Approach for Microgrid Energy Transmission Dispatching,” Appl. Sci., vol. 14, no. 9, 2022.
  9. T. Wang et al., “A Multi-Agent Reinforcement Learning Method for Cooperative Secondary Voltage Control of Microgrids,” Energies, vol. 16, no. 15, 2023.
  10. Y. Li, Z. Xu, K. B. Bowes, and L. Ren, “Reinforcement Learning-Enabled Seamless Microgrids Interconnection,” Proc. IEEE PES Gen. Meeting, 2021.
  11. D. Liu, C. Zang, P. Zeng, et al., “Deep Reinforcement Learning for Real-Time Economic Energy Management of Microgrid Systems Considering Uncertainties,” Front. Energy Res., vol. 11, 2023.
  12. N. Xu, Z. Tang, C. Si, J. Bian, and C. Mu, “A Review of Smart Grid Evolution and Reinforcement Learning: Applications, Challenges and Future Directions,” Energies, vol. 18, no. 7, 2023.
  13. V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, pp. 529–533, 2015.
  14. J. Schulman et al., “Proximal Policy Optimization Algorithms,” arXiv, 2017.
  15. V. Mnih et al., “Asynchronous methods for deep reinforcement learning,” Proc. ICML, 2016.
  16. D. Silver et al., “Deterministic Policy Gradient Algorithms,” Proc. ICML, 2014.
  17. T. P. Lillicrap et al., “Continuous control with deep reinforcement learning,” ICLR, 2016.
  18. V. Ruelens, N. Vandael, B. Claessens, et al., “Residential Demand Response Based on Model-Free Reinforcement Learning,” IEEE Trans. Smart Grid, vol. 8, no. 3, 2017.
  19. J. Shi, W. Qiao, Y. Su, et al., “Deep reinforcement learning for optimal energy management in microgrid,” Appl. Energy, vol. 240, pp. 1122–1132, 2019.
  20. S. Kofinas and S. Dounis, “Energy management in solar microgrid via reinforcement learning using fuzzy reward,” Adv. Build. Energy Res., vol. 12, no. 1, 2018.
  21. J. Yang and H. Ma, “A reinforcement learning approach for power management in microgrids,” IEEE Trans. Ind. Informatics, vol. 15, no. 7, pp. 3627–3635, 2019.
  22. A. Vrettos, D. Fuller, and G. Pappas, “Distributed model-free control for islanded microgrids,” IEEE Trans. Ind. Electron., vol. 63, no. 4, pp. 2196–2206, 2016.
  23. X. Zhao et al., “Multi-agent deep reinforcement learning for microgrid energy trading,” Appl. Energy, vol. 243, pp. 343–354, 2019.
  24. R. Z. Qiao, F. Milano, and E. Serrano, “Hierarchical RL for microgrid energy management,” IEEE Trans. Power Syst., vol. 35, no. 5, pp. 4106–4117, 2020.
  25. S. Falahatpisheh et al., “Reinforcement learning-based energy management for smart buildings,” Energy Build., vol. 213, 2020.
  26. H. Cai and Y. Zhang, “Deep reinforcement learning in microgrid energy systems: A survey,” Renew. Sustain. Energy Rev., vol. 126, 2020.
  27. S. Ghimire et al., “DDPG-based control of energy storage in microgrids,” IEEE Trans. Ind. Appl., vol. 57, no. 4, pp. 4054–4062, 2021.
  28. A. Abapour et al., “Multi-agent deep Q-learning for coordinated energy management,” Applied Energy, vol. 292, 2021.
  29. Y. Yu et al., “DRL for electric vehicle charging and microgrid scheduling,” IEEE Trans. Smart Grid, vol. 12, no. 4, pp. 3116–3127, 2021.
  30. H. Lou, J. Wang, Y. Wang, et al., “DDPG for coordinated dispatch in multi-microgrids,” Electric Power Syst. Res., vol. 195, 2021.
Index Terms

Computer Science
Information Sciences

Keywords

Solar Microgrid; Reinforcement Learning; Deep Q-Network; PPO; A2C; DDPG; Energy Management; Weather Forecast