Research Article

Reducing Latency in Hybrid HPC Systems through Containerization and Parallel GPU Processing

by Manju George
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 94
Year of Publication: 2026
Authors: Manju George
10.5120/ijca2026926635

Manju George. Reducing Latency in Hybrid HPC Systems through Containerization and Parallel GPU Processing. International Journal of Computer Applications 187, 94 (Mar 2026), 55-60. DOI=10.5120/ijca2026926635

@article{10.5120/ijca2026926635,
  author = {Manju George},
  title = {Reducing Latency in Hybrid HPC Systems through Containerization and Parallel GPU Processing},
  journal = {International Journal of Computer Applications},
  issue_date = {Mar 2026},
  volume = {187},
  number = {94},
  month = {Mar},
  year = {2026},
  issn = {0975-8887},
  pages = {55-60},
  numpages = {6},
  url = {https://ijcaonline.org/archives/volume187/number94/reducing-latency-in-hybrid-hpc-systems-through-containerization-and-parallel-gpu-processing/},
  doi = {10.5120/ijca2026926635},
  publisher = {Foundation of Computer Science (FCS), NY, USA},
  address = {New York, USA}
}
%0 Journal Article
%A Manju George
%T Reducing Latency in Hybrid HPC Systems through Containerization and Parallel GPU Processing
%J International Journal of Computer Applications
%@ 0975-8887
%V 187
%N 94
%P 55-60
%D 2026
%I Foundation of Computer Science (FCS), NY, USA
Abstract

This research examines how High-Performance Computing (HPC) and cloud-native environments can converge through scalable computing infrastructure. The study measures the optimization of computational efficiency for large-scale data processing through containerization and hardware acceleration. Using a synthetic dataset of 411 high-dimensional performance measurements, the modeling simulates different workload distributions across hybrid infrastructures. The principal tools are Kubernetes for orchestration, Docker for environment isolation, and dedicated software libraries for GPU acceleration. Findings show that combining containerization with parallel processing can lower latency by a large margin while keeping hardware fully utilized. The study concludes that a unified architecture is needed to handle modern data-intensive workloads.
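The latency comparison the abstract describes — the same workload run serially and then distributed across parallel workers — can be sketched in plain Python. This is a hypothetical illustration, not the paper's code: a standard-library process pool stands in for the GPU-accelerated libraries, and `process_chunk` is a placeholder compute kernel.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Placeholder compute kernel: sum of squares over one workload chunk.
    # In the paper's setting this would be a GPU-accelerated routine.
    return sum(x * x for x in chunk)

def run_serial(chunks):
    # Process every chunk on a single worker and time the whole run.
    start = time.perf_counter()
    results = [process_chunk(c) for c in chunks]
    return results, time.perf_counter() - start

def run_parallel(chunks, workers=4):
    # Distribute chunks across worker processes and time the whole run.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_chunk, chunks))
    return results, time.perf_counter() - start

if __name__ == "__main__":
    # Eight chunks of 50,000 integers each, simulating a distributed workload.
    data = [list(range(i, i + 50_000)) for i in range(0, 400_000, 50_000)]
    serial_results, serial_t = run_serial(data)
    parallel_results, parallel_t = run_parallel(data)
    assert serial_results == parallel_results  # parallelism must not change results
    print(f"serial:   {serial_t:.3f}s")
    print(f"parallel: {parallel_t:.3f}s")
```

For chunks this small the process-pool overhead can outweigh the parallel speedup; the latency gains reported in the paper apply to the large-scale workloads and GPU hardware it studies, not to this toy stand-in.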

Index Terms

Computer Science
Information Sciences

Keywords

Scalable Computing, GPU Acceleration, Containerization, Parallelism, Cloud AI