International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 127 - Number 2
Year of Publication: 2015
Authors: Nipun Agarwal, Aman Goyal, Gaurav Maheshwari, and Alok Dugtal |
DOI: 10.5120/ijca2015906339
Nipun Agarwal, Aman Goyal, Gaurav Maheshwari, and Alok Dugtal. Parallel Implementation of Scheduling Algorithms on GPU using CUDA. International Journal of Computer Applications 127, 2 (October 2015), 44-49. DOI=10.5120/ijca2015906339
The future of computation is the GPU, i.e. the Graphics Processing Unit. Graphics cards have shown tremendous power in image processing and accelerated rendering of 3D scenes, and the computational capability of GPUs promises their development into powerful parallel computing units. It is quite simple to program a graphics processor to perform many parallel tasks, and once its various aspects are understood, it can be used to perform other useful tasks as well. This paper shows how CUDA can fully utilize the tremendous power of these GPUs. CUDA is NVIDIA’s parallel computing architecture, which enables a dramatic increase in computing performance by harnessing the power of the GPU. In the first phase, several operating system scheduling algorithms are implemented in C in a single-threaded CPU environment; the same algorithms are then implemented in CUDA on a CUDA-enabled GPU in a parallel environment; finally, the performance and results of the GPU and CPU implementations are compared.
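The abstract does not publish the authors' kernels, so the following is only a minimal sketch of the CPU-versus-GPU comparison it describes, assuming a simple FCFS-style waiting-time computation as the scheduling workload; the process count, dummy burst times, and the naive one-thread-per-process kernel are illustrative assumptions, not the paper's actual implementation.

```cuda
// Hypothetical sketch: FCFS waiting-time computation on CPU and GPU.
// burst[], N, and the per-thread prefix sum are illustrative assumptions.
#include <cstdio>
#include <cuda_runtime.h>

#define N 1024  // number of processes (assumed)

// CPU, single-threaded: waiting time of process i = sum of earlier bursts.
void fcfsWaitCPU(const int *burst, int *wait, int n) {
    wait[0] = 0;
    for (int i = 1; i < n; ++i)
        wait[i] = wait[i - 1] + burst[i - 1];
}

// GPU: one thread per process recomputes its own prefix of burst times.
// The work per thread is redundant, but all processes are handled in parallel.
__global__ void fcfsWaitGPU(const int *burst, int *wait, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int w = 0;
    for (int j = 0; j < i; ++j) w += burst[j];
    wait[i] = w;
}

int main() {
    int burst[N], waitCpu[N], waitGpu[N];
    for (int i = 0; i < N; ++i) burst[i] = (i % 10) + 1;  // dummy burst times

    fcfsWaitCPU(burst, waitCpu, N);  // single-threaded reference result

    int *dBurst, *dWait;
    cudaMalloc(&dBurst, N * sizeof(int));
    cudaMalloc(&dWait,  N * sizeof(int));
    cudaMemcpy(dBurst, burst, N * sizeof(int), cudaMemcpyHostToDevice);

    fcfsWaitGPU<<<(N + 255) / 256, 256>>>(dBurst, dWait, N);
    cudaMemcpy(waitGpu, dWait, N * sizeof(int), cudaMemcpyDeviceToHost);

    // Compare one result from each implementation.
    printf("process %d: CPU wait=%d, GPU wait=%d\n",
           N - 1, waitCpu[N - 1], waitGpu[N - 1]);

    cudaFree(dBurst);
    cudaFree(dWait);
    return 0;
}
```

Timing each phase (for example with cudaEvent timers around the kernel and a host clock around the CPU loop) would reproduce the kind of CPU-versus-GPU comparison the abstract refers to.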