Research Article

Comparative Study of Parallel Programming Models to Compute Complex Algorithm

by Mukul Sharma, Pradeep Soni
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 96 - Number 19
Year of Publication: 2014
Authors: Mukul Sharma, Pradeep Soni
10.5120/16900-6961

Mukul Sharma, Pradeep Soni. Comparative Study of Parallel Programming Models to Compute Complex Algorithm. International Journal of Computer Applications 96, 19 (June 2014), 9-12. DOI=10.5120/16900-6961

@article{ 10.5120/16900-6961,
author = { Mukul Sharma, Pradeep Soni },
title = { Comparative Study of Parallel Programming Models to Compute Complex Algorithm },
journal = { International Journal of Computer Applications },
issue_date = { June 2014 },
volume = { 96 },
number = { 19 },
month = { June },
year = { 2014 },
issn = { 0975-8887 },
pages = { 9-12 },
numpages = { 4 },
url = { https://ijcaonline.org/archives/volume96/number19/16900-6961/ },
doi = { 10.5120/16900-6961 },
publisher = { Foundation of Computer Science (FCS), NY, USA },
address = { New York, USA }
}

%0 Journal Article
%A Mukul Sharma
%A Pradeep Soni
%T Comparative Study of Parallel Programming Models to Compute Complex Algorithm
%J International Journal of Computer Applications
%@ 0975-8887
%V 96
%N 19
%P 9-12
%D 2014
%I Foundation of Computer Science (FCS), NY, USA

Abstract

The main goal of this research is to use OpenMP, POSIX Threads, and the Microsoft Parallel Patterns Library to design an algorithm that computes matrix multiplication efficiently. Using these libraries, the speedup of the algorithm can be improved. The first step is to write a simple program that multiplies a predetermined matrix and reports the result after compilation and execution; at this stage only a single processor core is used for the computation. OpenMP, POSIX Threads, and the Microsoft Parallel Patterns Library are then added separately, and their functions are used to parallelize the computation so that multiple cores of the processor can be employed. A timer function is added to the code to record how long the computation takes. The program is first run without any parallel library, and then with the OpenMP, POSIX Threads, and Microsoft Parallel Patterns Library versions of the code. The program is executed for each input matrix size and the results are collected, with a maximum of five trials per input size, recording the time taken to perform the parallel matrix multiplication. Finally, the performance of OpenMP, POSIX Threads, and the Microsoft Parallel Patterns Library is compared in terms of execution time and speedup for different matrix dimensions and different numbers of processors.
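
The abstract does not reproduce any source code; the following is a minimal illustrative sketch (not taken from the paper) of the measurement procedure it describes, assuming square n x n matrices of doubles. The same triple-loop multiplication is run once on a single core and once under an OpenMP parallel-for directive, and a timer records the execution time of each run. The POSIX Threads and Microsoft Parallel Patterns Library variants would replace the OpenMP directive with explicit thread creation or a parallel_for call, respectively.

/* Illustrative sketch only, not the authors' code: serial vs. OpenMP
   matrix multiplication with wall-clock timing.
   Compile with, e.g., g++ -O2 -fopenmp matmul.cpp */
#include <chrono>
#include <cstdio>
#include <vector>

/* C = A * B for n x n matrices stored in row-major order. When use_parallel
   is true the outer loop is divided among the available cores; when false
   the same loop runs on a single core (the baseline case). */
static void matmul(const std::vector<double>& A, const std::vector<double>& B,
                   std::vector<double>& C, int n, bool use_parallel)
{
    #pragma omp parallel for if(use_parallel)
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double sum = 0.0;
            for (int k = 0; k < n; ++k)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
}

int main()
{
    const int n = 512;  /* example matrix dimension; the study varies this */
    std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);

    /* Time the single-core baseline first, then the parallel run, as in the
       measurement procedure described above. */
    auto time_run = [&](bool use_parallel, const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        matmul(A, B, C, n, use_parallel);
        auto t1 = std::chrono::steady_clock::now();
        std::printf("%s: %.3f s\n", label,
                    std::chrono::duration<double>(t1 - t0).count());
    };
    time_run(false, "serial");
    time_run(true,  "OpenMP");
    return 0;
}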

References
  1. Blaise Barney, "Introduction to Parallel Computing", Lawrence Livermore National Laboratory, January 2009.
  2. Anshul Gupta, "Introduction to Parallel Computing", IBM T. J. Watson Research Center, Yorktown Heights, 2003.
  3. George Karypis, "Parallel Algorithms and Applications", University of Minnesota, Minneapolis, March 2012.
  4. George Mozdzynski, "Concepts of Parallel Computing", European Centre for Medium-Range Weather Forecasts, March 2012.
  5. S. Salvini, "Unlocking the Power of OpenMP", invited lecture at the 5th European Workshop on OpenMP (EWOMP '03), September 2003.
  6. Dheeraj Bhardwaj, "Parallel Computing - A Key to Performance", Department of Computer Science & Engineering, Indian Institute of Technology Delhi, August 2011.
  7. R. Parikh, "Accelerating Quicksort on the Intel Pentium 4 Processor with Hyper-Threading Technology", Intel Software Community, October 2007.
  8. Werner Backes, Susanne Wetzel, "A Parallel LLL Using POSIX Threads", Department of Computer Science, Stevens Institute of Technology.
  9. J. Balart, A. Duran, M. Gonzàlez, X. Martorell, E. Ayguadé, and J. Labarta, "Nanos Mercurium: A Research Compiler for OpenMP", 6th European Workshop on OpenMP (EWOMP '04), pages 103-109, September 2004.
  10. D. an Mey, "Two OpenMP Programming Patterns", Proceedings of the Fifth European Workshop on OpenMP (EWOMP '03), September 2003.

Index Terms

Computer Science
Information Sciences

Keywords

Parallel Computing, Parallel Programming Models, OpenMP, PThreads, Microsoft Parallel Patterns Libraries