Research Article

Benchmarking Raspberry Pi 2 Beowulf Cluster

by Dimitrios Papakyriakou, Dimitra Kottou, Ioannis Kostouros
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 179 - Number 32
Year of Publication: 2018
Authors: Dimitrios Papakyriakou, Dimitra Kottou, Ioannis Kostouros
10.5120/ijca2018916728

Dimitrios Papakyriakou, Dimitra Kottou, Ioannis Kostouros. Benchmarking Raspberry Pi 2 Beowulf Cluster. International Journal of Computer Applications. 179, 32 (Apr 2018), 21-27. DOI=10.5120/ijca2018916728

@article{10.5120/ijca2018916728,
  author = {Dimitrios Papakyriakou, Dimitra Kottou, Ioannis Kostouros},
  title = {Benchmarking Raspberry Pi 2 Beowulf Cluster},
  journal = {International Journal of Computer Applications},
  issue_date = {Apr 2018},
  volume = {179},
  number = {32},
  month = {Apr},
  year = {2018},
  issn = {0975-8887},
  pages = {21-27},
  numpages = {9},
  url = {https://ijcaonline.org/archives/volume179/number32/29203-2018916728/},
  doi = {10.5120/ijca2018916728},
  publisher = {Foundation of Computer Science (FCS), NY, USA},
  address = {New York, USA}
}
%0 Journal Article
%A Dimitrios Papakyriakou
%A Dimitra Kottou
%A Ioannis Kostouros
%T Benchmarking Raspberry Pi 2 Beowulf Cluster
%J International Journal of Computer Applications
%@ 0975-8887
%V 179
%N 32
%P 21-27
%D 2018
%I Foundation of Computer Science (FCS), NY, USA
Abstract

This paper presents a performance benchmarking of a Raspberry Pi 2 Beowulf cluster. Parallel computing systems with high-performance parallel processing capabilities have become a popular standard for addressing not only scientific but also commercial applications. Because the Raspberry Pi is a tiny and affordable single board computer (SBC), it gives almost everyone the chance to gain knowledge and practical experience in a wide variety of projects akin to supercomputing that run parallel jobs. This research project involves the design and construction of a high-performance Beowulf cluster composed of 12 Raspberry Pi 2 Model B computers, each node with a 900 MHz 32-bit quad-core ARM Cortex-A7 CPU and 1 GB of RAM. All of them are connected over a 100 Mbps Ethernet network and operate in parallel so as to build a kind of supercomputer. In addition, with the help of the High Performance Linpack (HPL) benchmark, we observe and depict the performance of our system, using mathematical applications that calculate the scalar multiplication of a matrix and extracting performance metrics such as runtime and GFLOPS.
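
To make the cluster's parallel mode of operation concrete, the following is a minimal sketch (not the paper's actual benchmark code) of the kind of MPI job such a cluster runs: each MPI rank scales its own block of rows of an N x N matrix by a constant, and the slowest rank's runtime is reported. The matrix size N, the scalar alpha, the file name scalar_matrix.c and the MPICH-style launch line are illustrative assumptions. HPL itself reports GFLOPS by timing the solution of a dense N x N linear system and counting roughly (2/3)N^3 + 2N^2 floating-point operations.

/* scalar_matrix.c - illustrative sketch, not the paper's benchmark code:
 * scale an N x N matrix by a constant across MPI ranks and time the operation.
 * Build (assumed MPICH toolchain): mpicc -O2 -o scalar_matrix scalar_matrix.c
 * Run across the cluster:          mpiexec -f machinefile -n 48 ./scalar_matrix
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024                              /* matrix dimension, chosen for illustration */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;                    /* rows per rank; remainder rows are
                                               ignored to keep the sketch short */
    double *block = malloc((size_t)rows * N * sizeof(double));
    for (int i = 0; i < rows * N; i++)
        block[i] = (double)(rank + 1);      /* fill the local block */

    MPI_Barrier(MPI_COMM_WORLD);            /* synchronise before timing */
    double t0 = MPI_Wtime();

    const double alpha = 2.5;               /* scalar multiplier (assumed) */
    for (int i = 0; i < rows * N; i++)
        block[i] *= alpha;

    double t1 = MPI_Wtime();
    double local = t1 - t0, slowest;
    MPI_Reduce(&local, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d ranks, N=%d, slowest rank took %.6f s\n", size, N, slowest);

    free(block);
    MPI_Finalize();
    return 0;
}

With MPICH installed on every node and a machinefile listing the 12 Raspberry Pis, launching 48 processes (one per Cortex-A7 core) with mpiexec -f machinefile -n 48 ./scalar_matrix distributes the work over the 100 Mbps Ethernet network in the same way the HPL runs described in the paper do.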

References
  1. Raspberry Pi 2 Model B. [Online]. Available: https://www.raspberrypi.org/products/raspberry-pi-2-model-b/
  2. Raspberry Pi 2 Model B. Operating System. [Online]. Available: https://www.raspberrypi.org/downloads/
  3. MPI. MPI Forum. [Online]. Available: http://mpi-forum.org/
  4. MPI. MPICH. [Online]. Available: https://www.mpich.org/
  5. Netlib. HPL. [Online]. Available: http://www.netlib.org/benchmark/hpl/
  6. Top500.org. Top500 lists. [Online]. Available: https://www.top500.org/
  7. Green500.org. Green500 lists. [Online]. Available: https://www.top500.org/green500/
  8. LU factorization. [Online]. Available: https://www.geeksforgeeks.org/l-u-decomposition-system-linear-equations/
  9. Netlib. BLAS. [Online]. Available: http://www.netlib.org/blas/
  10. Mathematics. LU Decomposition of a System of Linear Equations. [Online]. Available: https://www.geeksforgeeks.org/l-u-decomposition-system-linear-equations/
  11. Dunlop, D., Varrette, S. and Bouvry, P. 2010. Deskilling HPL, Vol. 6068 of Lecture Notes in Computer Science, Springer, Heidelberg, Berlin, 102–114.
  12. Luszczek, P., Dongarra, J., Koester, D., Rabenseifner, R., Lucas, B., Kepner, J., McCalpin, J., Bailey, D. and Takahashi, D. 2005. Introduction to the HPC Challenge Benchmark Suite, Technical Report, ICL, University of Tennessee at Knoxville.
  13. Netlib. HPL Tuning. [Online]. Available: http://www.netlib.org/benchmark/hpl/tuning.html#tips
  14. Dunlop, D., Varrette, S. and Bouvry, P. 2008. On the use of a genetic algorithm in high performance computer benchmark tuning, Proceedings of the International Symposium on Performance Evaluation of Computer and Telecommunication Systems, SPECTS 2008, Art. No.:4667550, 105-113.
  15. HPL Frequently Asked Questions. [Online]. Available: http://www.netlib.org/benchmark/hpl/faqs.html
  16. Sindi, M. 2009. HowTo – High Performance Linpack (HPL), Technical Report, Center for Research Computing, University of Notre Dame.
  17. Petitet, A., Whaley, R. C., Dongarra, J. and Cleary, A. HPL - a portable implementation of the high-performance Linpack benchmark for distributed-memory computers. [Online]. Available: http://www.netlib.org/benchmark/hpl/
  18. Cox, S. J., Cox, J. T., Boardman, R. P., Johnston, S. J., Scott, M. and O'Brien, N. S. Iridis-pi: a low-cost, compact demonstration cluster. Cluster Computing 17, no. 2 (June 22, 2013): 349-58. doi: 10.1007/s10586-013-0282-7.
Index Terms

Computer Science
Information Sciences

Keywords

Raspberry Pi cluster, Cluster Computing, Message Passing Interface, High Performance Linpack (HPL), Benchmarking RPi clusters.