Research Article

Bridging the Performance Gap between Manual and Automatic Compilers with Intent-based Compilation

by Waseem Ahmed
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 86 - Number 9
Year of Publication: 2014
Authors: Waseem Ahmed
DOI: 10.5120/15010-3303

Waseem Ahmed. Bridging the Performance Gap between Manual and Automatic Compilers with Intent-based Compilation. International Journal of Computer Applications 86, 9 (January 2014), 1-7. DOI=10.5120/15010-3303

@article{ 10.5120/15010-3303,
author = { Waseem Ahmed },
title = { Bridging the Performance Gap between Manual and Automatic Compilers with Intent-based Compilation },
journal = { International Journal of Computer Applications },
issue_date = { January 2014 },
volume = { 86 },
number = { 9 },
month = { January },
year = { 2014 },
issn = { 0975-8887 },
pages = { 1-7 },
numpages = { 7 },
url = { https://ijcaonline.org/archives/volume86/number9/15010-3303/ },
doi = { 10.5120/15010-3303 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Waseem Ahmed
%T Bridging the Performance Gap between Manual and Automatic Compilers with Intent-based Compilation
%J International Journal of Computer Applications
%@ 0975-8887
%V 86
%N 9
%P 1-7
%D 2014
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Despite years of research in automatic parallelization, progress has been slow in terms of tools that can consistently generate scalable, portable and efficient code for multiple architectures. Moreover, a substantial gap in efficiency remains between automatically generated code and code written by an expert programmer. Although it is well known that the best sequential algorithm for a problem can be very different from the best parallel algorithm, the feature of algorithm substitution is absent from most tools available today. However, automatically identifying the algorithm used in a piece of code is not trivial, given the nuances of programming style and the many ways an algorithm can be represented and expressed. This paper presents a novel Intent-based Compilation approach that uses a rule-based Expert System Engine to identify the intent of the algorithm used in the code from fine- and coarse-grained features extracted from that code. Using this information, the most optimized algorithm for the target architecture is then substituted. Results obtained by applying Amoeba, a framework that incorporates this methodology, to code obtained from the public domain are presented.
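
The abstract describes a rule-based matching step: features extracted from source code are checked against rules that name a known algorithm (the "intent"), and the recognized algorithm is then replaced by an implementation tuned for the target architecture. The Python sketch below illustrates only that matching-and-substitution idea; the feature names, rules and substitution table are invented for illustration and are not taken from the paper's Amoeba framework.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    intent: str          # name of the recognized algorithm, e.g. "matrix_multiply"
    required: frozenset  # code features that must all be present for the rule to fire

# Hypothetical rules relating extracted code features to algorithmic intent.
RULES = [
    Rule("matrix_multiply", frozenset({"triply_nested_loop", "multiply_accumulate", "2d_array_access"})),
    Rule("sort", frozenset({"pairwise_compare", "element_swap", "single_array"})),
]

# Hypothetical substitution table: (intent, target architecture) -> tuned implementation.
SUBSTITUTIONS = {
    ("matrix_multiply", "multicore_cpu"): "tiled_openmp_gemm",
    ("matrix_multiply", "gpu"): "blas_gemm_offload",
    ("sort", "multicore_cpu"): "parallel_merge_sort",
    ("sort", "gpu"): "radix_sort_kernel",
}

def identify_intent(features: set) -> Optional[str]:
    """Return the intent of the first rule whose required features are all present."""
    for rule in RULES:
        if rule.required <= features:
            return rule.intent
    return None

def substitute(features: set, target: str) -> str:
    """Pick a replacement implementation for the target, or keep the original code."""
    intent = identify_intent(features)
    if intent is None:
        return "keep_original_code"
    return SUBSTITUTIONS.get((intent, target), "keep_original_code")

# Example: features extracted from a naive triple loop suggest matrix multiplication,
# so a GPU target receives the GPU-tuned substitute.
print(substitute({"triply_nested_loop", "multiply_accumulate", "2d_array_access"}, "gpu"))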

References
  1. Javier Diaz, Camelia Muñoz-Caro, and Alfonso Niño. A survey of parallel programming models and tools in the multi and many-core era. IEEE Transactions on Parallel and Distributed Systems, 23(8), 2012.
  2. Bryan Catanzaro, Armando Fox, Kurt Keutzer, David Patterson, Bor-Yiing Su, Marc Snir, Kunle Olukotun, Pat Hanrahan, and Hassan Chafi. Ubiquitous parallel computing from Berkeley, Illinois, and Stanford. IEEE Micro, 30(2), 2010.
  3. Mehdi Amini, Ronan Keryell, Béatrice Creusillet, Corinne Ancourt, and François Irigoin. Program sequentially, carefully, and benefit from compiler advances for parallel heterogeneous computing. Technical report, MINES-ParisTech CRI, 2012.
  4. John Cavazos. Intelligent compilers. In Proceedings of the IEEE International Conference on Cluster Computing, 2008.
  5. Ananta Tiwari, Chun Chen, Jacqueline Chame, Mary Hall, and Jeffrey K. Hollingsworth. A scalable auto-tuning framework for compiler optimization. In Proceedings of the IEEE International Symposium on Parallel & Distributed Processing (IPDPS), 2009.
  6. M. Frigo. A fast Fourier transform compiler. In Proceedings of ACM SIGPLAN Conference on Programming Language Design and Implementation, May 1999.
  7. R. C. Whaley and Jack Dongarra. Automatically tuned linear algebra software. In Proceedings of Supercomputing, Nov. 1998.
  8. R. Vuduc, J. W. Demmel, and K. A. Yelick. OSKI: A library of automatically tuned sparse matrix kernels. Journal of Physics: Conference Series, 16:521–530, 2005.
  9. Chirag Dave, Hansang Bae, Seung-Jai Min, Seyong Lee, Rudolf Eigenmann, and Samuel Midkiff. Cetus: A source-to-source compiler infrastructure for multicores. IEEE Computer, December 2009.
  10. Matthew J. Bridges, Neil Vachharajani, Yun Zhang, Thomas Jablin, and David I. August. Revisiting the sequential programming model for the multicore era. IEEE Micro, January/February 2008.
  11. Christian Bienia, Sanjeev Kumar, Jaswinder Pal Singh, and Kai Li. The PARSEC benchmark suite: Characterization and architectural implications. In Proceedings of PACT'08, 2008.
  12. J. Ceng, J. Castrillon, W. Sheng, H. Scharwachter, R. Leupers, G. Ascheid, H. Meyr, T. Isshiki, and H. Kunieda. MAPS: An integrated framework for MPSoC application parallelization. In Proceedings of the Design Automation Conference (DAC), 2008.
  13. Anja Guzzi, Lile Hattori, Michele Lanza, Martin Pinzger, and Arie van Deursen. Collective code bookmarks for program comprehension. In Proceedings of the IEEE 19th Int. Conf. on Program Comprehension (ICPC), 2011.
  14. Shih-Wei Liao. SUIF Explorer: An Interactive and Interprocedural Parallelizer. PhD thesis, Stanford University, 2000.
  15. Wen-mei Hwu, Shane Ryoo, Sain-Zee Ueng, John H. Kelm, Isaac Gelado, Sam S. Stone, Robert E. Kidd, Sara S. Baghsorkhi, Aqeel A. Mahesri, Stephanie C. Tsao, Nacho Navarro, Steve S. Lumetta, Matthew I. Frank, and Sanjay J. Patel. Implicitly parallel programming models for thousand-core microprocessors. In DAC '07: Proceedings of the 44th annual conference on Design automation, pages 754–759, New York, NY, USA, 2007. ACM.
  16. Georgios Tournavitis, Zheng Wang, Björn Franke, and Michael F. P. O'Boyle. Towards a holistic approach to auto-parallelization: integrating profile-driven parallelism detection and machine-learning based mapping. In Proceedings of PLDI'09, volume 44, pages 177–187, 2009.
  17. Jason Ansel. PetaBricks: A language and compiler for algorithmic choice. Master's thesis, MIT, 2009.
  18. Barry Wilkinson and Michael Allen. Parallel Programming - Techniques and Applications using Networked Workstations and Parallel Computers. Pearson Education, 2 edition, 2005.
  19. Michael J. Quinn. Parallel Programming in C with MPI and OpenMP. Tata McGraw-Hill, 2003.
  20. Krste Asanovic, Ras Bodik, Bryan Christopher Catanzaro, Joseph James Gebis, Parry Husbands, Kurt Keutzer, David A. Patterson, William Lester Plishker, John Shalf, Samuel Webb Williams, and Katherine A. Yelick. The landscape of parallel computing research: A view from Berkeley. Technical Report UCB/EECS-2006-183, EECS Department, University of California, Berkeley, Dec 2006.
  21. Pieter Custers. Algorithmic species: Classifying program code for parallel computing. Master's thesis, Eindhoven University of Technology, 2012.
  22. Cedric Nugteren, Pieter Custers, and Henk Corporaal. Algorithmic species: A classification of affine loop nests for parallel programming. ACM Transactions on Architecture and Code Optimization (TACO), 9, 2013.
  23. Joseph C. Giarratano and Gary D. Riley. Expert Systems - Principles and Programming. Thomson, 2005.
  24. Ian Foster. Designing and Building Parallel Programs: Concepts and tools for parallel software engineering. Reading, MA: Addison-Wesley, 1995.
  25. S. G. Akl. Parallel Sorting Algorithms. Orlando FL: Academic Press, 1985.
  26. W. J. Camp, S. J. Plimpton, B. A. Hendrickson, and R. W. Leland. Massively parallel methods for engineering and science problems. Communications of the ACM, 37(4):30–41, 1994.
  27. Chirag Dave and Rudolf Eigenmann. Automatically tuning parallel and parallelized programs. Languages and Compilers for Parallel Computing, pages 126–139, 2010.
  28. Dheya Mustafa and Rudolf Eigenmann. Portable section-level tuning of compiler parallelized applications. In Proceedings of the 2012 ACM/IEEE Conference on Supercomputing. IEEE Press, 2012.
Index Terms

Computer Science
Information Sciences

Keywords

Intent-based compilation, automation, code-to-code transformers, parallelization, parallel compilers