After successful completion of the course, students are able to describe the concept of data dependence and use it in the parallelization of sequential programs, identify program transformations for dependence elimination and locality optimization, implement basic vectorization and parallelization strategies, explain the polyhedral representation of loops, and discuss analysis techniques for parallel programs.
Overview of parallel systems, data dependence, control dependence, program transformations, vectorization, parallelization for shared and distributed memory systems, locality optimizations, polyhedral model, intermediate representations, program analysis, compilation of parallel languages; outlook on autotuning and runtime parallelization. In the exercise part, a given source-to-source compiler will be extended with program transformations and parallel compilation techniques.
Lecture, exercises, programming project, literature study, presentation.
The course is planned as an in-person activity. In case of new restrictions due to the pandemic, the course will be switched to an online mode.
ECTS Breakdown (3 ECTS ~ 75 h):
lecture incl. follow-up: 20 h
exercises: 10 h
programming project: 25 h
reading/presentation of paper: 20 h
Submission of exercise solutions, discussion of the programming project, presentation of a paper.
Not necessary
Randy Allen, Ken Kennedy. Optimizing Compilers for Modern Architectures. Morgan Kaufmann, 2002.
Alfred V. Aho, Monica S. Lam, Ravi Sethi, Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools (2nd Edition). Pearson Addison-Wesley, 2007.
Michael J. Wolfe. High-Performance Compilers for Parallel Computing. Addison-Wesley, 1996.
Hans Zima, Barbara Chapman. Supercompilers for Parallel and Vector Computers. ACM Press, 1990.
Basic concepts of compiler construction, parallel programming.