057.020 VSC-School I Courses in High Performance Computing

2021W, VU, 2.0h, 1.5EC, to be held in blocked form

Properties

  • Semester hours: 2.0
  • Credits: 1.5
  • Type: VU Lecture and Exercise
  • Format: Online

Learning outcomes

After successful completion of the course, students are able to

1) Linux and First Steps on the VSC Clusters

  • log in to the VSC systems using Secure Shell (SSH) and individually configure their own Linux environment to speed up work on the cluster,
  • use at least one common text editor to modify text,
  • use the 25 most important Linux shell commands, for example to create, copy, move and delete files,
  • create shell scripts that automate simple sequences of commands (a minimal sketch follows this list).
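
The following is a minimal sketch of the kind of shell script practised in this block; the backup task, paths and file names are purely illustrative and not part of the course material.

    #!/bin/bash
    # Illustrative example only: archive all files from a results directory
    # into a timestamped tarball. All paths and names are hypothetical.
    set -euo pipefail                   # abort on errors and unset variables

    SRC_DIR="$HOME/results"             # hypothetical source directory
    STAMP=$(date +%Y%m%d_%H%M%S)        # timestamp for a unique archive name
    ARCHIVE="$HOME/backups/results_${STAMP}.tar.gz"

    mkdir -p "$HOME/backups"            # make sure the target directory exists
    tar czf "$ARCHIVE" -C "$SRC_DIR" .  # create the compressed archive
    echo "Created $ARCHIVE"

Such a script is made executable with chmod +x and started as ./backup.sh; both steps rely only on the basic shell commands covered in this block.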

2) Introduction to Working on the VSC Clusters

  • describe, in words and with a sketch, how a typical high-performance computing cluster is structured,
  • describe how the batch system works at the VSC,
  • develop the workflows on the VSC necessary for their own research work,
  • use the module environment on the VSC,
  • compile programs on the VSC,
  • create batch jobs for the workload manager SLURM deployed at VSC and submit them for execution (a minimal job-script sketch follows this list),
  • check the status of submitted jobs and, once they have finished, verify that the jobs completed successfully, in particular with respect to correct execution and runtime.
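
Below is a minimal sketch of a SLURM job script of the kind created in this block. The resource values are placeholders and the module name is hypothetical; the actual project account, partition and module names depend on the VSC system and project and are therefore omitted.

    #!/bin/bash
    #SBATCH --job-name=hello_vsc     # job name shown in the queue
    #SBATCH --nodes=1                # placeholder resource request
    #SBATCH --ntasks-per-node=4      # placeholder: tasks per node
    #SBATCH --time=00:10:00          # maximum runtime (hh:mm:ss)
    # Account and partition/QOS options are VSC-specific and omitted here.

    module purge                     # start from a clean module environment
    module load openmpi              # hypothetical module name
    srun ./my_program                # run the program on the allocated resources

Such a script would be submitted with sbatch, the state of pending and running jobs checked with squeue -u $USER, and runtime and exit status inspected after completion with sacct.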

3) Parallelization with MPI (Message Passing Interface)

  • differentiate between pure shared-memory architectures and high-performance computing clusters (combination of distributed-memory and shared-memory architectures) and name the consequences for the parallelization and execution of programs,
  • explain the main advantages and disadvantages of the parallelization concepts (distributed-memory, shared-memory and hybrid parallelization),
  • select the most suitable method for parallelization depending on the situation,
  • describe the essential concepts of the Message Passing Interface (MPI),
  • describe the communication between individual MPI processes and the generally implicit synchronization between them,
  • create a parallel program using MPI (a minimal sketch follows this list),
  • parallelize a serial program using MPI,
  • select methods of MPI communication that prevent deadlocks and ensure the correctness of the program,
  • compare these communication methods with regard to the runtime of the parallel program on a specific cluster,
  • determine from these last two points the best method of MPI communication on this particular cluster,
  • identify any errors in MPI programs and
  • fix identified errors in MPI programs.
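
As an illustration of the MPI concepts listed above, here is a minimal, self-contained C sketch (not course material): every process passes its rank around a ring. MPI_Sendrecv combines a blocking send and receive in one call, which is one standard way to avoid the deadlock that a careless ordering of separate MPI_Send and MPI_Recv calls can produce.

    #include <mpi.h>
    #include <stdio.h>

    /* Each process sends its rank to the right neighbour in a ring and
     * receives from the left neighbour. */
    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;        /* neighbour to send to      */
        int left  = (rank + size - 1) % size; /* neighbour to receive from */

        int send_val = rank, recv_val = -1;
        MPI_Sendrecv(&send_val, 1, MPI_INT, right, 0,
                     &recv_val, 1, MPI_INT, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("Rank %d of %d received %d from rank %d\n",
               rank, size, recv_val, left);

        MPI_Finalize();
        return 0;
    }

Compiled with an MPI compiler wrapper such as mpicc and launched with srun (or mpirun) inside a batch job, every rank prints the value received from its left neighbour.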

 

Subject of course

1) Linux and First Steps on the VSC Clusters

  • Components of a high-performance computing cluster and the actual structure of the VSC,
  • login to VSC and transferring files between workstation and VSC,
  • the most important Linux shell commands and working with a text editor,
  • the most important functionalities of a Linux shell and writing of shell scripts,
  • configuration of one's own working environment by setting environment variables and editing configuration files.

2) Introduction to Working on the VSC Clusters

  • Components of a high-performance computing cluster, the difference between login and compute nodes, and the difference between shared-memory and distributed-memory architectures,
  • structure of the VSC and a brief overview of the available special purpose hardware such as graphics cards,
  • the module environment on the VSC and compiling one's own programs on the VSC (a brief sketch follows this list),
  • the workload manager SLURM, its functioning and main options,
  • the most important possibilities of data storage on the VSC,
  • individual work steps on the systems of the VSC.
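
As a brief sketch of working with the module environment (the module names below are placeholders; the actual names and versions available on the VSC differ):

    # Hypothetical interactive session on a login node.
    module avail               # list the installed software modules
    module load gcc            # load a compiler (example module name)
    module list                # show the currently loaded modules
    gcc -O2 -o hello hello.c   # compile a small (serial) program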

3) Parallelization with MPI (Message Passing Interface)

  • The main concepts of parallelizing programs on high-performance computing clusters (distributed-memory vs. shared-memory architectures), their main advantages and disadvantages,
  • selection of the most suitable method for parallelization depending on the situation,
  • overview of all aspects of the current MPI standard and their fields of application,
  • the essential concepts of MPI in detail,
  • the implementation of different possibilities of MPI communication,
  • concretely formulated tasks to create parallel MPI programs.

 

Teaching methods

1) Linux and First Steps on the VSC Clusters

Lecture about:

  • components of a high-performance computing cluster and the actual structure of the VSC.

Lecture and practical exercises about:

  • login to VSC and transferring files between workstation and VSC,
  • the most important Linux shell commands and working with a text editor,
  • the most important functionalities of a Linux shell and writing of shell scripts,
  • configuration of one's own working environment by setting environment variables and editing configuration files.

2) Introduction to Working on the VSC Clusters

Lecture about:

  • components of a high-performance computing cluster, the difference between login and compute nodes, and the difference between shared-memory and distributed-memory architectures,
  • structure of the VSC and a brief overview of the available special purpose hardware such as graphics cards.

Lecture and practical exercises about:

  • the module environment on the VSC and compiling one's own programs on the VSC,
  • the workload manager SLURM, its functioning and main options,
  • the most important possibilities of data storage on the VSC,
  • individual work steps on the systems of the VSC.

3) Parallelization with MPI (Message Passing Interface)

Lecture about:

  • the main concepts of parallelizing programs on high-performance computing clusters (distributed-memory vs. shared-memory architectures), their main advantages and disadvantages,
  • selection of the most suitable method for parallelization depending on the situation,
  • overview of all aspects of the current MPI standard and their fields of application,
  • the essential concepts of MPI in detail.

Lecture and practical exercises about:

  • the implementation of different possibilities of MPI communication.

Practical exercises about:

  • concrete tasks after each new topic in which participants create parallel MPI programs, working independently either alone or in teams of two students,
  • discussion among the participants as well as one-to-one with the course instructors.

 

Mode of examination

Immanent (continuous assessment during the course)

Additional information

The course is divided into individual blocks, all of which are delivered online via Zoom. Individual registration for each block should be done via the homepage of the course; registration is strictly necessary, since the access details for the online course are provided only to registered and accepted attendees.

Course dates

  • Fri, 15.10.2021, 09:00 - 13:00, online course via Zoom: Linux and First Steps on the VSC Clusters (participation is required for Linux newbies only)
  • Fri, 22.10.2021, 09:00 - 17:00, online course via Zoom: Introduction to Working on the VSC Clusters (1 day, either 22.10.2021 or 13.01.2022)
  • Tue 23.11.2021 to Fri 26.11.2021, 08:30 - 13:00 each day, online course via Zoom: Parallelization with MPI (Message Passing Interface) (4 morning sessions)
  • Thu, 13.01.2022, 09:00 - 17:00, online course via Zoom: Introduction to Working on the VSC Clusters (1 day, either 22.10.2021 or 13.01.2022)
The course is held in blocked form.

Examination modalities

Performance is assessed through participation in the course blocks and through review of the submitted program examples.

Course registration

Registration modalities

This course will be held in blocks; please see https://vsc.ac.at/training.

Registration for each of the blocks should be done within the corresponding course at https://vsc.ac.at/training.

Contact for this course: training@vsc.ac.at

Curricula

  • Study code ALG (For all Students); semester: not specified

Literature

No lecture notes are available.

Previous knowledge

Previous knowledge for the individual blocks: 

1) Linux and First Steps on the VSC Clusters

  • There is no previous knowledge required for this block.

2) Introduction to Working on the VSC Clusters

  • Students are able to independently apply the skills developed as learning outcomes of block 1 (Linux).

3) Parallelization with MPI (Message Passing Interface)

  • Students are able to independently apply the skills developed as learning outcomes of block 1 (Linux) and can create, compile, and execute a serial program in at least one of the programming languages C/C++, Fortran, or Python (new).

Language

English