BremHLR Course "Parallel programming with MPI and OpenMP"
- University of Bremen, Room MZH 6200
- This course gives an introduction to parallel programming. The main focus is on the parallel
programming models MPI and OpenMP. Exercises will be an essential part of the workshop.
- The course is given by
Dr. Hinnerk Stüben (Regionales Rechenzentrum der Universität Hamburg)
and Dr. Lars Nerger (BremHLR and Alfred Wegener Institute Bremerhaven).
- This course is open to all interested students and members of the Alfred Wegener Institute for Polar
and Marine Research, the University of Bremen, Jacobs University Bremen, Hochschule Bremerhaven, as well
as associated institutions. In addition, this year we accept registrations from HLRN and NHR users who are not affiliated with an institution in Bremen.
- Solid fundamentals in Unix, C and/or Fortran will be essential!
- For registration please send an e-mail to
- Deadline: September 25, 2023
- No registration fee!
- For the hands-on exercises, we ask participants to bring their own notebook computers. These need a compiler (e.g. gcc) and an MPI library (e.g. OpenMPI) installed so that parallel programs can be compiled. For Linux, there are packages providing this software; for Windows, we recommend installing it via Cygwin (www.cygwin.com).
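To check the setup in advance, it should be possible to compile and run a minimal MPI program such as the following sketch (file and program names are arbitrary):

```c
/* hello_mpi.c - minimal check that compiler and MPI library work.
 * Compile: mpicc hello_mpi.c -o hello_mpi
 * Run:     mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this process        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                         /* shut down MPI             */
    return 0;
}
```

If this prints one line per process, the installation is ready for the exercises.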
Information for non-local participants
- Hotels close to the University are the Hotel 7 Things and the Atlantic Hotel Universum Bremen.
- The University can easily be reached from Bremen main station (Hauptbahnhof) by tram line 6 in the direction "Universität". Please exit at the stop "Bremen Universität/Zentralbereich".
- The workshop is held in the building 'Mehrzweckhochhaus (MZH)', centrally located on the campus (map).
- Monday, 10:00 - 16:30
- Thinking Parallel (I) - Computer architectures and programming models
- Laplace equation (I) - A realistic application example
- Checking computer setup
- Programming - A parallel "Hello World" program
- MPI (I) - basic functions, communicators, messages, basic data types
- Programming - send and recv
- MPI (II)
- - Point-to-point communication (send and receive modes)
- - Collective communication
- Programming - Ring I
- Tuesday, 9:15 - 16:30
- MPI (III)
- - Derived data types
- - Reduction operations
- Programming - Ring II
- Laplace equation (II) - Laplace example with MPI
- Wednesday, 9:15 - 16:30
- Thinking Parallel (II) - Performance considerations
- MPI (IV) - Virtual topologies and communicator splitting
- Programming - Advanced ring communication
- MPI (V) - SHMEM and one-sided communication
- Programming - get and put
- Thursday, 9:15 - 16:30
- Thinking Parallel (III)
- - Characterization of parallelism
- - Data dependence analysis
- OpenMP (I) with exercises
- - Concepts
- - Parallelizing loops
- Laplace equation (III) - Implementation with OpenMP
- Programming project (II) - OpenMP part
- OpenMP (II) with exercises
- - Synchronization
- - Loop scheduling
- - False sharing
- Friday, 9:15 - 16:00
- Parallel programming bugs
- MPI (VI) - Parallel I/O with MPI-IO
- Programming - Parallel output with MPI-IO
- Hybrid parallelization - joint use of MPI and OpenMP
- Programming - Hybrid "hello world" program
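A hybrid "hello world" of the kind named above might be sketched as follows (an assumed variant using MPI_Init_thread plus an OpenMP parallel region; details in the course exercise may differ):

```c
/* hybrid_hello.c - sketch: MPI processes each spawning OpenMP threads.
 * Compile: mpicc -fopenmp hybrid_hello.c -o hybrid_hello
 * Run:     mpirun -np 2 ./hybrid_hello
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, provided;

    /* request thread support: FUNNELED means only the master
       thread of each process makes MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        printf("Hello from thread %d of %d on MPI rank %d\n",
               omp_get_thread_num(), omp_get_num_threads(), rank);
    }

    MPI_Finalize();
    return 0;
}
```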