JUNE 18–22, 2017
FRANKFURT AM MAIN, GERMANY

Session Details

 
Name: Tutorial 13: MPI+X - Hybrid Programming on Modern Compute Clusters with Multicore Processors & Accelerators
 
Time: Sunday, June 18, 2017
02:00 pm - 06:00 pm
 
Room:   Extrakt  
 
Breaks: 04:00 pm - 04:30 pm Coffee Break
 
Presenters:   Georg Hager, RRZE
  Rolf Rabenseifner, HLRS
 
Abstract:   Most HPC systems are clusters of shared memory nodes. Such SMP nodes range from small multi-core CPUs to large many-core CPUs. Parallel programming may combine distributed memory parallelization over the node interconnect (e.g., with MPI) with shared memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory). This tutorial analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket multi-core systems in highly parallel environments are given special consideration. MPI-3.0 introduced a new shared memory programming interface that can be combined with inter-node MPI communication. It can be used for direct neighbor accesses similar to OpenMP, or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and pure MPI. The tutorial also includes a discussion of OpenMP support for accelerators. Benchmark results are presented for modern platforms such as Intel Xeon Phi and Cray XC. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming. The various programming schemes and their technical and performance implications are compared. Tools for hybrid programming, such as thread/process placement support and performance analysis, are presented in a "how-to" section.
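
For orientation only (not part of the official session description), the MPI-3.0 shared memory interface mentioned in the abstract can be sketched roughly as follows in C: ranks on the same node form a node-local communicator, allocate a shared window with MPI_Win_allocate_shared, and query a neighbor's slice with MPI_Win_shared_query for direct load/store access. The array size and element type below are illustrative assumptions, not material from the tutorial.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Split COMM_WORLD into one communicator per shared-memory node */
    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);

    int noderank;
    MPI_Comm_rank(nodecomm, &noderank);

    /* Each rank allocates its slice of a node-wide shared-memory window */
    const MPI_Aint nelems = 1000;  /* illustrative size */
    double *baseptr;
    MPI_Win win;
    MPI_Win_allocate_shared(nelems * sizeof(double), sizeof(double),
                            MPI_INFO_NULL, nodecomm, &baseptr, &win);

    /* Query the base address of the left neighbor's slice so it can be
       read/written directly (e.g., neighbor access or halo copy) */
    MPI_Aint size;
    int disp_unit;
    double *left = NULL;
    if (noderank > 0)
        MPI_Win_shared_query(win, noderank - 1, &size, &disp_unit, &left);

    /* ... direct neighbor accesses or halo copies would go here ... */

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}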

Content Level 
25% beginner, 50% intermediate, 25% advanced  

Targeted Audience 
People who are in charge of developing efficient parallel software on clusters of shared memory nodes.

Audience Prerequisites 
Some knowledge about parallel programming with MPI and OpenMP.