Programming Models @ BSC

Boosting parallel computing research since 1989

OmpSs tutorial at Supercomputing 2012

- Written by Xavier Teruel


Asynchronous Hybrid and Heterogeneous Parallel Programming with MPI/OmpSs for Exascale Systems

Nov 12, 2012

  • TIME: 1:30PM - 5:00PM
  • PRESENTERS: Jesus Labarta, Xavier Martorell, Christoph Niethammer and Costas Bekas.

    ABSTRACT

    Due to its asynchronous nature and look-ahead capabilities, MPI/OmpSs is a promising programming model for future exascale systems, with the potential to exploit unprecedented amounts of parallelism while coping with memory latency, network latency and load imbalance. Many large-scale applications are already seeing very positive results from their ports to MPI/OmpSs (see the EU projects Montblanc and TEXT). We will first cover the basic concepts of the programming model. OmpSs can be seen as an extension of the OpenMP model; unlike in OpenMP, however, task dependencies are determined at runtime from the directionality of data arguments. The OmpSs runtime supports asynchronous execution of tasks on heterogeneous systems such as SMPs, GPUs and clusters thereof. The integration of OmpSs with MPI eases the migration of existing MPI applications and automatically improves their performance by overlapping computation with communication between tasks on remote nodes. The tutorial will also cover the constellation of development and performance tools available for the MPI/OmpSs programming model: the methodology for identifying OmpSs tasks, the Ayudame/Temanejo debugging toolset, and the Paraver performance analysis tools. Experiences from the parallelization of real applications using MPI/OmpSs will also be presented, and the tutorial will include a demo.
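
    To make the dependency mechanism concrete, here is a minimal sketch (not taken from the tutorial material) of OmpSs-style task annotations in C. The variable names and sizes are illustrative; the in/out/inout clauses declare the directionality of each task's data, and the runtime uses them to build the task graph and execute the tasks asynchronously in dependence order. Exact clause syntax may vary between OmpSs versions.

        #include <stdio.h>

        #define N 1024

        int main(void)
        {
            static double a[N], b[N];
            double sum = 0.0;

            /* Producer task: declares that it writes the whole array 'a'. */
            #pragma omp task out(a)
            {
                for (int i = 0; i < N; i++) a[i] = 0.5 * i;
            }

            /* Consumer task: reads 'a' and writes 'b'. The runtime detects the
               out -> in dependence on 'a' and orders the two tasks automatically;
               no explicit synchronization is needed. */
            #pragma omp task in(a) out(b)
            {
                for (int i = 0; i < N; i++) b[i] = 2.0 * a[i];
            }

            /* Reduction task: reads 'b' and updates the scalar 'sum'. */
            #pragma omp task in(b) inout(sum)
            {
                for (int i = 0; i < N; i++) sum += b[i];
            }

            /* Wait for all outstanding tasks before using their results. */
            #pragma omp taskwait

            printf("sum = %f\n", sum);
            return 0;
        }

    In an MPI/OmpSs code the same idea applies to communication: MPI send and receive calls can be wrapped in tasks whose buffers carry directionality annotations, which is what lets the runtime overlap communication with computation as described in the abstract. OmpSs programs are compiled with the Mercurium source-to-source compiler and run on the Nanos++ runtime.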