The main objective of the Programming Models group is to investigate programming paradigms for productive parallel programming, and to implement them through intelligent runtime systems that effectively extract performance from the target architecture (from multicore and SMT processors to shared- and distributed-memory systems and small- and large-scale clusters, including both homogeneous and heterogeneous systems with accelerators such as GPUs).
We currently organize our work around the design of OmpSs, a set of extensions that provide support for asynchronous tasks and heterogeneity. The extensions use OpenMP as their base language and interoperate with MPI and CUDA (OpenCL and OpenACC interoperability is in progress). The programming model is built on top of two components:
- The Mercurium source-to-source compiler, which transforms the high-level directives into a parallelized version of the application.
- The Nanos++ runtime library, which provides the parallel services to manage all the parallelism in the user application (task creation, synchronization, and data movement) and supports resource heterogeneity.
Another objective of our research line is the efficient use of computational resources in parallel applications based on shared-memory programming models. For this purpose we develop the DLB (Dynamic Load Balancing) library, a tool, transparent to the user, that dynamically reacts to application imbalance by modifying the number of resources assigned to each process at any given time.
OmpSs-2 is a new development that aims to push research in task-based models further by exploring cutting-edge ideas and approaches.
If you have any questions or suggestions you can send an e-mail to pm-tools [at] bsc.es. You can also join the pm-tools-users mailing list by sending an e-mail to pm-tools-users-join [at] bsc.es.