OpenMP Programming Model
OpenMP is an Application Program Interface (API), jointly defined by a group of major computer hardware and software vendors. OpenMP provides a portable, scalable model for developers of shared memory parallel applications. The API supports C/C++ and Fortran on multiple architectures, including UNIX and Windows NT. This tutorial covers most of the major features of OpenMP, including its various constructs and directives for specifying parallel regions, work sharing, synchronisation and data environment. Runtime library functions and environment variables are also covered. This tutorial includes both C and Fortran example codes and an exercise.
OpenMP is an Application Program Interface (API) that may be used to explicitly direct multi-threaded, shared memory parallelism. It comprises three primary components:
- Compiler Directives
- Runtime Library Routines
- Environment Variables
The API is specified for C/C++ and Fortran and has been implemented on most major platforms, including Unix/Linux and Windows NT, making it portable. It is standardised: jointly defined and endorsed by a group of major computer hardware and software vendors, and it is expected to become an ANSI standard.
What does OpenMP stand for?
Short answer: Open Multi-Processing
Long answer: Open specifications for Multi-Processing via collaborative work between interested parties from the hardware and software industry, government and academia.
OpenMP is not meant (by itself) for distributed memory parallel systems, and it is not necessarily implemented identically by all vendors. It is not guaranteed to make the most efficient use of shared memory, and implementations are not required to check for data dependencies, data conflicts, race conditions or deadlocks, nor for code sequences that cause a program to be classified as non-conforming. OpenMP is also not meant to cover compiler-generated automatic parallelisation, or directives that assist the compiler with it, and the design does not guarantee that input or output to the same file is synchronised when executed in parallel. The programmer is responsible for the synchronisation.
OpenMP Programming Model
OpenMP is based upon the existence of multiple threads in the shared memory programming paradigm. A shared memory process consists of multiple threads. OpenMP is an explicit (not automatic) programming model, offering the programmer full control over the parallel processing. OpenMP uses the fork-join model of parallel execution. All OpenMP programs begin as a single process: the master thread. The master thread runs sequentially until the first parallel region construct is encountered.
FORK: the master thread then creates a team of parallel threads. The statements in the program that are enclosed by the parallel region construct are then executed in parallel amongst the various team threads.
JOIN: When the team threads complete, they synchronise and terminate, leaving only the master thread.
Most OpenMP parallelism is specified through the use of compiler directives which are embedded in C/C++ or Fortran source code. Nested parallelism support: the API provides for the placement of parallel constructs inside other parallel constructs. Implementations may or may not support this feature.
Also, the API provides for dynamically altering the number of threads which may be used to execute different parallel regions. Implementations may or may not support this feature.
OpenMP specifies nothing about parallel I/O. This is particularly important if multiple threads attempt to write/read from the same file. If every thread conducts I/O to a different file, the issue is not significant. It is entirely up to the programmer to ensure that I/O is conducted correctly within the context of a multi-threaded program.
OpenMP provides what the specification calls a "relaxed-consistency" and "temporary" view of thread memory. In other words, threads can "cache" their data and are not required to maintain exact consistency with main memory at all times. When it is critical that all threads view a shared variable identically, the programmer is responsible for ensuring that the variable is FLUSHed by all threads as needed.