Comprehensive MPI Tutorial (1): Mastering the Basics of MPI in 5 Minutes

MPI stands for Message Passing Interface, a widely used standard for parallel computing. It defines how processes exchange data and is commonly used on supercomputers and in distributed systems. MPI appears in large-scale programming projects that demand fast computation or large amounts of memory, where it serves as the communication interface between computing nodes. It is also frequently used for large-scale quantum computer simulations on classical computers.

MPI Header Files

When writing parallel computing programs using MPI, we need to include some MPI header files to use the API functions provided by MPI. Here are some commonly used MPI header files:

  • mpi.h: This is the main MPI header file, containing declarations for all of the MPI API functions. Every MPI program needs to include it.
  • mpif.h: This is the header file for the MPI Fortran interface, which lets Fortran programs use MPI.
  • mpicxx.h: This is the header file for the MPI C++ class interface. Note that the MPI standard's C++ bindings were removed in MPI-3.0, so modern C++ programs typically include mpi.h and use the C API.
  • mpi_cxx_iostream.h: This is a header file used by the MPI C++ interface to provide stream output support.

These header files are usually located in the MPI installation directory. When writing MPI programs, include the header that matches the programming language and the version of MPI being used.

In addition to these header files, there are several implementations of MPI, such as Open MPI and MPICH. These implementations provide the actual MPI library and run on different platforms. When using one of them, install and configure it according to its documentation so that the library is available when compiling and running MPI programs.

In short, all MPI functions are declared in the mpi.h file, so this header must be included at the beginning of the program.

#include <mpi.h>

MPI Functions

This article provides a brief introduction to the main MPI functions; once you have a basic understanding of what each one does, the detailed usage is much easier to pick up. For details, please refer to the documentation on mpich.org.

MPI_Init() must be called before any other MPI function. After it returns, the program can use the various functions MPI provides. MPI_Init() mainly sets up the MPI communicator, which serves as the communication bridge between the ranks (processes).

MPI_Comm_rank() lets each process find out its rank number within the MPI communicator.

MPI_Comm_size() returns the total number of ranks in the MPI communicator, i.e., the size of its process group.

MPI_Send() and MPI_Recv() let different ranks transmit data to one another: MPI_Send() is called on the sending side, and MPI_Recv() on the receiving side.

MPI_Finalize() shuts down the MPI environment and is usually called at the end of the program, once all MPI communication is complete.
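
To tie these functions together, here is a minimal sketch of a complete MPI program; the file name hello.cpp and the value 42 are just illustrative. Every rank prints its rank number and the total number of ranks, and rank 0 sends an integer to rank 1.

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);                // must be called before any other MPI function

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's rank number
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of ranks

    std::printf("Hello from rank %d of %d\n", rank, size);

    if (size >= 2) {
        if (rank == 0) {
            int payload = 42;              // illustrative value
            // send one int to rank 1, using message tag 0
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int received = 0;
            // receive one int from rank 0, matching tag 0
            MPI_Recv(&received, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("Rank 1 received %d from rank 0\n", received);
        }
    }

    MPI_Finalize();                        // no MPI calls are allowed after this
    return 0;
}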

MPI Compilation

MPI programs can be compiled with MPI-related compiler wrappers such as mpicxx or mpic++. These "mpi"-prefixed commands are just wrappers around an underlying compiler (for example, the GNU GCC toolchain): they automatically add the MPI include paths and link the MPI libraries, which makes them convenient to use. If for some reason you cannot use mpicxx or mpic++, you can first run the following command to see the MPI-related compiler flags and library paths configured on the system, and then invoke the GNU GCC toolchain directly.

mpicxx -show

For example, you can compile an MPI program with g++ directly by adding the corresponding include and library paths with -I and -L in the compilation command; mpicxx -show prints exactly these paths for the MPI installation on your system.
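
As an illustration, assuming the hello.cpp sketch above and an MPI installation under /path/to/mpi (a placeholder; substitute the flags and libraries that mpicxx -show actually reports on your system), the g++ command might look like this:

g++ hello.cpp -o hello -I/path/to/mpi/include -L/path/to/mpi/lib -lmpi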

MPI Execution

Use mpirun to execute an MPI program. The -n option specifies how many ranks (processes) to launch. If -n is omitted, mpirun launches a number of ranks based on the available processor cores (slots) on the system. The following example runs the program with 4 ranks.

mpirun -n 4 ./a.out
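
With the hello.cpp sketch above compiled to a.out, the output might look like the following; the exact ordering of the lines varies from run to run, since the ranks print independently.

Hello from rank 0 of 4
Hello from rank 2 of 4
Hello from rank 1 of 4
Hello from rank 3 of 4
Rank 1 received 42 from rank 0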

MPI Practical Projects

If you are interested in using MPI for practical applications, you can refer to the Intel-QS open-source project, a quantum computer simulator that uses MPI and can run on supercomputers or distributed systems.
