MPI tutorial

Bestowed upon us by the wizards at Argonne, the Message Passing Interface (MPI) is cool stuff; you can read more about it here.

MPI is a framework for distributed computing, commonly used in supercomputers. Each Octeract Engine release ships with its own MPI libraries, so the solver can be used with MPI out of the box.

Note that the -n flag is equivalent to NUM_CORES in an options file.
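
For instance, requesting 8 parallel processes with -n 8 on the command line corresponds to the following options-file entry. The KEY = VALUE layout shown here is an assumption about the options-file format, so check the options-file documentation for the exact syntax:

NUM_CORES = 8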

Running on a Single Machine

The syntax to invoke MPI on a single machine is the following:

octeract-engine -n [number_of_processes] [problem_file]

The solver will then spawn and run n processes in parallel. It is highly recommended to use at most as many processes as there are physical (not logical!) cores in your system.
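
For example, on a machine with 4 physical cores, a sensible invocation would be the following (the problem-file name is just a placeholder):

octeract-engine -n 4 problem.nl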

Running on a Cluster

Octeract Engine should run on any Linux cluster out of the box. The syntax to invoke MPI on a distributed architecture is very similar to single-machine mode:

octeract-engine -n [number_of_processes] [problem_file] -m [hostfile]

The -m (--mpi-hostfile) flag expects a hostfile, which MPI requires: it contains the IP addresses of all the machines that the solver can connect to. A sample hostfile could look like this:

10.200.30.1 : 32
10.200.30.2 : 8
10.200.30.45 : 2
10.200.30.32 : 12

This file contains two columns delimited by a colon. The IP addresses of the available machines are listed in the first column. In the second column, the user can optionally declare the maximum number of cores that may be used on each machine. If the second column is omitted, MPI will use all of a machine's cores by default. In this example, the first machine is allowed to utilise up to 32 cores.
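
For example, if the sample hostfile above is saved as hostfile.txt (the file name is arbitrary), the listed machines expose up to 32 + 8 + 2 + 12 = 54 cores in total, so a run that uses all of them could be launched as follows (the problem-file name is again a placeholder):

octeract-engine -n 54 problem.nl -m hostfile.txt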

Notes

  • The engine uses 1 core by default.
  • On startup, Octeract Engine will spawn exactly as many processes as the user requests. If that number is smaller than the total number of cores declared in the hostfile, some machines will not be used at all (see the example after this list).
  • If you are using Octeract Engine on your university’s cluster, refer to your cluster’s documentation on how to properly use the engine for HPC.
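
To illustrate the second note with the sample hostfile above: the hostfile declares 54 cores in total, so the run below spawns only 12 processes, and depending on how the MPI launcher places them, some of the listed machines may receive no work at all (file names are again placeholders):

octeract-engine -n 12 problem.nl -m hostfile.txt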