Using Intel MPI
The Intel MPI library is a multifabric message-passing library that implements the open-source MPICH specification. The library is designed for maximum performance on clusters that use Intel processors.
To select Intel MPI, use one of the following options with the starccm+ command:
-mpidriver intel
-mpi intel
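For example, the following command runs a parallel simulation with Intel MPI; the process count and simulation file name are placeholders used throughout the examples in this section:
starccm+ -mpi intel -np 16 mySimulation.sim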
To use a compatible local installation instead of the Intel MPI library bundled with Simcenter STAR-CCM+, set the INTEL_MPI_PATH environment variable to the local installation's root directory. The local installation must have the same major version as the bundled version, and at least the same minor version.
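A minimal sketch on Linux, assuming a hypothetical local installation under /opt/intel/oneapi/mpi/2021.10:
export INTEL_MPI_PATH=/opt/intel/oneapi/mpi/2021.10
starccm+ -mpi intel -np 16 mySimulation.sim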
To pass additional options to the underlying mpirun command, use one of the following:
-mpiflags <mpirun options>
-mpidriver intel:<mpirun options>
-mpi intel:<mpirun options>
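As a sketch, the following command passes Intel MPI's -verbose launcher option through to mpirun:
starccm+ -mpi intel -mpiflags "-verbose" -np 16 mySimulation.sim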
Consult the official Intel MPI documentation for further information.
Default Fabric Selection
The Intel MPI default fabric selection of shm:ofi is overridden with the selection ofi, meaning that only ofi fabrics are used. This selection leads to better robustness. This setting can be overridden by specifying the I_MPI_FABRICS environment variable. Consult the Intel MPI documentation for further information.
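As a sketch, the Intel MPI default of shm:ofi could be restored by forwarding the variable to the spawned processes:
starccm+ -mpi intel -mpiflags "-genv I_MPI_FABRICS shm:ofi" -np 16 mySimulation.sim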
Third-Party Library Usage and Selection
Starting with version 2019, Intel MPI relies solely on the Open Fabrics Interfaces (OFI) framework with Libfabric as its underlying communication library. All available communication fabrics are mapped to Libfabric providers.
- Usage on Mellanox InfiniBand Systems on Linux
- The recommended provider on Mellanox InfiniBand systems is mlx. Intel MPI automatically selects this provider on suitable systems, and you can also request it explicitly with -fabric UCX/ucx (see the examples after this list). The mlx provider uses the UCX third-party library. For more information on UCX, see Using UCX.
- Libfabric Selection (All Platforms)
- By default, the Libfabric distributed with Intel MPI is used, except in simulation runs that meet the following requirements:
  - Running on an AMD-based system, and either
  - Running a single-host simulation, or
  - Running a multi-host simulation without an InfiniBand network
- Libfabric Selection (Linux Specifics)
- Note that using the Simcenter STAR-CCM+ distribution of Libfabric on Mellanox InfiniBand systems might lead to performance regressions compared to the Intel MPI Libfabric distribution, because the mlx provider is absent from the upstream Libfabric code (it is included only in the Intel MPI Libfabric distribution).
Besides the Libfabric distributions from Intel MPI or Simcenter STAR-CCM+, you can access a system-wide installed Libfabric distribution by additionally passing the expert command line flag -xsystemlibfabric. If the alternative Libfabric distribution is not installed to a system-wide library path, you must supply its library location to Simcenter STAR-CCM+ using -ldlibpath (see the examples after this list).
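The following sketches show both selections, requesting the UCX fabric explicitly and selecting a system-wide Libfabric; the Libfabric installation path is a placeholder:
starccm+ -mpi intel -fabric ucx -np 16 mySimulation.sim
starccm+ -mpi intel -xsystemlibfabric -ldlibpath /opt/libfabric/lib -np 16 mySimulation.sim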
Specifying Environment Variables
To export environment variables to the spawned processes, use the following options with the starccm+ (starccm+.bat on Windows) command:
-mpiflags "-genv VARIABLE1 value1 -genv VARIABLE2 value2"
Consult the Intel MPI documentation for further information.
Usage on IPv6 Networks
To use Simcenter STAR-CCM+ with Intel MPI on IPv6-only networks, you might need to pass:
-mpiflags "-v6"
Consult the Intel MPI documentation for further information.
Reducing Memory Consumption
On rare occasions when running under Windows, Intel MPI 2021 has shown increased memory consumption compared to Intel MPI 2019, due to a change in shared memory communication controls. Several environment variable settings (such as I_MPI_SHM_CELL_FWD_NUM=0 and I_MPI_SHM_CELL_EXT_NUM_TOTAL=0) can be used to reduce memory consumption, but the results can vary depending on the specific simulation file and hardware system. Consult the Intel MPI documentation for further information. Also note that, according to the vendor, non-default settings can lead to performance degradation, so apply any of these settings with care and only if necessary.
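A sketch for Windows, forwarding both variables through the -genv mechanism described above:
starccm+.bat -mpi intel -mpiflags "-genv I_MPI_SHM_CELL_FWD_NUM 0 -genv I_MPI_SHM_CELL_EXT_NUM_TOTAL 0" -np 16 mySimulation.sim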
Usage on AMD Systems
As per the official hardware requirements, Intel MPI is not supported on AMD-based systems. Consult the Intel MPI system requirements for further information. Using Intel MPI on AMD-based systems might lead to issues such as aborts or hangs. It is recommended to use the default Open MPI on Linux and MS MPI on Windows when running on AMD-based systems. If Intel MPI must be used on AMD-based systems despite this recommendation, consider using the Simcenter STAR-CCM+ distribution of Libfabric in case of issues; this is done automatically under some circumstances. See Libfabric Selection (All Platforms) for further instructions.
Another workaround if the system hangs is to use the PSM3 fabric. To activate it, set the FI_PROVIDER=psm3 environment variable.
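As a sketch, the variable can be forwarded to the spawned processes with the -genv mechanism described above:
starccm+ -mpi intel -mpiflags "-genv FI_PROVIDER psm3" -np 16 mySimulation.sim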
Issues with the PSM3 Provider and Ethernet Link Aggregation
When Ethernet link aggregation (IEEE 802.3ad, bonding) is used on a system, Intel MPI's Libfabric can lead to a crash when the PSM3 provider is selected. To work around this issue, the PSM3 provider can be disabled by setting the environment variable FI_PROVIDER=^psm3. Simcenter STAR-CCM+ applies this workaround automatically when Ethernet link aggregation is detected. To force the usage of PSM3, FI_PROVIDER=psm3 can be used.
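A sketch of disabling the provider explicitly with the same -genv mechanism:
starccm+ -mpi intel -mpiflags "-genv FI_PROVIDER ^psm3" -np 16 mySimulation.sim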