The intel toolchain consists almost entirely of software components developed by Intel. When building third-party software, or developing your own, load the module for the toolchain:

$ module load intel/<version>

where <version> should be replaced by the version to be used, e.g., 2016b. See the documentation on the software module system for more details.
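
To see which versions of the intel toolchain are installed on a particular system, you can list the matching modules with the module system's avail command:

$ module avail intel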

Starting with the 2014b toolchain, the GNU compilers are included in this toolchain as well, both because the Intel compilers rely on some of the GNU libraries and because it is possible (though some care is needed) to link code generated with the Intel compilers with code compiled with the GNU compilers.

Compilers: Intel and GNU

Three compilers are available:

  • C: icc
  • C++: icpc
  • Fortran: ifort

Compatible versions of the GNU C (gcc), C++ (g++) and Fortran (gfortran) compilers are also provided.

For example, to compile/link a Fortran program fluid.f90 to an executable fluid with architecture-specific optimization, use:

$ ifort -O2 -xhost -o fluid fluid.f90
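
For comparison, the corresponding command for the GNU Fortran compiler would look as follows (a sketch; -march=native is gcc's counterpart of the Intel-specific -xhost switch):

$ gfortran -O2 -march=native -o fluid fluid.f90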

For documentation on available compiler options, we refer to the links to the Intel documentation at the bottom of this page. Do not forget to load the toolchain module first!

Intel OpenMP

The compiler switch to compile/link OpenMP C/C++ or Fortran code is -qopenmp in recent versions of the compiler (toolchain intel/2015a and later) or -openmp in older versions. For example, to compile/link an OpenMP C program scatter.c to an executable scatter with architecture-specific optimization, use:

$ icc -qopenmp -O2 -xhost -o scatter scatter.c

Remember to request as many cores per node as the number of threads the executable is supposed to run. This can be done using the ppn resource, e.g., -l nodes=1:ppn=10 for an executable that should be run with 10 OpenMP threads. The number of threads should not exceed the number of cores on a compute node.
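
As an illustration, a job script for the scatter example could look as follows (a sketch: the resource request and the thread count of 10 are example values, and OMP_NUM_THREADS is set explicitly to match the ppn request):

#!/bin/bash -l
#PBS -l nodes=1:ppn=10
module load intel/<version>
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=10
./scatter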

Communication library: Intel MPI

For the intel toolchain, impi, i.e., Intel MPI, is used as the communication library. To compile/link MPI programs, wrappers are supplied that automatically use the correct headers and libraries. These wrappers are:

  • C: mpiicc
  • C++: mpiicpc
  • Fortran: mpiifort

Note that the names differ from those of other MPI implementations. The compiler wrappers take the same options as the corresponding compilers.
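
To see exactly which underlying compiler and options a wrapper uses, you can ask it to print the full command line without compiling anything, e.g.:

$ mpiicc -show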

Using the Intel MPI compilers

For example, to compile/link a C program thermo.c to an executable thermodynamics with architecture-specific optimization, use:

$ mpiicc -O2 -xhost -o thermodynamics thermo.c

For further documentation, we refer to the links to the Intel documentation at the bottom of this page. Do not forget to load the toolchain module first.

Running an MPI program with Intel MPI

Note that an MPI program must be run with the exact same version of the toolchain as it was originally built with. The listing below shows a PBS job script thermodynamics.pbs that runs the thermodynamics executable.

#!/bin/bash -l
# load the same toolchain version that was used to build the executable
module load intel/<version>
# change to the directory the job was submitted from
cd $PBS_O_WORKDIR
# run the MPI program with the number of processes assigned by the resource manager
mpirun -np $PBS_NP ./thermodynamics

The resource manager passes the number of processes to the job script through the environment variable $PBS_NP. With a recent implementation of Intel MPI you can even omit -np $PBS_NP, since Intel MPI recognizes the Torque resource manager and obtains the number of cores from it when the number is not specified.
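
The script can then be submitted as usual, e.g., to run on two nodes with 16 cores each (example values that should match the hardware of your cluster):

$ qsub -l nodes=2:ppn=16 thermodynamics.pbs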

Intel mathematical libraries

The Intel Math Kernel Library (MKL) is a comprehensive collection of highly optimized libraries that form the core of many scientific HPC codes. Among other functionality, it offers:

  • BLAS (Basic Linear Algebra Subprograms), with extensions for sparse matrices
  • LAPACK (linear algebra package) and ScaLAPACK (the distributed-memory version)
  • FFT routines, including routines compatible with the FFTW2 and FFTW3 libraries (the Fastest Fourier Transform in the West)
  • Various vector functions and statistical functions that are optimized for the vector instruction sets of all recent Intel processor families

For further documentation, we refer to the links to the Intel documentation at the bottom of this page.

There are two ways to link the MKL library:

  • If you use icc, icpc or ifort to link your code, you can use the -mkl compiler option (see the example after this list):
    • -mkl=parallel or -mkl: Link the multi-threaded version of the library.
    • -mkl=sequential: Link the single-threaded version of the library.
    • -mkl=cluster: Link the cluster-specific and sequential library, i.e., ScaLAPACK will be included, but one process per core is assumed (so no hybrid MPI/multi-threaded approach).
    The Fortran 95 interface library for LAPACK is not automatically included though. You'll have to specify that library separately. You can get the value from the MKL Link Line Advisor, see also the next item.
  • Or you can specify all libraries explicitly. To do this, it is strongly recommended to use Intel's MKL Link Line Advisor, which will also tell you how to link the MKL library with code generated with the GNU and PGI compilers.
    Note: On most VSC systems, the variable MKLROOT has a different value from the one assumed in the Intel documentation. Wherever you see $(MKLROOT) you may have to replace it with $(MKLROOT)/mkl.
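
For example, to compile/link a Fortran program against the multi-threaded MKL using the first method (a sketch; solver.f90 is a hypothetical source file calling BLAS or LAPACK routines):

$ ifort -O2 -xhost -mkl=parallel -o solver solver.f90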

MKL also offers a very fast streaming pseudorandom number generator; see the documentation for details.

Intel toolchain version numbers

Toolchain   icc/icpc/ifort    Intel MPI    Intel MKL     GCC     binutils
2018a       2018.1.163        2018.1.163   2018.1.163    6.4.0   2.28
2017b       2017.4.196        2017.3.196   2017.3.196    6.4.0   2.28
2017a       2017.1.132        2017.1.132   2017.1.132    6.3.0   2.27
2016b       16.0.3 20160425   5.1.3.181    11.3.3.210    4.9.4   2.26
2016a       16.0.1 20151021   5.1.2.150    11.3.1.150    4.9.3   2.25
2015b       15.0.3 20150407   5.0.3.048    11.2.3.187    4.9.3   2.25
2015a       15.0.1 20141023   5.0.2.044    11.2.1.133    4.9.2   /
2014b       13.1.3 20130617   4.1.3.049    11.1.2.144    4.8.3   /
2014a       13.1.3 20130607   4.1.3.045    11.1.1.106    /       /

Further information on Intel tools
