UAntwerpen has two clusters, Leibniz and Hopper. Turing, an older cluster, was retired in early 2017.

Local documentation

Leibniz

Leibniz was installed in the spring of 2017. It is a NEC system consisting of 152 nodes with two 14-core Intel E5-2680v4 Broadwell generation CPUs connected through an EDR InfiniBand network. 144 of these nodes have 128 GB RAM, the other 8 have 256 GB RAM. The nodes do not have a sizeable local disk. The cluster also contains a node for visualisation, 2 nodes for GPU computing (NVIDIA Pascal generation) and one node with an Intel Xeon Phi expansion board.

Access restrictions

Access is available for faculty, students (master's projects under faculty supervision), and researchers of the AUHA. The cluster is integrated in the VSC network and runs the standard VSC software setup. It is also available to all VSC-users, though we appreciate that you contact the UAntwerpen support team so that we know why you want to use the cluster.

Jobs can have a maximal execution wall time of 3 days (72 hours).
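For reference, a minimal job script requesting this maximum wall time could look as follows. This is a sketch assuming a Torque/PBS-style scheduler (the syntax that matches the node properties used further down this page); my_program is a placeholder for your own executable.

    #!/bin/bash
    #PBS -N example_job           # job name
    #PBS -l walltime=72:00:00     # maximum allowed wall time: 3 days (72 hours)
    #PBS -l nodes=1:ppn=28        # one full Leibniz node (28 cores)
    cd $PBS_O_WORKDIR             # start in the directory the job was submitted from
    ./my_program                  # placeholder for your own executable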

Hardware details

  • Interactive work:
    • 2 login nodes. These nodes have a very similar architecture to the compute nodes.
    • 1 visualisation node with a NVIDIA P5000 GPU. This node is meant to be used for interactive visualizations (specific instructions).
  • Compute nodes:
    • 144 nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 128 GB RAM.
    • 8 nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz) and 256 GB RAM.
    • 2 nodes with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and two NVIDIA Tesla P100 GPUs with 16 GB HBM2 memory per GPU (delivering a peak performance of 4.7 TFlops in double precision per GPU) (specific instructions).
    • 1 node with 2 14-core Intel E5-2680v4 CPUs (Broadwell generation, 2.4 GHz), 128 GB RAM and Intel Xeon Phi 7220P PCIe card with 16 GB of RAM (specific instructions).
    • All nodes are connected through an EDR InfiniBand network.
    • All compute nodes contain only a small SSD drive. This implies that swapping is not possible and that users should preferably also use the main storage for temporary files (a minimal job-script sketch follows after this list).
  • Storage: The cluster relies on the storage provided by Hopper (a 100 TB DDN SFA7700 system with 4 storage servers).
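As an illustration of the point about the small node-local SSD, the fragment below creates a job-specific work directory on the shared scratch storage instead. It is a minimal sketch assuming the standard VSC environment variable $VSC_SCRATCH and the Torque/PBS variable $PBS_JOBID; adapt it to your own workflow.

    # Inside a job script: keep temporary files on the main (shared) storage
    # instead of the small node-local SSD of the Leibniz compute nodes.
    WORKDIR=$VSC_SCRATCH/$PBS_JOBID   # job-specific directory on shared scratch
    mkdir -p $WORKDIR
    cd $WORKDIR
    # ... run your application here; write temporary files into $WORKDIR ...
    rm -rf $WORKDIR                   # clean up after the job finishes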

Login infrastructure

Direct login is possible to both login nodes and to the visualization node.

  • From outside the VSC network: Use the external interface names. Currently, you need to be on the UAntwerp network or the network of an associated institution to access the external interfaces; otherwise a VPN connection to the UAntwerp network is needed.
  • From inside the VSC network (e.g., another VSC cluster): Use the internal interface names.
                    External interface                Internal interface
Login generic       login-leibniz.hpc.uantwerpen.be
Login nodes         login1-leibniz.hpc.uantwerpen.be  ln1.leibniz.antwerpen.vsc
                    login2-leibniz.hpc.uantwerpen.be  ln2.leibniz.antwerpen.vsc
Visualisation node  viz1-leibniz.hpc.uantwerpen.be    viz1.leibniz.antwerpen.vsc
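As an example, logging in with SSH could look as follows; vsc20XYZ stands for your own VSC account name.

    # From outside the VSC network (UAntwerp network or VPN required):
    ssh vsc20XYZ@login-leibniz.hpc.uantwerpen.be

    # From inside the VSC network, e.g. from a login node of another VSC cluster:
    ssh vsc20XYZ@ln1.leibniz.antwerpen.vsc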

Storage organisation

See the section on the storage organisation of Hopper below.

Characteristics of the compute nodes

Since Leibniz is currently a homogeneous system with respect to CPU type and interconnect, it is not necessary to specify the corresponding properties (see also the page on specifying resources, output files and notifications).

However, to make it possible to write job scripts that can be used on both Hopper and Leibniz (or other VSC clusters) and to prepare for future extensions of the cluster, the following properties are defined:

property   explanation
broadwell  Only use Intel processors from the Broadwell family (E5-XXXv4). Not needed at the moment, as this is the only CPU type.
ib         Use the InfiniBand interconnect. Not needed at the moment, as all nodes are connected to the InfiniBand interconnect.
mem128     Use nodes with 128 GB RAM (roughly 112 GB available). This is the majority of the nodes on Leibniz.
mem256     Use nodes with 256 GB RAM (roughly 240 GB available). This property is useful if you submit a batch of jobs that need more than 4 GB of RAM per core but do not use all cores of a node, and you do not want to bundle the jobs yourself with a tool such as Worker: it helps the scheduler to place those jobs on nodes that it can then fill further with your other jobs.

These characteristics map to the following nodes on Leibniz:

Type of node         CPU type        Interconnect  # nodes  # physical cores  # logical cores  installed mem  avail mem   local disc
                                                            (per node)        (per node)       (per node)     (per node)
broadwell:ib:mem128  Xeon E5-2680v4  IB-EDR        144      28                28               128 GB         112 GB      ~25 GB
broadwell:ib:mem256  Xeon E5-2680v4  IB-EDR        8        28                28               256 GB         240 GB      ~25 GB
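To request nodes with a given property, add it to the node specification of the resource request. This is again a sketch assuming Torque/PBS-style syntax; my_program is a placeholder for your own executable.

    #!/bin/bash
    #PBS -l walltime=24:00:00
    #PBS -l nodes=1:ppn=28:broadwell:mem256   # one full 256 GB Broadwell node
    cd $PBS_O_WORKDIR
    ./my_program                              # placeholder for your own executable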

Hopper

Hopper is the older of the two UAntwerpen clusters. It is an HP system consisting of 168 nodes with two 10-core Intel E5-2680v2 Ivy Bridge generation CPUs connected through an FDR10 InfiniBand network. 144 nodes have 64 GB of RAM, while 24 nodes have 256 GB. The system has been reconfigured so that its software setup is essentially the same as on Leibniz.

Access restrictions

Access is available for faculty, students (master's projects under faculty supervision), and researchers of the AUHA. The cluster is integrated in the VSC network and runs the standard VSC software setup. It is also available to all VSC-users, though we appreciate that you contact the UAntwerpen support team so that we know why you want to use the cluster.

Jobs can have a maximal execution wall time of 3 days (72 hours).
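The wall time limit can also be given on the command line at submission time. A sketch assuming the Torque/PBS-style qsub command; job.sh is a placeholder for your own job script.

    qsub -l walltime=72:00:00 job.sh   # request the maximum wall time of 72 hours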

Hardware details

  • 4 login nodes accessible through the generic name login.hpc.uantwerpen.be.
    • Use this hostname if the documentation refers to vsc.login.node and you want to connect to one of these login nodes.
  • Compute nodes
    • 144 (96 installed in the first round, 48 in the first expansion) nodes with 2 10-core Intel E5-2680v2 CPUs (Ivy Bridge generation) with 64 GB of RAM.
    • 24 nodes with 2 10-core Intel E5-2680v2 CPUs (Ivy Bridge generation) with 256 GB of RAM.
    • All nodes are connected through an InfiniBand FDR10 interconnect.
  • Storage
    • Storage is provided through a 100 TB DDN SFA7700 disk array with 4 storage servers.

Login infrastructure

Direct login is possible to all login nodes.

  • From outside the VSC network: Use the external interface names. Currently, you need to be on the UAntwerp network or the network of an associated institution to access the external interfaces; otherwise a VPN connection to the UAntwerp network is needed.
  • From inside the VSC network (e.g., another VSC cluster): Use the internal interface names.
               External interface               Internal interface
Login generic  login.hpc.uantwerpen.be
               login-hopper.hpc.uantwerpen.be
Login nodes    login1-hopper.hpc.uantwerpen.be  ln01.hopper.antwerpen.vsc
               login2-hopper.hpc.uantwerpen.be  ln02.hopper.antwerpen.vsc
               login3-hopper.hpc.uantwerpen.be  ln03.hopper.antwerpen.vsc
               login4-hopper.hpc.uantwerpen.be  ln04.hopper.antwerpen.vsc
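For example (vsc20XYZ stands for your own VSC account name):

    # From outside the VSC network (UAntwerp network or VPN required):
    ssh vsc20XYZ@login.hpc.uantwerpen.be

    # From another VSC cluster, using the internal interface of a specific login node:
    ssh vsc20XYZ@ln01.hopper.antwerpen.vsc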

Storage organisation

The storage is organised according to the VSC storage guidelines.

Name                              Variable           Type  Access   Backup  Default quota
/user/antwerpen/20X/vsc20XYZ      $VSC_HOME          GPFS  VSC      NO      3 GB
/data/antwerpen/20X/vsc20XYZ      $VSC_DATA          GPFS  VSC      NO      25 GB
/scratch/antwerpen/20X/vsc20XYZ   $VSC_SCRATCH       GPFS  Hopper,  NO      25 GB
                                  $VSC_SCRATCH_SITE        Leibniz
/small/antwerpen/20X/vsc20XYZ (*)                    GPFS  Hopper,  NO      0 GB
                                                           Leibniz
/tmp                              $VSC_SCRATCH_NODE  ext4  Node     NO      250 GB (Hopper)

(*) /small is a file system optimised for storing small files of types that do not belong in $VSC_HOME. The file systems pointed to by $VSC_DATA and $VSC_SCRATCH use a large fragment size (128 kB) for optimal performance on larger files; since each file occupies at least one fragment, small files waste a lot of space on those file systems. The /small file system is available on request.
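The variables from the table above are set automatically in your environment, so scripts do not need to hard-code the paths. A quick check (the paths shown in the comments follow the pattern from the table):

    echo $VSC_HOME      # /user/antwerpen/20X/vsc20XYZ
    echo $VSC_DATA      # /data/antwerpen/20X/vsc20XYZ
    echo $VSC_SCRATCH   # /scratch/antwerpen/20X/vsc20XYZ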

For users from other universities, the quota on $VSC_HOME and $VSC_DATA will be determined by the local policy of your home institution as these file systems are mounted from there. The pathnames will be similar with trivial modifications based on your home institution and VSC account number.

Characteristics of the compute nodes

Since Hopper is currently a homogeneous system with respect to CPU type and interconnect, it is not necessary to specify these properties (see also the page on specifying resources, output files and notifications).

However, to make it possible to write job scripts that can be used on both Hopper and Leibniz (or other VSC clusters) and to prepare for future extensions of the cluster, the following properties are defined:

property   explanation
ivybridge  Only use Intel processors from the Ivy Bridge family (E5-XXXv2). Not needed at the moment, as this is the only CPU type.
ib         Use the InfiniBand interconnect. Only needed for compatibility with old Turing job scripts, as all nodes have InfiniBand.
mem64      Use nodes with 64 GB RAM (58 GB available).
mem256     Use nodes with 256 GB RAM (250 GB available).

These characteristics map to the following nodes on Hopper:

Type of node         CPU type        Interconnect  # nodes  # physical cores  # logical cores  installed mem  avail mem   local disc
                                                            (per node)        (per node)       (per node)     (per node)
ivybridge:ib:mem64   Xeon E5-2680v2  IB-FDR10      144      20                20               64 GB          56 GB       ~360 GB
ivybridge:ib:mem256  Xeon E5-2680v2  IB-FDR10      24       20                20               256 GB         248 GB      ~360 GB
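As on Leibniz, these properties can be added to the node specification of a job, e.g. to request one of the 256 GB nodes. A sketch assuming the Torque/PBS-style syntax used elsewhere on this page:

    #PBS -l nodes=1:ppn=20:ivybridge:mem256   # one full 256 GB Ivy Bridge node on Hopper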

Turing

In July 2009, UAntwerpen bought a 768-core cluster from HP (L5420 CPUs, 16 GB RAM per node), which was installed and configured in December 2009. In December 2010, the cluster was extended with 768 cores (L5640 CPUs, 24 GB RAM per node). In September 2011, another 96 cores (L5640 CPUs, 24 GB RAM per node) were added. Turing was retired in January 2017.
