Overview

The tier-1 cluster muk is primarily aimed at large parallel computing jobs that require a high-bandwidth, low-latency interconnect, but workloads consisting of many small independent tasks are also accepted.

The main architectural features are:

  • 528 compute nodes, each with two Xeon E5-2670 processors (2.6 GHz, 8 cores per processor, Sandy Bridge architecture) and 64 GiB of memory, for a total memory capacity of 33 TiB and a peak performance of more than 175 TFlops (Linpack result: 152.3 TFlops)
  • FDR InfiniBand interconnect with a fat tree topology (1:2 oversubscription)
  • A storage system with a net capacity of approximately 400 TB and a peak bandwidth of 9.5 GB/s

The cluster appeared for several years in the Top500 list of supercomputer sites:

List edition   June 2012   Nov 2012   June 2013   Nov 2013   June 2014
Ranking              118        163         239        306        430

Compute time on muk is only available upon approval of a project. Information on requesting projects is available in Dutch and in English.

Access restriction

Once your project has been approved, your login on the tier-1 cluster will be enabled. You use the same VSC account (vscXXXXX) as at your home institution and the same $VSC_HOME and $VSC_DATA directories, though the tier-1 does have its own scratch directories.

For security reasons, a direct login to muk from your own computer over the public network is not possible. You have to enter via the VSC network, which is reachable from all Flemish university networks.

ssh login.hpc.uantwerpen.be
ssh login.hpc.ugent.be
ssh login.hpc.kuleuven.be or login2.hpc.kuleuven.be

Make sure that you have connected at least once to the login nodes of your own institution before attempting to access the tier-1.

Once on the VSC network, you can

  • connect to login.muk.gent.vsc to work on the tier-1 cluster muk,
  • connect to gligar01.gligar.gent.vsc or gligar02.gligar.gent.vsc for testing and debugging purposes (e.g., check if a code compiles). There you'll find the same software stack as on the tier-1. (On some machines gligar01.ugent.be and gligar02.ugent.be might also work.)

There are two options to log on to these systems over the VSC network:

  1. You log on to your home cluster and, at the command line, start an ssh session to login.muk.gent.vsc:
    ssh login.muk.gent.vsc
  2. You set up a so-called ssh proxy through your usual VSC login node vsc.login.node (the proxy server in this process) to login.muk.gent.vsc or gligar01.ugent.be (a sketch follows after this list).
    • To set up an ssh proxy with OpenSSH, the standard client on Linux and OS X that is also available on Windows through the Cygwin emulation layer, follow the instructions in the Linux client section.
    • To set up an ssh proxy on Windows using PuTTY, follow the instructions in the Windows client section.
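
As an illustration only (the client sections referred to above remain the authoritative instructions), a proxied connection with OpenSSH could look roughly as follows; login.hpc.ugent.be and vscXXXXX are placeholders for your own VSC login node and account:

    # One-off proxy jump via your usual VSC login node (OpenSSH 7.3 or newer)
    ssh -J vscXXXXX@login.hpc.ugent.be vscXXXXX@login.muk.gent.vsc

    # Equivalent form for older OpenSSH versions without the -J option
    ssh -o ProxyCommand="ssh -W %h:%p vscXXXXX@login.hpc.ugent.be" vscXXXXX@login.muk.gent.vsc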

Resource limits

Disk quota

  • As you are using your $VSC_HOME and $VSC_DATA directories from your home institution, the quota policy from your home institution applies.
  • On the shared (across nodes) scratch volume $VSC_SCRATCH, the standard disk quota is 250 GiB per user. If your project requires more disk space, request it in your project application, as we have to make sure that the mix of allocated projects does not require more disk space than is available.
  • Currently, each institution has a maximum scratch quota of 75 TiB, so please keep $VSC_SCRATCH as empty as possible at all times to enable large jobs (a quick way to check your usage is shown below).
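
To get an idea of how much scratch space you are occupying, standard tools suffice; the commands below are only a sketch, and your institution may also provide a dedicated quota command:

    # Total size of your shared scratch directory
    du -sh $VSC_SCRATCH
    # Largest subdirectories, i.e. the first candidates for cleanup
    du -sh $VSC_SCRATCH/* | sort -h | tail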

Memory

  • Each node has 64 GiB of RAM. However, not all of that memory is available for user applications, as some is needed for the operating system and file system buffers. In practice, roughly 60 GiB is available to run your jobs. This also means that when using all cores, you should not request more than 3.75 GiB of RAM per core (the pmem resource attribute in qsub), or your job will be queued indefinitely since the resource manager will not be able to assign nodes to it.
  • The maximum amount of total virtual memory per node ('vmem') you can request is 83 GiB; see also the output of the pbsmon command. The job submit filter sets a default virtual memory limit if you do not specify one yourself with your job, e.g.
    #PBS -l vmem=83gb
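
As a rough sketch only (the executable name is hypothetical and the exact resource syntax may differ for your jobs), a full-node job that stays within these limits could request memory like this:

    #!/bin/bash
    #PBS -l nodes=2:ppn=16        # two full nodes, 16 cores each
    #PBS -l pmem=3750mb           # at most ~3.75 GiB per core when using all cores
    #PBS -l vmem=83gb             # per-node virtual memory limit (the allowed maximum)
    #PBS -l walltime=12:00:00
    cd $PBS_O_WORKDIR
    ./my_parallel_app input.dat   # hypothetical application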

Look for...