
Mahuika Cluster

Mahuika is NeSI's High Performance Computing (HPC) cluster.


Hardware

Login nodes: 72 cores in 2 × Broadwell (E5-2695v4, 2.1 GHz, dual socket, 18 cores per socket) nodes

Compute nodes: 8,136 cores in 226 × Broadwell (E5-2695v4, 2.1 GHz, dual socket, 18 cores per socket) nodes; 7,552 cores in 64 × HPE Apollo 2000 XL225n (AMD EPYC Milan 7713) nodes, forming the Milan partition

Compute nodes (reserved for NeSI Cloud): 288 cores in 8 × Broadwell (E5-2695v4, 2.1 GHz, dual socket, 18 cores per socket) nodes
GPUs:
- 9 NVIDIA Tesla P100 PCIe 12 GB cards (1 node with 1 GPU, 4 nodes with 2 GPUs)
- 7 NVIDIA A100 PCIe 40 GB cards (3 nodes with 1 GPU, 2 nodes with 2 GPUs)
- 7 A100-1g.5gb instances (1 NVIDIA A100 PCIe 40 GB card divided into 7 MIG slices with 5 GB of memory each)
- 4 NVIDIA HGX A100 boards (4 GPUs per board with 80 GB of memory each, 16 A100 GPUs in total)
- 4 NVIDIA A40 cards with 48 GB of memory each (2 nodes with 2 GPUs, with capacity for 6 additional GPUs already in place)
Hyperthreading: Enabled (accordingly, Slurm sees ~31,500 logical CPUs)

Theoretical peak performance: 308.6 TFLOPS

Memory capacity per compute node: 128 GB

Memory capacity per login (build) node: 512 GB

Total system memory: 84.0 TB

Interconnect: FDR (54.5 Gb/s) InfiniBand to an EDR (100 Gb/s) core fabric; 3.97:1 fat-tree topology

Workload manager: Slurm (multi-cluster)

Operating system: CentOS 7.4 on the Broadwell nodes; Rocky Linux 8.5 on the Milan partition
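
Since all work on Mahuika is scheduled through Slurm, the following is a minimal sketch of a batch script showing how the specifications above translate into a job request. The account code, job name, and executable are placeholders, and the partition and GPU type names (`milan`, `P100`) are assumptions inferred from the hardware listed above rather than confirmed identifiers; check `sinfo` or NeSI's scheduler documentation for the exact names.

```sh
#!/bin/bash -e
# Minimal sketch of a Mahuika batch script; placeholder values throughout.
#SBATCH --job-name=example        # hypothetical job name
#SBATCH --account=nesi99999       # placeholder project code
#SBATCH --partition=milan         # assumed name of the AMD EPYC Milan partition
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8         # logical CPUs: with hyperthreading enabled,
                                  # 8 logical CPUs map to 4 physical cores
#SBATCH --mem=16G                 # must fit within the 128 GB per compute node
#SBATCH --time=00:10:00

# To request a GPU instead, a type-qualified request such as
#   #SBATCH --gpus-per-node=P100:1
# selects one of the GPU models listed above (the type name is an assumption).

srun ./my_program                 # placeholder executable
```

Because hyperthreading is enabled, codes that perform better with one task per physical core can add Slurm's standard `--hint=nomultithread` option.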