
NCI Gadi

NCI (National Computational Infrastructure) is Australia's national research computing facility. Gadi is its flagship supercomputer, with NVIDIA GPUs and full CUDA support.

Project: pg06 — CUDA-Accelerated Robot Simulation and Deep Learning
Portal: my.nci.org.au
Support: help@nci.org.au


Why NCI over Pawsey

                       NCI Gadi                  Pawsey Setonix
GPU                    NVIDIA H200, A100, V100   AMD MI250X
Framework              CUDA                      ROCm only
Isaac Sim / Isaac Lab  Supported                 Not supported
PyTorch / MuJoCo       Excellent                 Excellent

Use NCI when you need CUDA — Isaac Sim, Isaac Lab, or any NVIDIA-only library. Use Pawsey for large-scale PyTorch or MuJoCo workloads where AMD support is sufficient.


GPU Nodes

GPU                Memory        Notes
NVIDIA H200 141GB  141 GB HBM3   Newest, highest performance
NVIDIA A100 80GB   80 GB HBM2e   Excellent for deep learning
NVIDIA V100 32GB   32 GB HBM2    Widely available

Members

Project: pg06 | 16 members | Last updated: 2026-05-01

Name Username Role
Dana Kulić dk4618 Lead CI (PI)
Lingheng Meng lm3024 Delegated Lead CI — project manager
Shayegan Abdollahbeigi sa5743 Student
Adedayo Akinade aa6330 PhD Student
Maxwell Fergie mf3552 Undergraduate Student
Michael FitzGerald mf7023 Student
Lirui Guo lg4387 PhD Student
Somayeh Hussaini sh7589 Visiting Post-doc
Kavi Katuwandeniya kk9539 Research Scientist
Phan Tuan Kiet Le pl0320 PhD Student
Jack Moses jm4855 Student
Aron Thomas at7404 Student
Jack Thornborrow jt6695 FYP Student
Tianyi Xiao tx9325 Undergraduate Student
Mohammad Zawari mz4629 Research Student
Juyan Zhang jz5678 PhD Student

Getting Access

  1. Register at my.nci.org.au
  2. Request to join project pg06 — Lingheng will approve
  3. Once approved, you can log in via SSH or use ARE straight away

Connecting

SSH

ssh {your_nci_id}@gadi.nci.org.au
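If you connect often, an entry in ~/.ssh/config saves retyping the full hostname. A sketch — the alias `gadi` and the placeholder username are illustrative:

```
Host gadi
    HostName gadi.nci.org.au
    User your_nci_id
```

After that, `ssh gadi` is equivalent to the full command above.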

ARE — browser-based (no SSH needed)

ARE (Advanced Research Environment, are.nci.org.au) lets you launch JupyterLab or a GPU Virtual Desktop directly in your browser. This is the easiest way to get started with interactive work.


Storage

Path              Quota            Purge                    Use for
/home/{username}  10 GiB           No                       Config files, scripts
/scratch/pg06/    1 TiB (project)  100 days without access  Active job data
massdata/pg06     n/a              No                       Long-term archival

Note

/g/data/pg06 is not yet provisioned for our project — only /scratch/pg06/ is available. This will be requested in due course.

Create a symlink to your scratch directory for convenience:

ln -s /scratch/pg06/$USER ~/scratch

Check your storage quotas and SU balance:

# Check quota and SU balance
nci_account

# Check scratch usage (live)
lquota

# Check home usage (live)
quota -s

Warning

/scratch is purged after 100 days without access. Move important results to massdata/pg06 before they expire. Check expiry with nci-file-expiry.
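Before results expire, bundle them into a single archive and copy it to massdata. A sketch, assuming the `mdss` utility (with `put`/`ls` subcommands and a `-P` project flag) is available on the login nodes; `run42` is a hypothetical results directory:

```shell
# Stand-in results directory (replace with a real run's output)
mkdir -p run42 && echo "loss=0.01" > run42/metrics.txt

# One compressed archive per run: tape systems handle a single
# large file far better than thousands of small ones.
tar -czf run42_results.tar.gz run42/

# On Gadi, copy the archive to the massdata tape store:
#   mdss -P pg06 put run42_results.tar.gz
#   mdss -P pg06 ls
```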


Folder Structure

All files live under a two-level structure: project folder → stack folder — the same convention as M3 and S Drive.

/scratch/pg06/
├── FYP2026S1_3487/          ← FYP project folder (created by Lingheng)
│   ├── stack_alice/
│   └── stack_bob/
├── PhD_2023_dave/
│   └── stack_dave/
└── Postdoc_2026_somayeh/
    └── stack_somayeh/

Project folder naming:

Role         Format                       Example
FYP student  FYP{YYYY}S{1|2}_{ProjectID}  FYP2026S1_3487
PhD student  PhD_{YYYY}_{firstname}       PhD_2023_dave
Postdoc      Postdoc_{YYYY}_{firstname}   Postdoc_2026_somayeh
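The naming rules above can be captured in a few shell helpers (hypothetical convenience functions, not an official tool):

```shell
# Build project-folder names matching the naming table.
fyp_folder()     { printf 'FYP%sS%s_%s\n' "$1" "$2" "$3"; }  # year, semester, project id
phd_folder()     { printf 'PhD_%s_%s\n' "$1" "$2"; }         # year, first name
postdoc_folder() { printf 'Postdoc_%s_%s\n' "$1" "$2"; }     # year, first name

fyp_folder 2026 1 3487    # → FYP2026S1_3487
phd_folder 2023 dave      # → PhD_2023_dave
```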

Note

Project folders are created by Lingheng — do not create them yourself. Contact Lingheng to have your project folder set up before you start. Your first action is to create your stack_{name}/ folder inside it:

mkdir -p /scratch/pg06/{your_project_folder}/stack_{your_name}/

The same convention will apply to /g/data/pg06/ once provisioned.


Submitting Jobs (PBS)

NCI uses PBS (not SLURM). Key differences: #PBS directives, qsub to submit, qstat to check.

GPU job example

#!/bin/bash
#PBS -N gpu_job
#PBS -P pg06                   # charge SUs to project pg06
#PBS -q gpuvolta               # V100 queue; other GPU types use different queues
#PBS -l ncpus=12               # gpuvolta allocates CPUs in lots of 12 per GPU
#PBS -l ngpus=1
#PBS -l mem=96GB
#PBS -l walltime=04:00:00
#PBS -l storage=scratch/pg06   # required for the job to see /scratch/pg06
#PBS -l jobfs=50GB             # fast node-local disk, visible as $PBS_JOBFS

cd $PBS_O_WORKDIR
module load python3
python train.py
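For debugging it is often quicker to request an interactive GPU shell than to iterate on batch scripts. A sketch using standard PBS `qsub -I` with the same gpuvolta limits as the script above (run from a login node; adjust walltime to taste):

```shell
qsub -I -P pg06 -q gpuvolta \
     -l ngpus=1,ncpus=12,mem=96GB,walltime=01:00:00 \
     -l storage=scratch/pg06,jobfs=20GB
```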

Essential PBS commands

qsub job.sh          # submit a job
qstat -u $USER       # check your jobs
qdel <jobid>         # cancel a job
nqstat               # view queue summary
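Once a batch job finishes, PBS writes its captured stdout and stderr to files in the submission directory, named after the job and its id:

```shell
# For the gpu_job script above, with a hypothetical job id 12345678:
cat gpu_job.o12345678   # captured stdout
cat gpu_job.e12345678   # captured stderr
```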