# M3 (Monash HPC)
M3 is Monash University's HPC cluster. The lab project is mh42, with a 500 GB quota shared across all members.
## Members
Project: mh42 | 33 members | Last updated: 2026-05-04
| Name | M3 Username | Role |
|---|---|---|
| Dana Kulić | danak | PI (project leader) |
| Kavi Katuwandeniya | kkatuwan | Project leader |
| Lingheng Meng | lmen0023 | Project leader |
| Somayeh Hussaini | somayehh | Visiting Post-doc |
| Rhys Newbury | rnewbury | Post-doc |
| Shiyu Xu | shiyux | Post-doc/PhD |
| Lirui Guo | liruig | PhD Student |
| Phan Le | phanl | PhD Student |
| Malak Sayour | msay0013 | Research Student |
| Mohammad Zawari | mzawari | Research Student |
| Alana Hayes Chen | ahayesch | Member |
| Aron Thomas | atho0057 | FYP Student |
| Ben McShanag | bmcs0003 | FYP Student |
| Blake Matheson | bmat0014 | FYP Student |
| Duleesha Gunaratne | dgun0024 | FYP Student |
| Isabella Moffat | imof0001 | FYP Student |
| Jack Bastasin | jbas0020 | FYP Student |
| Jack Thornborrow | jtho0062 | FYP Student |
| Jacinta Yao | jyao0029 | FYP Student |
| Jeremia Yovinus | jyov0001 | FYP Student |
| Jesse Kipchumba | jkip0002 | FYP Student |
| Jordan Rozario | jrozario | FYP Student |
| Lucas Cooper | lcoo0019 | FYP Student |
| Maxwell Fergie | mfergie | FYP Student |
| Michael Fitzgerald | mfit0014 | FYP Student |
| Omar Salem | osal0004 | FYP Student |
| Raphael Schwalb | rschwalb | FYP Student |
| Suhaas Kataria | skat0015 | FYP Student |
| Taine Fryer | tfryer | FYP Student |
| Tavishi Saxena | tsaxena | FYP Student |
| Tianyi Xiao | txiao | FYP Student |
| Vedansh Malhan | vmalhan | FYP Student |
| William Ngo | wngo | FYP Student |
## Getting Access

- Register at my.massive.org.au
- Follow the Join Existing Project guide (Project ID: mh42)
- Notify your supervisor after submitting so they can approve promptly
- Approval takes roughly 1 week
## Connecting

### SSH
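Connect from a terminal with your M3 username (the same command appears in the Quick Reference at the bottom of this page):

```bash
ssh {your_m3_id}@m3.massive.org.au
```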
### Strudel (browser-based remote desktop)

For GUI workloads (Isaac Sim, Jupyter): strudel.hpc.monash.edu
### Interactive GPU session
**Warning:** Isaac Sim requires an A40 GPU specifically (RT Cores required). Other GPU types on M3 (L4, T4, A100, H100) are not compatible.
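A minimal sketch of requesting an interactive A40 session with Slurm's `srun`; the GPU type string and resource values here are assumptions, so check the M3 docs (or `sinfo`) for the exact names on the cluster:

```bash
# Request one A40 GPU interactively for 2 hours
# (gres type string and resource sizes are illustrative -- verify against M3 docs)
srun --gres=gpu:A40:1 --cpus-per-task=8 --mem=32G --time=2:00:00 --pty bash
```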
More options: M3 connecting docs
## Storage
| Location | Quota | Persistent? | Use for |
|---|---|---|---|
| `/home/{your_m3_id}/` | 20 GB | Yes | Config files, small scripts only |
| `/home/{your_m3_id}/mh42/` | 500 GB shared | Yes | All large project files |
| `/home/{your_m3_id}/mh42_scratch` | 6 TB shared | No (purge schedule undocumented; treat as volatile) | Temporary large files |
| `/home/{your_m3_id}/mh42_scratch2` | 6 TB shared | No (purge schedule undocumented; treat as volatile) | Same filesystem as `mh42_scratch` |
**Danger:** Your home directory has only 20 GB. Never store datasets, containers, or environments there; use the project directory instead.
## Folder Structure
All files live under a two-level structure: project folder → stack folder.
```
/home/{your_m3_id}/mh42/
├── FYP2026S1_3487/            ← FYP project folder (created by supervisor)
│   ├── STATUS                 ← project status file
│   ├── stack_alice/           ← Alice's personal stack
│   └── stack_bob/             ← Bob's personal stack
├── PhD_2023_dave/
│   ├── STATUS
│   └── stack_dave/
└── Postdoc_2026_somayeh/
    └── stack_somayeh/
```
Project folder naming:
| Role | Format | Example |
|---|---|---|
| FYP student | `FYP{YYYY}S{1\|2}_{ProjectID}` | `FYP2026S1_3487` |
| PhD student | `PhD_{YYYY}_{firstname}` | `PhD_2023_dave` |
| Postdoc | `Postdoc_{YYYY}_{firstname}` | `Postdoc_2026_somayeh` |
## Setting Up Your Stack Folder
**Note:** The project folder (`FYP2026S1_3487/`, `PhD_2023_dave/`, etc.) is created by your supervisor; you do not create it yourself. Your first action is to create your `stack_{name}/` folder inside it.

Supervisors: see the M3 Admin Guide for the full setup procedure, including default ACLs.
Create your stack folder inside your project folder:
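```bash
# Substitute your own project folder and name, per the naming tables above
mkdir /home/${USER}/mh42/{your_project_folder}/stack_{your_name}
```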
Example for FYP student Alice in project 3487:
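```bash
# Alice creates her stack inside the FYP project folder
mkdir /home/${USER}/mh42/FYP2026S1_3487/stack_alice
```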
Store all your environments, datasets, and checkpoints inside your stack folder.
## Project Status Tracking
Each project folder has a STATUS file. Keep it up to date:
```bash
# When you start
echo "ACTIVE" > /home/${USER}/mh42/{your_project_folder}/STATUS

# When your project ends
echo "DONE-2026-06" > /home/${USER}/mh42/{your_project_folder}/STATUS

# When data is no longer needed
echo "OK-TO-DELETE" > /home/${USER}/mh42/{your_project_folder}/STATUS
```
| Status | Meaning |
|---|---|
| `ACTIVE` | Project ongoing |
| `DONE-{YYYY-MM}` | Finished; data may still be needed short-term |
| `ARCHIVE` | Ended; kept for reference |
| `OK-TO-DELETE` | Safe to remove |
Check all statuses at a glance:
```bash
for d in /home/${USER}/mh42/*/; do echo "$d: $(cat ${d}STATUS 2>/dev/null || echo 'NO STATUS FILE')"; done
```
## Scratch (Temporary Storage)
`mh42_scratch` and `mh42_scratch2` are both symlinks to the same filesystem (`/fs04/scratch2/mh42`), so use either interchangeably. The original `/scratch` filesystem was retired in April 2025 and migrated to `/scratch2`, which is why both names exist.
Follow the same folder structure as mh42/:
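```bash
# Mirror the project-folder/stack layout on scratch
mkdir -p /home/${USER}/mh42_scratch/{your_project_folder}/stack_{your_name}
```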
**Quota:** The default scratch2 quota is 6 TB (increased from 3 TB in May 2026). Unlike the project quota, this can be increased: submit a request at the M3 quota request form (allow up to 2 weeks; requests of 10+ TB are reviewed and granted for 30 days only). The project quota (500 GB) cannot be increased.
**Warning:** Monash does not publish a specific purge schedule for scratch. Treat it as volatile: files could be deleted at any time without notice. Archive anything important to the S Drive, or move it to `mh42/`, before it disappears.
**Tip:** When creating a project folder in scratch, set group write so your supervisor and teammates can create their own stacks inside it.
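For example (plain `g+w` is the essential part; the setgid bit `s` is an optional extra that keeps new files owned by the project group):

```bash
# g+w lets group members create their own stacks;
# the setgid bit (s) keeps new files in the project group
chmod g+ws /home/${USER}/mh42_scratch/{your_project_folder}
```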
## Conda Environments
Use Miniforge3 (provided by M3); do not install Anaconda or Miniconda yourself.
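One possible way to pick it up is through the module system; the module name below is an assumption, so confirm with `module avail` or the M3 Conda docs:

```bash
module avail miniforge    # confirm the actual module name on M3
module load miniforge3    # assumed name -- adjust to whatever `module avail` shows
conda --version
```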
**Tip:** Save conda environments to your stack folder, not your home directory. See the M3 Conda docs for setup.
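For example, creating an environment directly inside your stack with `--prefix` (the `envs/myenv` path and Python version are placeholders):

```bash
# Create the environment inside your stack rather than ~/.conda
conda create --prefix /home/${USER}/mh42/{your_project_folder}/stack_{your_name}/envs/myenv python=3.11
conda activate /home/${USER}/mh42/{your_project_folder}/stack_{your_name}/envs/myenv
```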
## Sharing Files with Teammates
By default your files are private. To share with all mh42 members:
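```bash
# One approach using plain group permissions; this assumes your files
# already belong to the mh42 group (check with `ls -l`).
# g+rwX: group read/write everywhere, execute (enter) only on directories
chmod -R g+rwX /home/${USER}/mh42/{your_project_folder}/stack_{your_name}/
```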
**Note:** Group permissions give no per-person control: granting group access means all mh42 members can read and write those files. Do not modify other people's data even if you technically have access.
To share with a specific person only, use ACLs (confirmed to work on M3's Lustre filesystem):
```bash
# Grant access to existing files
setfacl -R -m u:{their_m3_id}:rwx /home/${USER}/mh42/{your_project_folder}/stack_{your_name}/

# Also set a default ACL so new files inherit access
# (-R so existing subfolders get the default ACL too)
setfacl -R -d -m u:{their_m3_id}:rwx /home/${USER}/mh42/{your_project_folder}/stack_{your_name}/

# Verify
getfacl /home/${USER}/mh42/{your_project_folder}/stack_{your_name}/
```
## Archiving to S Drive
When M3 space is tight, archive completed runs to the lab S Drive:
```bash
# Mount S Drive (login node only; not inside a job)
gio mount 'smb://MONASH;{your_monash_id}@ad.monash.edu/Shared'
ln -s /run/user/$(id -u)/gvfs/smb-share:domain=MONASH,server=ad.monash.edu,share=shared,user={your_monash_id}/ENG-ECSE/Robotics-M3 ~/s_drive

# Archive a run, then delete from M3
rsync -av --progress /fs04/mh42/${PROJECT}/stack_${MY_NAME}/logs/<run>/ \
    ~/s_drive/${PROJECT}/stack_${MY_NAME}/logs/<run>/
rm -rf /fs04/mh42/${PROJECT}/stack_${MY_NAME}/logs/<run>
```
See the S Drive guide for full instructions.
## Quick Reference
```bash
# SSH into M3
ssh {your_m3_id}@m3.massive.org.au

# Navigate to your stack
cd /home/${USER}/mh42/{your_project_folder}/stack_{your_name}

# Check quota
user_info

# Check all project statuses
for d in /home/${USER}/mh42/*/; do echo "$d: $(cat ${d}STATUS 2>/dev/null || echo 'NO STATUS FILE')"; done

# Submit a batch job
sbatch job.sh

# Check job queue
squeue -u ${USER}

# Cancel a job
scancel {job_id}
```
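The quick reference assumes a `job.sh` batch script; a minimal sketch is below (all resource values, the GPU type string, and the `python train.py` payload are illustrative, not prescribed):

```bash
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --gres=gpu:A40:1     # GPU type string is an assumption -- check M3 docs

python train.py              # placeholder workload
```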