Gadi has been NCI's primary supercomputer since January 2020. It is composed of an assortment of processors, the majority of which are Cascade Lake processors. Different queues give access to different processors and have different charging rates.
| Queue | Memory | Priority | Charging rate per walltime-hour | CPUs per node | Processor type |
|---|---|---|---|---|---|
| normal | 192 GB | Normal | 2 SU | 48 | Cascade Lake (CL), 3200 nodes |
| express | 192 GB | High | 6 SU | 48 | Cascade Lake (CL), 3200 nodes |
| copyq | 192 GB | Normal | 2 SU | 1-CPU jobs only | Cascade Lake (CL), 3200 nodes |
| gpuvolta | 340 GB | Normal | 3 SU | 48 CPUs, 4 GPUs | 640 Nvidia V100 GPUs, 160 nodes |
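As a rough illustration of how the charging rates combine with a job's size, the snippet below estimates the SU cost of a job. This assumes the common accounting rule charge = rate × ncpus × walltime-hours; check `nci_account` for the authoritative figures.

```shell
# Rough SU estimate, assuming charge = rate * ncpus * walltime-hours
# (this formula is an assumption; consult NCI accounting for the real rule).
rate=2      # SU per CPU per walltime-hour on the normal queue (from the table above)
ncpus=48    # one full Cascade Lake node
hours=2     # requested walltime in hours
echo "$(( rate * ncpus * hours )) SU"
```

For a full node on the normal queue running for two hours this comes to 192 SU, which is why trimming your walltime and CPU requests matters.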
More processors will be added in the coming weeks.
NCI does not yet have a User Guide for Gadi, but the notes on getting prepared for Gadi provide a wealth of information on the machine and its use.
Getting an account
To get a new account at NCI, you will need to be connected to an NCI project. Before you start the process, talk to your CI or supervisor to find out which project code to use. You will need to apply via my.nci.org.au. NCI will send you a password via SMS once your application has been processed; this usually takes less than a day.
Once you have an account, my.nci.org.au will allow you to request membership of any other projects you might need, for example projects for additional compute time or projects that give access to data.
Connecting to Gadi
To connect to Gadi, you'll need an SSH connection to gadi.nci.org.au. If you're using Windows, you'll need something like PuTTY; if you're connecting from Linux or macOS, run the following on the command line (substitute abc123 with your own username):
ssh -Y abc123@gadi.nci.org.au
You can make a shortcut for this by editing (or creating) the file ~/.ssh/config and adding the lines:
Host gadi
    HostName gadi.nci.org.au
    User abc123
    ForwardX11 true
    ForwardX11Trusted true
This way you just need to type 'ssh gadi' to connect.
If you are a member of more than one project you can swap between them with the 'switchproj' command, e.g.
switchproj w35
will change your current project to w35.
You can also change your default project by editing the file ~/.rashrc on Raijin; it should contain a line like
setenv PROJECT w35
Resources on Raijin
To see how much compute time you have available run the command
nci_account -P $PROJECT -q 2013.q3
To see how much storage space you have available run the command
lquota -P $PROJECT
To run a job on the supercomputer you submit it to a job queue using the 'qsub' command. Jobs are shell scripts that contain special marker lines saying what resources the job needs.
As an example, the script "hello.sh"
#!/bin/bash
#PBS -l ncpus=2
#PBS -l walltime=10:00
#PBS -l mem=1gb
#PBS -v PROJECT

echo "Hello"
says to run with 2 CPUs for a maximum time of 10 minutes, using up to 1 GB of memory. Anything after the #PBS lines is what gets run on the supercomputer; in this instance it just prints "Hello". Any output goes to a file in the directory you submitted the job from, named like "hello.sh.o123456"; error messages go to a file named like "hello.sh.e123456". The directive '#PBS -v PROJECT' means run using the current project; you can also specify a project explicitly, like '#PBS -v PROJECT=w35'.
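Submission itself is a single command; the job id that qsub prints is what you later pass to qstat and qdel. A sketch of the workflow (the exact form of the printed id varies by system):

```shell
qsub hello.sh      # submit the job; prints a job id such as 123456
qstat 123456       # check the state of that job in the queue
```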
If the job tries to use more resources than it asked for, it will be automatically stopped. The fewer resources you ask for, the more likely it is that your job will start quickly; however, you should try to request an amount close to what the job actually uses.
To see a list of your submitted & currently running jobs run
qstat
This also shows what resources each job has requested & is currently using.
Each job in the queue has a run id number associated with it (this is also printed when you submit a job with qsub). To get more information on a job run
qstat -s 123456   # Show any status information, e.g. why the job isn't currently running
qstat -f 123456   # Show full information, including resources requested & environment variables
To remove a job from the queue use qdel
qdel 123456 # Remove the job 123456 from the queue
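qdel also accepts more than one job id, so several queued jobs can be removed in one command:

```shell
qdel 123456 123457 123458   # Remove three jobs from the queue at once
```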
Changes from Vayu
Vayu was NCI's previous supercomputer. Some changes need to be made to make jobs designed for Vayu run on Raijin.
The PBS flags
#PBS -l vmem=2gb
#PBS -wd
should be changed to
#PBS -l mem=2gb
#PBS -l wd
The environment variable $PROJECT should be set before submitting a job, or a line like
#PBS -v PROJECT=w35
should be added to scripts.
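Putting these changes together, a migrated job header might look like the following sketch (the resource values are illustrative, not recommendations):

```shell
#!/bin/bash
#PBS -l ncpus=2
#PBS -l walltime=10:00
#PBS -l mem=2gb          # was '#PBS -l vmem=2gb' on Vayu
#PBS -l wd               # was '#PBS -wd' on Vayu
#PBS -v PROJECT=w35      # set the project explicitly, or export $PROJECT before qsub

echo "Hello"
```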
Shared ACCESS data that used to be in the path
is now available under