Here are some common terms you will encounter as you get accustomed to using our ORCA resources:
Batch Scheduler / Job Scheduler: When you submit a job to run on the supercomputer, it is placed in a queue along with jobs from other users. A job scheduler (also referred to as a batch scheduler or workload manager) manages this queue and starts your job once the resources it needs become available. ORCA uses SLURM as our job scheduler. More information about SLURM, and how to use it, can be found in our Support article - Running Jobs on Titan.
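For example, a minimal SLURM batch script might look like the sketch below. The job name, resource requests, and program are placeholders rather than Titan-specific settings; see Running Jobs on Titan for the options and queues that apply on our systems.

    #!/bin/bash
    #SBATCH --job-name=my_job          # a short name for your job
    #SBATCH --ntasks=1                 # number of tasks (processes) to run
    #SBATCH --time=01:00:00            # wall-clock time limit (hh:mm:ss)
    #SBATCH --output=my_job_%j.out     # file for the job's output (%j = job ID)

    ./my_program                       # replace with the program you want to run

You would then save this as, say, my_job.sh and submit it with:

    sbatch my_job.sh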
Compute nodes: These are the nodes where your jobs are run. Titan has over 400 compute nodes to support a wide variety of workloads. More information about our compute nodes can be found here.
Home folder: Every ORCA user is given a personal folder where files can be stored long-term. Home directories typically have a 20GB quota and are backed up nightly. You can access your home folder at /home/username. This is also your default location when you log into Titan.
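To see how much of your home quota you are currently using, standard Linux tools work; for example:

    du -sh /home/username    # total size of everything in your home folder (replace username with yours)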
Login node: This is your entry point into the ORCA supercomputer. Whenever you log into Titan, you are logging into the login node. Here you can perform various tasks, such as viewing your files and folders, submitting jobs, checking the queue, and monitoring your pending and running jobs. You reach the login node by connecting over SSH to titan.orca.oru.edu.
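For example, from a terminal (replace username with your ORCA username):

    ssh username@titan.orca.oru.edu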
Queue: All jobs submitted to the supercomputer have to go through a queue. ORCA maintains several public and private queues. Public queues are accessible to everyone and generally follow a “first in, first out” rule. Private queues are only accessible to the group for which the queue was set up. A list of all publicly accessible queues can be found here. Please contact us at support@orca.oru.edu if you have questions regarding which queue offers the best performance for your job.
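Note that SLURM refers to queues as partitions, so its commands use that term. For example, to list the available queues and check your own pending and running jobs (replace username with yours):

    sinfo                 # list partitions (queues) and the state of their nodes
    squeue -u username    # show your pending and running jobs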
Titan: Titan refers to the current ORCA supercomputer.
Scratch folder: scratch is meant to be a space for temporary files, even large ones. It is ideal if you are running jobs that do a lot of file access (reading, writing, etc.). Each user has their own scratch folder, usually /scratch/username. We strongly recommend treating scratch as temporary storage only; we do not back up the scratch folder. While ORCA currently does not enforce quotas on /scratch, we do have a two-week deletion policy: files older than two weeks are subject to deletion. If you think you will need more storage than the 20GB in your home folder, please contact us at support@orca.oru.edu.
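A common pattern is to stage data into scratch at the start of a job, do the heavy file access there, and copy the results you want to keep back to your home folder before the job finishes. A rough sketch, with placeholder file and program names:

    cp ~/input.dat /scratch/username/       # stage input data into scratch
    cd /scratch/username
    ./my_program input.dat > results.out    # do the I/O-heavy work in scratch
    cp results.out ~/                       # copy results back to your backed-up home folder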
SLURM: SLURM is the name of the batch scheduler (job scheduler) that ORCA uses on our supercomputer. More information about SLURM, and how to use it, can be found in our Support article - Running Jobs on Titan.