Interactive Jobs¶
Overview¶
Interactive sessions are a way to gain access to a compute node from the command line. This is useful for checking and using available software modules, testing submission scripts, debugging code, compiling software, and running programs in real time.
The term "interactive session" in this context refers to jobs run from within the command line on a terminal client. Opening a terminal in an interactive graphical desktop is also equivalent, but these sessions are fixed to the resources allocated to that Open OnDemand (OOD) session. As you'll see below, one has more control over their resources when requesting an interactive session via SSH in a terminal client.
Clusters¶
An interactive session can be requested on any of our three clusters: El Gato, Ocelote, and Puma.
Ocelote and El Gato share the same operating system (CentOS 7), system libraries, and software modules. El Gato typically has shorter wait times, so if you're compiling software or running small test jobs for workflows to run on either El Gato or Ocelote, it may be advantageous to request a session on El Gato for more immediate access.
Puma runs a newer operating system (Rocky Linux 9), has newer system libraries, and its own software modules. If you are using Puma for your production work, you will want to stick to requesting interactive sessions on Puma for testing and compiling. Workflows may not be transferrable between the older clusters and Puma, so make sure to check which software ecosystem you're using to ensure the most predictable results.
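If you're ever unsure which environment you're working in, you can check the operating system from any node, for example:
# Reports the distribution and version (CentOS 7 on El Gato/Ocelote, Rocky Linux 9 on Puma)
cat /etc/os-release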
How to Request an Interactive Session¶
The interactive Command¶
We have a built-in shortcut command that will allow you to quickly and easily request a session by simply entering: interactive
The interactive command is essentially a convenient wrapper for a Slurm command called salloc. This can be thought of as similar to the sbatch command, but for interactive jobs rather than batch jobs. When you request a session using interactive, the full salloc command being executed will be displayed for reference.
(ocelote) [netid@junonia ~]$ interactive
Run "interactive -h for help customizing interactive use"
Submitting with /usr/local/bin/salloc --job-name=interactive --mem-per-cpu=4GB --nodes=1 --ntasks=1 --time=01:00:00 --account=windfall --partition=windfall
salloc: Pending job allocation 531843
salloc: job 531843 queued and waiting for resources
salloc: job 531843 has been allocated resources
salloc: Granted job allocation 531843
salloc: Waiting for resource configuration
salloc: Nodes i16n1 are ready for job
[netid@i16n1 ~]$
Notice in the example above how the command prompt changes once your session starts. When you're on a login node, your prompt will show junonia or wentletrap. Once you're in an interactive session, you'll see the name of the compute node you're connected to.
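When you're finished, exiting the shell ends the interactive job, releases the allocation, and returns you to the login node. The exchange typically looks something like this (the job ID will match your own session):
[netid@i16n1 ~]$ exit
exit
salloc: Relinquishing job allocation 531843
(ocelote) [netid@junonia ~]$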
Customizing Your Resources
When run without any arguments, the interactive command will allocate a windfall session with one CPU, 4 GB of memory, and a one-hour walltime, which isn't ideal for most use cases. You can modify this by including additional flags. To see the available options, use the help flag -h:
(ocelote) [netid@junonia ~]$ interactive -h
Usage: /usr/local/bin/interactive [-x] [-g] [-N nodes] [-m memory per core] [-n total number of tasks] [-Q optional qos] [-t hh::mm:ss] [-a account to charge]
Flag | Explanation | Example |
---|---|---|
-a | Followed by your group's name; switches you to the standard partition. This is highly recommended to keep your sessions from being interrupted and to help them start faster. | -a my_group |
-t | The amount of time to reserve for your job, in the format hhh:mm:ss. | -t 05:00:00 |
-n | Total number of tasks (CPUs) to allocate to your job. By default, these CPUs will be allocated on a single node. | -n 16 |
-N | Total number of nodes (physical computers) to allocate to your job. | -N 2 |
-m | Amount of memory per CPU. See CPUs and Memory for more details and information on potential complications. | -m 5gb |
-Q | Used to access high priority or qualified hours. Only for groups with buy-in or special project hours. | High Priority: -Q user_qos_<PI NETID> Qualified: -Q qual_qos_<PI NETID> |
-g | Request one GPU. This flag takes no arguments. On Puma, you may be allocated either a v100 or a MIG slice. If you want more control over your resources, use salloc directly with GPU batch directives. | -g |
-x | Enable X11 forwarding. This flag takes no arguments. | -x |
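As an illustration, a standard-partition session with 8 CPUs for four hours could be requested by combining several of these flags (my_group is a placeholder for your own group's name):
(ocelote) [netid@junonia ~]$ interactive -a my_group -n 8 -t 04:00:00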
You may also create your own salloc commands using any desired Slurm directives for maximum customization.
The salloc Command¶
If interactive is insufficient to meet your resource requirements (e.g., if you need to request more than one GPU or a GPU MIG slice), you can use the Slurm command salloc to further customize your job.
The salloc command expects Slurm directives as input arguments that it uses to customize your interactive session. For comprehensive documentation on using salloc, see Slurm's official documentation.
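For a quick reference without leaving the terminal, the manual page covers all available directives (assuming Slurm's man pages are installed on the login nodes, as is typical):
(puma) [netid@junonia ~]$ man salloc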
Single CPU Example
salloc --account=<YOUR_GROUP> --partition=standard --nodes=1 --ntasks=1 --time=1:00:00 --job-name=interactive
Multi-CPU Example
salloc --account=<YOUR_GROUP> --partition=standard --nodes=1 --ntasks=16 --time=1:00:00 --job-name=interactive
Multi-GPU Example (Puma)
salloc --account=<YOUR_GROUP> --partition=gpu_standard --nodes=1 --ntasks=1 --time=1:00:00 --job-name=multi-gpu --gres=gpu:volta:2
GPU MIG Slice Example
salloc --account=<YOUR_GROUP> --partition=gpu_standard --nodes=1 --ntasks=1 --time=1:00:00 --job-name=mig-gpu --gres=gpu:nvidia_a100_80gb_pcie_2g.20gb
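Once a GPU session starts, you can confirm which device or MIG slice was allocated from inside the session, for example:
# Lists the GPU(s) visible to your job
nvidia-smi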