
How Do I Write and Submit a Jobscript?

Batch Jobs

Jobs are not run directly from the command line; instead, the user creates a job script that specifies the requested resources, the libraries to load, and the application to run.

The script is submitted to SLURM (the queueing system). If the requested resources are available on the system, the job will run; if not, it will be placed in a queue until the resources become available.

Users therefore need to understand how to use the queueing system: how to create a job script, submit it, check its progress, and delete a job from the queue.

Before submitting a job script, make sure that your resource requests are within the allowed limits.
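
For example, you can ask Slurm itself about the limits. A minimal sketch (the QOS names and available columns depend on the cluster's configuration):

CODE
sacctmgr show qos format=Name,MaxWall,MaxJobsPU,MaxSubmitPU   # per-QOS limits 
sinfo -s                                                      # summary of partitions and node counts 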

Job Scripts

You can find example job scripts in the /home/jobscripts folder on VALAR and the /kuacc/jobscripts folder on KUACC. Copy one of these scripts into your home directory (/kuacc/users/username/ or /home/username) and modify it according to your needs.
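
For example (the file name example_jobscript.sh below is hypothetical; list the folder first to see the actual scripts):

CODE
ls /home/jobscripts                            # see which example scripts exist 
cp /home/jobscripts/example_jobscript.sh ~/    # copy one into your home directory 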

This is an example job script for the VALAR HPC cluster. Note that a job script should start with #!/bin/bash.

CODE
#!/bin/bash 

#SBATCH --job-name=Test 
#SBATCH --nodes=1 
#SBATCH --ntasks-per-node=1 
#SBATCH --mem=40G 
#SBATCH --partition=ai 
#SBATCH --account=ai 
#SBATCH --gres=gpu:tesla_t4:1 
#SBATCH --qos=ai 
#SBATCH --time=24:00:00 
#SBATCH --output=test-%J.log 
#SBATCH --mail-user=username@ku.edu.tr

module load anaconda3/2025.06 
module load cuda/12.8.0 
module load cudnn/9.10.12 
module load gnu9/9.5.0 
module load cmake/3.29.2 

nvcc --version > nvcc_version_info.txt 

 

A job script can be divided into three sections:

  • Requesting resources

  • Loading library and application modules

  • Running your codes

Requesting Resources

This section is where resources are requested and Slurm parameters are configured. Each line must start with #SBATCH, followed by one flag per request.

CODE
#SBATCH <flag> 
#SBATCH --job-name=Test                                              #Setting a job name 
#SBATCH --nodes=1                                                    #Asking for only one node 
#SBATCH --ntasks-per-node=1                                          #Asking for one core on the node 
#SBATCH --mem=40G                                                    #Asking for 40G of memory on the node 
#SBATCH --partition=ai                                               #Running on the ai partition (a group of nodes) 
#SBATCH --qos=ai                                                     #Running with the ai QOS (rules and limits) 
#SBATCH --account=ai                                                 #Charging the job to the ai account 
#SBATCH --gres=gpu:tesla_t4:1                                        #Asking for one tesla_t4 GPU 
#SBATCH --time=1:00:00                                               #Reserving a one-hour time limit 
#SBATCH --output=test-%J.log                                         #Setting an output file name (%J = job ID) 
#SBATCH --mail-user=username@ku.edu.tr                               #Where to send emails 

 

Note: An #SBATCH line can be disabled by adding a second # at the beginning (##SBATCH), which makes Slurm ignore it. Similarly, you can add comments by starting a line with #.
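
For example (the value 40G is illustrative):

CODE
#SBATCH --mem=40G     #Active: requests 40G of memory 
##SBATCH --mem=40G    #Disabled: the second # makes Slurm ignore this line 
# This whole line is only a comment. 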

Note: The KUACC HPC partitions listed below are provided as examples. You can view the active partitions on both the VALAR and KUACC HPC clusters using the sinfo command.
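
For example (the -o format string selects the columns shown in the table below):

CODE
sinfo                     # default view of partitions and node states 
sinfo -o "%P %l %D %t"    # partition, time limit, node count, node state 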

Name    | MaxTimeLimit | Nodes    | MaxJobs  | MaxSubmitJob
--------|--------------|----------|----------|-------------
short   | 2 hours      | 19 nodes | Infinite | 300
mid     | 1 day        | 19 nodes | 15       | 200
long    | 7 days       | 10 nodes | 10       | 100
longer  | 30 days      | 4 nodes  | 5        | 50
ai      | 7 days       | 2 nodes  | Infinite | Infinite
ilac    | Infinite     | 12 nodes | Infinite | Infinite
cosmos  | Infinite     | 8 nodes  | Infinite | Infinite
biyofiz | Infinite     | 4 nodes  | Infinite | Infinite
iui     | Infinite     | 1 node   | Infinite | Infinite

Note: The following flags can be used in your job scripts.

Important: All flag syntax starts with two dashes (--). In some editors, these may appear as a single “–” character, but the correct syntax always uses two dashes.

Resource           | Flag Syntax                    | Description
-------------------|--------------------------------|--------------------------------------------------------------
partition          | --partition=short              | Partition is a queue for jobs.
qos                | --qos=users                    | QOS is the quality of service (limits or priority boost).
time               | --time=01:00:00                | Time limit for the job.
nodes              | --nodes=1                      | Number of compute nodes for the job.
cpus/cores         | --ntasks-per-node=4            | Corresponds to the number of cores on the compute node.
resource feature   | --gres=gpu:1                   | Request use of GPUs on compute nodes.
memory             | --mem=4096                     | Memory limit per compute node. Do not use with --mem-per-cpu.
memory             | --mem-per-cpu=14000            | Per-core memory limit. Do not use with --mem.
account            | --account=users                | Users may belong to groups or accounts.
job name           | --job-name="hello_test"        | Name of the job.
constraint         | --constraint=gpu               | Limits jobs to nodes with specific features (e.g., gpu).
output file        | --output=test.out              | Name of the file for standard output (stdout).
email address      | --mail-user=username@ku.edu.tr | User's email address.
email notification | --mail-type=ALL                | Specifies when email notifications are sent to the user.
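
As an illustration, several of these flags can be combined into a minimal CPU-only job script. This is a sketch, not a definitive template; the partition, QOS, and account names are the example values from the table and may differ on your cluster:

CODE
#!/bin/bash 
#SBATCH --job-name=hello_test 
#SBATCH --partition=short              #Example partition from the table 
#SBATCH --qos=users                    #Example QOS 
#SBATCH --account=users                #Example account 
#SBATCH --nodes=1 
#SBATCH --ntasks-per-node=4            #Four cores on one node 
#SBATCH --mem-per-cpu=2G               #2G per core, 8G in total 
#SBATCH --time=01:00:00 
#SBATCH --output=hello_test-%J.out 
#SBATCH --mail-user=username@ku.edu.tr 
#SBATCH --mail-type=ALL 

echo "Running on $(hostname) with $SLURM_NTASKS tasks" 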

Note: The --mem and --mem-per-cpu flags:

  • --mem-per-cpu: Requests memory per core. If N cores are reserved, the total reserved memory is N × mem-per-cpu. Default units are megabytes (MB); for gigabyte units, append G (e.g., 20G).

Example:

CODE
#SBATCH --ntasks=5 
#SBATCH --mem-per-cpu=20000 

Total memory reserved = 5 × 20000 MB = 100000 MB.

  • --mem: Requests total memory per node. If more than one node (N) is requested, then N × mem is reserved in total. Default units are megabytes (MB).
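
For example, a sketch of the multi-node case (the values are illustrative):

CODE
#SBATCH --nodes=2 
#SBATCH --mem=10G 

Total memory reserved = 2 × 10G = 20G (10G on each of the two nodes).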

Loading Library and Application Modules

Users need to load the application and library modules required for their code. This can be done using the module load command, as shown in the sample job script.

CODE
module load anaconda3/2025.06 
module load cuda/12.8.0 
module load cudnn/9.10.12 
module load gnu9/9.5.0 
module load cmake/3.29.2 

For more information, see the Software Modules page.
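
To see which modules exist before loading them, you can use the standard module commands, for example:

CODE
module avail          # list all available modules 
module avail cuda     # search for modules matching "cuda" 
module list           # show currently loaded modules 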

Running Code

In this section of the job script, users need to include the command to run their code. For example:

CODE
nvcc --version > nvcc_version_info.txt 
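
Any command that runs your program can go here, for example a hypothetical Python job (my_script.py stands in for your own program):

CODE
python my_script.py > my_output.txt 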

After preparing the job script, it is submitted to Slurm using the sbatch command:

CODE
sbatch jobscript.sh 
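
On success, sbatch prints the ID assigned to the job:

CODE
$ sbatch jobscript.sh 
Submitted batch job 123456 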

 

Command   | Syntax               | Description                                | Example
----------|----------------------|--------------------------------------------|----------------------
sbatch    | sbatch [script]      | Submit a batch job.                        | $ sbatch jobscript.sh
scancel   | scancel [job_id]     | Kill a running job or cancel a queued one. | $ scancel 123456
squeue    | squeue               | List running or pending jobs.              | $ squeue
squeue -u | squeue -u [username] | List running or pending jobs for a user.   | $ squeue -u john
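
A typical workflow with these commands (the job ID 123456 is illustrative):

CODE
$ sbatch jobscript.sh        # submit the job; Slurm prints its ID 
Submitted batch job 123456 
$ squeue -u $USER            # check its state (PD = pending, R = running) 
$ scancel 123456             # cancel it if no longer needed 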
