
Version 1 (modified by fuji, 2 years ago)

Many Task Computing

This page introduces examples of scripts for Many-task computing.

Job Array + Many-Task Computing

The Cypress job scheduler allows a maximum of 18 concurrently running jobs for normal qos and 8 jobs for long qos. Even if a job requests only a single core, it still counts as one job.

Suppose we have 100 single-core tasks, each running for more than 24 hours. You might consider using a Job Array with long qos to submit 100 jobs, but the scheduler allows only 8 of them to run concurrently.

The example script below submits a Job Array of 5 array tasks, and each array task runs 20 sub-tasks in parallel.

#!/bin/bash
#SBATCH --qos=long		# Quality of Service
#SBATCH --job-name=ManyTaskJob  # Job Name
#SBATCH --time=30:00:00		# WallTime
#SBATCH --nodes=1 		# Number of Nodes
#SBATCH --ntasks-per-node=20 	# Number of tasks
#SBATCH --cpus-per-task=1 	# Number of processors per task
#SBATCH --gres=mic:0  		# Number of Co-Processors
#SBATCH --array=0-80:20         # Array of IDs=0,20,40,60,80

# our custom function
cust_func(){
  echo "Do something $1 task"
  sleep 10
}
# For loop $SLURM_NTASKS_PER_NODE times
date
hostname
for i in $(seq $SLURM_NTASKS_PER_NODE)
do
	TASK_ID=$((SLURM_ARRAY_TASK_ID + i))
	cust_func $TASK_ID > log${TASK_ID}.out & # Put a function in the background
done

## All cust_func calls run in the background; 'wait' blocks
## until every one of them has completed before the done
## message is displayed
wait
echo "done"
date
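To see why this covers all 100 tasks, note that SLURM_ARRAY_TASK_ID takes the values 0, 20, 40, 60, 80, and each array task launches sub-tasks ARRAY_ID+1 through ARRAY_ID+20. The following snippet (runnable on any machine, outside SLURM) is a quick sanity check of that ID mapping:

```shell
# Enumerate the sub-task IDs produced by the array script above:
# five array IDs, 20 sub-tasks each, covering 1..100 exactly once.
ids=$(for ARRAY_ID in 0 20 40 60 80; do
        for i in $(seq 20); do
          echo $((ARRAY_ID + i))
        done
      done | sort -n)
echo "$ids" | wc -l    # 100 sub-tasks in total
echo "$ids" | head -1  # lowest ID: 1
echo "$ids" | tail -1  # highest ID: 100
```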

Many MPI jobs in a single job

If you have several MPI jobs that must run concurrently, many-task computing may be the way to go.
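As a sketch (the application name, input files, and rank counts below are hypothetical), a single job can launch several small MPI runs in the background with srun and wait for all of them, following the same pattern as the array script above:

```shell
#!/bin/bash
#SBATCH --qos=normal            # Quality of Service
#SBATCH --job-name=ManyMPIJob   # Job Name
#SBATCH --time=24:00:00         # WallTime
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=20    # Number of tasks
#SBATCH --cpus-per-task=1       # Number of processors per task
#SBATCH --gres=mic:0            # Number of Co-Processors

# Launch four 5-rank MPI runs concurrently; "./my_mpi_app" and the
# input files are placeholders. '--exclusive' gives each job step its
# own non-overlapping set of cores within the allocation.
for i in 1 2 3 4
do
    srun --exclusive -n 5 ./my_mpi_app input${i} > log${i}.out &
done
wait   # block until all background MPI runs have finished
```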
