= Work with SLURM on Cypress =
If you haven't done so yet, download the sample files by:

{{{svn co file:///home/fuji/repos/workshop ./workshop}}}

To check out the sample files onto your local machine (Linux shell):

{{{svn co svn+ssh:// ./workshop}}}
== Introduction to Managed Cluster Computing ==
On your desktop you would open a terminal, compile the code using your favorite C compiler, and execute it. You can do this without worry because you are the only person using your computer, and you know what demands are being made on your CPU and memory at the time you run your code. On a cluster, many users must share the available resources equitably and simultaneously. It's the job of the resource manager to choreograph this sharing of resources by accepting a description of your program and the resources it requires, searching the available hardware for resources that meet your requirements, and making sure that no one else is given those resources while you are using them.

Occasionally the manager will be unable to find the resources you need because they are in use by other users. In those instances your job will be "queued"; that is, the manager will wait until the needed resources become available before running your job. This will also occur if the total resources you request for all your jobs exceed the limits set by the cluster administrator. This ensures that all users have equal access to the cluster.
== Serial Job Submission ==
Under the 'workshop' directory,
{{{
[fuji@cypress1 ~]$ cd workshop
[fuji@cypress1 workshop]$ ls
BlasLapack  Eigen3        HeatMass    JobArray1  JobDependencies  MPI     PETSc  precision  Python  ScaLapack  SimpleExample  TestCodes  uBLAS
CUDA        FlowInCavity  hybridTest  JobArray2  Matlab           OpenMP  PI     PSE        R       SerialJob  SLU40          TextFiles  VTK
}}}
Under the 'SerialJob' directory,
{{{
[fuji@cypress1 workshop]$ cd SerialJob
[fuji@cypress1 SerialJob]$ ls
slurmscript1  slurmscript2
}}}
When your code runs on a single core only, your job script should request a single core. The Python code below runs on a single core:
{{{
import datetime
import socket

now = datetime.datetime.now()
print 'Hello, world!'
print now.isoformat()
print socket.gethostname()
}}}
Since this runs for a short time, you can try running it on the login node.
{{{
[fuji@cypress1 SerialJob]$ python ./
Hello, world!
}}}
This code prints a message, the current time, and the host name.
Look at 'slurmscript1':
{{{
[fuji@cypress1 SerialJob]$ more slurmscript1
#!/bin/bash
#SBATCH --qos=workshop            # Quality of Service
#SBATCH --partition=workshop      # partition
#SBATCH --job-name=python       # Job Name
#SBATCH --time=00:01:00         # WallTime
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=1     # Number of tasks (MPI processes)
#SBATCH --cpus-per-task=1       # Number of threads per task (OMP threads)

module load anaconda
}}}
Notice that the SLURM script begins with '''#!/bin/bash'''. This tells the Linux shell which interpreter to run; in this example we use Bash (the Bourne Again SHell).
The choice of interpreter (and subsequent syntax) is up to the user, but every SLURM script should begin this way.
For more on Bash and shell scripting, see:
In a Bash shell script, '''#''' and the text after it on a line are a comment.
So all the '''#SBATCH''' lines in the script above are comments as far as Bash is concerned,
but they are directives for the '''SLURM''' job scheduler.
=== qos, partition ===
These two lines determine the quality of service and the partition.

{{{
#SBATCH --qos=workshop            # Quality of Service
#SBATCH --partition=workshop      # partition
}}}
The default partition is '''defq'''. In '''defq''', you can choose either '''normal''' or '''long''' for '''qos'''.
||||||||= '''QOS limits''' =||
|| '''QOS name''' || '''maximum job size (node-hours)''' || '''maximum walltime per job''' || '''maximum nodes per user''' ||
|| normal || N/A || 24 hours || 18 ||
|| long || 168 || 168 hours || 8 ||
The differences between '''normal''' and '''long''' are the number of nodes you can request and how long your job can run.
The details will be explained in Parallel Jobs below.

If you are using a workshop account, you can use only the '''workshop''' qos and partition.
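For a regular account on the default partition '''defq''', the qos is chosen with the same two directives. As an illustrative sketch (not one of the sample scripts), a long-running job could request:

{{{
#SBATCH --qos=long                # allows up to 168 hours of walltime
#SBATCH --partition=defq          # default partition
}}}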
=== job-name ===

{{{
#SBATCH --job-name=python       # Job Name
}}}

This is the job name, which you can specify as you like.
=== time ===

{{{
#SBATCH --time=00:01:00         # WallTime
}}}

The maximum walltime is specified by '''#SBATCH --time=T''', where T has the format ''h:m:s''.
Normally, a job is expected to finish before the specified maximum walltime.
Once the walltime reaches the maximum, the job is terminated regardless of whether its processes are still running.
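For instance, a job that needs half a day could request its walltime as in the sketch below (the value is illustrative, not taken from the sample scripts):

{{{
#SBATCH --time=12:30:00         # 12 hours, 30 minutes, 0 seconds
}}}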
=== Resource Request ===

{{{
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=1     # Number of tasks (MPI processes)
#SBATCH --cpus-per-task=1       # Number of threads per task (OMP threads)
}}}
The resource request '''#SBATCH --nodes=N''' determines how many compute nodes the scheduler allocates to the job; only 1 node is allocated for this job.

'''#SBATCH --ntasks-per-node=n''' determines the number of tasks (MPI processes) per node. The details will be explained in Parallel Jobs below.

'''#SBATCH --cpus-per-task=c''' determines the number of cores/threads per task. The details will be explained in Parallel Jobs below.
This script requests one core on one node.

There are 124 nodes on the Cypress system, and each node has 20 cores.
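As a sketch (not one of the sample scripts), a single-node job that wants to use all 20 cores of one node for threads could combine these directives as follows; the values are illustrative only:

{{{
#SBATCH --nodes=1               # one compute node
#SBATCH --ntasks-per-node=1     # a single task (one program instance)
#SBATCH --cpus-per-task=20      # use all 20 cores on the node for threads
}}}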