Changes between Version 77 and Version 78 of cypress/using


Timestamp: 02/01/24 10:45:20 (10 months ago)
Author:    fuji
Comment:

Legend:

  unmodified: both v77 and v78 line numbers shown
  removed:    v77 line number only
  added:      v78 line number only
  modified:   shown as a removed/added pair
  • cypress/using

v77  v78
 76   76  #SBATCH --partition=workshop    # Partition
 77   77  #SBATCH --qos=workshop          # Quality of Service
 78       ##SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
 79   78  #SBATCH --time=0-00:01:00     ### Wall clock time limit in Days-HH:MM:SS
 80   79  #SBATCH --nodes=1             ### Node count required for the job
     
116  115  }}}
117  116
118       There are two more commands we should familiarize ourselves with before we begin. The first is the “squeue” command. This shows us the list of jobs that have been submitted to SLURM that are either currently running or are in the queue waiting to run. The second is the “scancel” command. This allows us to terminate a job that is currently in the queue. To see these commands in action, let's simulate a one hour job by using the sleep command at the end of a new submission script.
     117  There are two more commands we should familiarize ourselves with before we begin. The first is the “squeue” command. This shows us the list of jobs that have been submitted to SLURM that are either currently running or are in the queue waiting to run. The second is the “scancel” command. This allows us to terminate a job that is currently in the queue. To see these commands in action, let's simulate a one-hour job by using the sleep command at the end of a new submission script.
119  118  {{{#!bash
120  119  #!/bin/bash
121  120  #SBATCH --job-name=OneHourJob ### Job Name
     121  #SBATCH --partition=defq      ### Partition (default is 'defq')
     122  #SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
122  123  #SBATCH --time=0-00:01:00     ### Wall clock time limit in Days-HH:MM:SS
123  124  #SBATCH --nodes=1             ### Node count required for the job
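
To see squeue and scancel in context: a minimal sketch of submitting the script above and then inspecting and cancelling it. The script name OneHourJob.sh and the job ID 12345 are illustrative, not taken from the page.

{{{#!bash
# Submit the job script; SLURM prints the assigned job ID.
[fuji@cypress1 ~]$ sbatch OneHourJob.sh
Submitted batch job 12345

# List jobs belonging to the current user (running or queued).
[fuji@cypress1 ~]$ squeue -u $USER

# Cancel the queued or running job by its ID.
[fuji@cypress1 ~]$ scancel 12345
}}}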
     
287  288  {{{#!bash
288  289  #!/bin/bash
289       #SBATCH --qos=normal
     290  #SBATCH --partition=defq      ### Partition (default is 'defq')
     291  #SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
290  292  #SBATCH --job-name=MPI_JOB
291  293  #SBATCH --time=0-01:00:00
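
The hunks below all make the same edit: a bare `#SBATCH --qos=normal` line is replaced by explicit partition and QOS directives. For context, a complete MPI script built on the new v78 header might look like the following sketch; the node and task counts, module name, and executable are illustrative assumptions, not part of the diff.

{{{#!bash
#!/bin/bash
#SBATCH --partition=defq      ### Partition (default is 'defq')
#SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
#SBATCH --job-name=MPI_JOB
#SBATCH --time=0-01:00:00
#SBATCH --nodes=2             ### illustrative node count
#SBATCH --ntasks-per-node=20  ### illustrative MPI ranks per node

module load intel-psxe        ### illustrative module name
mpirun ./my_mpi_program       ### illustrative executable
}}}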
     
321  323  {{{#!bash
322  324  #!/bin/bash
323       #SBATCH --qos=normal
     325  #SBATCH --partition=defq      ### Partition (default is 'defq')
     326  #SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
324  327  #SBATCH --job-name=OMP_JOB
325  328  #SBATCH --time=1-00:00:00
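
Likewise, an OpenMP job with the new header would typically also request CPUs for its threads and size the thread pool from the allocation; a sketch under those assumptions (the CPU count and executable name are illustrative, not from the diff):

{{{#!bash
#!/bin/bash
#SBATCH --partition=defq      ### Partition (default is 'defq')
#SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
#SBATCH --job-name=OMP_JOB
#SBATCH --time=1-00:00:00
#SBATCH --nodes=1             ### OpenMP is shared-memory, single node
#SBATCH --cpus-per-task=20    ### illustrative thread count

# Match the OpenMP thread count to the SLURM allocation.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./my_omp_program              ### illustrative executable
}}}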
     
342  345  {{{#!bash
343  346  #!/bin/bash
344       #SBATCH --qos=normal            # Quality of Service
     347  #SBATCH --partition=defq      ### Partition (default is 'defq')
     348  #SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
345  349  #SBATCH --job-name=hybridTest   # Job Name
346  350  #SBATCH --time=00:10:00         # WallTime
     
370  374  {{{#!bash
371  375  #!/bin/bash
372       #SBATCH --qos=normal            # Quality of Service
     376  #SBATCH --partition=defq      ### Partition (default is 'defq')
     377  #SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
373  378  #SBATCH --job-name=nativeTest   # Job Name
374  379  #SBATCH --time=00:10:00         # WallTime
     
475  480  {{{
476  481  #!/bin/bash
477       #SBATCH --qos=normal            # Quality of Service
     482  #SBATCH --partition=defq      ### Partition (default is 'defq')
     483  #SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
478  484  #SBATCH --job-name=many-task    # Job Name
479  485  #SBATCH --time=24:00:00         # WallTime
     
645  651  Last login: Thu Dec 19 12:27:32 2019 from cypress1.cm.cluster
646  652  }}}
647       In the interactive session, evaluate the variables and observe they are set.
     653  In the interactive session, evaluate the variables and observe how they are set.
648  654  {{{
649  655  [fuji@cypress01-089 ~]$ echo MY_PARTITION=$MY_PARTITION, MY_QUEUE=$MY_QUEUE
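
MY_PARTITION and MY_QUEUE in this last hunk are ordinary user-set environment variables; a minimal sketch of how they might be defined in the login shell before the interactive session, with illustrative values not taken from the page:

{{{#!bash
# Set in the login shell; srun propagates the caller's environment to the
# interactive session by default. The values below are illustrative.
export MY_PARTITION=defq
export MY_QUEUE=normal

# Inside the interactive session, confirm the variables carried over.
echo MY_PARTITION=$MY_PARTITION, MY_QUEUE=$MY_QUEUE
}}}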