Changes between Version 2 and Version 3 of cypress/GNUparallel


Timestamp: 09/22/23 16:10:09
Author: fuji

[fuji@cypress1 JobArray2]$ parallel -j 4 sh ::: script??.sh
}}}

=== Run GNU Parallel in a Slurm Script ===
==== Single Node ====
The job script below requests 1 node with 20 cores (one whole node). Assuming that each task is multi-threaded, '''OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK''' sets the number of threads per task, and '''"-j $SLURM_NTASKS"''' sets the number of concurrently running tasks.
{{{
#!/bin/bash
#SBATCH --partition=defq        # Partition
#SBATCH --qos=normal            # Quality of Service
#SBATCH --job-name=GNU_Parallel # Job Name
#SBATCH --time=00:10:00         # WallTime
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=5     # Number of tasks
#SBATCH --cpus-per-task=4       # Number of cores per task (OpenMP threads)
#SBATCH --gres=mic:0            # Number of Co-Processors

module load parallel

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

parallel --record-env

parallel --joblog log \
        -j $SLURM_NTASKS \
        --workdir $SLURM_SUBMIT_DIR \
        --env OMP_NUM_THREADS \
        sh ./run_hostname.sh {} ::: `seq 1 100`
}}}
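As a sanity check on the resource request above, the per-node core accounting can be sketched in plain bash. The SLURM_* values are hardcoded here to mirror the #SBATCH directives; inside a real job, Slurm exports them automatically:

{{{
#!/bin/bash
# Hardcoded to mirror the #SBATCH directives above; in a real job Slurm sets these.
SLURM_NTASKS=5          # --ntasks-per-node=5 on a single node
SLURM_CPUS_PER_TASK=4   # --cpus-per-task=4

# 5 concurrent tasks x 4 OpenMP threads each = 20 cores, i.e. the whole node.
TOTAL_CORES=$(( SLURM_NTASKS * SLURM_CPUS_PER_TASK ))
echo "tasks: $SLURM_NTASKS, threads per task: $SLURM_CPUS_PER_TASK, cores: $TOTAL_CORES"
}}}

Keeping '''-j''' times '''OMP_NUM_THREADS''' equal to the number of cores requested avoids both oversubscribing and idling the node.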

The example task script is:
{{{
#!/bin/bash
hostname
echo $1
sleep 1
}}}
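Before submitting, the task script can be smoke-tested by hand on a login node (assuming it is saved as run_hostname.sh, the name used in the parallel command above):

{{{
#!/bin/bash
# Create the example task script shown above and run it once directly.
cat > run_hostname.sh <<'EOF'
#!/bin/bash
hostname
echo $1
sleep 1
EOF

# One invocation prints the host name, then the argument "7".
sh ./run_hostname.sh 7
}}}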

==== Multiple Nodes ====
The job script below requests 4 nodes with 20 cores each. Assuming that each task is multi-threaded, '''OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK''' sets the number of threads per task, and '''"-j $TASKS_PER_NODE"''' sets the number of concurrently running tasks per node, where '''TASKS_PER_NODE=`echo $SLURM_NTASKS / $SLURM_NNODES | bc`'''. '''scontrol show hostname $SLURM_NODELIST > machinefile''' writes the list of allocated nodes to a file, which is passed to GNU Parallel with '''--slf $MACHINEFILE'''.
{{{
#!/bin/bash
#SBATCH --partition=defq        # Partition
#SBATCH --qos=normal            # Quality of Service
#SBATCH --job-name=GNU_Parallel # Job Name
#SBATCH --time=00:10:00         # WallTime
#SBATCH --nodes=4               # Number of Nodes
#SBATCH --ntasks-per-node=5     # Number of tasks
#SBATCH --cpus-per-task=4       # Number of cores per task (OpenMP threads)
#SBATCH --gres=mic:0            # Number of Co-Processors

module load parallel

MACHINEFILE="machinefile"
scontrol show hostname $SLURM_NODELIST > $MACHINEFILE
cat $MACHINEFILE
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
TASKS_PER_NODE=`echo $SLURM_NTASKS / $SLURM_NNODES | bc`
echo "TASKS_PER_NODE=" $TASKS_PER_NODE

parallel --record-env

parallel --joblog log \
        -j $TASKS_PER_NODE \
        --slf $MACHINEFILE \
        --workdir $SLURM_SUBMIT_DIR \
        --sshdelay 0.1 \
        --env OMP_NUM_THREADS \
        sh ./run_hostname.sh {} ::: `seq 1 100`

echo "took $SECONDS sec"
}}}
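The '''TASKS_PER_NODE''' arithmetic above can also be done with the shell's built-in integer arithmetic instead of piping to bc. A minimal sketch, with the values from the #SBATCH directives hardcoded (Slurm exports these variables in a real job):

{{{
#!/bin/bash
# Hardcoded to mirror the directives above: 4 nodes x 5 tasks per node.
SLURM_NTASKS=20
SLURM_NNODES=4

# Equivalent to: TASKS_PER_NODE=`echo $SLURM_NTASKS / $SLURM_NNODES | bc`
TASKS_PER_NODE=$(( SLURM_NTASKS / SLURM_NNODES ))
echo "TASKS_PER_NODE=$TASKS_PER_NODE"
}}}

Both forms do whole-number division, so the result is the same; the shell form avoids spawning an extra process.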