You can also submit your Python job to the batch nodes (compute nodes) on Cypress. Inside your SLURM script, include a command to load the desired Python module. Then invoke '''python''' on your Python script.

{{{#!bash
#!/bin/bash
#SBATCH --qos=workshop          # Quality of Service
#SBATCH --partition=workshop    # Partition
#SBATCH --job-name=python       # Job Name
#SBATCH --time=00:01:00         # WallTime
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=1     # Number of tasks (MPI processes)
#SBATCH --cpus-per-task=1       # Number of threads per task (OMP threads)

module load anaconda
python mypythonscript.py
}}}

== Running a Parallel Python Job ==

The exact configuration of your parallel job script will depend on the flavor of parallelism that you choose for your Python script.

As an example, we will use the Parallel Python (pp) package that we installed above. Parallel Python uses the shared-memory model of parallelism (analogous to OpenMP). Let's run the [http://www.parallelpython.com/content/view/17/31/ sum of primes] example from the Parallel Python website.

We need to tell the script how many cores it should use. The syntax is

{{{
python sum_primes.py [ncpus]
}}}

We can pass that value to the script using the appropriate SLURM environment variable: since this is a shared-memory job, the number of cores requested with '''--cpus-per-task''' is available inside the job as '''SLURM_CPUS_PER_TASK'''.
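
As a minimal sketch, a job script for this example might look like the following. The qos/partition values and the '''anaconda''' module are the same ones used in the serial example above; the walltime and the choice of 4 cores are just illustrative values, and the script is assumed to be named '''sum_primes.py''' in the submission directory.

{{{#!bash
#!/bin/bash
#SBATCH --qos=workshop              # Quality of Service
#SBATCH --partition=workshop        # Partition
#SBATCH --job-name=parallel_python  # Job Name
#SBATCH --time=00:05:00             # WallTime
#SBATCH --nodes=1                   # Shared-memory job: a single node
#SBATCH --ntasks-per-node=1         # One task (one Python process)
#SBATCH --cpus-per-task=4           # Cores for Parallel Python to use

module load anaconda

# Pass the requested core count to the script via the SLURM environment variable
python sum_primes.py $SLURM_CPUS_PER_TASK
}}}

Because Parallel Python works within a single node's shared memory, we keep '''--nodes''' and '''--ntasks-per-node''' at 1 and change only '''--cpus-per-task''' to scale the job up or down.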

[[cypress/IntoToHpcWorkshop2015|Return to Workshop Homepage]]