Changes between Version 26 and Version 27 of cypress/R
- Timestamp:
- 08/19/25 14:32:43 (42 hours ago)
cypress/R
}}}

=== Using Intel's Math Kernel Library (MKL) ===

Observe in the output above that some R modules have names ending with the string 'intel'.
These modules have been constructed with links to Intel's Math Kernel Library (MKL) for performing certain computations using the Xeon Phi coprocessors.
See [[cypress/XeonPhi]].

=== Using CentOS 7 Operating System ===

Also, the later versions of R are available only on compute nodes running the later version, CentOS 7, of the operating system; these nodes form a separate SLURM partition. For more information, see [[cypress/using#Requestingpartitioncentos7withglibc2.17|Requesting partition centos7 (batch)]] and also [[cypress/using#Requestingpartitioncentos7|Requesting partition centos7 (interactive)]].

== Running R Interactively ==

=== Start an interactive session using idev ===

In the following we'll use the latest version of R available, which runs only on compute nodes running the CentOS 7 operating system.

==== For Workshop ====

If your account is in the group '''workshop''', then in order to use only 2 CPUs per node, and thus allow the few available nodes in the '''workshop7''' partition to be shared among many users, do this.
{{{
[tulaneID@cypress1 ~]$ export MY_PARTITION=workshop7
[tulaneID@cypress1 ~]$ export MY_QUEUE=workshop
}}}

{{{
[tulaneID@cypress1 ~]$ idev -c 2
Requesting 1 node(s) task(s) to workshop queue of workshop7 partition
1 task(s)/node, 2 cpu(s)/task, 0 MIC device(s)/node
Time: 0 (hr) 60 (min).
0d 0h 60m
Submitted batch job 2706829
JOBID=2706829 begin on cypress01-009
--> Creating interactive terminal session (login) on node cypress01-009.
--> You have 0 (hr) 60 (min).
--> Assigned Host List : /tmp/idev_nodes_file_tulaneID
Last login: Tue Aug 19 10:30:58 2025 from cypress2.cm.cluster
[tulaneID@cypress01-009 ~ at 12:12:10]$
}}}

==== Non-workshop ====

{{{
export MY_PARTITION=centos7
}}}

{{{
[tulaneID@cypress1 ~]$ idev
...
--> Creating interactive terminal session (login) on node cypress01-121.
--> You have 0 (hr) 60 (min).
--> Assigned Host List : /tmp/idev_nodes_file_tulaneID
Last login: Wed Aug 21 15:56:37 2019 from cypress1.cm.cluster
[tulaneID@cypress01-121 ~]$
}}}

{{{
[tulaneID@cypress01-121 ~]$ module load R/4.4.1
[tulaneID@cypress01-121 ~]$ module list
Currently Loaded Modulefiles:
  1) slurm/14.03.0           6) mpc/1.2.1    11) pcre2/10.38            16) libtiff/4.6.0
  2) idev                    7) gcc/9.5.0    12) tcl/8.6.11             17) tre/0.8.0
  3) bbcp/amd64_rhel60       8) zlib/1.2.8   13) tk/8.6.11              18) binutils/2.37
  4) gmp/6.2.1               9) bzip2/1.0.6  14) libpng/1.6.37          19) java-openjdk/17.0.7+7
  5) mpfr/4.1.0             10) xz/5.2.2     15) libjpeg-turbo/3.0.1    20) R/4.4.1
}}}

{{{
[tulaneID@cypress01-121 ~]$ R

R version 4.4.1 (2024-06-14) -- "Race for Your Life"
Copyright (C) 2024 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu

R is free software and comes with ABSOLUTELY NO WARRANTY.
...
}}}

To resolve the above error, we should first ensure that the required R package, in this case the R package '''doParallel''', is available and installed in your environment. For a range of options for installing R packages - depending on the desired level of reproducibility - see the section [#InstallingRPackages Installing R Packages on Cypress].
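The availability check can also be scripted, as a minimal sketch. Assumptions here: your default user library (e.g. under your home directory) is writable, and the CRAN mirror URL is one choice among many - run interactively, install.packages( ) will instead prompt you to pick a mirror.

```r
# Sketch: install doParallel only if it is not already available.
# Assumption: the default personal library path is writable.
if (!requireNamespace("doParallel", quietly = TRUE)) {
  install.packages("doParallel", repos = "https://cloud.r-project.org")
}

# detectCores() ships with base R's 'parallel' package, no install needed.
ncores <- parallel::detectCores()
cat("cores visible on this node:", ncores, "\n")
```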
'''For Workshop''' :
If your account is in the group '''workshop''', use [[cypress/InstallingRPackages#Alternative1-defaulttohomesub-directory | Alternative 1]] - responding to the R prompts as needed - for installing R packages such as in the following.

Thus we need to install the R package '''doParallel'''.

{{{
[tulaneID@cypress01-121 ~]$ R
...
> install.packages('doParallel')
...
> q()
Save workspace image? [y/n/c]: n
[tulaneID@cypress01-121 ~]$ exit
[tulaneID@cypress1 ~]$
}}}

Now that we have resolved our package dependency, we can expect future jobs requiring '''doParallel''' to run without errors.

Also, notice in the above that we have exited the interactive session, since we no longer need it in order to submit batch jobs.

== Download Sample Scripts ==

If you have not done so already, '''download the sample files''' by:

{{{[tulaneID@cypress1 ~]$ git clone https://hidekiCCS:@bitbucket.org/hidekiCCS/hpc-workshop.git}}}

Then use the '''cp''' command to copy the batch scripts and R scripts to your current directory.
{{{cp hpc-workshop/R/* .}}}

== Running an R script in Batch mode ==

{{{#!sh
#!/bin/bash
#SBATCH --partition=centos7    # Partition
#SBATCH --qos=normal           # Quality of Service
#SBATCH --job-name=R           # Job Name
#SBATCH --time=00:01:00        # WallTime
#SBATCH --nodes=1              # Number of Nodes
#SBATCH --ntasks-per-node=1    # Number of Tasks per Node
#SBATCH --cpus-per-task=1      # Number of threads per task (OMP threads)

module load R/4.4.1
Rscript myRscript.R
}}}

'''For Workshop''' :
If your account is in the group '''workshop''', modify the SLURM script like:
{{{#!sh
#!/bin/bash
#SBATCH --partition=workshop7  # Partition
#SBATCH --qos=workshop         # Quality of Service
##SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
#SBATCH --job-name=R           # Job Name
#SBATCH --time=00:01:00        # WallTime
#SBATCH --nodes=1              # Number of Nodes
#SBATCH --ntasks-per-node=1    # Number of Tasks per Node
#SBATCH --cpus-per-task=1      # Number of threads per task (OMP threads)

module load R/4.4.1
Rscript myRscript.R
}}}

In the first example, we will use the built-in R function '''Sys.getenv( )''' to get the SLURM environment variable from the operating system.

Let's look at the downloaded sample file '''bootstrap.R''' containing the following code.
{{{#!r
...
}}}

{{{#!sh
#!/bin/bash
#SBATCH --partition=centos7    # Partition
#SBATCH --qos=normal           # Quality of Service
#SBATCH --job-name=R           # Job Name
#SBATCH --time=00:01:00        # WallTime
#SBATCH --nodes=1              # Number of Nodes
#SBATCH --ntasks-per-node=1    # Number of Tasks per Node
#SBATCH --cpus-per-task=16     # Number of threads per task (OMP threads)

module load R/4.4.1

Rscript bootstrap.R
}}}

'''For Workshop''' :
If your account is in the group '''workshop''', modify the SLURM script like:
{{{#!sh
#!/bin/bash
#SBATCH --partition=workshop7  # Partition
#SBATCH --qos=workshop         # Quality of Service
##SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
#SBATCH --job-name=R           # Job Name
#SBATCH --time=00:01:00        # WallTime
#SBATCH --nodes=1              # Number of Nodes
#SBATCH --ntasks-per-node=1    # Number of Tasks per Node
#SBATCH --cpus-per-task=16     # Number of threads per task (OMP threads)

module load R/4.4.1

Rscript bootstrap.R
}}}

The downloaded sample file '''bootstrap.sh''' contains the above SLURM script code, and we can submit as shown below.

Also, note that since we did not specify an output file in the SLURM script, the output will be written to slurm-<!JobNumber>.out.
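The bootstrap.R listing is elided in this view, so here is a hypothetical sketch of such a script - not the actual downloaded file - combining '''Sys.getenv( )''' with a doParallel backend. It assumes doParallel is installed, as arranged in the previous section.

```r
# Hypothetical sketch, not the downloaded bootstrap.R:
# read the CPU count that SLURM exports, register a doParallel
# backend, and time a toy bootstrap of the sample median.
library(doParallel)  # also attaches foreach, iterators, parallel

nworkers <- as.numeric(Sys.getenv("SLURM_CPUS_PER_TASK", unset = "1"))
cl <- makeCluster(nworkers)
registerDoParallel(cl)

x <- rnorm(10000)
timing <- system.time({
  medians <- foreach(i = 1:1000, .combine = c) %dopar% {
    median(sample(x, replace = TRUE))
  }
})
stopCluster(cl)
print(timing)
```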
For example:

{{{
[tulaneID@cypress2 ~]$ sbatch bootstrap.sh
Submitted batch job 774081
[tulaneID@cypress2 ~]$ cat slurm-774081.out
Loading required package: foreach
Loading required package: iterators
...
elapsed
  2.954
[tulaneID@cypress2 ~]$
}}}

=== Passing Parameters ===

The disadvantage of the above approach is that it is system-specific. If we move our code to a machine that uses PBS-Torque as its manager ([[https://hpc.loni.org/resources/hpc/system.php?system=QB2|LONI QB2]], for example), we have to change our source code. An alternative method, which results in a more portable code base, uses command-line arguments to pass the value of our environment variables into the script.

Let's look at the downloaded file '''bootstrapWargs.R''' containing the following code.
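The listing itself is elided in this view; the argument-reading idiom at the heart of such a script (a sketch - the downloaded file may differ) is R's '''commandArgs( )''':

```r
# Sketch: read the worker count from the command line, so the script
# does not depend on any one scheduler's environment variables.
args <- commandArgs(trailingOnly = TRUE)
nworkers <- if (length(args) >= 1) as.numeric(args[1]) else 1
cat("using", nworkers, "worker(s)\n")
```

On Cypress this script is invoked as {{{Rscript bootstrapWargs.R $SLURM_CPUS_PER_TASK}}}; on a PBS-Torque system the same unmodified script could be passed {{{$PBS_NUM_PPN}}} instead.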
{{{#!r
...
}}}

{{{#!sh
#!/bin/bash
#SBATCH --partition=centos7    # Partition
#SBATCH --qos=normal           # Quality of Service
#SBATCH --job-name=R           # Job Name
#SBATCH --time=00:01:00        # WallTime
#SBATCH --nodes=1              # Number of Nodes
#SBATCH --ntasks-per-node=1    # Number of Tasks per Node
#SBATCH --cpus-per-task=16     # Number of threads per task (OMP threads)

module load R/4.4.1

Rscript bootstrapWargs.R $SLURM_CPUS_PER_TASK
}}}

'''For Workshop''' :
If your account is in the group '''workshop''', modify the SLURM script like:
{{{#!sh
#!/bin/bash
#SBATCH --partition=workshop7  # Partition
#SBATCH --qos=workshop         # Quality of Service
##SBATCH --qos=normal          ### Quality of Service (like a queue in PBS)
#SBATCH --job-name=R           # Job Name
#SBATCH --time=00:01:00        # WallTime
#SBATCH --nodes=1              # Number of Nodes
#SBATCH --ntasks-per-node=1    # Number of Tasks per Node
#SBATCH --cpus-per-task=16     # Number of threads per task (OMP threads)

module load R/4.4.1

Rscript bootstrapWargs.R $SLURM_CPUS_PER_TASK
}}}

The downloaded file '''bootstrapWargs.sh''' contains the above SLURM script code.

Now submit as in the following.