Changes between Version 31 and Version 32 of cypress/about
Timestamp: 04/30/15 11:20:55
== Requesting memory for your job ==

Our standard nodes on Cypress will allow you to use up to 64 GB of memory per node. This should be sufficient for many types of jobs, and you do not need to do anything if your job uses less than this amount of memory. If your jobs require more memory to run, use the `--mem` option of sbatch to request a larger amount of memory.

v31:
Specify the number of megabytes (MB) of memory per node required. For example, in your sbatch script, you would include the following line to request 128 GB of memory per node:

v32:
To request 128 GB nodes for your job, put the following in your sbatch script:

{{{
#SBATCH --mem=128000
}}}

v31:
We have nodes with 64 GB, 128 GB, or 256 GB of memory each. We have a limited number of the larger memory nodes, so please only request a larger amount of memory if your job requires it.

v32:
or, to request 256 GB memory nodes, use the following:

{{{
#SBATCH --mem=256000
}}}

We have a limited number of the larger memory nodes, so please only request a larger amount of memory if your job requires it. You can ask SLURM for an estimate of the amount of memory used by jobs you have previously run using `sacct -j <jobid> -o maxvmsize`. For example:

{{{
$ sacct -j 2660 -o maxvmsize
 MaxVMSize
----------

39172520K
}}}

This shows that job 2660 allocated close to 40 GB of memory.
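For reference, here is a minimal sketch of a complete sbatch script that requests a 128 GB node. The job name, node and task counts, walltime, and program name below are placeholders for illustration, not values taken from this page:

{{{
#!/bin/bash
#SBATCH --job-name=mem_test     # placeholder job name
#SBATCH --nodes=1               # run on a single node
#SBATCH --ntasks-per-node=1     # one task on that node
#SBATCH --time=01:00:00         # placeholder walltime of one hour
#SBATCH --mem=128000            # 128 GB of memory per node, specified in MB

./my_program                    # placeholder for your own executable
}}}

Submit the script with `sbatch` as usual; SLURM will only place the job on a node with at least 128 GB of memory.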
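Note that `sacct` reports MaxVMSize in kilobytes (the trailing `K`), so the value above works out to 39172520 / 1024 / 1024 ≈ 37 GB, i.e. close to 40 GB. If you also want the physical memory high-water mark rather than virtual memory, `sacct` can report the MaxRSS field alongside MaxVMSize on standard SLURM installations (run `sacct -e` to list the fields available on your system). Using the same example job id as above:

{{{
$ sacct -j 2660 -o jobid,maxvmsize,maxrss
}}}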