
==== Requesting memory for your job ====

Our standard nodes on Cypress allow you to use up to 64 GB of memory per node (3.2 GB per core requested). This should be sufficient for many types of jobs, and you do not need to do anything if your job uses less memory than this. If your job requires more memory to run, use the `--mem` option of sbatch to request a larger amount; the value is given in megabytes. For example, to request 16 GB of memory per node, put the following in your sbatch script:

{{{
#SBATCH --mem=16000
}}}

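For context, here is a sketch of how the memory request might sit alongside other common directives in a complete submission script. The job name, core count, wall time, and program name below are illustrative placeholders, not Cypress-specific requirements:

{{{
#!/bin/bash
#SBATCH --job-name=mem_example   # placeholder job name
#SBATCH --ntasks=1               # one task
#SBATCH --cpus-per-task=1        # one core for that task
#SBATCH --time=01:00:00          # one hour of wall time (adjust as needed)
#SBATCH --mem=16000              # 16 GB of memory per node (value in MB)

./my_program                     # placeholder for your own executable
}}}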

If you need more than 64 GB of memory per node, we have a few larger-memory nodes available. To request a 128 GB node for your job, put the following in your sbatch script:

{{{
#SBATCH --mem=128000
}}}

Or, to request a 256 GB memory node, use the following:

{{{
#SBATCH --mem=256000
}}}
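If you are unsure which memory configurations exist, one way to check (assuming the standard SLURM client tools are available on the login node) is to ask sinfo to list the memory configured on each node; `%N`, `%c`, and `%m` are standard sinfo output-format fields for the node name, CPU count, and memory in MB:

{{{
$ sinfo -N -o "%N %c %m"
}}}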

We have a limited number of these larger-memory nodes, so please only request a larger amount of memory if your job requires it. You can ask SLURM how much memory a previously run job actually used with `sacct -j <jobid> -o maxvmsize`. For example:

{{{
$ sacct -j 2660 -o maxvmsize
MaxVMSize
----------

39172520K
}}}
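MaxVMSize is the peak virtual memory used by the job's steps. If you also want the peak resident (physical) memory, which is often a closer guide for sizing `--mem`, you can request additional standard sacct fields in the same way:

{{{
$ sacct -j <jobid> -o jobid,jobname,maxrss,maxvmsize,elapsed
}}}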

This shows that job 2660 used close to 40 GB of virtual memory at its peak.
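As a rough worked example of turning that number into a request: sacct reports the value in KB, while `--mem` takes MB, and 39172520 / 1024 ≈ 38255 MB, so a resubmission of a similar job might ask for something like the following (40000 is an illustrative value with a little headroom, not a site requirement):

{{{
#SBATCH --mem=40000
}}}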