== Requesting memory for your job ==

Our standard nodes on Cypress allow you to use up to 64 GB of memory per node (3.2 GB per core requested). This is sufficient for many types of jobs, and you do not need to do anything if your job uses less than this amount of memory. If your job requires more memory to run, use the `--mem` option of sbatch to request a larger amount. The value is given in megabytes, so to request 16 GB of memory per node, put the following in your sbatch script:

{{{
#SBATCH --mem=16000
}}}

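As a point of reference, a complete job script using this option might look like the sketch below. The job name, wall time, and program name are placeholders, and any partition or QOS options your jobs normally use should be added as usual:

{{{
#!/bin/bash
#SBATCH --job-name=memdemo
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=01:00:00
#SBATCH --mem=16000

# The directives above request one node with 16 GB of memory for one hour.
./my_program
}}}
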
If you need more than 64 GB of memory per node, we have a few larger memory nodes available. To request 128 GB nodes for your job, put the following in your sbatch script:

{{{
#SBATCH --mem=128000
}}}

or, to request 256 GB memory nodes, use the following:

{{{
#SBATCH --mem=256000
}}}

We have a limited number of the larger memory nodes, so please only request a larger amount of memory if your job requires it. You can ask SLURM for an estimate of the memory used by a job you have previously run with `sacct -j <jobid> -o maxvmsize`. For example:

{{{
$ sacct -j 2660 -o maxvmsize
MaxVMSize
----------

39172520K
}}}

This shows that job 2660 used close to 40 GB of virtual memory at its peak.

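Note that MaxVMSize reports peak virtual memory, which can be noticeably larger than the physical memory a job actually occupied; the MaxRSS field (peak resident set size) is often a closer guide to what you should request with `--mem`. You can ask sacct for both fields at once, for example:

{{{
$ sacct -j 2660 -o jobid,maxvmsize,maxrss
}}}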