Our standard nodes on Cypress will allow you to use up to 64 GB of memory per node (3.2 GB per core requested). This should be sufficient for many types of jobs, and you do not need to do anything if your job uses less than this amount. If your job requires more memory, use the `--mem` option of sbatch, which takes a value in megabytes. For example, to request 16 GB of memory per node, put the following in your sbatch script:

{{{
#SBATCH --mem=16000
}}}
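
In practice, this directive sits alongside the rest of your sbatch header. As a minimal sketch of a complete script (the job name, walltime, and executable here are illustrative placeholders, not Cypress defaults):

{{{
#!/bin/bash
#SBATCH --job-name=mem_demo      # hypothetical name, for illustration only
#SBATCH --nodes=1                # run on a single node
#SBATCH --ntasks-per-node=1      # one task on that node
#SBATCH --time=01:00:00          # one-hour walltime; adjust for your job
#SBATCH --mem=16000              # 16 GB of memory per node, in megabytes

./my_program                     # placeholder for your own executable
}}}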

If you need more than 64 GB of memory per node, we have a few larger memory nodes available. To request 128 GB nodes for your job, put the following in your sbatch script:
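{{{
#SBATCH --mem=128000    # 128 GB, in megabytes (a sketch following the 16 GB example above)
}}}

As with the earlier example, the value is given in megabytes, so 128000 corresponds to a 128 GB node.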