Our standard nodes on Cypress allow you to use 3.2 GB of memory per core (64 GB per node). This is sufficient for many types of jobs, and if your job stays within that amount you do not need to do anything. If your job requires a significant amount of memory (roughly more than 16 GB per node), it is recommended that you explicitly request the amount of memory you need. To do this, use the `--mem` option of sbatch and specify the number of megabytes (MB) of memory required per node. For example, to request 32 GB of memory per node, include the following line in your sbatch script:

{{{
#SBATCH --mem=32000
}}}

You can request up to 64 GB of memory per node for your jobs in this way.

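For context, the `--mem` request is just another `#SBATCH` directive and sits alongside the other options in your job script. Below is a minimal sketch of a complete script; the job name, core count, time limit, and program name are placeholders rather than values required by the system:

{{{
#!/bin/bash
#SBATCH --job-name=memtest
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
#SBATCH --mem=32000

# Run your application (placeholder executable name)
./my_program
}}}
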
If your jobs require more than 64 GB of memory per node, we have some large-memory nodes, currently in an experimental stage, that are available for testing. To use these nodes, add a request for the "bigmem" partition to your job; you can request up to 128 GB of memory per node (the maximum available at this time).

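One way to do this, using Slurm's standard `--partition` option together with the same per-node memory convention (in MB) as above, is the following sketch:

{{{
#SBATCH --partition=bigmem
#SBATCH --mem=128000
}}}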