If your jobs require more than 64 GB of memory (RAM), please contact us with details of your simulations. We have a number of large-memory nodes in an experimental/testing stage.

== Requesting memory for your job ==

If your jobs require a significant amount of memory (more than about 16 GB per node), we recommend that you explicitly request the amount of memory you need. To do this, use the `--mem` option of sbatch and specify the required memory per node in megabytes (MB). For example, to request 32 GB of memory per node, include the following line in your sbatch script:

{{{
#SBATCH --mem=32000
}}}

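Depending on the Slurm version installed on the cluster, `--mem` may also accept unit suffixes, so the same request can be written in gigabytes (note that `32G` means 32768 MB, slightly more than `32000`):

{{{
#SBATCH --mem=32G
}}}
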
You can request up to 64 GB of memory per node for your jobs in this way.

If your jobs require more than 64 GB of memory per node, we have some large-memory nodes in an experimental stage that are available for testing. To use them, your job must also request the "bigmem" partition. For example, the following requests 128 GB of memory per node (the maximum available at this time):

{{{
#SBATCH --mem=128000
#SBATCH --partition=bigmem
}}}

Because the number of large-memory nodes is very limited at this time, only the "interactive" and "normal" QOSes are available when using the "bigmem" partition.
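
Putting the pieces together, a minimal bigmem batch script might look like the following sketch; the job name, run time, and program name are placeholders for illustration, not site requirements:

{{{
#!/bin/bash
#SBATCH --job-name=bigmem-test   # placeholder job name
#SBATCH --partition=bigmem       # experimental large-memory nodes
#SBATCH --qos=normal             # "interactive" and "normal" are the only QOSes allowed with bigmem
#SBATCH --mem=128000             # 128 GB per node (the current maximum)
#SBATCH --time=01:00:00          # placeholder run time

./my_simulation                  # placeholder for your program
}}}

Submit it with `sbatch` as usual; Slurm will hold the job until a large-memory node with the requested memory is free.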