== What must a checkpointed job do? ==
A checkpointed job '''must''' be able to do the following.

* The application must record its progress at one or both of the following times:
 * '''At regular time intervals''' on its own. (This option is preferred for long-running jobs, where system crashes are more likely.) '''or'''
 * '''After catching a terminate signal''' ('''SIGTERM''') from the operating system. The signal is trapped '''in the job script''' so the application can finish recording before walltime termination. For example, in your job script:
  * either via sbatch directives
{{{
# --- Append to output and error files ---
#SBATCH --open-mode=append
# --- Enable automatic requeue ---
#SBATCH --requeue
# --- Send SIGTERM 2 minutes before walltime ---
#SBATCH --signal=TERM@120
}}}
  * or via the '''timeout''' command followed by '''scontrol requeue'''
{{{
# Stop the simulation after 23 hours; if it times out (or otherwise
# exits nonzero), requeue the job so it can resume from its checkpoint
timeout 23h ./my_simulation || scontrol requeue $SLURM_JOB_ID
}}}
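  * or by trapping SIGTERM directly in the job script so the application can record its state before exiting. A minimal sketch, not a production script: the step counter and the file name `checkpoint.txt` are illustrative, and the script signals itself only to stand in for SLURM's pre-walltime signal:
{{{
#!/bin/bash
# Illustrative only: "checkpoint.txt" and the counting loop are
# stand-ins for a real simulation and its checkpoint file.

checkpoint() {
    echo "last_completed_step=$step" > checkpoint.txt   # record progress
    echo "caught SIGTERM, checkpoint written"
    exit 0
}
trap checkpoint TERM   # run checkpoint() when SIGTERM arrives

step=0
while true; do
    step=$((step + 1))
    if [ "$step" -eq 5 ]; then
        kill -TERM $$    # stand-in for SLURM's pre-walltime SIGTERM
    fi
done
}}}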

* The application must record both the work already performed and the current (or most recent) '''state of execution''', or '''state'''.

* When the job is '''requeued''', the application must read the recorded '''state''' and resume execution from that point, with the previous work preserved.

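The record-and-resume cycle above can be sketched as a restartable loop. This is a hedged illustration, not a prescribed implementation: the file names `state.txt` and `work.log` and the 10-step workload are assumptions for the example.
{{{
#!/bin/bash
# Illustrative sketch: on requeue, resume from the recorded state.
STATE_FILE=state.txt

# Read the recorded state, if any; otherwise start from scratch.
if [ -f "$STATE_FILE" ]; then
    start=$(cat "$STATE_FILE")
else
    start=0
fi

for ((i = start + 1; i <= 10; i++)); do
    echo "working on step $i" >> work.log   # the work already performed
    echo "$i" > "$STATE_FILE"               # the recorded state
done
}}}
On the first run the loop executes steps 1 through 10; if the job were killed after step 4 and requeued, `state.txt` would hold `4` and the loop would resume at step 5 with `work.log` intact.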
== Why is job checkpointing important and beneficial? ==
* Checkpointed jobs compensate for the strict walltime limits enforced by Cypress, LONI, and most other production clusters. (See [wiki:cypress/about#SLURMresourcemanager SLURM (resource manager)].)
* A parallel MPI job can fail as soon as a single node in use crashes; checkpointing at regular intervals (especially for long-running jobs) limits the work lost to such crashes.
* In cloud-based job queues with high availability, jobs can be strictly pre-empted (via SIGTERM); a checkpointed job can catch the signal, record its state, and resume after pre-emption.