For example, take the container built on [SingularityDockerhub this page]. In an interactive session started with '''idev --partition=centos7''':

{{{
[fuji@cypress01-013 SingularityTest]$ singularity exec ubuntu_14.04.sif cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.6 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.6 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
}}}

The user inside the container is the same as the user on the host machine.

{{{
[fuji@cypress01-013 SingularityTest]$ singularity exec ubuntu_14.04.sif whoami
fuji
}}}
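
Because Singularity maps the calling user into the container, the numeric user ID also matches. A quick check, using the same image as above:

{{{
# The UID reported inside the container matches the host UID.
id -u                                      # host UID
singularity exec ubuntu_14.04.sif id -u    # same number inside the container
}}}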

Singularity has a dedicated syntax to open an interactive shell prompt in a container:

{{{
[fuji@cypress01-013 SingularityTest]$ singularity shell ubuntu_14.04.sif
Singularity> pwd
/home/fuji
Singularity> whoami
fuji
Singularity> cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.6 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.6 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
Singularity> exit
exit
}}}

=== Use host directories ===
The container filesystem is read-only, so if you want to write output files you must do it in a bind-mounted host directory.
Your home directory, '/home/userid', which contains your data and software, is mounted by default.

The following environment variable mounts '/lustre/project/hpcstaff/fuji' as '/home/fuji' in the container.
{{{
export SINGULARITY_BINDPATH=/lustre/project/hpcstaff/fuji:/home/fuji
}}}
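
The same mount can also be requested per invocation with the '--bind' flag instead of the environment variable; a sketch using the paths above:

{{{
# Equivalent to the SINGULARITY_BINDPATH setting above, but scoped to a
# single command: mount the Lustre directory as /home/fuji in the container.
singularity exec --bind /lustre/project/hpcstaff/fuji:/home/fuji \
    ubuntu_14.04.sif ls /home/fuji
}}}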


=== Run containers in a batch job ===
For example,
{{{
#!/bin/bash
#SBATCH --job-name=Singularity   # Job Name
#SBATCH --partition=centos7      # Partition
#SBATCH --qos=normal             # Quality of Service
#SBATCH --time=0-00:10:00        # Wall clock time limit in Days-HH:MM:SS
#SBATCH --nodes=1                # Node count required for the job
#SBATCH --ntasks-per-node=1      # Number of tasks to be launched per Node
#SBATCH --cpus-per-task=1        # Number of threads per task (OMP threads)

# Load Singularity module
module load singularity/3.9.0

# Set $TMPDIR in the container to /tmp, keeping $TMPDIR on the host (/local/tmp/...)
export SINGULARITYENV_TMPDIR=/tmp

# Mount the lustre directory to home, $TMPDIR to /tmp
export SINGULARITY_BINDPATH=/lustre/project/hpcstaff/fuji:/home/fuji,$TMPDIR:/tmp

# Run container
singularity exec ubuntu_14.04.sif ls
}}}

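
Assuming the script above is saved as 'singularity_job.sh' (a hypothetical filename), it is submitted and monitored like any other Slurm job:

{{{
# Submit the job script and check its status in the queue;
# 'singularity_job.sh' is a hypothetical name for the script above.
sbatch singularity_job.sh
squeue -u $USER
}}}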