Singularity
The current installation of Singularity on Cypress works on CentOS 7 nodes only.
Singularity allows you to create and run containers that package up pieces of software in a way that is portable and reproducible. You can build a container with Singularity on your laptop and then run it on many of the largest HPC clusters, including Cypress. Your container is a single file, so you do not have to worry about installing all the software you need on each different operating system and machine.
Possible uses for Singularity on Cypress
- Run an application built for a different distribution of Linux than the Cypress OS (CentOS).
- Reproduce an environment to run a workflow created by someone else.
- Run a series of applications (a 'pipeline') that includes applications built on different platforms.
- Run an application from Singularity Hub, Docker Hub, or BioContainers without actually installing anything.
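As a sketch of the last use case (the image name here is illustrative; any public Docker Hub image works), a container can be pulled directly from Docker Hub without installing the software inside it:

```shell
# Pull an Ubuntu 14.04 image from Docker Hub into a local SIF file
singularity pull ubuntu_14.04.sif docker://ubuntu:14.04

# The resulting file can then be run anywhere Singularity is installed
singularity exec ubuntu_14.04.sif cat /etc/os-release
```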
Build Singularity Container on Cypress
If you need a Singularity image with specific software installed, please request a Cypress HPC Consultation through ServiceNow.
Run Singularity Container on Cypress
For example, consider the container built on this page. Start an interactive session on a CentOS 7 node with idev --partition=centos7, then run a command inside the container:
[fuji@cypress01-013 SingularityTest]$ singularity exec ubuntu_14.04.sif cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.6 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.6 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
The user in the container is the same as in the host machine.
[fuji@cypress01-013 SingularityTest]$ singularity exec ubuntu_14.04.sif whoami
fuji
Singularity has a dedicated syntax to open an interactive shell prompt in a container:
[fuji@cypress01-013 SingularityTest]$ singularity shell ubuntu_14.04.sif
Singularity> pwd
/home/fuji
Singularity> whoami
fuji
Singularity> cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.6 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.6 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
Singularity> exit
exit
Use host directories
The container filesystem is read-only, so if you want to write output files you must do so in a bind-mounted host directory. /home/userid is mounted by default.
The following environment variable mounts /lustre/project/hpcstaff/fuji as /home/fuji in the container:
export SINGULARITY_BINDPATH=/lustre/project/hpcstaff/fuji:/home/fuji
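For instance, to write results from the read-only container into a project directory, bind a writable host directory into the container and write there (the paths below are illustrative; multiple bind pairs are separated by commas):

```shell
# Bind the Lustre home replacement and a writable output directory
export SINGULARITY_BINDPATH=/lustre/project/hpcstaff/fuji:/home/fuji,/lustre/project/hpcstaff/fuji/output:/data

# Files written to /data inside the container appear in the host output directory
singularity exec ubuntu_14.04.sif touch /data/result.txt
```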
Run containers in a batch job
For example, the following Slurm batch script runs a command inside the container:
#!/bin/bash
#SBATCH --job-name=Singularity   # Job Name
#SBATCH --partition=centos7      # Partition
#SBATCH --qos=normal             # Quality of Service
#SBATCH --time=0-00:10:00        # Wall clock time limit in Days-HH:MM:SS
#SBATCH --nodes=1                # Node count required for the job
#SBATCH --ntasks-per-node=1      # Number of tasks to be launched per Node
#SBATCH --cpus-per-task=1        # Number of threads per task (OMP threads)

# Load Singularity module
module load singularity/3.9.0

# Set $TMPDIR in the container to /tmp, keeping $TMPDIR in the host (/local/tmp/...)
export SINGULARITYENV_TMPDIR=/tmp

# Mount the Lustre directory to home, $TMPDIR to /tmp
export SINGULARITY_BINDPATH=/lustre/project/hpcstaff/fuji:/home/fuji,$TMPDIR:/tmp

# Run container
singularity exec ubuntu_14.04.sif ls
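Assuming the batch script above is saved as singularity_job.sh (the filename is arbitrary), it is submitted and monitored like any other Slurm job:

```shell
# Submit the job to the scheduler
sbatch singularity_job.sh

# Check its status in the queue
squeue -u $USER
```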
By default, the cache is stored in ~/.singularity; this location can be customized with the environment variable SINGULARITY_CACHEDIR. The subcommand singularity cache can be used to manage the cache.
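A minimal sketch of cache management (the cache directory path below is illustrative):

```shell
# Move the cache off the home directory, e.g. onto Lustre,
# so pulled image layers do not fill a small home quota
export SINGULARITY_CACHEDIR=/lustre/project/hpcstaff/fuji/.singularity

# List what is cached, then remove entries that are no longer needed
singularity cache list
singularity cache clean
```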