
Build Singularity Containers from Dockerhub

Docker is a container virtualization environment that can establish development or runtime environments without modifying the base operating system, and it runs on Mac and Windows as well as Linux. On Cypress, Singularity can build container images directly from Docker Hub, so Docker-based environments can be used on the cluster.

To use Singularity on Cypress, you first have to log in to one of the CentOS 7 compute nodes.

idev --partition=centos7

Load the Singularity module,

module load singularity/3.9.0

Build an image from Docker Hub, for example Ubuntu 14.04,

singularity pull docker://ubuntu:14.04
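
As a quick check, you can run a command inside the pulled image. By default, 'singularity pull' names the file after the repository and tag, so the command below assumes the pull produced 'ubuntu_14.04.sif' in the current directory,

singularity exec ubuntu_14.04.sif cat /etc/os-release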

Example: FastDTLmapper

This tool depends on many Python packages and requires both Python 2 and Python 3. Fortunately, the developer provides Docker images, so we can use one of them to generate a Singularity image.

On a CentOS 7 compute node (obtained with 'idev', as above),

module load singularity/3.9.0
singularity pull docker://ghcr.io/moshi4/fastdtlmapper:latest

This command creates 'fastdtlmapper_latest.sif', which is the Singularity image. You can rename or transfer it as you like.
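
As an optional sanity check (not required), you can inspect the image's metadata,

singularity inspect fastdtlmapper_latest.sif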

To display the tool's help message,

singularity exec fastdtlmapper_latest.sif FastDTLmapper -h

To run it with the minimum test dataset, assume the example dataset has been extracted under /lustre/project/group/user/fastdtlmapper, where 'group' is your group name and 'user' is your user name. Then run,

export SINGULARITYENV_TMPDIR=/tmp
export SINGULARITY_BINDPATH=/lustre/project/group/user:/home/user,$TMPDIR:/tmp
singularity exec fastdtlmapper_latest.sif FastDTLmapper -i fastdtlmapper/example/minimum_dataset/fasta/ -t fastdtlmapper/example/minimum_dataset/species_tree.nwk -o fastdtlmapper/output_minimum
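
If the run finishes successfully, the results should appear under the directory given with -o. A minimal check, assuming the same working directory as the command above,

ls fastdtlmapper/output_minimum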

For Slurm batch jobs,

#!/bin/bash
#SBATCH --job-name=fastDTLmapper # Job Name
#SBATCH --partition=centos7    # Partition
#SBATCH --qos=normal           # Quality of Service
#SBATCH --time=0-24:00:00      # Wall clock time limit in Days-HH:MM:SS
#SBATCH --nodes=1              # Node count required for the job
#SBATCH --ntasks-per-node=1    # Number of tasks to be launched per Node
#SBATCH --cpus-per-task=20     # Number of threads per task (OMP threads)
#SBATCH --output=try_fastDTLmapper.out       ### File in which to store job output
#SBATCH --error=try_fastDTLmapper.err        ### File in which to store job error messages
 
# Load Singularity module
module load singularity/3.9.0
 
# Set $TMPDIR in the container to /tmp, keeping the host $TMPDIR (/local/tmp/...)
export SINGULARITYENV_TMPDIR=/tmp
 
# Mount the lustre directory to home, $TMPDIR to /tmp
export SINGULARITY_BINDPATH=/lustre/project/group/user:/home/user,$TMPDIR:/tmp
 
# Run container
singularity exec fastdtlmapper_latest.sif FastDTLmapper -i fastdtlmapper/example/minimum_dataset/fasta/ -t fastdtlmapper/example/minimum_dataset/species_tree.nwk -o fastdtlmapper/output_minimum
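
To submit the batch job, save the script to a file (the name 'run_fastdtlmapper.sh' below is just an example) and submit it with sbatch; squeue shows its status,

sbatch run_fastdtlmapper.sh
squeue -u $USER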