Linux and Mac Command Line
You may transfer files between your workstation and the cluster on the command line using the scp command. This command behaves much like the basic Linux cp command, except you may use a remote address as the source or destination file. The syntax is as follows:
scp source_file destination_file
The following command will copy the file testfile from the /home/remoteuser/ directory on the remote server cypress1.tulane.edu to your workstation's local directory "." (a period represents the current working directory).
user@localhost> scp remoteuser@cypress1.tulane.edu:/home/remoteuser/testfile .
To copy a directory along with all its contents you will need to add the -r recursive flag. The following command will copy the simdata directory and all its contents to your local machine.
user@localhost> scp -r remoteuser@cypress1.tulane.edu:/home/remoteuser/simdata .
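Transfers in the other direction work the same way: put the local file first and the remote address second. In the sketch below the username and filename are illustrative placeholders; the command is assembled and echoed rather than executed, so it can be checked before use.

```shell
# Upload: local source first, remote destination second.
# "remoteuser" and "results.dat" are placeholders; substitute your own.
CMD="scp results.dat remoteuser@cypress1.tulane.edu:/home/remoteuser/"
echo "$CMD"
```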
There are many graphical file transfer solutions available. The following are the three most popular and are fairly intuitive. Be sure to set each to connect to the cluster using the Secure File Transfer Protocol (SFTP).
FileZilla is available on all platforms. Be careful when downloading and installing, as the hosting site, SourceForge, has begun to bundle bloatware with its downloads. FileZilla
Fetch is a full-featured file transfer client for Mac and is free to the academic community. Fetch
WinSCP is a free Windows client. WinSCP
Storage on Cypress
Every Cypress user has two locations in which to store data: a small, secure, low-performance personal home directory, and a large, secure Lustre directory shared with your research group.
Storage: home directory
Your home directory on Cypress is intended to store customized source code, binaries, scripts, analyzed results, manuscripts, and other small but important files. This directory is limited to 10 GB (10,000 MB), and is backed up. To view your current quota and usage, run the command:
quota -s -f /home
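If you just want a quick total rather than the full quota report, du can summarize your home directory's current size (a sketch; it may take a moment on large directory trees):

```shell
# Summarize total disk usage of the home directory in human-readable form.
du -sh "$HOME"
```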
Please do not use your home directory to perform simulations with heavy I/O (file read/write) usage. Instead, please use your group's Lustre project directory.
Storage: Lustre group project directory
Cypress has a 699 TB Lustre filesystem available for use by active jobs, and for sharing large files for active jobs within a research group. The Lustre filesystem has 2 Object Storage Servers (OSSs) which provide file I/O for 24 Object Storage Targets (OSTs). The Lustre filesystem is available to compute nodes via the 40 Gigabit Ethernet network. The default stripe count is set to 1.
Allocations on this filesystem are provided per project/research group. Each group is given a space allocation of 1 TB and an inode allocation of 1 million (i.e. up to 1 million files or directories) on the Lustre filesystem. If you need additional disk space to run your computations, your PI may request a quota adjustment. To request a quota adjustment, please provide details and an estimate of the disk space used/required by your computations. Allocations are based on demonstrated need and available resources.
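Because the allocation caps inodes as well as space, it can help to know how many files and directories a tree already contains. A minimal sketch, counting every entry under a given directory (the directory itself included):

```shell
# Count the inodes (files + directories) consumed by a tree.
count_inodes() {
    find "$1" | wc -l
}
count_inodes .   # count entries under the current directory
```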
The Lustre filesystem is not for bulk or archival storage of data. The filesystem is configured for redundancy and hardware fault tolerance, but is not backed up. If you require space for bulk / archival storage, please contact us, and we will take a look at the available options.
Your group's Lustre project directory is named for "your-group-name", your Linux group name as returned by the command "id -gn". Your group is free to organize its project directory as desired, but we recommend creating separate subfolders for different sets of data, or for different groups of simulations.
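One possible layout is sketched below; the project path and subfolder names are illustrative placeholders, so substitute your group's actual Lustre directory and your own naming scheme.

```shell
# Illustrative layout: one subfolder per dataset and per simulation run.
# PROJECT_DIR is a placeholder; point it at your group's Lustre directory.
PROJECT_DIR=/tmp/demo-project
mkdir -p "$PROJECT_DIR/datasets/sequencing-2019" \
         "$PROJECT_DIR/simulations/run-001"
```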
To view your group's current usage and quota, run the command:
lfs quota -h -g `id -gn` /lustre
To view your own usage, you can run:
lfs quota -h -u `id -un` /lustre
High Performance Data Transfer
For high-speed transfer of large files (1 GB or larger), Cypress is currently equipped with the data transfer tool bbcp. An excellent treatment of the use of bbcp can be found at http://pcbunn.cithep.caltech.edu/bbcp/using_bbcp.htm
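As a sketch, a typical bbcp invocation looks like the following. bbcp must be installed on both ends, and the host, filename, and tuning values shown here are illustrative assumptions (-P sets the progress-report interval in seconds, -s the number of parallel streams). The command is assembled and echoed rather than executed, so it can be inspected before running:

```shell
# Hypothetical bbcp transfer of a large file to the cluster.
# -P 10: report progress every 10 seconds; -s 8: use 8 parallel streams.
CMD="bbcp -P 10 -s 8 largefile.dat remoteuser@cypress1.tulane.edu:/home/remoteuser/"
echo "$CMD"
```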