MATLAB
MATLAB (matrix laboratory) is a proprietary programming language developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, and creation of user interfaces.
You can run MATLAB code on the Cypress cluster, but the GUI (Graphical User Interface) is not available on compute nodes.
Running MATLAB interactively
Start an interactive session,
[fuji@cypress2 ~]$ idev
Requesting 1 node(s) task(s) to normal queue of defq partition
1 task(s)/node, 20 cpu(s)/task, 2 MIC device(s)/node
Time: 0 (hr) 60 (min).
Submitted batch job 47343
JOBID=47343 begin on cypress01-063
--> Creating interactive terminal session (login) on node cypress01-063.
--> You have 0 (hr) 60 (min).
Last login: Mon Jun  8 20:18:50 2015 from cypress1.cm.cluster
Load the module
[fuji@cypress01-063 ~]$ module load matlab
Run MATLAB in the command-line window,
[fuji@cypress01-063 ~]$ matlab
MATLAB is selecting SOFTWARE OPENGL rendering.

                  < M A T L A B (R) >
        Copyright 1984-2015 The MathWorks, Inc.
         R2015a (8.5.0.197613) 64-bit (glnxa64)
                   February 12, 2015

To get started, type one of these: helpwin, helpdesk, or demo.
For product information, visit www.mathworks.com.

Academic License

>>
This drops you at the MATLAB command line, where you can run MATLAB code, but no graphics are available.
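Although no graphics windows can be opened, you can still render figures off-screen and save them to files. A minimal sketch (the file name sine.png is just an example):

% Off-screen plotting sketch: write the figure to a file instead of a window
x = 0:0.01:2*pi;
f = figure('Visible','off');      % do not attempt to open a display
plot(x, sin(x));
print(f, '-dpng', 'sine.png');    % save the plot as sine.png
close(f);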
Running MATLAB in batch mode
You can also submit your MATLAB job to the batch (compute) nodes on Cypress. To do so, first make sure that the MATLAB module is loaded, and then launch matlab with the -nodesktop -nodisplay -nosplash options, as shown in the sample SLURM job script below.
#!/bin/bash
#SBATCH --qos=normal            # Quality of Service
#SBATCH --job-name=matlab       # Job Name
#SBATCH --time=24:00:00         # WallTime
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=1     # Number of tasks (MPI processes)
#SBATCH --cpus-per-task=1       # Number of threads per task (OMP threads)

module load matlab

matlab -nodesktop -nodisplay -nosplash < mymatlabprog.m
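Here mymatlabprog.m stands for your own MATLAB script; as a purely hypothetical placeholder, it could be as simple as:

% mymatlabprog.m (hypothetical placeholder for your own script)
A = magic(5);                              % build a small test matrix
fprintf('Sum of elements = %d\n', sum(A(:)));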
Running MATLAB in Parallel with Multiple Threads
MATLAB supports multithreaded computation for a number of functions and expressions that are combinations of element-wise functions. These functions automatically execute on multiple threads if the data size is large enough. Note that on Cypress, by default, MATLAB runs with a single thread; you have to explicitly request more threads in your code. For example,
%
% Matlab Test Code "FuncTest.m"
%
LASTN = maxNumCompThreads(str2num(getenv('SLURM_JOB_CPUS_PER_NODE')));
nth = maxNumCompThreads;
fprintf('Number of Threads = %d.\n',nth);
N = 2^(14);
A = randn(N);
st = cputime;
tic;
B = sin(A);
realT = toc;
cpuT = cputime - st;
fprintf('Real Time = %f(sec)\n',realT);
fprintf('CPU Time  = %f(sec)\n',cpuT);
fprintf('Ratio     = %f\n',cpuT / realT);
In the above code, the line
LASTN = maxNumCompThreads(str2num(getenv('SLURM_JOB_CPUS_PER_NODE')));
sets the number of threads. The environment variable SLURM_JOB_CPUS_PER_NODE takes the value set in the SLURM script, for example,
#!/bin/bash
#SBATCH --qos=normal            # Quality of Service
#SBATCH --job-name=matlabMT     # Job Name
#SBATCH --time=1:00:00          # WallTime
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=1     # Number of tasks (MPI processes)
#SBATCH --cpus-per-task=10      # Number of threads per task (OMP threads)

module load matlab

matlab -nodesktop -nodisplay -nosplash -r "FuncTest; exit;"
The number of cores per process (task) is set by --cpus-per-task=10. This value is passed to SLURM_JOB_CPUS_PER_NODE, and you can use it to determine the number of threads used in the code.
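If the same code is also run outside SLURM (for example during an interactive test), getenv('SLURM_JOB_CPUS_PER_NODE') returns an empty string and the call above will fail. A hedged variant of that line, falling back to one thread, might look like:

% Hedged sketch: fall back to a single thread when not running under SLURM
nc = getenv('SLURM_JOB_CPUS_PER_NODE');
if isempty(nc)
    maxNumCompThreads(1);              % interactive / non-SLURM run
else
    maxNumCompThreads(str2num(nc));    % value set by --cpus-per-task
end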
Explicit parallelism
The Parallel Computing Toolbox is available on Cypress. In the current MATLAB version, you can use up to 12 workers for shared parallel operations on a single node. Our license does not include the MATLAB Distributed Computing Server, so multi-node parallel operations are not supported.
Workers are like independent processes. If you want to use 4 workers, you have to request at least 4 cores within a node.
#!/bin/bash
#SBATCH --qos=normal            # Quality of Service
#SBATCH --job-name=matlabPool   # Job Name
#SBATCH --time=1:00:00          # WallTime
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=1     # Number of tasks (MPI processes)
#SBATCH --cpus-per-task=4       # Number of threads per task (OMP threads)

module load matlab

matlab -nodesktop -nodisplay -nosplash -r "CreateWorker; ParforTest; exit;"
CreateWorker.m is a MATLAB script that creates the workers.
%
% Parallel Tool Box Test "CreateWorker.m"
%
if isempty(getenv('SLURM_JOB_CPUS_PER_NODE'))
    nWorker = 1;
else
    nWorker = min(12,str2num(getenv('SLURM_JOB_CPUS_PER_NODE')));
end
% Create Workers
parpool(nWorker);
%
ParforTest.m is a sample 'parfor' test code,
% parfor "ParforTest.m" % iter = 10000; sz = 50; a = zeros(1,iter); % fprintf('Computing...\n'); tic; parfor i = 1:iter a(i) = max(svd(randn(sz))); end toc; % poolobj = gcp('nocreate'); % Returns the current pool if one exists. If no pool, do not create new one. if isempty(poolobj) poolobj = gcp; end fprintf('Number of Workers = %d.\n',poolobj.NumWorkers); %
Running MATLAB with Automatic Offload
Internally MATLAB uses Intel MKL Basic Linear Algebra Subroutines (BLAS) and Linear Algebra package (LAPACK) routines to perform the underlying computations when running on Intel processors.
Intel MKL includes an Automatic Offload (AO) feature that enables computationally intensive Intel MKL functions to automatically and transparently offload part of the workload to attached Intel Xeon Phi coprocessors.
As a result, MATLAB performance can benefit from Intel Xeon Phi coprocessors via the Intel MKL AO feature when problem sizes are large enough to amortize the cost of transferring data to the coprocessors.
In the SLURM script, make sure that the option --gres=mic:1 is set and that the intel-psxe module, as well as the MATLAB module, is loaded.
#!/bin/bash
#SBATCH --qos=normal            # Quality of Service
#SBATCH --job-name=matlabAO     # Job Name
#SBATCH --time=1:00:00          # WallTime
#SBATCH --nodes=1               # Number of Nodes
#SBATCH --ntasks-per-node=1     # Number of tasks (MPI processes)
#SBATCH --cpus-per-task=1       # Number of threads per task (OMP threads)
#SBATCH --gres=mic:1            # Number of Co-Processors

module load matlab
module load intel-psxe

export MKL_MIC_ENABLE=1

matlab -nodesktop -nodisplay -nosplash -r "MatTest; exit;"
Note that
export MKL_MIC_ENABLE=1
enables Intel MKL Automatic Offload (AO).
The sample code is below:
%
% Matrix test "MatTest.m"
%
A = rand(10000, 10000);
B = rand(10000, 10000);
tic;
C = A * B;
realT = toc;
fprintf('Real Time = %f(sec)\n',realT);
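AO only offloads work when the matrices are large enough to amortize the cost of moving data to the coprocessor, so a variant of MatTest.m that times several sizes can help show where (or whether) the offload pays off. A hypothetical sketch:

% Hypothetical sketch: time A*B for several sizes to see when AO pays off
for N = [1000 4000 8000 12000]
    A = rand(N);
    B = rand(N);
    tic;
    C = A * B;
    fprintf('N = %5d : %8.3f (sec)\n', N, toc);
end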