Changes between Version 2 and Version 3 of Matlab-Slurm


Timestamp:
Jul 13, 2021 1:54:48 PM
Author:
fuji
= Submitting Jobs from MATLAB Command Window =
This document provides the steps to configure MATLAB to submit jobs to Cypress, retrieve results, and debug errors.
There are two ways to do this: one is [https://wiki.hpc.tulane.edu/trac/wiki/Matlab-Slurm#MATLABclientonCypress 'submitting MATLAB jobs from MATLAB running on Cypress'], the other is [https://wiki.hpc.tulane.edu/trac/wiki/Matlab-Slurm#MATLABclientonyourlocalcomputer 'submitting MATLAB jobs from your laptop/desktop'].

== MATLAB client on Cypress ==


=== Installation and Configuration ===
In order to submit MATLAB jobs to Cypress from your laptop/desktop, you need to install custom MATLAB plugin scripts that are configured to interact with the Slurm job scheduler on Cypress.  Use the link below to download the script package to your laptop/desktop.

[https://wiki.hpc.tulane.edu/trac/attachment/wiki/Matlab-Slurm/TU.nonshared.R2021a.zip TU.nonshared.R2021a.zip]

Download the archive file and start MATLAB.  The archive file should be unzipped in the location returned by calling

{{{
>> userpath
}}}

See [https://www.mathworks.com/help/matlab/ref/userpath.html here] for how to view or change the default user work folder in MATLAB.
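For example, assuming the archive was downloaded into MATLAB's current folder, it can be extracted into the user work folder in one step:

{{{
>> % Extract the plugin scripts into the default user work folder
>> unzip('TU.nonshared.R2021a.zip', userpath)
}}}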

==== Run the configCluster.m Script to Create a Cluster Profile ====
Configure MATLAB to run parallel jobs on your cluster by calling '''configCluster'''.  This only needs to be called '''once''' per version of MATLAB.

{{{
>> configCluster
}}}

Submission to Cypress requires SSH credentials.  You will be prompted for your SSH username and password or identity file (private key).  The username and location of the private key will be stored in MATLAB for future sessions.


== Configuring Jobs ==
Prior to submitting a job, we can specify various parameters to pass to it, such as queue, e-mail, walltime, etc.

{{{
>> % Get a handle to the cluster
>> c = parcluster;
}}}

=== Optional ===
{{{
>> % Specify an account to use for MATLAB jobs
>> c.AdditionalProperties.AccountName = 'account-name';
}}}

{{{
>> % Specify e-mail address to receive notifications about your job
>> c.AdditionalProperties.EmailAddress = 'user-id@tulane.edu';
}}}

{{{
>> % Specify Debug mode
>> c.AdditionalProperties.EnableDebug = 'false';
}}}

{{{
>> % Specify memory to use for MATLAB jobs, per core (MB)
>> c.AdditionalProperties.MemUsage = '4000';
}}}

{{{
>> % Specify processors per node
>> c.AdditionalProperties.ProcsPerNode = '2';
}}}

{{{
>> % Specify Quality of Service
>> c.AdditionalProperties.QoS = 'qos-value';
}}}

{{{
>> % Specify a queue to use for MATLAB jobs
>> c.AdditionalProperties.QueueName = 'queue-name';
}}}

{{{
>> % Specify the walltime (e.g. 5 hours)
>> c.AdditionalProperties.WallTime = '05:00:00';
}}}

Save changes after modifying !AdditionalProperties for the above changes to persist between MATLAB sessions.
{{{
>> c.saveProfile
}}}

To see the values of the current configuration options, display !AdditionalProperties.
{{{
>> % To view current properties
>> c.AdditionalProperties
}}}

Unset a value when no longer needed.
{{{
>> % Turn off email notifications
>> c.AdditionalProperties.EmailAddress = '';
>> c.saveProfile
}}}

== INTERACTIVE JOBS - MATLAB client on Cypress ==
To run an interactive pool job on the cluster, continue to use parpool as you’ve done before.
{{{
>> % Get a handle to the cluster
>> c = parcluster;
}}}
{{{
>> % Open a pool of 64 workers on the cluster
>> p = c.parpool(64);
}}}
Rather than running on the local machine, the pool can now run across multiple nodes on the cluster.
{{{
>> % Run a parfor over 1000 iterations
>> parfor idx = 1:1000
      a(idx) = …
   end
}}}
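The loop body above is elided in the original; as a minimal, hypothetical stand-in, a complete loop that squares each index would look like:

{{{
>> % Illustrative example only: fill a with the squares of 1..1000
>> parfor idx = 1:1000
      a(idx) = idx^2;
   end
}}}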
Once we’re done with the pool, delete it.
{{{
>> % Delete the pool
>> p.delete
}}}

== INDEPENDENT BATCH JOB ==
Use the batch command to submit asynchronous jobs to the cluster.  The batch command will return a job object which is used to access the output of the submitted job.  See the MATLAB documentation for more help on batch.
{{{
>> % Get a handle to the cluster
>> c = parcluster;
}}}
{{{
>> % Submit job to query where MATLAB is running on the cluster
>> j = c.batch(@pwd, 1, {}, ...
       'CurrentFolder','.', 'AutoAddClientPath',false);
}}}
{{{
>> % Query job for state
>> j.State
}}}
{{{
>> % If state is finished, fetch the results
>> j.fetchOutputs{:}
}}}
{{{
>> % Delete the job after results are no longer needed
>> j.delete
}}}
To retrieve a list of currently running or completed jobs, call parcluster to retrieve the cluster object.  The cluster object stores an array of jobs that were run, are running, or are queued to run.  This allows us to fetch the results of completed jobs.  Retrieve and view the list of jobs as shown below.
{{{
>> c = parcluster;
>> jobs = c.Jobs;
}}}
Once we’ve identified the job we want, we can retrieve the results as we’ve done previously.
fetchOutputs is used to retrieve function output arguments; if calling batch with a script, use load instead.  Data that has been written to files on the cluster needs to be retrieved directly from the file system (e.g. via ftp).
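As a sketch of that script-based variant, assuming a hypothetical script file my_script.m on the MATLAB path that assigns some workspace variables:

{{{
>> % Submit a script (not a function) as a batch job
>> j = c.batch('my_script', 'CurrentFolder','.', 'AutoAddClientPath',false);
>> % Wait for the job to finish, then load the script's variables into the client
>> j.wait
>> load(j)
}}}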
To view results of a previously completed job:
{{{
>> % Get a handle to the job with ID 2
>> j2 = c.Jobs(2);
}}}
NOTE: You can view a list of your jobs, as well as their IDs, using the above c.Jobs command.
{{{
>> % Fetch results for job with ID 2
>> j2.fetchOutputs{:}
}}}

== PARALLEL BATCH JOB ==
Users can also submit parallel workflows with the batch command.  Let’s use the following example for a parallel job, which is saved as {{{parallel_example.m}}}.

{{{
function t = parallel_example(iter)

if nargin==0, iter = 8; end

disp('Start sim')

t0 = tic;
parfor idx = 1:iter
     A(idx) = idx;
     pause(2)
end
t = toc(t0);

disp('Sim Completed')
}}}
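Before submitting to the cluster, the function can be sanity-checked on the client; with no parallel pool open, parfor simply runs the iterations serially:

{{{
>> % Runs serially (no pool open): 4 iterations x 2-second pause, roughly 8 seconds
>> t = parallel_example(4)
}}}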

This time, when we use the batch command to run a parallel job, we’ll also specify a MATLAB Pool.
{{{
>> % Get a handle to the cluster
>> c = parcluster;
}}}
{{{
>> % Submit a batch pool job using 4 workers for 16 simulations
>> j = c.batch(@parallel_example, 1, {16}, 'Pool',4, ...
       'CurrentFolder','.', 'AutoAddClientPath',false);
}}}
{{{
>> % View current job status
>> j.State
}}}
{{{
>> % Fetch the results after a finished state is retrieved
>> j.fetchOutputs{:}
ans =
        8.8872
}}}
The job ran in 8.89 seconds using four workers.  Note that these jobs will always request N+1 CPU cores, since one worker is required to manage the batch job and pool of workers.  For example, a job that needs eight workers will consume nine CPU cores.
We’ll run the same simulation but increase the Pool size.  This time, to retrieve the results later, we’ll keep track of the job ID.
NOTE: For some applications, there will be a diminishing return when allocating too many workers, as the overhead may exceed computation time.
{{{
>> % Get a handle to the cluster
>> c = parcluster;
}}}
{{{
>> % Submit a batch pool job using 8 workers for 16 simulations
>> j = c.batch(@parallel_example, 1, {16}, 'Pool', 8, ...
       'CurrentFolder','.', 'AutoAddClientPath',false);
}}}
{{{
>> % Get the job ID
>> id = j.ID
id =
        4
}}}
{{{
>> % Clear j from workspace (as though we quit MATLAB)
>> clear j
}}}

Once we have a handle to the cluster, we’ll call the findJob method to search for the job with the specified job ID.
{{{
>> % Get a handle to the cluster
>> c = parcluster;
}}}
{{{
>> % Find the old job
>> j = c.findJob('ID', 4);
}}}
{{{
>> % Retrieve the state of the job
>> j.State
ans =
finished
}}}
{{{
>> % Fetch the results
>> j.fetchOutputs{:}
ans =
        4.7270
}}}
The job now runs in 4.73 seconds using eight workers.  Run the code with different numbers of workers to determine the ideal number to use.
Alternatively, to retrieve job results via a graphical user interface, use the Job Monitor (Parallel > Monitor Jobs).

== DEBUGGING ==
If a serial job produces an error, call the getDebugLog method to view the error log file.  When submitting independent jobs with multiple tasks, specify the task number.
{{{
>> c.getDebugLog(j.Tasks(3))
}}}
For Pool jobs, only specify the job object.
{{{
>> c.getDebugLog(j)
}}}
When troubleshooting a job, the cluster admin may request the scheduler ID of the job.  This can be retrieved by calling schedID.
{{{
>> schedID(j)
ans =
25539
}}}


== TO LEARN MORE ==
To learn more about the MATLAB Parallel Computing Toolbox, check out these resources:
* [http://www.mathworks.com/products/parallel-computing/code-examples.html Parallel Computing Coding Examples]
* [http://www.mathworks.com/help/distcomp/index.html Parallel Computing Documentation]
* [http://www.mathworks.com/products/parallel-computing/index.html Parallel Computing Overview]
* [http://www.mathworks.com/products/parallel-computing/tutorials.html Parallel Computing Tutorials]
* [http://www.mathworks.com/products/parallel-computing/videos.html Parallel Computing Videos]
* [http://www.mathworks.com/products/parallel-computing/webinars.html Parallel Computing Webinars]