Use the {{{batch}}} command to submit asynchronous jobs to the cluster. The batch command will return a job object which is used to access the output of the submitted job. See the MATLAB documentation for more help on [https://www.mathworks.com/help/parallel-computing/batch.html batch].

=== Running Serial Job ===
Let’s use the following example for a serial job, which is saved as {{{serial_example.m}}}.
{{{
% Serial Example

disp('Start sim')

t0 = tic;
for idx = 1:8
    A(idx) = idx;
    pause(2)
end
t = toc(t0);

disp('Sim Completed')
}}}

{{{
>> % Get a handle to the cluster
>> c = parcluster;
}}}

{{{
>> % Below, submit a batch job that calls the 'serial_example.m' script.
>> % Also set the parameter AutoAddClientPath to false so that MATLAB won't complain when paths on
>> % your desktop don't exist on the cluster compute nodes (this is expected and can be ignored).

>> myjob = batch(c,'serial_example','AutoAddClientPath',false)
}}}

{{{
>> % Wait for the job to finish.
>> wait(myjob)
}}}

{{{
>> % Display the job diary (this is the MATLAB standard output text, if any)
>> diary(myjob)
--- Start Diary ---
Start sim
Sim Completed

--- End Diary ---
}}}

{{{
>> % Load the 'A' array (computed in serial_example) from the results of job 'myjob':
>> load(myjob,'A');
>> A

A =

     1     2     3     4     5     6     7     8
}}}

{{{
>> % Query job for state
>> myjob.State
}}}

{{{
>> % If state is finished, fetch the results
>> myjob.fetchOutputs{:}

ans =

  struct with fields:

      A: [1 2 3 4 5 6 7 8]
    ans: [1×1 struct]
    idx: 8
      t: 16.0127
     t0: 1626209674755060
}}}
{{{
>> % Delete the job after results are no longer needed
>> myjob.delete
}}}
To retrieve a list of currently running or completed jobs, call parcluster to retrieve the cluster object. The cluster object stores an array of jobs that were run, are running, or are queued to run. This allows us to fetch the results of completed jobs. Retrieve and view the list of jobs as shown below.

{{{
>> c = parcluster;
>> jobs = c.Jobs;
}}}

Once we’ve identified the job we want, we can retrieve the results as we’ve done previously.
fetchOutputs is used to retrieve function output arguments; if calling batch with a script, use load instead. Data that has been written to files on the cluster needs to be retrieved directly from the file system (e.g. via ftp).
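For files written on the cluster, MATLAB's built-in FTP client is one option for pulling them back to the desktop. The sketch below is only an illustration: the hostname and file name are placeholders, and your cluster may require a different transfer method (e.g. scp or sftp).
{{{
>> % Hypothetical example: copy a result file from the cluster to the local working directory
>> f = ftp('cluster.example.edu');   % placeholder hostname
>> mget(f, 'results.mat');           % placeholder file name
>> close(f);
}}}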
To view results of a previously completed job:

{{{
>> % Get a handle to the job with ID 2
>> j2 = c.Jobs(2);
}}}

NOTE: You can view a list of your jobs, as well as their IDs, using the above c.Jobs command.

{{{
>> % Fetch results for job with ID 2
>> j2.fetchOutputs{:}
}}}

=== Running Parallel Job ===
Users can also submit parallel workflows with the batch command. Let’s use the following example for a parallel job, which is saved as {{{parallel_example.m}}}. It uses the '''parfor''' statement to parallelize the '''for''' loop.

{{{
disp('Start sim')

t0 = tic;
parfor idx = 1:8
    A(idx) = idx;
    pause(2)
end
t = toc(t0);

disp('Sim Completed')
}}}

In the next example, we will run a parallel job using 8 processors on a single node.
This time, when we use the batch command to run a parallel job, we’ll also specify a MATLAB Pool.

{{{
>> % Get a handle to the cluster
>> c = parcluster;
}}}

{{{
>> % Submit a batch pool job using 8 workers for 8 iterations
>> myjob = batch(c,'parallel_example','Pool', 8, 'AutoAddClientPath',false)
}}}

{{{
>> % Wait for the job to finish.
>> wait(myjob)
}}}

{{{
>> % Fetch the results
>> myjob.fetchOutputs{:}

ans =

  struct with fields:

     A: [1 2 3 4 5 6 7 8]
     t: 2.3438
    t0: 1626210921701584
}}}

The job ran in 2.3438 seconds using eight workers, compared to roughly 16 seconds for the serial run. '''Note that these jobs will always request N+1 CPU cores''', since one worker is required to manage the batch job and pool of workers. For example, a job that needs eight workers will consume nine CPU cores.
NOTE: For some applications, there will be a diminishing return when allocating too many workers, as the overhead may exceed the computation time.
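One way to see this from the client (a sketch; the exact display depends on your MATLAB release) is to inspect the job object's NumWorkersRange property, which includes the extra management worker:
{{{
>> % For 'Pool', 8 the underlying job requests 8+1 = 9 workers in total
>> myjob.NumWorkersRange
}}}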

==== Retrieve the Results Later ====
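If you keep track of the job ID when you submit (for example by noting {{{myjob.ID}}}), you can reconnect to the job from a later MATLAB session and pull its results. Below is a minimal sketch; the ID value 4 is only an example, and for a script-based job we use load rather than fetchOutputs, as described above.
{{{
>> % Get a handle to the cluster and look the job up by its ID
>> c = parcluster;
>> myjob = c.findJob('ID', 4);

>> % For a script-based job, load its workspace variables into the client
>> load(myjob,'A');
}}}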