Parallel Computing Toolbox™ Release Notes
How to Contact MathWorks
Latest news: www.mathworks.com
Sales and services: www.mathworks.com/sales_and_services
User community: www.mathworks.com/matlabcentral
Technical support: www.mathworks.com/support/contact_us
Phone: 508-647-7000

The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA 01760-2098

© COPYRIGHT 2006–2015 by The MathWorks, Inc. The software described in this document is furnished under a license agreement.
Contents

R2015a
  Support for mapreduce function on any cluster that supports parallel pools
  Sparse arrays with GPU-enabled functions
  Additional GPU-enabled MATLAB functions
  pagefun support for mrdivide and inv functions on GPUs
  Enhancements to GPU-enabled linear algebra functions
  Upgrade parallel computing products together

R2014b
  Parallelization of mapreduce on local workers
  Additional GPU-enabled MATLAB functions, including accumarray, histc, cummax, and cummin
  pagefun support for mldivide on GPUs
  Additional MATLAB functions for distributed arrays, including fft2, fftn, ifft2, ifftn, cummax, cummin, and diff

R2014a
  Additional GPU-enabled Image Processing Toolbox functions: bwdist, imreconstruct, iradon, radon
  Enhancements to GPU-enabled MATLAB functions: filter (IIR filters); pagefun (additional functions supported); interp1, interp2, conv2, reshape (performance improvements)
  Duplication of an existing job, containing some or all of its tasks

R2013b
  More MATLAB functions enabled for distributed arrays: permute, ipermute, and sortrows
  Enhancements to MATLAB functions enabled for GPUs, including ones, zeros
  gputimeit Function to Time GPU Computations
  New GPU Random Number Generator NormalTransform Option: Box-Muller
  Upgraded MPICH2 Version

R2012b
  More MATLAB functions enabled for GPUs, including convn, cov, and normest
    gpuArray Support
    MATLAB Code on the GPU
  GPU-enabled functions in Neural Network Toolbox, Phased Array System Toolbox, and Signal Processing Toolbox

R2012a
  Cluster Profiles
    New Cluster Profile Manager
    Programming with Profiles
    Profiles in Compiled Applications
  Enhanced GPU Support
    GPUArray Support

R2011b
  Task Error Properties Updated

R2011a
  Deployment of Local Workers
  New Desktop Indicator for MATLAB Pool Status
  Enhanced GPU Support
    Static Methods to Create GPUArray
    GPUArray Support
    GPUArray Indexing

R2010b
  Generic Scheduler Interface Enhancements
    Decode Functions Provided with Product
    Enhanced Example Scripts
  batch Now Able to Run Functions
  batch and matlabpool Accept Scheduler Object
  Enhanced Functions for Distributed Arrays
    qr Supports Distributed Arrays

R2009b
  New Distributed Arrays
  Renamed codistributor Functions
  Enhancements to Admin Center
  Adding or Updating File Dependencies in an Open MATLAB Pool
  Updated globalIndices Function

R2008b
  MATLAB Compiler Product Support for Parallel Computing Toolbox Applications
    Limitations
  spmd Construct
  Composite Objects
  Configuration Validation

R2008a
  parfor Syntax Has Single Usage
    Limitations
  dfeval Now Destroys Its Job When Finished

R2007b
  New Parallel for-Loops (parfor-Loops)
    Limitations
  Configurations Manager and Dialogs

R2007a
  Enhanced MATLAB Functions
  darray Function Replaces distributor Function
  rand Seeding Unique for Each Task or Lab
  Single-Threaded Computations on Workers

R2006b
  Support for Windows Compute Cluster Server (CCS)
  Windows 64 Support
  Parallel Job Enhancements
R2015a Version: 6.
Support for mapreduce function on any cluster that supports parallel pools

You can now run parallel mapreduce on any cluster that supports a parallel pool. For more information, see “Run mapreduce on a Parallel Pool”.

Sparse arrays with GPU-enabled functions

This release supports sparse arrays on a GPU. You can create a sparse gpuArray either by calling sparse with a gpuArray input, or by calling gpuArray with a sparse input. The following functions support sparse gpuArrays.
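As a sketch of the two construction routes just described (assuming a MATLAB session with a supported GPU device):

```matlab
% Route 1: create a gpuArray, then convert it to sparse on the device
A  = gpuArray(eye(4));
S1 = sparse(A);                 % sparse gpuArray

% Route 2: create a sparse array on the client, then push it to the GPU
S2 = gpuArray(sparse(eye(4)));

issparse(S1)                    % logical 1
```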
cdf2rdf gpuArray.freqspace histcounts idivide inpolygon isdiag ishermitian issymmetric istril istriu legendre nonzeros nthroot pinv planerot poly polyarea polyder polyfit polyint polyval polyvalm

Note the following for some of these functions:
• gpuArray.freqspace is a static constructor method.

For a list of MATLAB® functions that support gpuArray, see “Run Built-In Functions on a GPU”.
For more information about cluster discovery, see “Discover Clusters”. For information about configuring and verifying the required DNS SRV record on your network, see “DNS SRV Record”.

MS-MPI support for local and MJS clusters

On 64-bit Windows® platforms, Microsoft® MPI (MS-MPI) is now the default MPI implementation for local clusters on the client machine. For MATLAB job scheduler (MJS) clusters on Windows platforms, you can use MS-MPI by specifying the -useMSMPI flag with the startjobmanager command.
Improved profiler accuracy for GPU code

The MATLAB profiler now reports more accurate timings for code running on a GPU. For related information, see “Measure Performance on the GPU”.

Upgraded CUDA Toolkit version

The parallel computing products are now using CUDA® Toolkit version 6.5. To compile CUDA code for CUDAKernel or CUDA MEX files, you must use toolkit version 6.5.
Compatibility Considerations

Calling matlabpool now generates an error. You should instead use parpool to create a parallel pool.

Upgrade parallel computing products together

This version of Parallel Computing Toolbox software is accompanied by a corresponding new version of MATLAB Distributed Computing Server software.
R2014b Version: 6.
Parallelization of mapreduce on local workers

If you have Parallel Computing Toolbox installed, and your default cluster profile specifies a local cluster, then execution of mapreduce opens a parallel pool and distributes tasks to the pool workers.

Note: If your default cluster profile specifies some other cluster, the mapreduce function does not use a parallel pool.

For more information, see Run mapreduce on a Local Cluster.
pagefun support for mldivide on GPUs

For gpuArray inputs, pagefun is enhanced to support @mldivide for square matrix divisors of sizes up to 32-by-32.
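For example, batching many small solves into a single call might look like this (sizes are illustrative; the divisor pages must be square and no larger than 32-by-32):

```matlab
A = gpuArray.rand(8, 8, 100);   % 100 coefficient matrices, one per page
B = gpuArray.rand(8, 1, 100);   % 100 right-hand sides
X = pagefun(@mldivide, A, B);   % X(:,:,k) is A(:,:,k) \ B(:,:,k)
size(X)                         % 8-by-1-by-100
```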
Data Analysis on Hadoop clusters using mapreduce

Parallel Computing Toolbox and MATLAB Distributed Computing Server support the use of Hadoop® clusters for the execution environment of mapreduce applications.
Compatibility Considerations

GPU devices on 32-bit Windows machines are still supported in this release, but in a future release support will be completely removed for these devices.
R2014a Version: 6.
Number of local workers no longer limited to 12

You can now run a local cluster of more than 12 workers on your client machine. Unless you adjust the cluster profile, the default maximum size for a local cluster is the same as the number of computational cores on the machine.
Note the following enhancements for some of these functions:
• filter now supports IIR filtering.
• pagefun is enhanced to support most element-wise gpuArray functions. Also, these functions are supported: @ctranspose, @fliplr, @flipud, @mtimes, @rot90, @transpose.
• rand(___,'like',P) returns a gpuArray of random values of the same underlying class as the gpuArray P. This enhancement also applies to randi, randn.

For a list of MATLAB functions that support gpuArray, see Run Built-In Functions on a GPU.
Note the following enhancements for some of these functions:
• ifft and randi are new in support of distributed and codistributed arrays.
• rand(___,'like',D) returns a distributed or codistributed array of random values of the same underlying class as the distributed or codistributed array D. This enhancement also applies to randi, randn, and eye.

For a list of MATLAB functions that support distributed arrays, see MATLAB Functions on Distributed and Codistributed Arrays.
Compatibility Considerations

Calling matlabpool continues to work in this release, but now generates a warning. You should instead use parpool to create a parallel pool.

Removed Support for parallel.cluster.Mpiexec

Support for clusters of type parallel.cluster.Mpiexec has been removed.

Compatibility Considerations

Any attempt to use parallel.cluster.Mpiexec clusters now generates an error. As an alternative, consider using the generic scheduler interface, parallel.cluster.Generic.
R2013b Version: 6.
parpool: New command-line interface (replaces matlabpool), desktop indicator, and preferences for easier interaction with a parallel pool of MATLAB workers

• “Parallel Pool” on page 4-2
• “New Desktop Pool Indicator” on page 4-3
• “New Parallel Preferences” on page 4-4

Parallel Pool Replaces MATLAB Pool

Parallel pool syntax replaces MATLAB pool syntax for executing parallel language constructs such as parfor, spmd, Composite, and distributed. The pool is represented in MATLAB by a parallel.
Asynchronous Function Evaluation on Parallel Pool

You can evaluate functions asynchronously on one or all workers of a parallel pool. Use parfeval to evaluate a function on only one worker, or use parfevalOnAll to evaluate a function on all workers in the pool. parfeval or parfevalOnAll returns an object called a future, from which you can get the outputs of the asynchronous function evaluation.
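A minimal sketch of both calls, assuming a parallel pool is already open:

```matlab
% Evaluate on a single worker; 1 output is requested
f = parfeval(@magic, 1, 4);
M = fetchOutputs(f);            % blocks until the future is finished

% Evaluate once on every worker in the pool
g = parfevalOnAll(@rand, 1, 2);
vals = fetchOutputs(g);         % outputs from all workers
```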
Icon color and tool tips let you know if the pool is busy or ready, how large it is, and when it might time out. You can click the icon to start a pool, stop a pool, or access your parallel preferences.

New Parallel Preferences

Your MATLAB preferences now include a group of settings for parallel preferences. These settings control general behavior of clusters and parallel pools for your MATLAB session.
Compatibility Considerations

The default preference setting is to automatically start a pool when a parallel language construct requires it. If you want to make sure a pool does not start automatically, you must change your parallel preference setting. You can also work around this by making sure to explicitly start a parallel pool with parpool before encountering any code that needs a pool. By default, a parallel pool will shut down if idle for 30 minutes.
interp2 pagefun

Note the following for some of these functions:
• pagefun allows you to iterate over the pages of a gpuArray, applying @mtimes to each page. For more information, see the pagefun reference page, or type help pagefun.

For complete lists of functions that support gpuArray, see Run Built-In Functions on a GPU.
More MATLAB functions enabled for distributed arrays: permute, ipermute, and sortrows

The following functions now support distributed arrays with all forms of codistributor (1D and 2DBC), or are enhanced in their support for this release:

ipermute permute sortrows cast zeros ones nan inf true false

Note the following enhancements for some of these functions:
• ipermute, permute, and sortrows support distributed arrays for the first time in this release.
Enhancements to MATLAB functions enabled for GPUs, including ones, zeros

The following functions are enhanced in their support for gpuArray data:

zeros ones nan inf eye true false cast

Note the following enhancements for these functions:
• Z = zeros(___,'like',P) returns a gpuArray of zeros of the same complexity as gpuArray P, and same underlying class as P if class is not specified. The same behavior applies to the other constructor functions listed above.
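For instance, with an existing gpuArray prototype (a sketch assuming a supported GPU):

```matlab
P = gpuArray(single(magic(3)));   % single-precision prototype on the GPU
Z = zeros(4, 'like', P);          % 4-by-4 gpuArray of single zeros
O = ones(2, 5, 'like', P);        % same idea for the other constructors
classUnderlying(Z)                % 'single'
```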
This new option is the default 'NormalTransform' setting when using the Philox4x32-10 or Threefry4x64-20 generator. The following commands, therefore, use 'BoxMuller' for 'NormalTransform':

parallel.gpu.rng(0,'Philox4x32-10')
parallel.gpu.
Compatibility Considerations

In R2013b, any use of gpuDevice to select a GPU with compute capability 1.3 generates a warning. The device is still supported in this release, but in a future release support will be completely removed for these 1.3 devices.

Discontinued Support for parallel.cluster.Mpiexec

Support for clusters of type parallel.cluster.Mpiexec is being discontinued.

Compatibility Considerations

In R2013b, any use of parallel.cluster.Mpiexec clusters generates a warning.
R2013a Version: 6.
GPU-enabled functions in Image Processing Toolbox and Phased Array System Toolbox

More toolboxes offer enhanced functionality for some of their functions to perform computations on a GPU. For specific information about these other toolboxes, see their respective release notes. Parallel Computing Toolbox is required to access this functionality.
• arrayfun and bsxfun support indexing and accessing variables of outer functions from within nested functions.
• arrayfun supports singleton expansion of all arguments for all operations. For more information, see the arrayfun reference page.
• mldivide and mrdivide support all rectangular arrays.
• svd can perform economy factorizations.

For complete lists of MATLAB functions that support gpuArray, see Built-In Functions That Support gpuArray.
code files are necessary for their execution, then automatically attaches those files to the MATLAB pool job so that the code is available to the workers. When you use the MATLAB editor to update files on the client that are attached to a matlabpool, those updates are automatically propagated to the workers in the pool.
R2012b Version: 6.
More MATLAB functions enabled for GPUs, including convn, cov, and normest

• “gpuArray Support” on page 6-2
• “MATLAB Code on the GPU” on page 6-2

gpuArray Support

The following functions are enhanced to support gpuArray data, or are expanded in their support:

bitget bitset cond convn cov issparse mpower nnz normest pow2 var

The following functions are not methods of the gpuArray class, but they now work with gpuArray data:

blkdiag cross iscolumn ismatrix isrow isvector std

For complete lists o
For specific information about these other toolboxes, see their respective release notes. Parallel Computing Toolbox is required to access this functionality.

Performance improvements to GPU-enabled MATLAB functions and random number generation

The performance of some MATLAB functions and random number generation on GPU devices is improved in this release. You now have a choice of three random generators on the GPU: the combined multiplicative recursive MRG32K3A, the Philox4x32-10, and the Threefry4x64-20.
• mrdivide is now fully supported, and is no longer limited to accepting only scalars for its second argument.
• sparse is now fully supported for all distribution types.

This release also offers improved performance of fft functions for long vectors as distributed arrays.

Detection of MATLAB Distributed Computing Server clusters that are available for connection from user desktops through Profile Manager

You can let MATLAB discover clusters for you.
Command Window. Now this text is appended to the task’s Diary property as the text is generated, rather than waiting until the task is complete. You can read this property at any time. Diary information is accumulated only if the job’s CaptureDiary property value is true. (Note: This feature is not yet available for SOA jobs on HPC Server clusters.)
R2012a Version: 6.
New Programming Interface

This release provides a new programming interface for accessing clusters, jobs, and tasks.

General Concepts and Phrases

This table maps some of the concepts and phrases from the old interface to the new.
Previous Scheduler Objects    New Cluster Objects
localscheduler                parallel.cluster.Local
lsfscheduler                  parallel.cluster.LSF
mpiexec                       parallel.cluster.Mpiexec
pbsproscheduler               parallel.cluster.PBSPro
torquescheduler               parallel.cluster.Torque

For information on each of the cluster objects, see the parallel.Cluster reference page.

Previous Job Objects          New Job Objects
job (distributed job)         parallel.job.MJSIndependentJob
matlabpooljob                 parallel.job.
Functions and Methods

This table compares some functions and methods of the old interface to those of the new. Many functions do not have a name change in the new interface, and are not listed here. Not all functions are available for all cluster types.
Previous Scheduler Properties              New Cluster Properties
IsUsingSecureCommunication                 HasSecureCommunication
SchedulerHostname, MasterName, Hostname    Host
ParallelSubmissionWrapperScript            CommunicatingJobWrapper
ClusterSize                                NumWorkers
NumberOfBusyWorkers                        NumBusyWorkers
NumberOfIdleWorkers                        NumIdleWorkers
SubmitFcn                                  IndependentSubmitFcn
ParallelSubmitFcn                          CommunicatingSubmitFcn
DestroyJobFcn                              DeleteJobFcn
DestroyTaskFcn                             DeleteTaskFcn

Previous Job Properties                    New Job Properties
FileDependencies                           At
help parallel.job.CJSCommunicatingJob
help parallel.task.CJSTask
help parallel.job.MJSIndependentJob
help parallel.job.MJSCommunicatingJob
help parallel.task.MJSTask

There might be slight changes in the supported format for properties whose names are still the same. To get help on an individual method or property, the general form of the command is:

help parallel.obj-type.
Creating and Finding Jobs and Tasks

In the old interface, to create or find jobs or tasks, you could specify property name and value pairs in structures or cell arrays, and then provide the structure or cell array as an input argument to the function you used. In the new interface, property names and values must be pairs of separate arguments, with the property name as a string expression and its value of the appropriate type.
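For example, in the new interface each property name and value is a separate argument rather than a field in a struct (the profile name here is hypothetical):

```matlab
c = parcluster('MyProfile');            % hypothetical profile name
j = createJob(c, 'Name', 'myJob');      % name/value pairs as separate args
t = createTask(j, @rand, 1, {3, 3});
found = findJob(c, 'Name', 'myJob');    % same convention when finding jobs
```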
properties, from which the toolbox calculates which wrapper to use. For more information about these properties and their possible values, in MATLAB type

help parallel.cluster.LSF.CommunicatingJobWrapper

You can change the LSF to PBSPro or Torque in this command, as appropriate.

Enhanced Example Scripts

An updated set of example scripts is available in the product for using a generic scheduler with the new programming interface.
The Cluster Profile Manager lets you create, edit, validate, import, and export profiles, among other actions. To open the Cluster Profile Manager, select Parallel > Manage Cluster Profiles. For more information about cluster profiles and the Cluster Profile Manager, see Cluster Profiles.

Programming with Profiles

These commands provide access to profiles and the ability to create cluster objects.

Function                        Description
p = parallel.clusterProfiles    List of your profiles
parallel.exportProfile          Export profiles to specified .settings file

Profiles in Compiled Applications

Because compiled applications include the current profiles of the user who compiles, in most cases the application has the profiles it needs. When other profiles are needed, a compiled application can also import profiles that have been previously exported to a .settings file. The new ParallelProfile key supports exported parallel configuration .
Note the following enhancements and restrictions to some of these functions:
• GPUArray usage now supports all data types supported by MATLAB, except int64 and uint64.
• For the list of functions that bsxfun supports, see the bsxfun reference page.
• The full range of syntax is now supported for fft, fft2, fftn, ifft, ifft2, and ifftn.
• eig now supports all matrices, symmetric or not.
• issorted supports only vectors, not matrices.
g = gpuDevice(idx)
.
.
.
g = gpuDevice(idx)   % Resets GPU device, clears data

To deselect the current device, use gpuDevice([]) with an empty argument (as opposed to no argument). This clears the GPU of all arrays and kernels, and invalidates variables in the workspace that point to such data.

Asynchronous GPU Calculations and Wait

All GPU calculations now run asynchronously with MATLAB.
0.8632 0.2307 0.7481 0.7008 0.9901 0.7516 0.0420 0.5059

existsOnGPU(R)
  1

reset(g);   % Resets GPU device

existsOnGPU(R)
  0

R   % View GPUArray contents

Data no longer exists on the GPU. Any attempt to use the data from R generates an error.
Set CUDA Kernel Constant Memory

The new setConstantMemory method on the CUDAKernel object lets you set kernel constant memory from MATLAB. For more information, see the setConstantMemory reference page.

Latest NVIDIA CUDA Device Driver

This version of Parallel Computing Toolbox GPU functionality supports only the latest NVIDIA CUDA device driver.

Compatibility Considerations

Earlier versions of the toolbox supported earlier versions of CUDA device driver.
R2011b Version: 5.
New Job Monitor

The Job Monitor is a tool that lets you track the jobs you have submitted to a cluster. It displays the jobs for the scheduler determined by your selection of a parallel configuration. Open the Job Monitor from the MATLAB desktop by selecting Parallel > Job Monitor. Right-click a job in the list to select a command from the context menu for that job. For more information about the Job Monitor and its capabilities, see Job Monitor.
Number of Local Workers Increased to Twelve

You can now run up to 12 local workers on your MATLAB client machine. If you do not specify the number of local workers in a command or configuration, the default number of local workers is determined by the value of the local scheduler's ClusterSize property, which by default equals the number of computational cores on the client machine.
The following functions set the GPU random number generator seed and stream:

parallel.gpu.rng
parallel.gpu.RandStream

Also, arrayfun called with GPUArray data now supports rand, randi, and randn. For more information about using arrayfun to generate random matrices on the GPU, see Generating Random Numbers on the GPU.
3 4 5 9 7 2

To see that M is a GPUArray, use the whos or class function.

MATLAB Code on the GPU

GPU support is extended to include the following MATLAB code in functions called by arrayfun to run on the GPU:

rand randi randn xor

Also, the handle passed to arrayfun can reference a simple function, a subfunction, a nested function, or an anonymous function. The function passed to arrayfun can call any number of its subfunctions.
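A small sketch of arrayfun on the GPU using an anonymous function handle (assumes a supported GPU device):

```matlab
f = @(a, b) 2*(a + b) + rand();     % rand is now allowed inside the function
A = parallel.gpu.GPUArray.ones(1000);
B = parallel.gpu.GPUArray.ones(1000);
C = arrayfun(f, A, B);              % evaluated element-wise on the GPU
```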
Compatibility Considerations

If you have scripts or functions that use message identifiers that changed, you must update the code to use the new identifiers. Typically, message identifiers are used to turn off specific warning messages, or in code that uses a try/catch statement and performs an action based on a specific error identifier. For example, the 'distcomp:old:ID' identifier has changed to 'parallel:similar:ID'.
Compatibility Considerations

In past releases, when there was no error, the Error property contained an MException object with empty data fields, generated by MException('', ''). Now to determine if a task is error-free, you can query the Error property itself to see if it is empty:

didTaskError = ~isempty(t.Error)

where t is the task object you are examining.
R2011a Version: 5.
Deployment of Local Workers

MATLAB Compiler generated standalone executables and libraries from parallel applications can now launch up to eight local workers without requiring MATLAB Distributed Computing Server software.

New Desktop Indicator for MATLAB Pool Status

When you first open a MATLAB pool from your desktop session, an indicator appears in the lower-right corner of the desktop to show that this desktop session is connected to an open pool.
subsasgn subsindex subsref vertcat

and all the plotting related functions.

GPUArray Indexing

Because GPUArray now supports subsasgn and subsref, you can index into a GPUArray for assigning and reading individual elements. For example, create a GPUArray and assign the value of an element:

n = 1000;
D = parallel.gpu.GPUArray.eye(n);
D(1,n) = pi

Create a GPUArray and read the value of an element back into the MATLAB workspace:

m = 500;
D = parallel.gpu.GPUArray.
Distributed Array Support

Newly Supported Functions

The following functions are enhanced to support distributed arrays, supporting all forms of codistributor (1-D and 2DBC):

arrayfun cat reshape

Enhanced mtimes Support

The mtimes function now supports distributed arrays that use a 2-D block-cyclic (2DBC) distribution scheme, and distributed arrays that use 1-D distribution with a distribution dimension greater than 2.
Search for Cluster Head Nodes Using Active Directory

The findResource function can search Active Directory to identify your cluster head node. For more information, see the findResource reference page.

Enhanced Admin Center Support

You can now start and stop mdce services on remote hosts from Admin Center. For more information, see Start mdce Service.

New Remote Cluster Access Object

New functionality is available that lets you mirror job data from a remote cluster to a local data location.
R2010b Version: 5.
GPU Computing

This release provides the ability to perform calculations on a graphics processing unit (GPU). Features include the ability to:

• Use a GPU array interface with several MATLAB built-in functions so that they automatically execute with single- or double-precision on the GPU — functions including mldivide, mtimes, fft, etc.
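The GPU array interface in the bullet above can be sketched as follows (assuming a supported NVIDIA GPU is present):

```matlab
A = gpuArray(rand(512));      % copy client data onto the GPU
B = gpuArray(rand(512, 1));
F = fft(A);                   % executes on the GPU
x = A \ B;                    % mldivide on the GPU
y = gather(x);                % bring the result back to the client
```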
These functions are included on the workers' path. If your submit functions make use of the definitions in these decode functions, you do not have to provide your own decode functions. For example, to use the standard decode function for distributed jobs, in your submit function set MDCE_DECODE_FUNCTION to 'parallel.cluster.generic.distributedDecodeFcn'. For information on using the generic scheduler interface with submit and decode functions, see Use the Generic Scheduler Interface.
Enhanced Example Scripts

This release provides new sets of example scripts for using the generic scheduler interface.
Enhanced Functions for Distributed Arrays

qr Supports Distributed Arrays

The qr function now supports distributed arrays. For restrictions on this functionality, type

help distributed/qr

mldivide Enhancements

The mldivide function (\) now supports rectangular distributed arrays. Formerly, only square matrices were supported as distributed arrays.
transpose and ctranspose Support 2dbc

In addition to their original support for 1-D distribution schemes, the functions ctranspose and transpose now support 2-D block-cyclic ('2dbc') distributed arrays.

Inf and NaN Support Multiple Formats

Distributed and codistributed arrays now support nan, NaN, inf and Inf for not-a-number and infinity values with the following functions:

Infinity Value       Not-a-Number
codistributed.Inf    codistributed.NaN
codistributed.inf    codistributed.nan
distributed.
R2010a Version: 4.
New Save and Load Abilities for Distributed Arrays

You now have the ability to save distributed arrays from the client to a single MAT-file. Subsequently, in the client you can load a distributed array from that file and have it automatically distributed to the MATLAB pool workers. The pool size and distribution scheme of the array do not have to be the same when you load the array as they were when you saved it.
New Remote Startup of mdce Process

New command-line functionality allows you to remotely start up MATLAB Distributed Computing Server processes on cluster machines from the desktop computer. For more information, see the remotemdce reference page.

Obtaining mdce Process Version

An enhancement to the mdce command lets you get the command version by executing

mdce -version

For more information on this command, see the mdce reference page.
R2009b Version: 4.
New Distributed Arrays

A new form of distributed arrays provides direct access from the client to data stored on the workers in a MATLAB pool. Distributed arrays have the same appearance and rules of indexing as regular arrays. You can distribute an existing array from the client workspace with the command

D = distributed(X)

where X is an array in the client, and D is a distributed array with its data on the workers in the MATLAB pool.
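Continuing that idea, a distributed array can be indexed like a regular array and collected back to the client with gather (assumes an open MATLAB pool):

```matlab
X = magic(100);
D = distributed(X);    % data now lives on the pool workers
D(1, 1) = 0;           % ordinary indexing rules apply
Y = gather(D);         % collect the array back into the client workspace
```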
Old Function Name                                New Function Name
codcolon                                         codistributed.colon
codistributed(..., 'convert')                    codistributed(...)
codistributed(...) without 'convert' option      codistributed.build
codistributed(L, D) using distribution scheme of D to define that of L    codistributed.build(L, getCodistributor(D))
codistributor('1d', ...)                         Still available, but can also use codistributor1d
codistributor('2d', ...)                         codistributor('2dbc', ...
Enhancements to Admin Center

Admin Center has several small enhancements, including more conveniently located menu choices, modified dialog boxes, properties dialog boxes for listed items, etc.

Adding or Updating File Dependencies in an Open MATLAB Pool

Enhancements to the matlabpool command let you add or update file dependencies in a running MATLAB pool.
• JobDescriptionFile — A string set to the name of the XML file defining a base state for job creation

Compatibility Considerations

CCS is now just one of multiple versions of HPC Server. While 'ccs' is still acceptable as a type of scheduler for the findResource function, you can also use 'hpcserver' for this purpose. In the Configurations Manager, the new scheduler type is available by selecting File > New > hpcserver (ccs).
R2009a Version: 4.
Number of Local Workers Increased to Eight

You can now run up to eight local workers on your MATLAB client machine. If you do not specify the number of local workers in a command or configuration, the default number of local workers is determined by the value of the local scheduler's ClusterSize property, which by default is equal to the number of computational cores on the client machine.
the MATLAB Distributed Computing Server, see the online installation instructions at http://www.mathworks.com/distconfig.

New Benchmarking Demos
New benchmarking demos for Parallel Computing Toolbox can help you understand and evaluate performance of the parallel computing products. You can access these demos in the Help Browser under the Parallel Computing Toolbox node: expand the nodes for Demos then Benchmarks.
R2008b Version: 4.
MATLAB Compiler Product Support for Parallel Computing Toolbox Applications
This release offers the ability to convert Parallel Computing Toolbox applications, using MATLAB Compiler, into executables and shared libraries that can access MATLAB Distributed Computing Server. For information on this update to MATLAB Compiler, see Applications Created with Parallel Computing Toolbox Can Be Compiled.
Compatibility Considerations
Because spmd is a new keyword, it will conflict with any user-defined functions or variables of the same name. If you have any code with functions or variables named spmd, you must rename them.

Composite Objects
Composite objects provide direct access from the client (desktop) program to data that is stored on labs in the MATLAB pool. The data of variables assigned inside an spmd block is available via Composites in the client.
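The relationship between spmd blocks and Composites can be sketched as follows (the pool size is illustrative):

```matlab
matlabpool open 4

spmd
    % each lab computes its own value of q
    q = labindex * 10;
end

% in the client, q is now a Composite; index it per lab
q{1}    % value of q on lab 1
q{4}    % value of q on lab 4

matlabpool close
```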
• DestroyTaskFcn
• CancelJobFcn
• CancelTaskFcn

New toolbox functions to accommodate this ability are:
• getJobSchedulerData
• setJobSchedulerData

For more information on this new functionality, see Manage Jobs with Generic Scheduler.

Changed Function Names for Codistributed Arrays
What were known in previous releases as distributed arrays are henceforth called codistributed arrays. Some functions related to constructing and accessing codistributed arrays have changed names in this release.
For more information about this option and others, see the matlabpool reference page.
R2008a Version: 3.
Renamed Functions for Product Name Changes
As a result of the product name changes, some function names are changed in this release.

Compatibility Considerations
Two function names are changed to correspond to the new product names:
• dctconfig has been renamed pctconfig.
• dctRunOnAll has been renamed pctRunOnAll.

New batch Function
The new batch function allows you to offload work from the client to one or more workers.
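A minimal sketch of offloading a script with batch (the script name is a placeholder):

```matlab
j = batch('myScript');   % run myScript.m on a worker

wait(j)                  % block until the job finishes
load(j)                  % load the script's variables into
                         % the client workspace
destroy(j)               % remove the job and its data when done
```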
memory allocation was approximately 50 MB. The new higher limits are approximately 600 MB on 32-bit systems and 2 GB on 64-bit systems. See Object Data Size Limitations.

Changed Function Names for Distributed Arrays
Several functions related to distributed arrays have changed names in this release.

Compatibility Considerations
The following table summarizes the changes in function names relating to distributed arrays.
findResource Now Sets Properties According to Configuration
The findResource function now sets the properties on the object it creates according to the configuration identified in the function call.

Compatibility Considerations
In past releases, findResource could use a configuration to identify a scheduler, but did not apply the configuration settings to the scheduler object properties.
dfeval Now Destroys Its Job When Finished
When finished performing its distributed evaluation, the dfeval function now destroys the job it created.

Compatibility Considerations
If you have any scripts that rely on a job and its data still existing after the completion of dfeval, or that destroy the job after dfeval, those scripts will no longer work.
R2007b Version: 3.
New Parallel for-Loops (parfor-Loops)
New parallel for-loop (parfor-loop) functionality automatically executes a loop body in parallel on dynamically allocated cluster resources, allowing interleaved serial and parallel code. For details of the new parfor functionality, see Parallel for-Loops (parfor) in the Distributed Computing Toolbox™ documentation.

Limitations
P-Code Scripts
You can call P-code script files from within a parfor-loop, but a P-code script cannot contain a parfor-loop.
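A minimal parfor sketch (the computation inside the loop is illustrative):

```matlab
n = 100;
c = zeros(1, n);

parfor i = 1:n
    % iterations are independent and may run in any order
    c(i) = max(abs(eig(rand(50))));
end
```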
Compatibility Considerations
This new feature has no impact on how configurations are used in a program, only on how configurations are created and shared among users. In previous versions of the product, you modified your configurations by editing the file matlabroot/toolbox/distcomp/user/distcompUserConfig.m. Now the configuration data is stored as part of your MATLAB software preferences.
R2007a Version: 3.
Local Scheduler and Workers
A local scheduler allows you to schedule jobs and run up to four workers or labs on a single MATLAB client machine without requiring engine licenses. These workers/labs can run distributed jobs or parallel jobs, including pmode sessions, for all products for which the MATLAB client is licensed. This local scheduler and its workers do not require a job manager or third-party scheduler.

New pmode Interface
The interactive parallel mode (pmode) has a new interface.
Vectorized Task Creation
The createTask function can now create a vector of tasks in a single call when you provide a cell array of cell arrays for input arguments. For full details, see the createTask reference page.

Compatibility Considerations
In previous versions of the distributed computing products, if your task function had an input argument that was a cell array of cell arrays, your code must be modified to run the same way in this release.
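A sketch of the vectorized form (the scheduler setup and task function are illustrative):

```matlab
sched = findResource('scheduler', 'type', 'local');
j = createJob(sched);

% one call creates three tasks, each with its own argument list
createTask(j, @rand, 1, {{2,2}, {3,3}, {4,4}});

submit(j)
waitForState(j, 'finished')
out = getAllOutputArguments(j);   % one row per task
destroy(j)
```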
function returns jobs sequenced by their ID, unless otherwise specified. This change makes job manager behavior consistent with the behavior of third-party schedulers.

Compatibility Considerations
In previous versions of the distributed computing products, when using a job manager, jobs were arranged in the Jobs property or by findJob according to the status of the job.
Compatibility Considerations
In the previous version of the toolbox, the distributor function was used to define how an array was distributed. In many cases, you can replace a call to distributor with a call to darray.
R2006b Version: 3.
Support for Windows Compute Cluster Server (CCS)
Distributed Computing Toolbox software and MATLAB Distributed Computing Engine™ software now let you program jobs and run them on a Windows Compute Cluster Server. For information about programming in the toolbox to use Windows Compute Cluster Server (CCS) as your scheduler, see the findResource reference page.

Windows 64 Support
The distributed computing products now support Windows 64 (Win64) for both MATLAB client and MATLAB worker machines.
efficient use of memory and faster processing, especially for large data sets. For more information, see "Distributed Arrays and SPMD".

parfor: Parallel for-Loops
Parallel for-loops let you run a for-loop across your labs simultaneously. For more information, see Using a for-Loop Over a Distributed Range (for-drange) or the parfor reference page.
UNIX and Macintosh Utilities Renamed
In previous versions of the distributed computing products, the MDCE utilities for UNIX and Macintosh computers were called by:

nodestatus.sh
startjobmanager.sh
stopjobmanager.sh
startworker.sh
stopworker.sh

You can now call these with the following commands:

nodestatus
startjobmanager
stopjobmanager
startworker
stopworker

Note: For UNIX and Macintosh, mdce and mdce_def.sh have not been moved or renamed.
Tasks property of the job contains four task objects. The first task in the job's Tasks property corresponds to the task run by the lab whose labindex is 1, and so on, so that the ID property for the task object and labindex for the lab that ran that task have the same value. Therefore, the sequence of results returned by the getAllOutputArguments function corresponds to the value of labindex and to the order of tasks in the job's Tasks property.
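The correspondence described above can be sketched as follows (the parallel job setup is illustrative):

```matlab
sched = findResource('scheduler', 'type', 'local');
pjob = createParallelJob(sched);
set(pjob, 'MinimumNumberOfWorkers', 4, 'MaximumNumberOfWorkers', 4);

createTask(pjob, @() labindex, 1, {});

submit(pjob)
waitForState(pjob, 'finished')
out = getAllOutputArguments(pjob);
% out{k} holds the result from the lab with labindex == k
```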