Specifications

Defining, Solving, and Assessing Optimization Problems
The Optimization Toolbox includes the most widely used methods for performing minimization and maximization. The toolbox implements both standard and large-scale algorithms, enabling you to solve problems by exploiting their sparsity or structure. The command-line interface gives you access to tools to define, run, and assess your optimization.

You can further manipulate and diagnose your optimization using the diagnostic outputs from the optimization methods. Using an output function, you can also write results to files, create your own stopping criteria, and write your own graphical user interfaces to run the toolbox solvers.
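As a minimal sketch of the output-function mechanism (the quadratic objective, log file name, and stopping threshold below are illustrative assumptions, not taken from this page), an output function can be attached to a solver through optimset, log intermediate results to a file, and impose a custom stopping test:

function runWithOutputFcn
    % Minimal sketch: attach a custom output function to a toolbox solver.
    % The objective, log file name, and stopping threshold are assumptions.
    options = optimset('OutputFcn', @logAndStop, 'Display', 'iter');
    x0 = [3; -2];
    [x, fval] = fminunc(@quadObj, x0, options)

function f = quadObj(x)
    % Simple quadratic objective used only for illustration.
    f = sum(x.^2);

function stop = logAndStop(x, optimValues, state)
    % The solver calls this at each iteration with the current point,
    % iteration data, and the solver state ('init', 'iter', or 'done').
    stop = false;
    if strcmp(state, 'iter')
        % Write intermediate results to a file (hypothetical file name).
        fid = fopen('fminunc_log.txt', 'a');
        fprintf(fid, '%d  %g\n', optimValues.iteration, optimValues.fval);
        fclose(fid);
        % Custom stopping criterion: halt once the objective is small enough.
        if optimValues.fval < 1e-8
            stop = true;
        end
    end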
Nonlinear Optimization and Multi-Objective Optimization
Unconstrained Nonlinear Optimization
The Optimization Toolbox uses three methods to solve unconstrained nonlinear minimization problems: quasi-Newton, Nelder-Mead, and trust-region.

The quasi-Newton method uses a mixed quadratic and cubic line search procedure and the BFGS formula for updating the approximation of the Hessian matrix.

Nelder-Mead is a direct-search method that uses only function values (it does not require derivatives) and handles non-smooth objective functions.
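As an illustrative sketch (the Rosenbrock objective and starting point are assumptions, not from this page), the quasi-Newton method is reached through fminunc with the medium-scale option, and Nelder-Mead through fminsearch:

function unconstrainedDemo
    % Minimal sketch of the unconstrained solvers; the Rosenbrock objective
    % and starting point are illustrative assumptions.
    x0 = [-1.2, 1];

    % Quasi-Newton (BFGS with a mixed quadratic/cubic line search):
    % select the medium-scale algorithm in fminunc.
    options = optimset('LargeScale', 'off');
    [xqn, fqn] = fminunc(@rosen, x0, options);

    % Nelder-Mead direct search: uses only function values, so it can also
    % be applied to non-smooth objectives.
    [xnm, fnm] = fminsearch(@rosen, x0);

function f = rosen(x)
    % Rosenbrock's function, a standard unconstrained test problem.
    f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;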
Trust-region is used for large-scale problems where sparsity or structure can be exploited. Based on an interior-reflective Newton method, it enables you to calculate Hessian-times-vector products in a function without forming the Hessian matrix explicitly. You can also adjust the bandwidth of the preconditioner used in the Newton iterations.
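A minimal sketch of the large-scale path (the separable objective, problem size, and option values are assumptions): supplying the gradient and selecting the large-scale algorithm invokes the trust-region method, and the PrecondBandWidth option adjusts the preconditioner bandwidth.

function largeScaleDemo
    % Minimal sketch of large-scale trust-region minimization; the separable
    % objective, problem size, and option values are illustrative assumptions.
    n = 1000;
    x0 = ones(n, 1);

    % 'GradObj','on' tells the solver the objective returns its gradient;
    % 'PrecondBandWidth',0 requests a diagonal preconditioner in the Newton
    % iterations (larger values allow a banded preconditioner).
    options = optimset('LargeScale', 'on', ...
                       'GradObj', 'on', ...
                       'PrecondBandWidth', 0);
    % A Hessian-times-vector routine could instead be supplied through the
    % 'HessMult' option, avoiding an explicit Hessian matrix.
    [x, fval] = fminunc(@separableObj, x0, options);

function [f, g] = separableObj(x)
    % Simple separable objective with an analytic gradient (illustrative).
    f = sum((x - 1).^2);
    g = 2*(x - 1);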
Constrained Nonlinear Optimization
Constrained nonlinear optimization problems are composed of nonlinear objective functions and may be subject to linear and nonlinear constraints. The Optimization Toolbox uses two methods to solve these problems: trust-region and active-set sequential quadratic programming.

Trust-region is used for problems subject only to bound constraints or linear equality constraints.

Active-set sequential quadratic programming is used for general nonlinear optimization.
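As an illustrative sketch (the objective, constraints, bounds, and starting point below are assumptions), a general constrained problem is passed to fmincon, with the objective and the nonlinear constraints kept in separate functions as the screenshot caption below describes:

function constrainedDemo
    % Minimal sketch of constrained nonlinear optimization with fmincon;
    % the objective, constraints, bounds, and starting point are assumptions.
    x0 = [0.5; 0.5];
    lb = [0; 0];            % lower bounds
    ub = [2; 2];            % upper bounds

    % No linear constraints in this example, so the A, b, Aeq, beq slots
    % are passed as empty matrices.
    [x, fval] = fmincon(@objfun, x0, [], [], [], [], lb, ub, @confun);

function f = objfun(x)
    % Nonlinear objective (illustrative).
    f = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);

function [c, ceq] = confun(x)
    % Nonlinear inequality constraints c(x) <= 0 and equalities ceq(x) = 0.
    c = [x(1)*x(2) - 1;       % x1*x2 <= 1
         -x(1) - x(2) + 0.5]; % x1 + x2 >= 0.5
    ceq = [];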
Figure: An optimization routine is run at the command line (left), and calls M-files defining the objective function (top right) and constraint equations (bottom right).