Ceres

class Ceres(*args)

Interface to Ceres Solver.

This class exposes the solvers from the non-linear least squares optimization library [ceres2012].

More details about least squares algorithms are available in the dedicated section of the documentation.

Algorithms are also available for general unconstrained optimization.

Parameters
problem : OptimizationProblem

Optimization problem to solve, either least-squares or general (unconstrained).

algoName : str

The identifier of the algorithm. Use GetAlgorithmNames() to list available names.

Notes

The solvers use first-order derivative information.

Regarding constraint support, only the trust-region solvers allow for bound constraints (see the sketch after the table):

Algorithm                     Method type   Problem type support    Constraint support
LEVENBERG_MARQUARDT           trust-region  least-squares           bounds
DOGLEG                        trust-region  least-squares           bounds
STEEPEST_DESCENT              line-search   least-squares, general  none
NONLINEAR_CONJUGATE_GRADIENT  line-search   least-squares, general  none
LBFGS                         line-search   least-squares, general  none
BFGS                          line-search   least-squares, general  none
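
For instance, a minimal sketch of matching the algorithm to the constraint type, reusing the residual function of the Examples below:

>>> import openturns as ot
>>> residual = ot.SymbolicFunction(['x0', 'x1'], ['10*(x1-x0^2)', '1-x0'])
>>> problem = ot.LeastSquaresProblem(residual)
>>> problem.setBounds(ot.Interval([-3.0] * 2, [5.0] * 2))
>>> algo = ot.Ceres(problem, 'DOGLEG')  # trust-region, so the bounds are honored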

The Ceres least squares solver can be further tweaked with the following ResourceMap parameters; refer to the nlls solver options for more details. A usage sketch follows the table.

Key                                                          Type
Ceres-minimizer_type                                         str
Ceres-line_search_direction_type                             str
Ceres-line_search_type                                       str
Ceres-nonlinear_conjugate_gradient_type                      str
Ceres-max_lbfgs_rank                                         int
Ceres-use_approximate_eigenvalue_bfgs_scaling                bool
Ceres-line_search_interpolation_type                         str
Ceres-min_line_search_step_size                              float
Ceres-line_search_sufficient_function_decrease               float
Ceres-max_line_search_step_contraction                       float
Ceres-min_line_search_step_contraction                       float
Ceres-max_num_line_search_step_size_iterations               int
Ceres-max_num_line_search_direction_restarts                 int
Ceres-line_search_sufficient_curvature_decrease              float
Ceres-max_line_search_step_expansion                         float
Ceres-trust_region_strategy_type                             str
Ceres-dogleg_type                                            str
Ceres-use_nonmonotonic_steps                                 bool
Ceres-max_consecutive_nonmonotonic_steps                     int
Ceres-max_num_iterations                                     int
Ceres-max_solver_time_in_seconds                             float
Ceres-num_threads                                            int
Ceres-initial_trust_region_radius                            float
Ceres-max_trust_region_radius                                float
Ceres-min_trust_region_radius                                float
Ceres-min_relative_decrease                                  float
Ceres-min_lm_diagonal                                        float
Ceres-max_lm_diagonal                                        float
Ceres-max_num_consecutive_invalid_steps                      int
Ceres-function_tolerance                                     float
Ceres-gradient_tolerance                                     float
Ceres-parameter_tolerance                                    float
Ceres-preconditioner_type                                    str
Ceres-visibility_clustering_type                             str
Ceres-dense_linear_algebra_library_type                      str
Ceres-sparse_linear_algebra_library_type                     str
Ceres-num_linear_solver_threads                              int
Ceres-use_explicit_schur_complement                          bool
Ceres-use_postordering                                       bool
Ceres-dynamic_sparsity                                       bool
Ceres-min_linear_solver_iterations                           int
Ceres-max_linear_solver_iterations                           int
Ceres-eta                                                    float
Ceres-jacobi_scaling                                         bool
Ceres-use_inner_iterations                                   bool
Ceres-inner_iteration_tolerance                              float
Ceres-logging_type                                           str
Ceres-minimizer_progress_to_stdout                           bool
Ceres-trust_region_problem_dump_directory                    str
Ceres-trust_region_problem_dump_format_type                  str
Ceres-check_gradients                                        bool
Ceres-gradient_check_relative_precision                      float
Ceres-gradient_check_numeric_derivative_relative_step_size   float
Ceres-update_state_every_iteration                           bool
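
For instance, these keys can be set through ResourceMap before the solver is run (a minimal sketch with illustrative values):

>>> import openturns as ot
>>> ot.ResourceMap.SetAsUnsignedInteger('Ceres-max_num_iterations', 500)
>>> ot.ResourceMap.SetAsScalar('Ceres-function_tolerance', 1.0e-8)
>>> ot.ResourceMap.SetAsBool('Ceres-minimizer_progress_to_stdout', False)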

The Ceres unconstrained solver can be further tweaked with the following ResourceMap parameters; refer to the gradient solver options for more details. A usage sketch follows the table.

Key                                                          Type
Ceres-line_search_direction_type                             str
Ceres-line_search_type                                       str
Ceres-nonlinear_conjugate_gradient_type                      str
Ceres-max_lbfgs_rank                                         int
Ceres-use_approximate_eigenvalue_bfgs_scaling                bool
Ceres-line_search_interpolation_type                         str
Ceres-min_line_search_step_size                              float
Ceres-line_search_sufficient_function_decrease               float
Ceres-max_line_search_step_contraction                       float
Ceres-min_line_search_step_contraction                       float
Ceres-max_num_line_search_step_size_iterations               int
Ceres-max_num_line_search_direction_restarts                 int
Ceres-line_search_sufficient_curvature_decrease              float
Ceres-max_line_search_step_expansion                         float
Ceres-max_num_iterations                                     int
Ceres-max_solver_time_in_seconds                             float
Ceres-function_tolerance                                     float
Ceres-gradient_tolerance                                     float
Ceres-parameter_tolerance                                    float
Ceres-logging_type                                           str
Ceres-minimizer_progress_to_stdout                           bool
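
Similarly, the line-search behavior can be adjusted (a minimal sketch; the string values follow the Ceres Solver option names):

>>> import openturns as ot
>>> ot.ResourceMap.SetAsString('Ceres-line_search_direction_type', 'LBFGS')
>>> ot.ResourceMap.SetAsScalar('Ceres-gradient_tolerance', 1.0e-10)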

Examples

List available algorithms:

>>> import openturns as ot
>>> print(ot.Ceres.GetAlgorithmNames())
[LEVENBERG_MARQUARDT,DOGLEG,...

Solve a least-squares problem:

>>> dim = 2
>>> residualFunction = ot.SymbolicFunction(['x0', 'x1'], ['10*(x1-x0^2)', '1-x0'])
>>> problem = ot.LeastSquaresProblem(residualFunction)
>>> problem.setBounds(ot.Interval([-3.0] * dim, [5.0] * dim))
>>> algo = ot.Ceres(problem, 'LEVENBERG_MARQUARDT')  
>>> algo.setStartingPoint([0.0] * dim)  
>>> algo.run()  
>>> result = algo.getResult()  
>>> x_star = result.getOptimalPoint()  
>>> y_star = result.getOptimalValue()  

Or, solve a general optimization problem:

>>> dim = 4
>>> objective = ot.SymbolicFunction(['x1', 'x2', 'x3', 'x4'], ['(x1-1)^2+(x2-2)^2+(x3-3)^2+(x4-4)^2'])
>>> problem = ot.OptimizationProblem(objective)
>>> algo = ot.Ceres(problem, 'BFGS')  
>>> algo.setStartingPoint([0.0] * 4)  
>>> algo.run()  
>>> result = algo.getResult()  
>>> x_star = result.getOptimalPoint()  
>>> y_star = result.getOptimalValue()  

Methods

GetAlgorithmNames()

Accessor to the names of the available algorithms.

IsAvailable()

Ask whether Ceres support is available.

computeLagrangeMultipliers(self, x)

Compute the Lagrange multipliers of a problem at a given point.

getAlgorithmName(self)

Accessor to the algorithm name.

getClassName(self)

Accessor to the object’s name.

getId(self)

Accessor to the object’s id.

getMaximumAbsoluteError(self)

Accessor to maximum allowed absolute error.

getMaximumConstraintError(self)

Accessor to maximum allowed constraint error.

getMaximumEvaluationNumber(self)

Accessor to maximum allowed number of evaluations.

getMaximumIterationNumber(self)

Accessor to maximum allowed number of iterations.

getMaximumRelativeError(self)

Accessor to maximum allowed relative error.

getMaximumResidualError(self)

Accessor to maximum allowed residual error.

getName(self)

Accessor to the object’s name.

getProblem(self)

Accessor to optimization problem.

getResult(self)

Accessor to optimization result.

getShadowedId(self)

Accessor to the object’s shadowed id.

getStartingPoint(self)

Accessor to starting point.

getVerbose(self)

Accessor to the verbosity flag.

getVisibility(self)

Accessor to the object’s visibility state.

hasName(self)

Test if the object is named.

hasVisibleName(self)

Test if the object has a distinguishable name.

run(self)

Launch the optimization.

setAlgorithmName(self, algoName)

Accessor to the algorithm name.

setMaximumAbsoluteError(self, …)

Accessor to maximum allowed absolute error.

setMaximumConstraintError(self, …)

Accessor to maximum allowed constraint error.

setMaximumEvaluationNumber(self, …)

Accessor to maximum allowed number of evaluations.

setMaximumIterationNumber(self, …)

Accessor to maximum allowed number of iterations.

setMaximumRelativeError(self, …)

Accessor to maximum allowed relative error.

setMaximumResidualError(self, …)

Accessor to maximum allowed residual error.

setName(self, name)

Accessor to the object’s name.

setProblem(self, problem)

Accessor to optimization problem.

setProgressCallback(self, *args)

Set up a progress callback.

setResult(self, result)

Accessor to optimization result.

setShadowedId(self, id)

Accessor to the object’s shadowed id.

setStartingPoint(self, startingPoint)

Accessor to starting point.

setStopCallback(self, *args)

Set up a stop callback.

setVerbose(self, verbose)

Accessor to the verbosity flag.

setVisibility(self, visible)

Accessor to the object’s visibility state.

__init__(self, *args)

Initialize self. See help(type(self)) for accurate signature.

static GetAlgorithmNames()

Accessor to the names of the available algorithms.

Returns
names : Description

The names of the available algorithms, given according to the Ceres naming convention.

The trust-region methods cannot solve general optimization problems; in that case a warning is printed and the default line-search method is used instead.

Examples

>>> import openturns as ot
>>> print(ot.Ceres.GetAlgorithmNames())
[LEVENBERG_MARQUARDT,DOGLEG,STEEPEST_DESCENT,NONLINEAR_CONJUGATE_GRADIENT,LBFGS,BFGS]

static IsAvailable()

Ask whether Ceres support is available.

Returns
available : bool

Whether Ceres support is available.

computeLagrangeMultipliers(self, x)

Compute the Lagrange multipliers of a problem at a given point.

Parameters
x : sequence of float

Point at which the Lagrange multipliers are computed.

Returns
lagrangeMultiplier : sequence of float

Lagrange multipliers of the problem at the given point.

Notes

The Lagrange multipliers \vect{\lambda} are associated with the following Lagrangian formulation of the optimization problem:

\cL(\vect{x}, \vect{\lambda}_{eq}, \vect{\lambda}_{\ell}, \vect{\lambda}_{u}, \vect{\lambda}_{ineq}) = J(\vect{x}) + \Tr{\vect{\lambda}}_{eq} g(\vect{x}) + \Tr{\vect{\lambda}}_{\ell} (\vect{x}-\vect{\ell})^{+} + \Tr{\vect{\lambda}}_{u} (\vect{u}-\vect{x})^{+} + \Tr{\vect{\lambda}}_{ineq}  h^{+}(\vect{x})

where \vect{\alpha}^{+}=(\max(0,\alpha_1),\hdots,\max(0,\alpha_n)).

The Lagrange multipliers are stored as (\vect{\lambda}_{eq}, \vect{\lambda}_{\ell}, \vect{\lambda}_{u}, \vect{\lambda}_{ineq}), where:
  • \vect{\lambda}_{eq} is of dimension 0 if there is no equality constraint, else of the dimension of g(\vect{x}), i.e. the number of scalar equality constraints

  • \vect{\lambda}_{\ell} and \vect{\lambda}_{u} are of dimension 0 if there is no bound constraint, else of the dimension of \vect{x}

  • \vect{\lambda}_{ineq} is of dimension 0 if there is no inequality constraint, else of the dimension of h(\vect{x}), i.e. the number of scalar inequality constraints

The vector \vect{\lambda} is solution of the following linear system:

\Tr{\vect{\lambda}}_{eq}\left[\dfrac{\partial g}{\partial\vect{x}}(\vect{x})\right]+
\Tr{\vect{\lambda}}_{\ell}\left[\dfrac{\partial (\vect{x}-\vect{\ell})^{+}}{\partial\vect{x}}(\vect{x})\right]+
\Tr{\vect{\lambda}}_{u}\left[\dfrac{\partial (\vect{u}-\vect{x})^{+}}{\partial\vect{x}}(\vect{x})\right]+
\Tr{\vect{\lambda}}_{ineq}\left[\dfrac{\partial h}{\partial\vect{x}}(\vect{x})\right]=-\dfrac{\partial J}{\partial\vect{x}}(\vect{x})

If there is no constraint of any kind, \vect{\lambda} is of dimension 0, as well as if no constraint is active.
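
A minimal usage sketch, assuming algo and result come from the bound-constrained least-squares example above:

>>> x_star = result.getOptimalPoint()
>>> multipliers = algo.computeLagrangeMultipliers(x_star)  # stacks the bound multipliers; empty if no constraint is active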

getAlgorithmName(self)

Accessor to the algorithm name.

Returns
algoName : str

The identifier of the algorithm.

getClassName(self)

Accessor to the object’s name.

Returns
class_name : str

The object class name (object.__class__.__name__).

getId(self)

Accessor to the object’s id.

Returns
id : int

Internal unique identifier.

getMaximumAbsoluteError(self)

Accessor to maximum allowed absolute error.

Returns
maximumAbsoluteError : float

Maximum allowed absolute error, where the absolute error is defined by \epsilon^a_n=\|\vect{x}_{n+1}-\vect{x}_n\|_{\infty} where \vect{x}_{n+1} and \vect{x}_n are two consecutive approximations of the optimum.

getMaximumConstraintError(self)

Accessor to maximum allowed constraint error.

Returns
maximumConstraintError : float

Maximum allowed constraint error, where the constraint error is defined by \gamma_n=\|g(\vect{x}_n)\|_{\infty} where \vect{x}_n is the current approximation of the optimum and g is the function that gathers all the equality and inequality constraints (violated values only).

getMaximumEvaluationNumber(self)

Accessor to maximum allowed number of evaluations.

Returns
N : int

Maximum allowed number of evaluations.

getMaximumIterationNumber(self)

Accessor to maximum allowed number of iterations.

Returns
N : int

Maximum allowed number of iterations.

getMaximumRelativeError(self)

Accessor to maximum allowed relative error.

Returns
maximumRelativeError : float

Maximum allowed relative error, where the relative error is defined by \epsilon^r_n=\epsilon^a_n/\|\vect{x}_{n+1}\|_{\infty} if \|\vect{x}_{n+1}\|_{\infty}\neq 0, else \epsilon^r_n=-1.

getMaximumResidualError(self)

Accessor to maximum allowed residual error.

Returns
maximumResidualError : float

Maximum allowed residual error, where the residual error is defined by \epsilon^r_n=\frac{\|f(\vect{x}_{n+1})-f(\vect{x}_{n})\|}{\|f(\vect{x}_{n+1})\|} if \|f(\vect{x}_{n+1})\|\neq 0, else \epsilon^r_n=-1.

getName(self)

Accessor to the object’s name.

Returns
name : str

The name of the object.

getProblem(self)

Accessor to optimization problem.

Returns
problem : OptimizationProblem

Optimization problem.

getResult(self)

Accessor to optimization result.

Returns
result : OptimizationResult

Result class.

getShadowedId(self)

Accessor to the object’s shadowed id.

Returns
id : int

Internal unique identifier.

getStartingPoint(self)

Accessor to starting point.

Returns
startingPoint : Point

Starting point.

getVerbose(self)

Accessor to the verbosity flag.

Returns
verbose : bool

Verbosity flag state.

getVisibility(self)

Accessor to the object’s visibility state.

Returns
visible : bool

Visibility flag.

hasName(self)

Test if the object is named.

Returns
hasName : bool

True if the name is not empty.

hasVisibleName(self)

Test if the object has a distinguishable name.

Returns
hasVisibleName : bool

True if the name is not empty and not the default one.

run(self)

Launch the optimization.

setAlgorithmName(self, algoName)

Accessor to the algorithm name.

Parameters
algoName : str

The identifier of the algorithm.

setMaximumAbsoluteError(self, maximumAbsoluteError)

Accessor to maximum allowed absolute error.

Parameters
maximumAbsoluteError : float

Maximum allowed absolute error, where the absolute error is defined by \epsilon^a_n=\|\vect{x}_{n+1}-\vect{x}_n\|_{\infty} where \vect{x}_{n+1} and \vect{x}_n are two consecutive approximations of the optimum.

setMaximumConstraintError(self, maximumConstraintError)

Accessor to maximum allowed constraint error.

Parameters
maximumConstraintError : float

Maximum allowed constraint error, where the constraint error is defined by \gamma_n=\|g(\vect{x}_n)\|_{\infty} where \vect{x}_n is the current approximation of the optimum and g is the function that gathers all the equality and inequality constraints (violated values only).

setMaximumEvaluationNumber(self, maximumEvaluationNumber)

Accessor to maximum allowed number of evaluations.

Parameters
maximumEvaluationNumber : int

Maximum allowed number of evaluations.

setMaximumIterationNumber(self, maximumIterationNumber)

Accessor to maximum allowed number of iterations.

Parameters
maximumIterationNumber : int

Maximum allowed number of iterations.

setMaximumRelativeError(self, maximumRelativeError)

Accessor to maximum allowed relative error.

Parameters
maximumRelativeError : float

Maximum allowed relative error, where the relative error is defined by \epsilon^r_n=\epsilon^a_n/\|\vect{x}_{n+1}\|_{\infty} if \|\vect{x}_{n+1}\|_{\infty}\neq 0, else \epsilon^r_n=-1.

setMaximumResidualError(self, maximumResidualError)

Accessor to maximum allowed residual error.

Parameters
maximumResidualError : float

Maximum allowed residual error, where the residual error is defined by \epsilon^r_n=\frac{\|f(\vect{x}_{n+1})-f(\vect{x}_{n})\|}{\|f(\vect{x}_{n+1})\|} if \|f(\vect{x}_{n+1})\|\neq 0, else \epsilon^r_n=-1.
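
For instance, the stopping criteria can be tightened jointly (a minimal sketch with illustrative values, reusing the algo built in the Examples above):

>>> algo.setMaximumIterationNumber(200)
>>> algo.setMaximumAbsoluteError(1.0e-10)
>>> algo.setMaximumRelativeError(1.0e-10)
>>> algo.setMaximumResidualError(1.0e-10)
>>> algo.setMaximumConstraintError(1.0e-10)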

setName(self, name)

Accessor to the object’s name.

Parameters
name : str

The name of the object.

setProblem(self, problem)

Accessor to optimization problem.

Parameters
problem : OptimizationProblem

Optimization problem.

setProgressCallback(self, *args)

Set up a progress callback.

Can be used to programmatically report the progress of an optimization.

Parameters
callback : callable

A callable taking a float (the percentage of progress) as its argument.

Examples

>>> import sys
>>> import openturns as ot
>>> rosenbrock = ot.SymbolicFunction(['x1', 'x2'], ['(1-x1)^2+100*(x2-x1^2)^2'])
>>> problem = ot.OptimizationProblem(rosenbrock)
>>> solver = ot.OptimizationAlgorithm(problem)
>>> solver.setStartingPoint([0, 0])
>>> solver.setMaximumResidualError(1.e-3)
>>> solver.setMaximumIterationNumber(100)
>>> def report_progress(progress):
...     sys.stderr.write('-- progress=' + str(progress) + '%\n')
>>> solver.setProgressCallback(report_progress)
>>> solver.run()

setResult(self, result)

Accessor to optimization result.

Parameters
result : OptimizationResult

Result class.

setShadowedId(self, id)

Accessor to the object’s shadowed id.

Parameters
id : int

Internal unique identifier.

setStartingPoint(self, startingPoint)

Accessor to starting point.

Parameters
startingPoint : Point

Starting point.

setStopCallback(self, *args)

Set up a stop callback.

Can be used to programmatically stop an optimization.

Parameters
callback : callable

A callable returning an int that decides whether to stop (nonzero, as in the example below) or continue (zero).

Examples

>>> import openturns as ot
>>> rosenbrock = ot.SymbolicFunction(['x1', 'x2'], ['(1-x1)^2+100*(x2-x1^2)^2'])
>>> problem = ot.OptimizationProblem(rosenbrock)
>>> solver = ot.OptimizationAlgorithm(problem)
>>> solver.setStartingPoint([0, 0])
>>> solver.setMaximumResidualError(1.e-3)
>>> solver.setMaximumIterationNumber(100)
>>> def ask_stop():
...     return True
>>> solver.setStopCallback(ask_stop)
>>> solver.run()

setVerbose(self, verbose)

Accessor to the verbosity flag.

Parameters
verbose : bool

Verbosity flag state.

setVisibility(self, visible)

Accessor to the object’s visibility state.

Parameters
visible : bool

Visibility flag.