TNC

class TNC(*args)

Truncated Newton Constrained solver.

Truncated-Newton non-linear optimizer. This solver uses no derivative information and only supports bound constraints.

Available constructors:

TNC(problem)

TNC(problem, scale, offset, maxCGit, eta, stepmx, accuracy, fmin, rescale)

Parameters:
problem : OptimizationProblem

Optimization problem to solve.

specificParameters : TNCSpecificParameters

Parameters for this solver.

scale : sequence of float

Scaling factors to apply to each variable.

offset : sequence of float

Constants to subtract from each variable.

maxCGit : int

Maximum number of Hessian*vector evaluations per main iteration.

eta : float

Severity of the line search.

stepmx : float

Maximum step for the line search; may be increased during the run.

accuracy : float

Relative precision for finite difference calculations.

fmin : float

Minimum function value estimate.

rescale : float

f scaling factor (in log10) used to trigger f value rescaling.

Examples

>>> import openturns as ot
>>> model = ot.SymbolicFunction(['E', 'F', 'L', 'I'], ['-F*L^3/(3*E*I)'])
>>> bounds = ot.Interval([1.0]*4, [2.0]*4)
>>> problem = ot.OptimizationProblem(model, ot.Function(), ot.Function(), bounds)
>>> algo = ot.TNC(problem)
>>> algo.setStartingPoint([1.0] * 4)
>>> algo.run()
>>> result = algo.getResult()
Attributes:
thisown

The membership flag

Methods

computeLagrangeMultipliers(x) Compute the Lagrange multipliers of a problem at a given point.
getAccuracy() Accessor to accuracy parameter.
getClassName() Accessor to the object’s name.
getEta() Accessor to eta parameter.
getFmin() Accessor to fmin parameter.
getId() Accessor to the object’s id.
getMaxCGit() Accessor to maxCGit parameter.
getMaximumAbsoluteError() Accessor to maximum allowed absolute error.
getMaximumConstraintError() Accessor to maximum allowed constraint error.
getMaximumEvaluationNumber() Accessor to maximum allowed number of evaluations.
getMaximumIterationNumber() Accessor to maximum allowed number of iterations.
getMaximumRelativeError() Accessor to maximum allowed relative error.
getMaximumResidualError() Accessor to maximum allowed residual error.
getName() Accessor to the object’s name.
getOffset() Accessor to offset parameter.
getProblem() Accessor to optimization problem.
getRescale() Accessor to rescale parameter.
getResult() Accessor to optimization result.
getScale() Accessor to scale parameter.
getShadowedId() Accessor to the object’s shadowed id.
getStartingPoint() Accessor to starting point.
getStepmx() Accessor to stepmx parameter.
getVerbose() Accessor to the verbosity flag.
getVisibility() Accessor to the object’s visibility state.
hasName() Test if the object is named.
hasVisibleName() Test if the object has a distinguishable name.
run() Launch the optimization.
setAccuracy(accuracy) Accessor to accuracy parameter.
setEta(eta) Accessor to eta parameter.
setFmin(fmin) Accessor to fmin parameter.
setMaxCGit(maxCGit) Accessor to maxCGit parameter.
setMaximumAbsoluteError(maximumAbsoluteError) Accessor to maximum allowed absolute error.
setMaximumConstraintError(maximumConstraintError) Accessor to maximum allowed constraint error.
setMaximumEvaluationNumber(…) Accessor to maximum allowed number of evaluations.
setMaximumIterationNumber(maximumIterationNumber) Accessor to maximum allowed number of iterations.
setMaximumRelativeError(maximumRelativeError) Accessor to maximum allowed relative error.
setMaximumResidualError(maximumResidualError) Accessor to maximum allowed residual error.
setName(name) Accessor to the object’s name.
setOffset(offset) Accessor to offset parameter.
setProblem(problem) Accessor to optimization problem.
setProgressCallback(*args) Set up a progress callback.
setRescale(rescale) Accessor to rescale parameter.
setResult(result) Accessor to optimization result.
setScale(scale) Accessor to scale parameter.
setShadowedId(id) Accessor to the object’s shadowed id.
setStartingPoint(startingPoint) Accessor to starting point.
setStepmx(stepmx) Accessor to stepmx parameter.
setStopCallback(*args) Set up a stop callback.
setVerbose(verbose) Accessor to the verbosity flag.
setVisibility(visible) Accessor to the object’s visibility state.
__init__(*args)

Initialize self. See help(type(self)) for accurate signature.

computeLagrangeMultipliers(x)

Compute the Lagrange multipliers of a problem at a given point.

Parameters:
x : sequence of float

Point at which the Lagrange multipliers are computed.

Returns:
lagrangeMultiplier : sequence of float

Lagrange multipliers of the problem at the given point.

Notes

The Lagrange multipliers \vect{\lambda} are associated with the following Lagrangian formulation of the optimization problem:

\cL(\vect{x}, \vect{\lambda}_{eq}, \vect{\lambda}_{\ell}, \vect{\lambda}_{u}, \vect{\lambda}_{ineq}) = J(\vect{x}) + \Tr{\vect{\lambda}}_{eq} g(\vect{x}) + \Tr{\vect{\lambda}}_{\ell} (\vect{x}-\vect{\ell})^{+} + \Tr{\vect{\lambda}}_{u} (\vect{u}-\vect{x})^{+} + \Tr{\vect{\lambda}}_{ineq}  h^{+}(\vect{x})

where \vect{\alpha}^{+}=(\max(0,\alpha_1),\hdots,\max(0,\alpha_n)).
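The componentwise positive part \vect{\alpha}^{+} used in this Lagrangian can be illustrated with a short Python sketch (`positive_part` is a hypothetical helper, not part of the OpenTURNS API):

```python
def positive_part(alpha):
    """Componentwise positive part: (max(0, a_1), ..., max(0, a_n))."""
    return [max(0.0, a) for a in alpha]

# Only the positive (violated) components survive; the rest are clipped to 0.
print(positive_part([-1.5, 0.0, 2.5]))  # [0.0, 0.0, 2.5]
```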

The Lagrange multipliers are stored as (\vect{\lambda}_{eq}, \vect{\lambda}_{\ell}, \vect{\lambda}_{u}, \vect{\lambda}_{ineq}), where:
  • \vect{\lambda}_{eq} is of dimension 0 if there is no equality constraint, else its dimension is that of g(\vect{x}), i.e. the number of scalar equality constraints
  • \vect{\lambda}_{\ell} and \vect{\lambda}_{u} are of dimension 0 if there is no bound constraint, else of the dimension of \vect{x}
  • \vect{\lambda}_{ineq} is of dimension 0 if there is no inequality constraint, else its dimension is that of h(\vect{x}), i.e. the number of scalar inequality constraints

The vector \vect{\lambda} is solution of the following linear system:

\Tr{\vect{\lambda}}_{eq}\left[\dfrac{\partial g}{\partial\vect{x}}(\vect{x})\right]+
\Tr{\vect{\lambda}}_{\ell}\left[\dfrac{\partial (\vect{x}-\vect{\ell})^{+}}{\partial\vect{x}}(\vect{x})\right]+
\Tr{\vect{\lambda}}_{u}\left[\dfrac{\partial (\vect{u}-\vect{x})^{+}}{\partial\vect{x}}(\vect{x})\right]+
\Tr{\vect{\lambda}}_{ineq}\left[\dfrac{\partial h}{\partial\vect{x}}(\vect{x})\right]=-\dfrac{\partial J}{\partial\vect{x}}(\vect{x})

If there is no constraint of any kind, \vect{\lambda} is of dimension 0, as well as if no constraint is active.

getAccuracy()

Accessor to accuracy parameter.

Returns:
accuracy : float

Relative precision for finite difference calculations.

if <= machine_precision, set to sqrt(machine_precision).
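The fallback rule above can be sketched in plain Python (`effective_accuracy` is a hypothetical helper, not an OpenTURNS function; `sys.float_info.epsilon` stands in for the machine precision):

```python
import math
import sys

def effective_accuracy(accuracy):
    """Mimic the documented rule: values at or below machine precision
    fall back to sqrt(machine precision)."""
    eps = sys.float_info.epsilon
    return math.sqrt(eps) if accuracy <= eps else accuracy

print(effective_accuracy(0.0))   # about 1.49e-08 for IEEE doubles
print(effective_accuracy(1e-4))  # 0.0001, kept as given
```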

getClassName()

Accessor to the object’s name.

Returns:
class_name : str

The object class name (object.__class__.__name__).

getEta()

Accessor to eta parameter.

Returns:
eta : float

Severity of the line search.

if < 0 or > 1, set to 0.25.

getFmin()

Accessor to fmin parameter.

Returns:
fmin : float

Minimum function value estimate.

getId()

Accessor to the object’s id.

Returns:
id : int

Internal unique identifier.

getMaxCGit()

Accessor to maxCGit parameter.

Returns:
maxCGit : int

Maximum number of Hessian*vector evaluations per main iteration.

if maxCGit = 0, the direction chosen is -gradient

if maxCGit < 0, maxCGit is set to max(1,min(50,n/2)).
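The fallback rule for negative maxCGit values can be sketched as follows (`default_max_cgit` is a hypothetical helper; `n` denotes the problem dimension, and integer division is assumed for n/2):

```python
def default_max_cgit(n):
    """Documented fallback for maxCGit < 0: max(1, min(50, n/2))."""
    return max(1, min(50, n // 2))

print(default_max_cgit(4))    # 2
print(default_max_cgit(1))    # 1  (n/2 rounds down to 0, clamped up to 1)
print(default_max_cgit(500))  # 50 (capped at 50)
```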

getMaximumAbsoluteError()

Accessor to maximum allowed absolute error.

Returns:
maximumAbsoluteError : float

Maximum allowed absolute error, where the absolute error is defined by \epsilon^a_n=\|\vect{x}_{n+1}-\vect{x}_n\|_{\infty} where \vect{x}_{n+1} and \vect{x}_n are two consecutive approximations of the optimum.

getMaximumConstraintError()

Accessor to maximum allowed constraint error.

Returns:
maximumConstraintError : float

Maximum allowed constraint error, where the constraint error is defined by \gamma_n=\|g(\vect{x}_n)\|_{\infty} where \vect{x}_n is the current approximation of the optimum and g is the function that gathers all the equality and inequality constraints (violated values only).

getMaximumEvaluationNumber()

Accessor to maximum allowed number of evaluations.

Returns:
N : int

Maximum allowed number of evaluations.

getMaximumIterationNumber()

Accessor to maximum allowed number of iterations.

Returns:
N : int

Maximum allowed number of iterations.

getMaximumRelativeError()

Accessor to maximum allowed relative error.

Returns:
maximumRelativeError : float

Maximum allowed relative error, where the relative error is defined by \epsilon^r_n=\epsilon^a_n/\|\vect{x}_{n+1}\|_{\infty} if \|\vect{x}_{n+1}\|_{\infty}\neq 0, else \epsilon^r_n=-1.
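The absolute and relative error criteria defined above can be sketched in plain Python (`absolute_error` and `relative_error` are hypothetical helpers, not part of the API):

```python
def absolute_error(x_next, x_cur):
    """Infinity norm of the step between two consecutive iterates."""
    return max(abs(a - b) for a, b in zip(x_next, x_cur))

def relative_error(x_next, x_cur):
    """Absolute error divided by the infinity norm of the new iterate,
    or -1 when that norm is zero, matching the definition above."""
    norm = max(abs(a) for a in x_next)
    return absolute_error(x_next, x_cur) / norm if norm != 0 else -1.0

print(absolute_error([1.5, 2.0], [1.0, 2.0]))  # 0.5
print(relative_error([1.5, 2.0], [1.0, 2.0]))  # 0.25
```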

getMaximumResidualError()

Accessor to maximum allowed residual error.

Returns:
maximumResidualError : float

Maximum allowed residual error, where the residual error is defined by \epsilon^r_n=\frac{\|f(\vect{x}_{n+1})-f(\vect{x}_{n})\|}{\|f(\vect{x}_{n+1})\|} if \|f(\vect{x}_{n+1})\|\neq 0, else \epsilon^r_n=-1.
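The residual criterion compares two consecutive objective values; a sketch for a scalar objective (`residual_error` is a hypothetical helper):

```python
def residual_error(f_next, f_cur):
    """Relative change of the objective value between two iterates,
    or -1 when the new value is zero (scalar objective assumed)."""
    norm = abs(f_next)
    return abs(f_next - f_cur) / norm if norm != 0 else -1.0

print(residual_error(2.0, 1.0))  # 0.5
print(residual_error(0.0, 1.0))  # -1.0
```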

getName()

Accessor to the object’s name.

Returns:
name : str

The name of the object.

getOffset()

Accessor to offset parameter.

Returns:
offset : Point

Constants to subtract from each variable; if empty, the constants are (min+max)/2 for interval-bounded variables and x for the others.

getProblem()

Accessor to optimization problem.

Returns:
problemOptimizationProblem

Optimization problem.

getRescale()

Accessor to rescale parameter.

Returns:
rescale : float

f scaling factor (in log10) used to trigger f value rescaling:

if 0, rescale at each iteration

if a big value, never rescale

if < 0, rescale is set to 1.3.
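The fallback for negative rescale values can be sketched as follows (`effective_rescale` is a hypothetical helper, not an OpenTURNS function):

```python
def effective_rescale(rescale):
    """Documented rule: negative values fall back to the default 1.3;
    0 means rescale at every iteration; a big value disables rescaling."""
    return 1.3 if rescale < 0 else rescale

print(effective_rescale(-1.0))  # 1.3
print(effective_rescale(0.0))   # 0.0
```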

getResult()

Accessor to optimization result.

Returns:
resultOptimizationResult

Result class.

getScale()

Accessor to scale parameter.

Returns:
scale : Point

Scaling factors to apply to each variable; if empty, the factors are max-min for interval-bounded variables and 1+|x| for the others.

getShadowedId()

Accessor to the object’s shadowed id.

Returns:
id : int

Internal unique identifier.

getStartingPoint()

Accessor to starting point.

Returns:
startingPoint : Point

Starting point.

getStepmx()

Accessor to stepmx parameter.

Returns:
stepmx : float

Maximum step for the line search; may be increased during the run.

If too small, it is set to 10.0.

getVerbose()

Accessor to the verbosity flag.

Returns:
verbose : bool

Verbosity flag state.

getVisibility()

Accessor to the object’s visibility state.

Returns:
visible : bool

Visibility flag.

hasName()

Test if the object is named.

Returns:
hasName : bool

True if the name is not empty.

hasVisibleName()

Test if the object has a distinguishable name.

Returns:
hasVisibleName : bool

True if the name is not empty and not the default one.

run()

Launch the optimization.

setAccuracy(accuracy)

Accessor to accuracy parameter.

Parameters:
accuracy : float

Relative precision for finite difference calculations.

if <= machine_precision, set to sqrt(machine_precision).

setEta(eta)

Accessor to eta parameter.

Parameters:
eta : float

Severity of the line search.

if < 0 or > 1, set to 0.25.

setFmin(fmin)

Accessor to fmin parameter.

Parameters:
fmin : float

Minimum function value estimate.

setMaxCGit(maxCGit)

Accessor to maxCGit parameter.

Parameters:
maxCGit : int

Maximum number of Hessian*vector evaluations per main iteration.

if maxCGit = 0, the direction chosen is -gradient

if maxCGit < 0, maxCGit is set to max(1,min(50,n/2)).

setMaximumAbsoluteError(maximumAbsoluteError)

Accessor to maximum allowed absolute error.

Parameters:
maximumAbsoluteError : float

Maximum allowed absolute error, where the absolute error is defined by \epsilon^a_n=\|\vect{x}_{n+1}-\vect{x}_n\|_{\infty} where \vect{x}_{n+1} and \vect{x}_n are two consecutive approximations of the optimum.

setMaximumConstraintError(maximumConstraintError)

Accessor to maximum allowed constraint error.

Parameters:
maximumConstraintError : float

Maximum allowed constraint error, where the constraint error is defined by \gamma_n=\|g(\vect{x}_n)\|_{\infty} where \vect{x}_n is the current approximation of the optimum and g is the function that gathers all the equality and inequality constraints (violated values only).

setMaximumEvaluationNumber(maximumEvaluationNumber)

Accessor to maximum allowed number of evaluations.

Parameters:
N : int

Maximum allowed number of evaluations.

setMaximumIterationNumber(maximumIterationNumber)

Accessor to maximum allowed number of iterations.

Parameters:
N : int

Maximum allowed number of iterations.

setMaximumRelativeError(maximumRelativeError)

Accessor to maximum allowed relative error.

Parameters:
maximumRelativeError : float

Maximum allowed relative error, where the relative error is defined by \epsilon^r_n=\epsilon^a_n/\|\vect{x}_{n+1}\|_{\infty} if \|\vect{x}_{n+1}\|_{\infty}\neq 0, else \epsilon^r_n=-1.

setMaximumResidualError(maximumResidualError)

Accessor to maximum allowed residual error.

Parameters:
maximumResidualError : float

Maximum allowed residual error, where the residual error is defined by \epsilon^r_n=\frac{\|f(\vect{x}_{n+1})-f(\vect{x}_{n})\|}{\|f(\vect{x}_{n+1})\|} if \|f(\vect{x}_{n+1})\|\neq 0, else \epsilon^r_n=-1.

setName(name)

Accessor to the object’s name.

Parameters:
name : str

The name of the object.

setOffset(offset)

Accessor to offset parameter.

Parameters:
offset : sequence of float

Constants to subtract from each variable; if empty, the constants are (min+max)/2 for interval-bounded variables and x for the others.

setProblem(problem)

Accessor to optimization problem.

Parameters:
problem : OptimizationProblem

Optimization problem.

setProgressCallback(*args)

Set up a progress callback.

Can be used to programmatically report the progress of an optimization.

Parameters:
callback : callable

Takes a float argument: the percentage of progress.

Examples

>>> import sys
>>> import openturns as ot
>>> rosenbrock = ot.SymbolicFunction(['x1', 'x2'], ['(1-x1)^2+100*(x2-x1^2)^2'])
>>> problem = ot.OptimizationProblem(rosenbrock)
>>> solver = ot.OptimizationAlgorithm(problem)
>>> solver.setStartingPoint([0, 0])
>>> solver.setMaximumResidualError(1.e-3)
>>> solver.setMaximumIterationNumber(100)
>>> def report_progress(progress):
...     sys.stderr.write('-- progress=' + str(progress) + '%\n')
>>> solver.setProgressCallback(report_progress)
>>> solver.run()
setRescale(rescale)

Accessor to rescale parameter.

Parameters:
rescale : float

f scaling factor (in log10) used to trigger f value rescaling:

if 0, rescale at each iteration

if a big value, never rescale

if < 0, rescale is set to 1.3.

setResult(result)

Accessor to optimization result.

Parameters:
result : OptimizationResult

Result class.

setScale(scale)

Accessor to scale parameter.

Parameters:
scale : sequence of float

Scaling factors to apply to each variable; if empty, the factors are max-min for interval-bounded variables and 1+|x| for the others.

setShadowedId(id)

Accessor to the object’s shadowed id.

Parameters:
id : int

Internal unique identifier.

setStartingPoint(startingPoint)

Accessor to starting point.

Parameters:
startingPoint : Point

Starting point.

setStepmx(stepmx)

Accessor to stepmx parameter.

Parameters:
stepmx : float

Maximum step for the line search; may be increased during the run.

If too small, it is set to 10.0.

setStopCallback(*args)

Set up a stop callback.

Can be used to programmatically stop an optimization.

Parameters:
callback : callable

Returns an int deciding whether to stop or continue.

Examples

>>> import openturns as ot
>>> rosenbrock = ot.SymbolicFunction(['x1', 'x2'], ['(1-x1)^2+100*(x2-x1^2)^2'])
>>> problem = ot.OptimizationProblem(rosenbrock)
>>> solver = ot.OptimizationAlgorithm(problem)
>>> solver.setStartingPoint([0, 0])
>>> solver.setMaximumResidualError(1.e-3)
>>> solver.setMaximumIterationNumber(100)
>>> def ask_stop():
...     return True
>>> solver.setStopCallback(ask_stop)
>>> solver.run()
setVerbose(verbose)

Accessor to the verbosity flag.

Parameters:
verbose : bool

Verbosity flag state.

setVisibility(visible)

Accessor to the object’s visibility state.

Parameters:
visible : bool

Visibility flag.

thisown

The membership flag