Optimization Algorithms
The method is used in the following context: $\vect{x}$ is a vector of unknown variables, $\vect{d}$ a vector considered to be well known or where uncertainty is negligible, and $y = h(\vect{x}, \vect{d})$ is the scalar variable of interest. The objective here is to determine the extreme (minimum and maximum) values of $y$ when $\vect{x}$ varies.
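To make this setting concrete, the sketch below freezes the well-known vector $\vect{d}$ inside a model $h$ so that only $\vect{x}$ remains free, using the openturns Python module; the symbolic formula and the numerical value of $\vect{d}$ are illustrative assumptions, not values prescribed by this page.

```python
import openturns as ot

# Full model y = h(x, d), with x = (x1, x2) unknown and d well known.
# The formula and the value d = 0.5 are purely illustrative.
h = ot.SymbolicFunction(['x1', 'x2', 'd'], ['(x1 - d)^2 + (x2 + d)^2'])

# Freeze the third input (index 2) at its known value: the result is a
# function of x alone, whose extreme values can then be sought.
h_of_x = ot.ParametricFunction(h, [2], [0.5])
print(h_of_x([1.0, -1.0]))  # y at x = (1, -1)
```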
Several optimization algorithms can be used for this purpose. We give the principle of the TNC (Truncated Newton Constrained) algorithm, which minimizes a function with variables subject to bounds, using gradient information.
Truncated-Newton methods are a family of methods suitable for solving
large nonlinear optimization problems. At each iteration, the current
estimate of the solution is updated by approximately solving the
Newton equations using an iterative algorithm. This results in a
doubly iterative method: an outer iteration for the nonlinear
optimization problem and an inner iteration for the Newton equations.
The inner iteration is typically stopped or truncated before the
solution to the Newton equations is obtained.
The TNC algorithm solves:

$$\min_{\vect{x} \in [\vect{a},\vect{b}] \in \overline{\Rset}^n} f(\vect{x})$$

and proceeds as follows under the proper regularity of the objective function $f$:

$$\left\{
\begin{array}{l}
\vect{\nabla f}(\vect{x}) = \vect{0} \\
\mat{\nabla_2 f}(\vect{x}) \mbox{ is definite positive}
\end{array}
\right.$$
The second-order Taylor expansion of $f$ around $\vect{x}_k$ leads to the determination of the iterate $\vect{x}_{k+1}$ such that:
$$\left\{
\begin{array}{l}
\vect{\Delta}_k = \vect{x}_{k+1} - \vect{x}_k \\
\mat{\nabla_2 f}(\vect{x}_k) \, \vect{\Delta}_k = -\vect{\nabla f}(\vect{x}_k)
\end{array}
\right. \qquad (1)$$
Equation (1) is truncated: the iterative search for $\vect{\Delta}_k$ is stopped as soon as $\vect{\Delta}_k$ satisfies:

$$\left\| \mat{\nabla_2 f}(\vect{x}_k) \, \vect{\Delta}_k + \vect{\nabla f}(\vect{x}_k) \right\| \leq \eta \, \left\| \vect{\nabla f}(\vect{x}_k) \right\|$$
Finally, the iterate $\vect{x}_{k+1}$ is defined by:

$$\vect{x}_{k+1} = \vect{x}_k + \alpha_k \, \vect{\Delta}_k$$

where $\alpha_k$ is the parameter *stepmx*.
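As a rough illustration of the doubly iterative scheme described above (an outer Newton iteration, an inner conjugate-gradient iteration truncated by the test on the residual), here is a minimal NumPy sketch. The quadratic test function, the tolerance `eta = 0.5` and the fixed step `alpha = 1.0` are illustrative assumptions; this is not the actual TNC implementation, which in particular also handles the bound constraints.

```python
import numpy as np

def truncated_newton(grad, hess, x0, eta=0.5, alpha=1.0, tol=1e-8, max_outer=50):
    """Sketch of a truncated-Newton iteration (bounds are ignored here)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # outer stopping test on the gradient
            return x
        H = hess(x)
        # Inner iteration: conjugate gradient on H @ delta = -g, truncated
        # as soon as ||H @ delta + g|| <= eta * ||g||.
        delta = np.zeros_like(x)
        r = -g - H @ delta  # residual of the Newton equations
        p = r.copy()
        while np.linalg.norm(H @ delta + g) > eta * np.linalg.norm(g):
            Hp = H @ p
            a = (r @ r) / (p @ Hp)
            delta = delta + a * p
            r_new = r - a * Hp
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        x = x + alpha * delta  # x_{k+1} = x_k + alpha_k * Delta_k
    return x

# Example on a convex quadratic: minimum at (1, -0.5).
def grad(x):
    return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])

def hess(x):
    return np.diag([2.0, 4.0])

print(truncated_newton(grad, hess, [3.0, 3.0]))
```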
API:
- See TNC
- See the available optimization algorithms.
Examples:
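A minimal usage sketch with the openturns Python module is given below; the objective function, the bounds and the starting point are illustrative assumptions, not values prescribed by this page.

```python
import openturns as ot

# Bound-constrained problem: minimize f(x) = (x1 - 0.5)^2 + (x2 - 1)^2
# over the box [a, b] = [-2, 2] x [-2, 2].
objective = ot.SymbolicFunction(['x1', 'x2'], ['(x1 - 0.5)^2 + (x2 - 1)^2'])
problem = ot.OptimizationProblem(objective)
problem.setBounds(ot.Interval([-2.0, -2.0], [2.0, 2.0]))

algo = ot.TNC(problem)
algo.setStartingPoint([0.0, 0.0])
algo.run()

result = algo.getResult()
print(result.getOptimalPoint())  # close to [0.5, 1]
print(result.getOptimalValue())  # close to [0]
```

To search for a maximum instead of a minimum, call `problem.setMinimization(False)` before running the algorithm.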