KrigingAlgorithm

class KrigingAlgorithm(*args)

Kriging algorithm.

Available constructors:

KrigingAlgorithm(inputSample, outputSample, basis, covarianceModel, normalize=True, keepCovariance=True)

KrigingAlgorithm(inputSample, inputTransformation, outputSample, basis, covarianceModel, keepCovariance=True)

KrigingAlgorithm(inputSample, outputSample, multivariateBasis, covarianceModel, normalize=True, keepCovariance=True)

KrigingAlgorithm(inputSample, inputTransformation, outputSample, multivariateBasis, covarianceModel, keepCovariance=True)

Parameters:

inputSample, outputSample : 2-d sequence of float

The samples (\vect{x}_k)_{1 \leq k \leq N} \in \Rset^d and (\vect{y}_k)_{1 \leq k \leq N}\in \Rset^p.

inputTransformation : NumericalMathFunction

Function T used to normalize the input sample.

If used, the meta model is built on the transformed data.

basis : Basis

Functional basis to estimate the trend: (\varphi_j)_{1 \leq j \leq n_1}: \Rset^d \rightarrow \Rset.

If p>1, the same basis is used for each marginal output.

multivariateBasis : collection of Basis

Collection of p functional bases, one basis for each marginal output: \left[(\varphi_j^1)_{1 \leq j \leq n_1}, \dots, (\varphi_j^p)_{1 \leq j \leq n_p}\right].

If the trend is not estimated, the collection must be empty.

covarianceModel : CovarianceModel

Covariance model of the normal process.

normalize : bool, optional

Indicates whether the input sample has to be normalized.

OpenTURNS uses either the transformation fixed by the user in inputTransformation or one based on the empirical mean and variance of the input sample. The default value is set by the GeneralizedLinearModelAlgorithm-NormalizeData resource map key.

keepCovariance : bool, optional

Indicates whether the covariance matrix has to be stored in the result structure KrigingResult.

Default is True.
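
For instance, following the first documented constructor, the normalize flag can be passed explicitly as the fifth argument to disable the input normalization (a minimal sketch reusing the setup of the Examples section below):

>>> import openturns as ot
>>> f = ot.NumericalMathFunction(['x'], ['x * sin(x)'])
>>> inputSample = ot.NumericalSample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
>>> outputSample = f(inputSample)
>>> basis = ot.ConstantBasisFactory().build()
>>> covarianceModel = ot.SquaredExponential(1)
>>> # normalize=False: the meta model is built on the raw input sample
>>> algo = ot.KrigingAlgorithm(inputSample, outputSample, basis, covarianceModel, False)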

Notes

We suppose we have a sample (\vect{x}_k, \vect{y}_k)_{1 \leq k \leq N} where \vect{y}_k = \cM(\vect{x}_k) for all k, with \cM:\Rset^d \mapsto \Rset^p the model.

The Kriging meta model relies on the same principles as the generalized linear model: the sample (\vect{y}_k)_{1 \leq k \leq N} is considered as the trace of a normal process \vect{Y}(\omega, \vect{x}) on (\vect{x}_k)_{1 \leq k \leq N}. The normal process \vect{Y}(\omega, \vect{x}) is defined by:

\vect{Y}(\omega, \vect{x}) = \vect{\mu}(\vect{x}) + W(\omega, \vect{x}) \qquad (1)

where:

\vect{\mu}(\vect{x}) = \left(
  \begin{array}{c}
    \mu_1(\vect{x}) \\
    \vdots \\
    \mu_p(\vect{x})
  \end{array}
\right)

with \mu_l(\vect{x}) = \sum_{j=1}^{n_l} \alpha_j^l \varphi_j^l(\vect{x}) and \varphi_j^l: \Rset^d \rightarrow \Rset the trend functions.

W is a normal process of dimension p with zero mean and covariance function C = C(\vect{\theta}, \vect{\sigma}, \mat{R}, \vect{\lambda}) (see CovarianceModel for the notations).
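
In practice the covariance function C is supplied through a CovarianceModel object. A minimal sketch, assuming the scale/amplitude constructor of SquaredExponential, for a one-dimensional input with scale \vect{\theta} = (2) and amplitude \vect{\sigma} = (1.5):

>>> import openturns as ot
>>> # squared exponential kernel with scale theta = [2.0] and amplitude sigma = [1.5]
>>> covarianceModel = ot.SquaredExponential([2.0], [1.5])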

The parameters \alpha_j^l, \vect{\theta} and \vect{\sigma} are estimated by the GeneralizedLinearModelAlgorithm class.

The Kriging algorithm makes the generalized linear model interpolate the input samples. The Kriging meta model \tilde{\cM} is defined by:

\tilde{\cM}(\vect{x}) =  \vect{\mu}(\vect{x}) + \Expect{\vect{Y}(\omega, \vect{x})\, | \,  \cC}

where \cC is the condition \vect{Y}(\omega, \vect{x}_k) = \vect{y}_k for each k \in [1, N].

Developing the conditional expectation, the meta model writes:

\tilde{\cM}(\vect{x}) = \vect{\mu}(\vect{x}) + \Cov{\vect{Y}(\omega, \vect{x}), (\vect{Y}(\omega, \vect{x}_1), \dots, \vect{Y}(\omega, \vect{x}_N))} \vect{\gamma}

where \Cov{\vect{Y}(\omega, \vect{x}), (\vect{Y}(\omega, \vect{x}_1), \dots, \vect{Y}(\omega, \vect{x}_N))} = \left( \mat{C}( \vect{x}, \vect{x}_1) | \dots | \mat{C}( \vect{x}, \vect{x}_N) \right) is a matrix in \cM_{p, Np}(\Rset) and \vect{\gamma} = \mat{C}^{-1}(\vect{y}-\vect{m}).

A known centered Gaussian observation noise \epsilon_k can be taken into account with setNoise():

\hat{\vect{y}}_k = \vect{y}_k + \epsilon_k, \epsilon_k \sim \mathcal{N}(0, \tau_k^2)
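
A minimal sketch, with one noise variance per observation of a setup similar to the Examples section below (the noise must be set before calling run()):

>>> import openturns as ot
>>> f = ot.NumericalMathFunction(['x'], ['x * sin(x)'])
>>> inputSample = ot.NumericalSample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
>>> outputSample = f(inputSample)
>>> basis = ot.ConstantBasisFactory().build()
>>> covarianceModel = ot.SquaredExponential(1)
>>> algo = ot.KrigingAlgorithm(inputSample, outputSample, basis, covarianceModel)
>>> # one variance tau_k^2 per observation
>>> algo.setNoise([0.01] * 6)
>>> algo.run()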

Examples

Create the model \cM: \Rset \mapsto \Rset and the samples:

>>> import openturns as ot
>>> # use of Hmat implementation
>>> # ot.ResourceMap.Set('KrigingAlgorithm-LinearAlgebra', 'HMAT')
>>> f = ot.NumericalMathFunction(['x'], ['x * sin(x)'])
>>> inputSample = ot.NumericalSample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
>>> outputSample = f(inputSample)

Create the algorithm:

>>> basis = ot.ConstantBasisFactory().build()
>>> covarianceModel = ot.SquaredExponential(1)
>>> algo = ot.KrigingAlgorithm(inputSample, outputSample, basis, covarianceModel)
>>> algo.run()

Get the resulting meta model:

>>> result = algo.getResult()
>>> metamodel = result.getMetaModel()
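
Continuing this example, the meta model can be evaluated at a new input point; when the covariance has been kept (keepCovariance=True), the conditional covariance at that point can also be queried from the result (assuming the getConditionalCovariance accessor of KrigingResult):

>>> # Kriging prediction at a new point
>>> prediction = metamodel([5.5])
>>> # conditional covariance of the process at the same point, given the observations
>>> conditionalCovariance = result.getConditionalCovariance([5.5])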

Methods

getClassName() Accessor to the object’s name.
getDistribution() Accessor to the joint probability density function of the physical input vector.
getId() Accessor to the object’s id.
getInputSample() Accessor to the input sample.
getLogLikelihoodFunction() Accessor to the log-likelihood function, expressed as a function of the covariance model parameters.
getName() Accessor to the object’s name.
getNoise() Observation noise variance accessor.
getOptimizationSolver() Accessor to the solver used to optimize the covariance model parameters.
getOptimizeParameters() Accessor to the covariance model parameters optimization flag.
getOutputSample() Accessor to the output sample.
getResult() Get the results of the metamodel computation.
getShadowedId() Accessor to the object’s shadowed id.
getVisibility() Accessor to the object’s visibility state.
hasName() Test if the object is named.
hasVisibleName() Test if the object has a distinguishable name.
run() Compute the response surface.
setDistribution(distribution) Accessor to the joint probability density function of the physical input vector.
setName(name) Accessor to the object’s name.
setNoise(noise) Observation noise variance accessor.
setOptimizationSolver(solver) Accessor to the solver used to optimize the covariance model parameters.
setOptimizeParameters(optimizeParameters) Accessor to the covariance model parameters optimization flag.
setShadowedId(id) Accessor to the object’s shadowed id.
setVisibility(visible) Accessor to the object’s visibility state.
__init__(*args)
getClassName()

Accessor to the object’s name.

Returns:

class_name : str

The object class name (object.__class__.__name__).

getDistribution()

Accessor to the joint probability density function of the physical input vector.

Returns:

distribution : Distribution

Joint probability density function of the physical input vector.

getId()

Accessor to the object’s id.

Returns:

id : int

Internal unique identifier.

getInputSample()

Accessor to the input sample.

Returns:

inputSample : NumericalSample

The input sample (\vect{x}_k)_{1 \leq k \leq N}.

getLogLikelihoodFunction()

Accessor to the log-likelihood function, expressed as a function of the covariance model parameters.

Returns:

logLikelihood : NumericalMathFunction

The log-likelihood function as a function of (\vect{\theta}, \vect{\sigma}).

Notes

The log-likelihood function may be useful for some postprocessing, for example maximization using an external optimizer.

Examples

Create the model \cM: \Rset \mapsto \Rset and the samples:

>>> import openturns as ot
>>> f = ot.NumericalMathFunction(['x0'], ['f0'], ['x0 * sin(x0)'])
>>> inputSample = ot.NumericalSample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
>>> outputSample = f(inputSample)

Create the algorithm:

>>> basis = ot.ConstantBasisFactory().build()
>>> covarianceModel = ot.SquaredExponential(1)
>>> algo = ot.KrigingAlgorithm(inputSample, outputSample, basis, covarianceModel)
>>> algo.run()

Get the log-likelihood function:

>>> likelihoodFunction = algo.getLogLikelihoodFunction()
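
The returned function can then be evaluated at candidate covariance parameter values, for instance inside a hand-written optimization loop. A minimal sketch, assuming here that the function takes the scale parameter of the one-dimensional covariance model as input:

>>> # number of parameters expected by the log-likelihood function
>>> n = likelihoodFunction.getInputDimension()
>>> # log-likelihood at a candidate scale value (assumed single scale parameter)
>>> logLikelihoodValue = likelihoodFunction([1.0])
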
getName()

Accessor to the object’s name.

Returns:

name : str

The name of the object.

getNoise()

Observation noise variance accessor.

Returns:

noise : sequence of positive float

The noise variance \tau_k^2 of each output value.

getOptimizationSolver()

Accessor to the solver used to optimize the covariance model parameters.

Returns:

algorithm : OptimizationSolver

Solver used to optimize the covariance model parameters.

getOptimizeParameters()

Accessor to the covariance model parameters optimization flag.

Returns:

optimizeParameters : bool

Whether to optimize the covariance model parameters.

getOutputSample()

Accessor to the output sample.

Returns:

outputSample : NumericalSample

The output sample (\vect{y}_k)_{1 \leq k \leq N}.

getResult()

Get the results of the metamodel computation.

Returns:

result : KrigingResult

Structure containing all the results obtained after computation and created by the method run().

getShadowedId()

Accessor to the object’s shadowed id.

Returns:

id : int

Internal unique identifier.

getVisibility()

Accessor to the object’s visibility state.

Returns:

visible : bool

Visibility flag.

hasName()

Test if the object is named.

Returns:

hasName : bool

True if the name is not empty.

hasVisibleName()

Test if the object has a distinguishable name.

Returns:

hasVisibleName : bool

True if the name is not empty and not the default one.

run()

Compute the response surface.

Notes

It computes the Kriging response surface and creates a KrigingResult structure containing all the results.

setDistribution(distribution)

Accessor to the joint probability density function of the physical input vector.

Parameters:

distribution : Distribution

Joint probability density function of the physical input vector.

setName(name)

Accessor to the object’s name.

Parameters:

name : str

The name of the object.

setNoise(noise)

Observation noise variance accessor.

Parameters:

noise : sequence of positive float

The noise variance \tau_k^2 of each output value.

setOptimizationSolver(solver)

Accessor to the solver used to optimize the covariance model parameters.

Parameters:

solver : OptimizationSolver

Solver used to optimize the covariance model parameters.

Examples

Create the model \cM: \Rset \mapsto \Rset and the samples:

>>> import openturns as ot
>>> input_data = ot.Uniform(-1.0, 2.0).getSample(10)
>>> model = ot.NumericalMathFunction('x', 'x-1+sin(_pi*x/(1+0.25*x^2))')
>>> output_data = model(input_data)

Create the Kriging algorithm with the optimizer option:

>>> basis = ot.Basis([ot.NumericalMathFunction('x', '0.0')])
>>> thetaInit = 1.0
>>> covariance = ot.GeneralizedExponential([thetaInit], 2.0)
>>> optimizer = ot.TNC()
>>> bounds = ot.Interval(1e-2,1e2)
>>> optimProblem = ot.OptimizationProblem()
>>> optimProblem.setBounds(bounds)
>>> optimizer.setProblem(optimProblem)
>>> algo = ot.KrigingAlgorithm(input_data, output_data, basis, covariance)
>>> algo.setOptimizationSolver(optimizer)
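
Once the solver is set, running the algorithm optimizes the covariance parameters within the given bounds; the optimized covariance model can then be read back from the result (a minimal sketch continuing the example above):

>>> algo.run()
>>> result = algo.getResult()
>>> # covariance model carrying the optimized parameters
>>> optimizedCovariance = result.getCovarianceModel()
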
setOptimizeParameters(optimizeParameters)

Accessor to the covariance model parameters optimization flag.

Parameters:

optimizeParameters : bool

Whether to optimize the covariance model parameters.
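
Disabling the optimization keeps the covariance model parameters exactly as provided, which can be useful when they are already known. A minimal sketch:

>>> import openturns as ot
>>> f = ot.NumericalMathFunction(['x'], ['x * sin(x)'])
>>> inputSample = ot.NumericalSample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
>>> outputSample = f(inputSample)
>>> basis = ot.ConstantBasisFactory().build()
>>> covarianceModel = ot.SquaredExponential(1)
>>> algo = ot.KrigingAlgorithm(inputSample, outputSample, basis, covarianceModel)
>>> # keep the covariance parameters as given, do not optimize them
>>> algo.setOptimizeParameters(False)
>>> algo.run()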

setShadowedId(id)

Accessor to the object’s shadowed id.

Parameters:

id : int

Internal unique identifier.

setVisibility(visible)

Accessor to the object’s visibility state.

Parameters:

visible : bool

Visibility flag.