GaussianProcessFitter

class GaussianProcessFitter(*args)

Fit Gaussian process models.

Refer to Gaussian process regression.

Warning

This class is experimental and likely to be modified in future releases. To use it, import the openturns.experimental submodule.

Parameters:
inputSample, outputSample : Sample or 2d-array

The samples (\vect{x}_k)_{1 \leq k \leq \sampleSize} \in \Rset^\inputDim and (\vect{y}_k)_{1 \leq k \leq \sampleSize}\in \Rset^{\outputDim}.

covarianceModel : CovarianceModel

Covariance model of the Gaussian process. See notes for the details.

basis : Basis

Functional basis to estimate the trend: (\varphi_j)_{1 \leq j \leq b}: \Rset^\inputDim \rightarrow \Rset.

The same basis is used for each marginal output.

Default value is Basis(0), i.e. no trend to estimate.

Methods

BuildDistribution(inputSample)

Recover the distribution, with metamodel performance in mind.

getClassName()

Accessor to the object's name.

getDistribution()

Accessor to the joint probability density function of the physical input vector.

getInputSample()

Accessor to the input sample.

getKeepCholeskyFactor()

Keep Cholesky factor accessor.

getMethod()

Accessor to the linear algebra method.

getName()

Accessor to the object's name.

getOptimizationAlgorithm()

Accessor to solver used to optimize the covariance model parameters.

getOptimizationBounds()

Optimization bounds accessor.

getOptimizeParameters()

Accessor to the covariance model parameters optimization flag.

getOutputSample()

Accessor to the output sample.

getReducedLogLikelihoodFunction()

Accessor to the log-likelihood function expressed as a function of the covariance model parameters.

getResult()

Get the results of the metamodel computation.

getWeights()

Return the weights of the input sample.

hasName()

Test if the object is named.

run()

Compute the response surface.

setDistribution(distribution)

Accessor to the joint probability density function of the physical input vector.

setKeepCholeskyFactor(keepCholeskyFactor)

Keep Cholesky factor setter.

setMethod(method)

Accessor to the linear algebra method.

setName(name)

Accessor to the object's name.

setOptimizationAlgorithm(solver)

Accessor to the solver used to optimize the covariance model parameters.

setOptimizationBounds(optimizationBounds)

Optimization bounds accessor.

setOptimizeParameters(optimizeParameters)

Accessor to the covariance model parameters optimization flag.

Notes

Refer to Gaussian process regression (Step 1) to get all the notations and the theoretical aspects. We only detail here the notions related to the class.

We suppose we have a sample (\vect{x}_k, \vect{y}_k)_{1 \leq k \leq \sampleSize} where \vect{y}_k = \model(\vect{x}_k) for all k, with \model:\Rset^\inputDim \mapsto
\Rset^{\outputDim} a given function.

The class creates the Gaussian process \vect{Y}(\omega, \vect{x}) such that the sample (\vect{y}_k)_{1 \leq k \leq \sampleSize} is considered as its restriction on (\vect{x}_k)_{1 \leq k \leq \sampleSize}. It is defined by:

\vect{Y}(\omega, \vect{x}) = \vect{\mu}(\vect{x}) + \vect{W}(\omega, \vect{x})

where \vect{\mu} = (\mu_1, \dots, \mu_\outputDim) with \mu_\ell(\vect{x}) = \sum_{j=1}^{b}
\beta_j^\ell \varphi_j(\vect{x}) and \varphi_j: \Rset^\inputDim \rightarrow \Rset the trend function basis for 1 \leq j \leq b and 1 \leq \ell \leq \outputDim.

Furthermore, \vect{W} is a Gaussian process of dimension \outputDim with zero mean and a specified covariance model.

The GaussianProcessFitter class estimates the coefficients \beta_j^\ell and \vect{p} where \vect{p} is the vector of the parameters of the covariance model that has been declared as active: see openturns.CovarianceModel to get details on the activation of the estimation of the other parameters.

The estimation is done by maximizing the reduced log-likelihood of the model, defined in (1).

The default optimizer is Cobyla and can be changed thanks to the setOptimizationAlgorithm() method. The user can also change the default optimization solver by setting the GaussianProcessFitter-DefaultOptimizationAlgorithm entry of ResourceMap to one of the NLopt solver names, as illustrated below.
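
A minimal sketch of the global setting (assuming the library is built with NLopt support and that 'LN_COBYLA' is one of the available NLopt solver names):

>>> import openturns as ot
>>> # hypothetical choice: future GaussianProcessFitter instances will default to this NLopt solver
>>> ot.ResourceMap.SetAsString('GaussianProcessFitter-DefaultOptimizationAlgorithm', 'LN_COBYLA')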

It is also possible to proceed as follows (a sketch is given after this list):

  • ask for the reduced log-likelihood function thanks to the getReducedLogLikelihoodFunction() method,

  • optimize it with respect to the parameters \vect{\theta} and \vect{\sigma} using any optimization algorithm (which can take additional constraints into account if needed),

  • set the optimal parameter values into the covariance model used in the GaussianProcessFitter,

  • tell the algorithm not to optimize the parameters, using the setOptimizeParameters() method.
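
A minimal sketch of this workflow, assuming a scalar output with the default analytical amplitude estimate (so that the reduced log-likelihood only depends on the scale parameter); the crude grid search below merely stands in for any external optimizer:

>>> import openturns as ot
>>> import openturns.experimental as otexp
>>> g = ot.SymbolicFunction(['x'], ['x * sin(x)'])
>>> inputSample = ot.Sample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
>>> outputSample = g(inputSample)
>>> basis = ot.ConstantBasisFactory().build()
>>> covarianceModel = ot.SquaredExponential([1.0])
>>> fitter = otexp.GaussianProcessFitter(inputSample, outputSample, covarianceModel, basis)
>>> fitter.run()  # run once, as in the examples of this page, before accessing the reduced log-likelihood
>>> # 1. reduced log-likelihood as a function of the active covariance parameters (here the scale)
>>> reducedLogLikelihood = fitter.getReducedLogLikelihoodFunction()
>>> # 2. optimize it externally; the grid search is only for illustration
>>> candidates = [[0.1 + 0.1 * i] for i in range(50)]
>>> values = [reducedLogLikelihood(theta)[0] for theta in candidates]
>>> bestScale = candidates[values.index(max(values))]
>>> # 3. set the optimal value into the covariance model and rebuild the algorithm
>>> covarianceModel.setScale(bestScale)
>>> fitter = otexp.GaussianProcessFitter(inputSample, outputSample, covarianceModel, basis)
>>> # 4. tell the algorithm not to optimize the parameters
>>> fitter.setOptimizeParameters(False)
>>> fitter.run()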

The behaviour of the reduction is controlled by the following keys in ResourceMap:

  • ResourceMap.SetAsBool('GaussianProcessFitter-UseAnalyticalAmplitudeEstimate', True) to use the reduction associated with \sigma. It has no effect if \outputDim > 1, or if \outputDim = 1 and \sigma is not part of \vect{p},

  • ResourceMap.SetAsBool('GaussianProcessFitter-UnbiasedVariance', True) allows one to use the unbiased estimate of \sigma, where \dfrac{1}{\sampleSize} is replaced by \dfrac{1}{\sampleSize - \outputDim} in the optimality condition for \sigma.

For large samples, the hierarchical matrix implementation can be used, provided hmat-oss support has been enabled.

This implementation, based on a compressed representation of an approximated covariance matrix (and its Cholesky factor), has a better complexity both in terms of memory requirements and floating point operations. To use it, the GaussianProcessFitter-LinearAlgebra entry of ResourceMap should be set to HMAT, as shown below. The default value of the key is LAPACK.
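
A minimal sketch of the switch (assuming hmat-oss support is available in the installed library):

>>> import openturns as ot
>>> # use hierarchical matrices instead of dense LAPACK linear algebra
>>> ot.ResourceMap.SetAsString('GaussianProcessFitter-LinearAlgebra', 'HMAT')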

Examples

Create the model \model: \Rset \mapsto \Rset and the samples:

>>> import openturns as ot
>>> import openturns.experimental as otexp
>>> g = ot.SymbolicFunction(['x'], ['x + x * sin(x)'])
>>> inputSample = ot.Sample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
>>> outputSample = g(inputSample)

Create the algorithm:

>>> g1 = ot.SymbolicFunction(['x'], ['sin(x)'])
>>> g2 = ot.SymbolicFunction(['x'], ['x'])
>>> g3 = ot.SymbolicFunction(['x'], ['cos(x)'])
>>> basis = ot.Basis([g1, g2, g3])
>>> covarianceModel = ot.SquaredExponential([1.0])
>>> covarianceModel.setActiveParameter([])
>>> algo = otexp.GaussianProcessFitter(inputSample, outputSample, covarianceModel, basis)
>>> algo.run()

Get the resulting metamodel which is the trend function of the Gaussian process:

>>> result = algo.getResult()
>>> metamodel = result.getMetaModel()
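
The metamodel can then be evaluated at new input points, for instance:

>>> x_new = [[2.0], [4.5]]
>>> y_new = metamodel(x_new)
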
__init__(*args)
static BuildDistribution(inputSample)

Recover the distribution, with metamodel performance in mind.

For each marginal, find the best 1-d continuous parametric model; if none is valid, fall back to a nonparametric one.

The selection is done as follows:

  • We start with a list of all parametric models (all factories)

  • For each model, we estimate its parameters if feasible.

  • We then check whether the model is valid, i.e. whether its Kolmogorov score exceeds the threshold given by the MetaModelAlgorithm-PValueThreshold ResourceMap key. The default value is 5%.

  • We sort all valid models and return the one with the optimal criterion.

For the last step, the criterion may be BIC, AIC or AICC. The criterion is specified through the MetaModelAlgorithm-ModelSelectionCriterion ResourceMap key; the default value is BIC. Note that if there is no valid candidate, we estimate a nonparametric model (KernelSmoothing or Histogram). The MetaModelAlgorithm-NonParametricModel ResourceMap key allows selecting the preferred one; the default value is Histogram.

Once each marginal is estimated, we use the Spearman independence test on each pair of components to decide whether to use an independent copula. If dependence is detected, we rely on a NormalCopula.

Parameters:
sample : Sample

Input sample.

Returns:
distribution : Distribution

Input distribution.
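
A minimal usage sketch (the bivariate Normal sample is only an illustrative assumption):

>>> import openturns as ot
>>> import openturns.experimental as otexp
>>> sample = ot.Normal(2).getSample(100)
>>> distribution = otexp.GaussianProcessFitter.BuildDistribution(sample)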

getClassName()

Accessor to the object’s name.

Returns:
class_name : str

The object class name (object.__class__.__name__).

getDistribution()

Accessor to the joint probability density function of the physical input vector.

Returns:
distribution : Distribution

Joint probability density function of the physical input vector.

getInputSample()

Accessor to the input sample.

Returns:
inputSample : Sample

Input sample at which the model has been evaluated.

getKeepCholeskyFactor()

Keep Cholesky factor accessor.

Returns:
keepCholesky : bool

Whether the final Cholesky factor is kept.

getMethod()

Accessor to the linear algebra method.

Returns:
linAlgMethod : int

The used linear algebra method to fit the model:

  • otexp.GaussianProcessFitterResult.LAPACK or 0: using LAPACK to fit the model,

  • otexp.GaussianProcessFitterResult.HMAT or 1: using HMAT to fit the model.

getName()

Accessor to the object’s name.

Returns:
name : str

The name of the object.

getOptimizationAlgorithm()

Accessor to solver used to optimize the covariance model parameters.

Returns:
algorithm : OptimizationAlgorithm

Solver used to optimize the covariance model parameters. The default optimizer is Cobyla.

getOptimizationBounds()

Optimization bounds accessor.

Returns:
bounds : Interval

Bounds for covariance model parameter optimization.

getOptimizeParameters()

Accessor to the covariance model parameters optimization flag.

Returns:
optimizeParameters : bool

Whether to optimize the covariance model parameters.

getOutputSample()

Accessor to the output sample.

Returns:
outputSample : Sample

Output sample obtained by evaluating the model at the input sample.

getReducedLogLikelihoodFunction()

Accessor to the log-likelihood function expressed as a function of the covariance model parameters.

Returns:
logLikelihood : Function

The log-likelihood function defined in (1), as a function of the active parameters of the covariance model.

Notes

The log-likelihood function may be useful for some postprocessing: maximization using external optimizers for example.

Examples

Create the model \model: \Rset \mapsto \Rset and the samples:

>>> import openturns as ot
>>> import openturns.experimental as otexp
>>> g = ot.SymbolicFunction(['x0'], ['x0 * sin(x0)'])
>>> inputSample = ot.Sample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
>>> outputSample = g(inputSample)

Create the algorithm:

>>> basis = ot.ConstantBasisFactory().build()
>>> covarianceModel = ot.SquaredExponential(1)
>>> algo = otexp.GaussianProcessFitter(inputSample, outputSample, covarianceModel, basis)
>>> algo.run()

Get the log-likelihood function:

>>> likelihoodFunction = algo.getReducedLogLikelihoodFunction()
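
The returned function can then be evaluated at candidate parameter values, for instance (assuming the scale is the only active parameter, as in this example):

>>> scale_candidate = [1.5]
>>> ll_value = likelihoodFunction(scale_candidate)
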
getResult()

Get the results of the metamodel computation.

Returns:
result : GaussianProcessFitterResult

Structure containing all the results obtained after computation and created by the method run().

getWeights()

Return the weights of the input sample.

Returns:
weights : sequence of float

The weights of the points in the input sample.

hasName()

Test if the object is named.

Returns:
hasName : bool

True if the name is not empty.

run()

Compute the response surface.

Notes

It computes the response surface and creates a GaussianProcessFitterResult structure containing all the results.

setDistribution(distribution)

Accessor to the joint probability density function of the physical input vector.

Parameters:
distribution : Distribution

Joint probability density function of the physical input vector.

setKeepCholeskyFactor(keepCholeskyFactor)

Keep Cholesky factor setter.

Parameters:
keepCholesky : bool

Whether the final Cholesky factor is kept.

setMethod(method)

Accessor to the linear algebra method.

Parameters:
linAlgMethod : int

The used linear algebra method to fit the model:

  • otexp.GaussianProcessFitterResult.LAPACK or 0: using LAPACK to fit the model,

  • otexp.GaussianProcessFitterResult.HMAT or 1: using HMAT to fit the model.

setName(name)

Accessor to the object’s name.

Parameters:
name : str

The name of the object.

setOptimizationAlgorithm(solver)

Accessor to the solver used to optimize the covariance model parameters.

Parameters:
algorithm : OptimizationAlgorithm

Solver used to optimize the covariance model parameters.

setOptimizationBounds(optimizationBounds)

Optimization bounds accessor.

Parameters:
bounds : Interval

Bounds for covariance model parameter optimization.

Notes

The parameters involved in this method are:

  • Scale parameters,

  • Amplitude parameters, if the output dimension is greater than one or if the analytical estimate of \sigma is disabled,

  • Additional parameters.

Lower and upper bounds are defined in ResourceMap. The default lower bound for all parameters is 10^{-2}, set by the GaussianProcessFitter-DefaultOptimizationLowerBound ResourceMap key.

For scale parameters, the default upper bounds are set to 2 times the difference between the maximum and minimum values of X for each coordinate, X being the (transformed) input sample. The factor 2 is defined by the GaussianProcessFitter-DefaultOptimizationScaleFactor ResourceMap key.

Finally, for the other parameters (amplitude, …), the default upper bound is set to 100 (the corresponding ResourceMap key is GaussianProcessFitter-DefaultOptimizationUpperBound).
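
As an illustration, assuming algo is a GaussianProcessFitter built as in the class example above (single scale parameter), custom bounds can be set as follows; the numerical values are arbitrary:

>>> bounds = ot.Interval([0.01], [20.0])
>>> algo.setOptimizationBounds(bounds)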

setOptimizeParameters(optimizeParameters)

Accessor to the covariance model parameters optimization flag.

Parameters:
optimizeParameters : bool

Whether to optimize the covariance model parameters.

Examples using the class

Gaussian Process Regression vs KrigingAlgorithm

Create a general linear model metamodel

Gaussian Process Regression: multiple input dimensions

Gaussian Process Regression : quick-start

Gaussian Process-based active learning for reliability

Advanced Gaussian process regression

Gaussian Process Regression: choose an arbitrary trend

Gaussian Process Regression: choose a polynomial trend on the beam model

Gaussian Process Regression : cantilever beam model

Gaussian Process Regression: surrogate model with continuous and categorical variables

Gaussian Process Regression: choose a polynomial trend

Gaussian process fitter: configure the optimization solver

Gaussian Process Regression: use an isotropic covariance kernel

Gaussian process regression: draw the likelihood

Gaussian Process Regression : generate trajectories from the metamodel

Gaussian Process Regression: metamodel of the Branin-Hoo function

Example of multi output Gaussian Process Regression on the fire satellite model

Sequentially adding new points to a Gaussian Process metamodel

Gaussian Process Regression: propagate uncertainties

EfficientGlobalOptimization examples