Gaussian process regression
Gaussian process regression (also known as Kriging) is a Bayesian technique that aims at approximating a function (most often in order to build a surrogate for it because it is expensive to evaluate).
Let \(\mathcal{M} : \mathbb{R}^d \rightarrow \mathbb{R}^p\) be a model and \((\mathbf{x}_k, \mathbf{y}_k)_{1 \leq k \leq n}\) a data set where \(\mathbf{y}_k = \mathcal{M}(\mathbf{x}_k)\) for all \(1 \leq k \leq n\).
The objective is to build a metamodel \(\tilde{\mathcal{M}}\), using a Gaussian process that interpolates the data set. To build this metamodel, we follow these steps:
Step 1: Gaussian process fitting: we build the Gaussian process \(\mathbf{Y}(\omega, \mathbf{x})\) defined by \(\mathbf{Y}(\omega, \mathbf{x}) = \boldsymbol{\mu}(\mathbf{x}) + \mathbf{W}(\omega, \mathbf{x})\), where \(\boldsymbol{\mu}\) is the trend function and \(\mathbf{W}\) is a Gaussian process of dimension \(p\) with zero mean and covariance function \(\mathbf{C}\). We show how to take into account a noise observed on the output values of the function;
Step 2: Gaussian Process Regression: we condition the Gaussian process \(\mathbf{Y}\) to the data set by considering the Gaussian Process Regression denoted by \(\mathbf{Z}(\omega, \mathbf{x}) = \mathbf{Y}(\omega, \mathbf{x}) \mid \mathcal{C}\), where \(\mathcal{C}\) is the condition \(\mathbf{Y}(\omega, \mathbf{x}_k) = \mathbf{y}_k\) for \(1 \leq k \leq n\);
Step 3: Gaussian Process Regression metamodel and its exploitation: we define the metamodel as \(\tilde{\mathcal{M}}(\mathbf{x}) = \mathbb{E}[\mathbf{Z}(\omega, \mathbf{x})]\). Note that this metamodel interpolates the data set when the noise parameter is set to zero. We can use the conditional covariance in order to quantify the error of the metamodel, that is, the variation of the Gaussian vector at a given point.
Note that the Gaussian process regression of a function with multivariate output (\(p > 1\)) is handled jointly, not by simply looping over each output component.
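As a concrete preview of these three steps, here is a minimal sketch with OpenTURNS, assuming a version (1.24 or later) where the classes below are available in the openturns.experimental module; the model and training points are purely illustrative:

```python
import openturns as ot
import openturns.experimental as otexp

# Illustrative model and data set (x_k, y_k), k = 1..n
model = ot.SymbolicFunction(["x"], ["x * sin(x)"])
x_train = ot.Sample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
y_train = model(x_train)

# Step 1: fit the Gaussian process Y = mu + W
fitter = otexp.GaussianProcessFitter(
    x_train, y_train,
    ot.SquaredExponential([1.0], [1.0]),  # covariance model C
    ot.ConstantBasisFactory(1).build())   # trend function basis
fitter.run()

# Step 2: condition Y on the data set
gpr = otexp.GaussianProcessRegression(fitter.getResult())
gpr.run()

# Step 3: extract and exploit the metamodel
metamodel = gpr.getResult().getMetaModel()
print(metamodel([[4.0]]))  # prediction at a new point
```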
Step 1: Gaussian process fitting with and without noise
The first step creates the Gaussian process \(\mathbf{Y}(\omega, \mathbf{x})\) such that the sample \((\mathbf{y}_1, \dots, \mathbf{y}_n)\) is considered as its restriction on \((\mathbf{x}_1, \dots, \mathbf{x}_n)\). It is defined by:

\[
\mathbf{Y}(\omega, \mathbf{x}) = \boldsymbol{\mu}(\mathbf{x}) + \mathbf{W}(\omega, \mathbf{x})
\]

where:

\[
\boldsymbol{\mu}(\mathbf{x}) = \begin{pmatrix} \mu_1(\mathbf{x}) \\ \vdots \\ \mu_p(\mathbf{x}) \end{pmatrix}
\]

with \(\mu_\ell(\mathbf{x}) = \sum_{j=1}^{n_\ell} \beta_j^\ell \, \varphi_j^\ell(\mathbf{x})\) and \(\varphi_j^\ell : \mathbb{R}^d \rightarrow \mathbb{R}\) the trend functions basis, for \(1 \leq j \leq n_\ell\) and \(1 \leq \ell \leq p\).
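In practice, such trend bases can be built with the OpenTURNS basis factories; a short sketch (the input dimension is illustrative):

```python
import openturns as ot

d = 2  # illustrative input dimension
constant = ot.ConstantBasisFactory(d).build()    # {1}
linear = ot.LinearBasisFactory(d).build()        # {1, x1, x2}
quadratic = ot.QuadraticBasisFactory(d).build()  # monomials up to degree 2
print(constant.getSize(), linear.getSize(), quadratic.getSize())  # 1 3 6
```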
Furthermore, \(\mathbf{W}\) is a Gaussian process of dimension \(p\) with zero mean and covariance function \(\mathbf{C} = \mathbf{C}(\boldsymbol{\theta}, \boldsymbol{\sigma}, \mathbf{R}, \boldsymbol{\lambda})\), where (see Covariance models for more details on the notations):
\(\boldsymbol{\theta}\) is the scale parameter vector,
\(\boldsymbol{\sigma}\) is the standard deviation parameter vector,
\(\mathbf{R}\) is the spatial correlation matrix between the components of \(\mathbf{W}\),
\(\boldsymbol{\lambda}\) gathers some additional parameters specific to each covariance model.
Then, we have to estimate the coefficients \(\beta_j^\ell\) and \(\boldsymbol{p}\), where \(\boldsymbol{p}\) is the vector of parameters of the covariance model (a subset of \((\boldsymbol{\theta}, \boldsymbol{\sigma}, \mathbf{R}, \boldsymbol{\lambda})\)) that has been declared as active: by default, the full vectors \(\boldsymbol{\theta}\) and \(\boldsymbol{\sigma}\). See openturns.CovarianceModel to get details on the activation of the estimation of the other parameters.
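For example, the active parameters of a covariance model can be inspected and changed as follows (a sketch; the ordering of the full parameter list may vary across versions, so it is safer to check getFullParameterDescription first):

```python
import openturns as ot

cov = ot.SquaredExponential([1.5, 0.8], [2.0])  # scale theta, amplitude sigma
print(cov.getFullParameterDescription())  # names of all parameters
print(cov.getActiveParameter())           # default: scale and amplitude
cov.setActiveParameter([0, 1])            # e.g. estimate the scale only
```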
The estimation is done by maximizing the reduced log-likelihood of the model (see its expression below).
Estimation of the parameters: We want to estimate all the parameters \(\beta_j^\ell\) for \(1 \leq j \leq n_\ell\) and \(1 \leq \ell \leq p\), and the vector of parameters \(\boldsymbol{p}\).
We note:

\[
\mathbf{y} = \begin{pmatrix} \mathbf{y}_1 \\ \vdots \\ \mathbf{y}_n \end{pmatrix}, \qquad
\mathbf{m}_{\boldsymbol{\beta}} = \begin{pmatrix} \boldsymbol{\mu}(\mathbf{x}_1) \\ \vdots \\ \boldsymbol{\mu}(\mathbf{x}_n) \end{pmatrix}
\]

and:

\[
\mathbf{C}_{\boldsymbol{p}} = \begin{pmatrix}
\mathbf{C}_{11} & \cdots & \mathbf{C}_{1n} \\
\vdots & & \vdots \\
\mathbf{C}_{n1} & \cdots & \mathbf{C}_{nn}
\end{pmatrix} \tag{1}
\]

where \(\mathbf{C}_{ij} = \mathbf{C}(\mathbf{x}_i, \mathbf{x}_j)\) for \(1 \leq i, j \leq n\).
The likelihood of the Gaussian process on the data set is defined by:

\[
\mathcal{L}\left(\boldsymbol{\beta}, \boldsymbol{p}; (\mathbf{x}_k, \mathbf{y}_k)_{1 \leq k \leq n}\right)
= \frac{1}{(2\pi)^{np/2} \left(\det \mathbf{C}_{\boldsymbol{p}}\right)^{1/2}}
\exp\left[ -\frac{1}{2} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right)^\top \mathbf{C}_{\boldsymbol{p}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \right] \tag{2}
\]
Let \(\mathbf{L}_{\boldsymbol{p}}\) be the Cholesky factor of \(\mathbf{C}_{\boldsymbol{p}}\): it means that \(\mathbf{L}_{\boldsymbol{p}}\) is the lower triangular matrix with positive diagonal such that \(\mathbf{L}_{\boldsymbol{p}} \mathbf{L}_{\boldsymbol{p}}^\top = \mathbf{C}_{\boldsymbol{p}}\).
Therefore:

\[
\log \mathcal{L}\left(\boldsymbol{\beta}, \boldsymbol{p}; (\mathbf{x}_k, \mathbf{y}_k)_{1 \leq k \leq n}\right)
= \mathrm{cste} - \log \det \mathbf{L}_{\boldsymbol{p}} - \frac{1}{2} \left\| \mathbf{L}_{\boldsymbol{p}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \right\|_2^2 \tag{3}
\]

where \(\mathrm{cste}\) is a constant independent of \(\boldsymbol{\beta}\) and \(\boldsymbol{p}\).
The maximization of (3) leads to the following optimality condition for \(\boldsymbol{\beta}\):

\[
\boldsymbol{\beta}^*(\boldsymbol{p}) = \operatorname*{argmin}_{\boldsymbol{\beta}} \left\| \mathbf{L}_{\boldsymbol{p}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \right\|_2^2
\]

This expression of \(\boldsymbol{\beta}^*\) as a function of \(\boldsymbol{p}\) is taken as a general relation between \(\boldsymbol{\beta}\) and \(\boldsymbol{p}\) and is substituted into (3), leading to a reduced log-likelihood function depending solely on \(\boldsymbol{p}\).
In the particular case where \(p = 1\) and \(\sigma\) is a part of \(\boldsymbol{p}\), then a further reduction is possible. In this case, if \(\boldsymbol{q}\) is the vector \(\boldsymbol{p}\) in which \(\sigma\) has been substituted by 1, then:

\[
\left\| \mathbf{L}_{\boldsymbol{p}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \right\|_2^2
= \frac{1}{\sigma^2} \left\| \mathbf{L}_{\boldsymbol{q}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \right\|_2^2
\]

showing that \(\boldsymbol{\beta}^*\) is a function of \(\boldsymbol{q}\) only. The optimality condition for \(\sigma\) reads:

\[
\sigma^{*2}(\boldsymbol{q}) = \frac{1}{n} \left\| \mathbf{L}_{\boldsymbol{q}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}^*(\boldsymbol{q})} \right) \right\|_2^2
\]

which leads to a further reduction of the log-likelihood function where both \(\boldsymbol{\beta}\) and \(\sigma\) are replaced by their expressions in terms of \(\boldsymbol{q}\).
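These reductions can be made concrete with a short sketch. The following is a minimal numpy/scipy illustration of the reduced log-likelihood for \(p = 1\), a constant trend, and a squared-exponential correlation; it is not the OpenTURNS implementation, and all names are illustrative:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def corr(X, theta):
    """Correlation matrix (sigma substituted by 1) for scale theta."""
    diff = (X[:, None, :] - X[None, :, :]) / theta
    return np.exp(-0.5 * np.sum(diff ** 2, axis=-1))

def reduced_log_likelihood(theta, X, y):
    """Equation (3) with beta and sigma profiled out; constants dropped."""
    n = len(y)
    L = cholesky(corr(X, theta) + 1e-12 * np.eye(n), lower=True)  # L_q
    F = np.ones((n, 1))                        # constant trend basis
    Li_y = solve_triangular(L, y, lower=True)  # L_q^{-1} y
    Li_F = solve_triangular(L, F, lower=True)  # L_q^{-1} m_beta basis
    beta, *_ = np.linalg.lstsq(Li_F, Li_y, rcond=None)  # beta*(q)
    r = Li_y - Li_F @ beta
    sigma2 = (r @ r) / n                       # sigma*^2(q), closed form
    return -0.5 * n * np.log(sigma2) - np.log(np.diag(L)).sum()

X = np.array([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
y = (X * np.sin(X)).ravel()
print(reduced_log_likelihood(np.array([1.0]), X, y))
```

Maximizing this function over \(\boldsymbol{\theta}\) (for instance by minimizing its negative with scipy.optimize.minimize) yields \(\boldsymbol{\theta}^*\), from which \(\boldsymbol{\beta}^*\) and \(\sigma^{*2}\) follow in closed form.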
This step is performed by the class GaussianProcessFitter.
Add a noise to the output values: We show how to take into account the fact that the output values of the function are not known precisely. This noise is modeled by a normal distribution with zero mean and a covariance matrix \(\mathbf{T}_k\) that can be specific to the output \(\mathbf{y}_k\). It means that each output \(\mathbf{y}_k\) is considered as the realization of the random vector \(\mathbf{Y}_k\) defined by:

\[
\mathbf{Y}_k = \mathcal{M}(\mathbf{x}_k) + \boldsymbol{\varepsilon}_k, \qquad \boldsymbol{\varepsilon}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{T}_k)
\]

where \(\mathcal{M}(\mathbf{x}_k)\) is the true (and unknown) value of the model at \(\mathbf{x}_k\).
If the covariance matrices \(\mathbf{T}_k\) differ from one observation to another, the noise is heteroscedastic; if they are all equal, it is homoscedastic.
The noise is introduced during the parameter estimation step: in the likelihood expression defined in (2), the covariance matrix \(\mathbf{C}_{\boldsymbol{p}}\) of the process defined in (1) is transformed into the covariance matrix \(\tilde{\mathbf{C}}_{\boldsymbol{p}}\) defined by:

\[
\tilde{\mathbf{C}}_{\boldsymbol{p}} = \mathbf{C}_{\boldsymbol{p}} + \operatorname{diag}\left( \mathbf{T}_1, \dots, \mathbf{T}_n \right)
\]

Thus the covariance matrix of the noise has been added on the block diagonal of the initial covariance matrix.
Note that the noise is taken into account to estimate the parameters only. The final covariance model of the
process is still defined by the initial covariance function once the parameters have been
estimated.
Use the method setNoise of the class GaussianProcessFitter.
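For instance (a sketch; the noise values are illustrative, and the assumption here, mirroring KrigingAlgorithm.setNoise, is that setNoise takes one noise variance per observation, equal values giving a homoscedastic noise):

```python
import openturns as ot
import openturns.experimental as otexp

model = ot.SymbolicFunction(["x"], ["x * sin(x)"])
x_train = ot.Sample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
y_train = model(x_train)  # in practice, noisy observations

fitter = otexp.GaussianProcessFitter(
    x_train, y_train,
    ot.SquaredExponential([1.0], [1.0]),
    ot.ConstantBasisFactory(1).build())
fitter.setNoise([0.05] * x_train.getSize())  # tau_k^2 for each observation
fitter.run()  # noise enters the likelihood (2) during estimation only
```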
Note that the noise is different from the nugget effect: refer to Covariance models for more details on covariance models and the introduction of a nugget factor, and in particular see equation (5) there. In short, the nugget effect modifies the diagonal of the covariance matrix, and this modification remains even once the parameters of the covariance model have been estimated. The resulting process is less smooth than the initial one without the nugget effect.
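By contrast with setNoise, a nugget factor is set on the covariance model itself and therefore persists in the final process (a sketch; the value is illustrative):

```python
import openturns as ot

cov = ot.SquaredExponential([1.0], [1.0])
cov.setNuggetFactor(1e-6)  # permanently modifies the diagonal of C
print(cov.getNuggetFactor())
```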
Step 2: Gaussian Process Regression
Once the Gaussian process \(\mathbf{Y}\) has been estimated, the Gaussian process regression aims at conditioning it to the data set: we make the Gaussian process approximation become interpolating over the data set.
The final Gaussian process regression, denoted by \(\mathbf{Z}\), is defined by:

\[
\mathbf{Z}(\omega, \mathbf{x}) = \mathbf{Y}(\omega, \mathbf{x}) \mid \mathcal{C} \tag{4}
\]

where \(\mathcal{C}\) is the condition \(\mathbf{Y}(\omega, \mathbf{x}_k) = \mathbf{y}_k\) for \(1 \leq k \leq n\).
Then, \(\mathbf{Z}\) is a Gaussian process, whose mean is defined by:

\[
\mathbb{E}\left[\mathbf{Z}(\omega, \mathbf{x})\right] = \boldsymbol{\mu}(\mathbf{x}) + \mathbf{c}(\mathbf{x}) \, \boldsymbol{\gamma}
\]

where:

\[
\mathbf{c}(\mathbf{x}) = \left( \mathbf{C}(\mathbf{x}, \mathbf{x}_1) \mid \cdots \mid \mathbf{C}(\mathbf{x}, \mathbf{x}_n) \right) \in \mathbb{R}^{p \times np}
\]

is the covariance between \(\mathbf{Y}(\omega, \mathbf{x})\) and the observation points, and \(\boldsymbol{\gamma}\) is defined by:

\[
\boldsymbol{\gamma} = \mathbf{C}_{\boldsymbol{p}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \tag{5}
\]

where \(\boldsymbol{\beta}\) and \(\boldsymbol{p}\) are the parameters estimated in step 1.
Finally, we get the following mean of the Gaussian process regression at the point \(\mathbf{x}\):

\[
\mathbb{E}\left[\mathbf{Z}(\omega, \mathbf{x})\right] = \boldsymbol{\mu}(\mathbf{x}) + \sum_{k=1}^{n} \mathbf{C}(\mathbf{x}, \mathbf{x}_k) \, \boldsymbol{\gamma}_k \tag{6}
\]

where \(\boldsymbol{\gamma}_k \in \mathbb{R}^p\) is the \(k\)-th block of \(\boldsymbol{\gamma}\).
The covariance matrix of \(\mathbf{Z}\) at the point \(\mathbf{x}\) is defined by:

\[
\operatorname{Cov}\left(\mathbf{Z}(\omega, \mathbf{x})\right) = \mathbf{C}(\mathbf{x}, \mathbf{x}) - \mathbf{c}(\mathbf{x}) \, \mathbf{C}_{\boldsymbol{p}}^{-1} \, \mathbf{c}(\mathbf{x})^\top \tag{7}
\]

with \(\mathbf{c}(\mathbf{x})\) defined as above.
When computed on any sample \((\boldsymbol{\xi}_1, \dots, \boldsymbol{\xi}_N)\), the covariance matrix is defined by:

\[
\operatorname{Cov}\left(\mathbf{Z}(\omega, \boldsymbol{\xi}_1), \dots, \mathbf{Z}(\omega, \boldsymbol{\xi}_N)\right)
= \begin{pmatrix}
\boldsymbol{\Sigma}_{11} & \cdots & \boldsymbol{\Sigma}_{1N} \\
\vdots & & \vdots \\
\boldsymbol{\Sigma}_{N1} & \cdots & \boldsymbol{\Sigma}_{NN}
\end{pmatrix} \tag{8}
\]

where \(\boldsymbol{\Sigma}_{ij} = \mathbf{C}(\boldsymbol{\xi}_i, \boldsymbol{\xi}_j) - \mathbf{c}(\boldsymbol{\xi}_i) \, \mathbf{C}_{\boldsymbol{p}}^{-1} \, \mathbf{c}(\boldsymbol{\xi}_j)^\top\).
This step is performed by the class GaussianProcessRegression.
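A sketch of this conditioning step, checking the interpolation property of the conditional mean (6) in the zero-noise case (assuming OpenTURNS 1.24 or later, openturns.experimental):

```python
import openturns as ot
import openturns.experimental as otexp

model = ot.SymbolicFunction(["x"], ["x * sin(x)"])
x_train = ot.Sample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
y_train = model(x_train)

fitter = otexp.GaussianProcessFitter(
    x_train, y_train,
    ot.SquaredExponential([1.0], [1.0]),
    ot.ConstantBasisFactory(1).build())
fitter.run()

gpr = otexp.GaussianProcessRegression(fitter.getResult())
gpr.run()
metamodel = gpr.getResult().getMetaModel()

# Without noise, the metamodel interpolates the data set
print(metamodel(x_train) - y_train)  # ~0 up to numerical precision
```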
Step 3: Gaussian Process Regression metamodel and its exploitation
The Gaussian Process Regression metamodel \(\tilde{\mathcal{M}}\) is defined by:

\[
\tilde{\mathcal{M}}(\mathbf{x}) = \mathbb{E}\left[\mathbf{Z}(\omega, \mathbf{x})\right] \tag{9}
\]
We can use the conditional covariance of \(\mathbf{Z}\) in order to quantify the predictive uncertainty of the metamodel. The class GaussianProcessConditionalCovariance provides all the services to get the predictive error at any point.
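For example (a sketch continuing the pipeline above, assuming OpenTURNS 1.24 or later; the method names below are assumed to mirror the experimental API):

```python
import openturns as ot
import openturns.experimental as otexp

model = ot.SymbolicFunction(["x"], ["x * sin(x)"])
x_train = ot.Sample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
fitter = otexp.GaussianProcessFitter(
    x_train, model(x_train),
    ot.SquaredExponential([1.0], [1.0]),
    ot.ConstantBasisFactory(1).build())
fitter.run()
gpr = otexp.GaussianProcessRegression(fitter.getResult())
gpr.run()

gpcc = otexp.GaussianProcessConditionalCovariance(gpr.getResult())
x_new = ot.Sample([[2.0], [4.0]])
print(gpcc.getConditionalMean(x_new))              # mean, equation (6)
print(gpcc.getConditionalCovariance(x_new))        # covariance, equation (8)
print(gpcc.getConditionalMarginalVariance(x_new))  # pointwise variances
```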