Gaussian process regression¶
Gaussian process regression (also known as Kriging) is a Bayesian technique that aims at approximating a function, most often in order to build a surrogate of it because it is expensive to evaluate.
Let $\mathcal{M} : \mathbb{R}^d \rightarrow \mathbb{R}^p$ be a model and $(\mathbf{x}_k, \mathbf{y}_k)_{1 \leq k \leq N}$ a data set where $\mathbf{y}_k = \mathcal{M}(\mathbf{x}_k)$ for all $1 \leq k \leq N$.
The objective is to build a metamodel $\tilde{\mathcal{M}}$, using a Gaussian process that interpolates the data set. To build this metamodel, we follow these steps:
Step 1: Gaussian process fitting: we build the Gaussian process $\mathbf{Y}$ defined by:

$$\mathbf{Y}(\omega, \mathbf{x}) = \boldsymbol{\mu}(\mathbf{x}) + \mathbf{W}(\omega, \mathbf{x})$$

where $\boldsymbol{\mu}$ is the trend function and $\mathbf{W}$ is a Gaussian process of dimension $p$ with zero mean and covariance function $C$;

Step 2: Gaussian Process Regression: we condition the Gaussian process $\mathbf{Y}$ to the data set by considering the Gaussian Process Regression denoted by $\mathbf{Z}(\omega, \mathbf{x}) = \mathbf{Y}(\omega, \mathbf{x}) \mid \mathcal{C}$, where $\mathcal{C}$ is the condition $\mathbf{Y}(\omega, \mathbf{x}_k) = \mathbf{y}_k$ for $1 \leq k \leq N$;

Step 3: Gaussian Process Regression metamodel and its exploitation: we define the metamodel as $\tilde{\mathcal{M}}(\mathbf{x}) = \mathbb{E}\left[\mathbf{Z}(\omega, \mathbf{x})\right]$. Note that this metamodel interpolates the data set. We can use the conditional covariance in order to quantify the error of the metamodel, that is, the variation of the Gaussian vector at a given point.
Note that the implementation of Gaussian process regression deals with vector-valued functions ($p \geq 1$) without simply looping over each output.
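The following minimal Python sketch chains the three steps on a toy one-dimensional model. The model and data are illustrative only, and depending on the OpenTURNS version the Gaussian process classes may live in the openturns.experimental module rather than in the top-level namespace.

```python
import openturns as ot
import openturns.experimental as otexp  # the GP classes may live here, depending on the version

# Toy data set (x_k, y_k = M(x_k)) built from an illustrative 1-d model
model = ot.SymbolicFunction(["x"], ["x * sin(x)"])
sampleX = ot.Sample([[1.0], [3.0], [5.0], [6.0], [7.0], [8.0]])
sampleY = model(sampleX)

# Step 1: Gaussian process fitting (trend basis + covariance model)
basis = ot.ConstantBasisFactory(1).build()
covarianceModel = ot.SquaredExponential([1.0], [1.0])
fitter = otexp.GaussianProcessFitter(sampleX, sampleY, covarianceModel, basis)
fitter.run()

# Step 2: condition the fitted process on the data set
gpr = otexp.GaussianProcessRegression(fitter.getResult())
gpr.run()

# Step 3: the metamodel x -> E[Z(omega, x)], which interpolates the data set
metamodel = gpr.getResult().getMetaModel()
print(metamodel([3.0]))  # ~ 3 * sin(3) since x = 3 is a design point
```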
Step 1: Gaussian process fitting¶
The first step creates the Gaussian process $\mathbf{Y}$ such that the sample $(\mathbf{y}_k)_{1 \leq k \leq N}$ is considered as its restriction on $(\mathbf{x}_k)_{1 \leq k \leq N}$. It is defined by:

$$\mathbf{Y}(\omega, \mathbf{x}) = \boldsymbol{\mu}(\mathbf{x}) + \mathbf{W}(\omega, \mathbf{x})$$

where:

$$\boldsymbol{\mu}(\mathbf{x}) = \left( \sum_{j=1}^{b} \beta_j^{(1)} \varphi_j^{(1)}(\mathbf{x}), \dots, \sum_{j=1}^{b} \beta_j^{(p)} \varphi_j^{(p)}(\mathbf{x}) \right)$$

with $\beta_j^{(l)} \in \mathbb{R}$ and $\varphi_j^{(l)} : \mathbb{R}^d \rightarrow \mathbb{R}$ the trend functions basis, for $1 \leq j \leq b$ and $1 \leq l \leq p$.
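For instance, the usual constant, linear and quadratic trend bases can be obtained from the corresponding basis factories; this illustrative sketch builds them for an input dimension $d = 2$:

```python
import openturns as ot

d = 2  # input dimension
constant_basis = ot.ConstantBasisFactory(d).build()    # phi_1(x) = 1, so b = 1
linear_basis = ot.LinearBasisFactory(d).build()        # 1, x_1, x_2, so b = 3
quadratic_basis = ot.QuadraticBasisFactory(d).build()  # adds the degree-2 monomials, so b = 6
print(linear_basis.getSize())  # 3
```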
Furthermore, $\mathbf{W}$ is a Gaussian process of dimension $p$ with zero mean and covariance function $C = C(\boldsymbol{\theta}, \boldsymbol{\sigma}, \mathbf{R}, \boldsymbol{\lambda})$, where (see Covariance models for more details on the notations):

- $\boldsymbol{\theta} \in \mathbb{R}^d$ is the scale vector,
- $\boldsymbol{\sigma} \in \mathbb{R}^p$ is the standard deviation vector,
- $\mathbf{R}$ is the spatial correlation matrix between the components of $\mathbf{W}$,
- $\boldsymbol{\lambda}$ gathers some additional parameters specific to each covariance model.
Then, we have to estimate the coefficients $\beta_j^{(l)}$ and the vector $\mathbf{p}$, where $\mathbf{p}$ is the vector of parameters of the covariance model (a subset of $(\boldsymbol{\theta}, \boldsymbol{\sigma}, \mathbf{R}, \boldsymbol{\lambda})$) that has been declared as active: by default, the full vectors $\boldsymbol{\theta}$ and $\boldsymbol{\sigma}$. See openturns.CovarianceModel to get details on the activation of the estimation of the other parameters.
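As an illustrative sketch, the active parameters of a covariance model can be inspected and modified as follows (here with a Matérn model, whose regularity parameter $\nu$ is inactive by default):

```python
import openturns as ot

# Matern covariance model in dimension d = 2: scale theta, amplitude sigma, regularity nu
cov = ot.MaternModel([1.0, 1.0], [1.0], 2.5)
print(cov.getFullParameterDescription())  # e.g. [scale_0, scale_1, amplitude_0, nu]
print(cov.getActiveParameter())           # [0, 1, 2]: theta and sigma are active by default

# Declare nu (index 3 in the full parameter list) as active too,
# so that it is estimated along with theta and sigma
cov.setActiveParameter([0, 1, 2, 3])
```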
The estimation is done by maximizing the reduced log-likelihood of the model (see its expression below).
Estimation of the parameters: We want to estimate all the parameters: $\beta_j^{(l)}$ for $1 \leq j \leq b$ and $1 \leq l \leq p$, and $\mathbf{p}$.
We note:

$$\mathbf{y} = \left( \mathbf{y}_1, \dots, \mathbf{y}_N \right)^{\top}, \qquad \mathbf{m}_{\boldsymbol{\beta}} = \left( \boldsymbol{\mu}(\mathbf{x}_1), \dots, \boldsymbol{\mu}(\mathbf{x}_N) \right)^{\top}$$

and:

$$\mathbf{C}_{\mathbf{p}} = \left[ C(\mathbf{p}; \mathbf{x}_i, \mathbf{x}_j) \right]_{1 \leq i, j \leq N}$$

where $C(\mathbf{p}; \cdot, \cdot)$ is the covariance function with parameters $\mathbf{p}$.
The likelihood of the Gaussian process on the data set is defined by:

$$\mathcal{L}\left(\boldsymbol{\beta}, \mathbf{p}; (\mathbf{x}_k, \mathbf{y}_k)_{1 \leq k \leq N}\right) = \frac{1}{(2\pi)^{pN/2} \left( \det \mathbf{C}_{\mathbf{p}} \right)^{1/2}} \exp\left( -\frac{1}{2} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right)^{\top} \mathbf{C}_{\mathbf{p}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \right)$$

Let $\mathbf{L}_{\mathbf{p}}$ be the Cholesky factor of $\mathbf{C}_{\mathbf{p}}$, i.e. the lower triangular matrix with positive diagonal such that $\mathbf{L}_{\mathbf{p}} \mathbf{L}_{\mathbf{p}}^{\top} = \mathbf{C}_{\mathbf{p}}$. Therefore:

$$\log \mathcal{L}\left(\boldsymbol{\beta}, \mathbf{p}; (\mathbf{x}_k, \mathbf{y}_k)_{1 \leq k \leq N}\right) = \mathrm{cste} - \log \det \mathbf{L}_{\mathbf{p}} - \frac{1}{2} \left\| \mathbf{L}_{\mathbf{p}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \right\|_2^2 \tag{1}$$
The maximization of (1) leads to the following optimality condition for $\boldsymbol{\beta}$:

$$\boldsymbol{\beta}^{*}(\mathbf{p}) = \operatorname*{argmin}_{\boldsymbol{\beta}} \left\| \mathbf{L}_{\mathbf{p}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \right\|_2^2$$

This expression of $\boldsymbol{\beta}^{*}$ as a function of $\mathbf{p}$ is taken as a general relation between $\boldsymbol{\beta}$ and $\mathbf{p}$ and is substituted into (1), leading to a reduced log-likelihood function depending solely on $\mathbf{p}$.
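To make this profiling concrete, the following NumPy/SciPy sketch evaluates the reduced negative log-likelihood for a scalar output; the callable cov and the trend design matrix F are hypothetical inputs introduced for the example, not part of any library API:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def reduced_neg_log_likelihood(p, X, y, F, cov):
    """Reduced negative log-likelihood of (1), with beta profiled out (scalar output).

    p   : covariance parameters, X : (N, d) design points, y : (N,) outputs,
    F   : (N, b) trend design matrix with F[k, j] = phi_j(x_k),
    cov : hypothetical callable returning the (N, N) covariance matrix C_p.
    """
    L = cholesky(cov(p, X), lower=True)      # C_p = L_p L_p^T
    Ly = solve_triangular(L, y, lower=True)  # L_p^{-1} y
    LF = solve_triangular(L, F, lower=True)  # L_p^{-1} F
    # beta*(p) = argmin_beta || L_p^{-1} (y - F beta) ||^2: an ordinary least-squares problem
    beta, *_ = np.linalg.lstsq(LF, Ly, rcond=None)
    residual = Ly - LF @ beta
    # Up to an additive constant: -log L = log det(L_p) + 0.5 || L_p^{-1} (y - m_beta) ||^2
    return np.sum(np.log(np.diag(L))) + 0.5 * residual @ residual
```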
In the particular case where $p = 1$ and $\sigma$ is a part of $\mathbf{p}$, a further reduction is possible. In this case, if $\mathbf{q}$ is the vector $\mathbf{p}$ in which $\sigma$ has been substituted by 1, then:

$$\left\| \mathbf{L}_{\mathbf{p}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \right\|_2^2 = \frac{1}{\sigma^2} \left\| \mathbf{L}_{\mathbf{q}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \right\|_2^2$$

showing that $\boldsymbol{\beta}^{*}$ is a function of $\mathbf{q}$ only, and the optimality condition for $\sigma$ reads:

$$\sigma^{*2}(\mathbf{q}) = \frac{1}{N} \left\| \mathbf{L}_{\mathbf{q}}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}^{*}(\mathbf{q})} \right) \right\|_2^2$$

which leads to a further reduction of the log-likelihood function where both $\boldsymbol{\beta}$ and $\sigma$ are replaced by their expressions in terms of $\mathbf{q}$.
This step is performed by the class GaussianProcessFitter.
Step 2: Gaussian Process Regression¶
Once the Gaussian process $\mathbf{Y}$ has been estimated, the Gaussian process regression aims at conditioning it to the data set: we make the Gaussian process approximation become interpolating over the data set.
The final Gaussian process regression, denoted by $\mathbf{Z}$, is defined by:

$$\mathbf{Z}(\omega, \mathbf{x}) = \mathbf{Y}(\omega, \mathbf{x}) \mid \mathcal{C} \tag{2}$$

where $\mathcal{C}$ is the condition $\mathbf{Y}(\omega, \mathbf{x}_k) = \mathbf{y}_k$ for $1 \leq k \leq N$.
Then, $\mathbf{Z}$ is a Gaussian process whose mean is defined by:

$$\mathbb{E}\left[ \mathbf{Z}(\omega, \mathbf{x}) \right] = \boldsymbol{\mu}(\mathbf{x}) + \operatorname{Cov}\left( \mathbf{Y}(\omega, \mathbf{x}), \left( \mathbf{Y}(\omega, \mathbf{x}_1), \dots, \mathbf{Y}(\omega, \mathbf{x}_N) \right) \right) \boldsymbol{\gamma}$$

where:

$$\operatorname{Cov}\left( \mathbf{Y}(\omega, \mathbf{x}), \left( \mathbf{Y}(\omega, \mathbf{x}_1), \dots, \mathbf{Y}(\omega, \mathbf{x}_N) \right) \right) = \left( \mathbf{C}(\mathbf{x}, \mathbf{x}_1) \mid \dots \mid \mathbf{C}(\mathbf{x}, \mathbf{x}_N) \right)$$

and $\boldsymbol{\gamma}$ is defined by:

$$\boldsymbol{\gamma} = \mathbf{C}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \tag{3}$$

where:

$$\mathbf{C} = \left[ \mathbf{C}(\mathbf{x}_i, \mathbf{x}_j) \right]_{1 \leq i, j \leq N}$$

Finally, we get the following mean of the Gaussian process regression at the point $\mathbf{x}$:

$$\mathbb{E}\left[ \mathbf{Z}(\omega, \mathbf{x}) \right] = \boldsymbol{\mu}(\mathbf{x}) + \left( \mathbf{C}(\mathbf{x}, \mathbf{x}_1) \mid \dots \mid \mathbf{C}(\mathbf{x}, \mathbf{x}_N) \right) \mathbf{C}^{-1} \left( \mathbf{y} - \mathbf{m}_{\boldsymbol{\beta}} \right) \tag{4}$$
The covariance matrix of $\mathbf{Z}$ at the point $\mathbf{x}$ is defined by:

$$\operatorname{Cov}\left( \mathbf{Z}(\omega, \mathbf{x}) \right) = \mathbf{C}(\mathbf{x}, \mathbf{x}) - \mathbf{c}(\mathbf{x})^{\top} \mathbf{C}^{-1} \mathbf{c}(\mathbf{x}) \tag{5}$$

with $\mathbf{c}(\mathbf{x})^{\top} = \left( \mathbf{C}(\mathbf{x}, \mathbf{x}_1) \mid \dots \mid \mathbf{C}(\mathbf{x}, \mathbf{x}_N) \right)$.
When computed on the sample $(\boldsymbol{\xi}_1, \dots, \boldsymbol{\xi}_M)$, the covariance matrix is defined by:

$$\operatorname{Cov}\left( \mathbf{Z}(\omega, \boldsymbol{\xi}_1), \dots, \mathbf{Z}(\omega, \boldsymbol{\xi}_M) \right) = \begin{pmatrix} \boldsymbol{\Sigma}_{11} & \dots & \boldsymbol{\Sigma}_{1M} \\ \vdots & \ddots & \vdots \\ \boldsymbol{\Sigma}_{M1} & \dots & \boldsymbol{\Sigma}_{MM} \end{pmatrix} \tag{6}$$

where $\boldsymbol{\Sigma}_{ij} = \operatorname{Cov}\left( \mathbf{Z}(\omega, \boldsymbol{\xi}_i), \mathbf{Z}(\omega, \boldsymbol{\xi}_j) \right) = \mathbf{C}(\boldsymbol{\xi}_i, \boldsymbol{\xi}_j) - \mathbf{c}(\boldsymbol{\xi}_i)^{\top} \mathbf{C}^{-1} \mathbf{c}(\boldsymbol{\xi}_j)$.
This step is performed by the class GaussianProcessRegression.
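As an illustrative sketch (continuing the example above), the regression result exposes the quantities entering (3) and (4); the method names below mirror the former KrigingResult API and may differ slightly across versions:

```python
# Continuing the example above, once gpr has been run
result = gpr.getResult()

gamma = result.getCovarianceCoefficients()  # gamma = C^{-1} (y - m_beta), equation (3)
beta = result.getTrendCoefficients()        # estimated trend coefficients beta
metamodel = result.getMetaModel()           # x -> E[Z(omega, x)], equation (4)
print(metamodel([2.0]))
```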
Step 3: Gaussian Process Regression metamodel and its exploitation¶
The Gaussian Process Regression metamodel $\tilde{\mathcal{M}}$ is defined by:

$$\tilde{\mathcal{M}}(\mathbf{x}) = \mathbb{E}\left[ \mathbf{Z}(\omega, \mathbf{x}) \right] \tag{7}$$
We can use the conditional covariance of $\mathbf{Z}$ in order to quantify the error of the metamodel. The class GaussianProcessConditionalCovariance provides all the services to get the error at any point.
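Continuing the example above, the following sketch quantifies the metamodel error through equations (5) and (6); the exact method names may vary across OpenTURNS versions:

```python
import openturns.experimental as otexp

# Continuing the example above: error quantification via the conditional covariance
gccc = otexp.GaussianProcessConditionalCovariance(gpr.getResult())

x = [2.0]
var = gccc.getConditionalMarginalVariance(x)  # diagonal of (5) at x; ~0 at design points
cov = gccc.getConditionalCovariance(sampleX)  # full conditional covariance matrix, equation (6)
print(var)
```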