Functional Chaos Expansion¶
Introduction¶
Given the joint probability density function (PDF) of the input random vector $X$, one seeks the joint PDF of the output random vector $Y = g(X)$. This may be achieved using Monte Carlo (MC) simulation (see Monte Carlo simulation). However, the MC method may require a large number of model evaluations, i.e. a high computational cost, in order to obtain accurate results.
A possible way to overcome this problem is to project the model onto a suitable functional space, such as the Hilbert space $L^2_\mu$ of functions which are square-integrable with respect to a suitable measure $\mu$ (defined below). More precisely, we may consider an expansion of the model onto an orthonormal basis of $L^2_\mu$. Examples of this type of expansion include wavelet and polynomial expansions.
The principles used to build a functional chaos expansion are described below.
Model¶
We consider the output random vector:

$$Y = g(X)$$

where $g : \mathbb{R}^{n_X} \rightarrow \mathbb{R}^{n_Y}$ is the model, $X$ is the input random vector whose distribution is $\mu_X$, $n_X$ is the input dimension and $n_Y$ is the output dimension. We assume that $g(X)$ has finite variance, i.e. $\mathbb{E}\left[\|g(X)\|^2\right] < +\infty$.
When $n_Y > 1$, the functional chaos algorithm is applied to each marginal of $Y$, using the same multivariate orthonormal basis for all the marginals. Thus, the method is detailed here for a scalar output $Y$ and $n_Y = 1$.
Iso-probabilistic transformation¶
Let $T : \mathbb{R}^{n_X} \rightarrow \mathbb{R}^{n_X}$ be an isoprobabilistic transformation (see Isoprobabilistic transformations) such that $Z = T(X) \sim \mu$, where $\mu$ is the distribution of the standardized random vector $Z$. The distribution $\mu$ is called the measure below. As we will see shortly, this measure defines the scalar product that underlies the orthogonality property of the functional basis. Let $h$ be the function defined by the equation:

$$h = g \circ T^{-1}.$$

Therefore $Y = g(X) = h(Z)$.
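As an illustration, these ingredients can be set up with the library's Python interface (openturns); the model, its input distribution and the standardized measure below are arbitrary choices made for this sketch, and the DistributionTransformation class is assumed to provide the mapping $T$ (the chaos algorithms presented later build it internally).

import openturns as ot

# Model g: R^2 -> R (hypothetical example function)
g = ot.SymbolicFunction(["x1", "x2"], ["sin(x1) + 0.5 * x2^2"])

# Input distribution mu_X (arbitrary choice for this sketch)
distribution_X = ot.ComposedDistribution([ot.Normal(2.0, 3.0), ot.Uniform(0.0, 5.0)])

# Standardized measure mu and a transformation T such that Z = T(X) follows mu
distribution_Z = ot.ComposedDistribution([ot.Normal(0.0, 1.0), ot.Uniform(-1.0, 1.0)])
T = ot.DistributionTransformation(distribution_X, distribution_Z)
print(T([2.0, 2.5]))  # image of one input point in the standardized space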
Hilbert space¶
We introduce the scalar product:

$$\langle h_1, h_2 \rangle_{L^2_\mu} = \mathbb{E}\left[ h_1(Z) h_2(Z) \right]$$

for any $h_1, h_2 \in L^2_\mu$. For a continuous random variable, the scalar product is:

$$\langle h_1, h_2 \rangle_{L^2_\mu} = \int h_1(z)\, h_2(z)\, \mu(z)\, dz.$$

For a discrete random variable, the scalar product is:

$$\langle h_1, h_2 \rangle_{L^2_\mu} = \sum_{z \in \operatorname{supp}(Z)} h_1(z)\, h_2(z)\, \mathbb{P}\left( Z = z \right).$$

The associated norm is defined by:

$$\|h\|^2_{L^2_\mu} = \mathbb{E}\left[ h(Z)^2 \right]$$

for any $h \in L^2_\mu$. Based on this scalar product, the functional space $L^2_\mu(\mathbb{R}^{n_X})$ is a Hilbert space.
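For instance, if $\mu$ is the standard Gaussian measure on $\mathbb{R}$ and $\psi_1(z) = z$ (the degree-1 orthonormal Hermite polynomial), the squared norm of $\psi_1$ is:

$$\|\psi_1\|^2_{L^2_\mu} = \int_{-\infty}^{+\infty} z^2 \, \frac{e^{-z^2/2}}{\sqrt{2\pi}} \, dz = 1,$$

so $\psi_1$ has unit norm with respect to $\mu$.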
Orthonormal basis¶
In this section, we introduce an orthonormal basis of the previous Hilbert space. Let $(\psi_k)_{k \geq 0}$ be a set of functions. This set is orthonormal with respect to $\mu$ if:

$$\langle \psi_j, \psi_k \rangle_{L^2_\mu} = \delta_{j,k} \tag{1}$$

for any $j, k \geq 0$, where $\delta_{j,k}$ is the Kronecker symbol:

$$\delta_{j,k} = \begin{cases} 1 & \text{if } j = k, \\ 0 & \text{otherwise.} \end{cases}$$
See StandardDistributionPolynomialFactory for more details on the available orthonormal bases.
In the library, we choose a basis $(\psi_k)_{k \geq 0}$ which is orthonormal with respect to $\mu$, so that equation (1) is satisfied. Furthermore, we require that the first element be:

$$\psi_0 = 1. \tag{2}$$

The orthogonality of the functions implies:

$$\langle \psi_0, \psi_k \rangle_{L^2_\mu} = 0$$

for any non-zero $k$. Equation (2) then implies:

$$\mathbb{E}\left[ \psi_k(Z) \right] = 0$$

for any $k \geq 1$.
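For example, the univariate orthonormal polynomial family associated with a given measure can be built with the StandardDistributionPolynomialFactory class mentioned above; the Uniform(-1, 1) measure below is an arbitrary choice (it yields the normalized Legendre family).

import openturns as ot

# Orthonormal polynomial family with respect to an arbitrary measure
measure = ot.Uniform(-1.0, 1.0)
factory = ot.StandardDistributionPolynomialFactory(measure)

# First basis elements: psi_0 = 1, then the degree 1, 2, 3 polynomials
for k in range(4):
    print(k, factory.build(k))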
Functional chaos expansion¶
The functional chaos expansion of $h$ is (see [lemaitre2010] page 39):

$$h = \sum_{k = 0}^{+\infty} a_k \psi_k$$

where $(a_k)_{k \geq 0}$ is a set of coefficients. We cannot compute an infinite set of coefficients: we can only compute a finite subset of them. The truncated functional chaos expansion is:

$$\tilde{h} = \sum_{k \in \mathcal{K}} a_k \psi_k$$

where $\mathcal{K} \subset \mathbb{N}$ is a finite set of indices. Thus $\tilde{h}$ is represented by a finite number of coefficients $(a_k)_{k \in \mathcal{K}}$ in a truncated basis $(\psi_k)_{k \in \mathcal{K}}$.
A specific choice of $\mathcal{K}$ can be made using an enumeration rule, as presented in Chaos basis enumeration strategies. If the number of coefficients, $P = \operatorname{card}(\mathcal{K})$, is too large, this can lead to overfitting. This may happen, e.g., if the total polynomial degree we choose is too large. In order to limit this effect, one method is to select the coefficients which best predict the output, as presented in Sparse least squares metamodel.
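As an illustration of an enumeration rule (the input dimension 2 is an arbitrary choice), the LinearEnumerateFunction class maps a global index $k$ to the multi-index of marginal degrees of the $k$-th multivariate basis term:

import openturns as ot

# Linear enumeration rule in dimension 2:
# global index k -> multi-index of univariate polynomial degrees
enum = ot.LinearEnumerateFunction(2)
for k in range(6):
    print(k, enum(k))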
Convergence of the expansion¶
In this section, we introduce the conditions which ensure that the expansion converges to the function $h$.

The orthonormal expansion of any function $h \in L^2_\mu$ converges in norm to $h$, i.e.:

$$\lim_{P \rightarrow +\infty} \left\| h - \sum_{k = 0}^{P} a_k \psi_k \right\|_{L^2_\mu} = 0$$

if and only if the basis $(\psi_k)_{k \geq 0}$ is a complete orthonormal system (see [sullivan2015], page 139, [dahlquist2008], theorem 4.5.16 page 456 and [rudin1987], section 4.24 page 85). In this case, the closure of the vector space spanned by the orthonormal functions is equal to the whole set of functions which are square-integrable with respect to $\mu$:

$$\overline{\operatorname{span}\left( (\psi_k)_{k \geq 0} \right)} = L^2_\mu(\mathbb{R}^{n_X}). \tag{3}$$
There are known sufficient conditions which ensure this property. For example, if the support of $\mu$ is bounded, then the basis is a complete orthonormal system.
There exist infinite sets of orthonormal polynomials which are not complete, e.g. the family derived from the log-normal distribution (see [ernst2012]). In this case, the expansion may not converge to the function. Nevertheless, even without this guarantee, the meta model built using the basis may still be a good approximation of $h$.
Define and estimate the coefficients¶
In this section, we review two equivalent methods to define the coefficients of the expansion:
using a least squares problem,
using integration.
Both methods can be introduced and then discretized using a sample.
The vector of coefficients $a = (a_k)_{k \in \mathcal{K}} \in \mathbb{R}^P$ is the solution of the linear least-squares problem:

$$a = \operatorname*{argmin}_{b \in \mathbb{R}^P} \; \mathbb{E}\left[ \left( h(Z) - \sum_{k \in \mathcal{K}} b_k \psi_k(Z) \right)^2 \right]. \tag{4}$$

Equation (4) means that the coefficients $(a_k)_{k \in \mathcal{K}}$ minimize the quadratic error between the model and the functional approximation.
For more details on the approximation based on least squares, see the LeastSquaresStrategy class.
Let us discretize the solution of the linear least squares problem. Let $n$ be the sample size. Let $(x^{(1)}, \ldots, x^{(n)})$ be an i.i.d. sample from the random vector $X$. Let $(z^{(1)}, \ldots, z^{(n)})$, with $z^{(j)} = T(x^{(j)})$, be the standardized input sample. Let $(y^{(1)}, \ldots, y^{(n)})$, with $y^{(j)} = g(x^{(j)})$, be the corresponding output sample, and let $y = (y^{(1)}, \ldots, y^{(n)})^T \in \mathbb{R}^n$ be the vector of output observations of the model. Let $\Psi \in \mathbb{R}^{n \times P}$ be the design matrix, defined by:

$$\Psi_{jk} = \psi_k\left( z^{(j)} \right)$$

for $j = 1, \ldots, n$ and $k \in \mathcal{K}$. Assume that the design matrix has full rank. The discretized linear least squares problem is:

$$\hat{a} = \operatorname*{argmin}_{b \in \mathbb{R}^P} \left\| y - \Psi b \right\|_2^2.$$

The solution is:

$$\hat{a} = \left( \Psi^T \Psi \right)^{-1} \Psi^T y.$$
The choice of basis has a major impact on the conditioning of the least-squares problem (4). Indeed, if the basis is orthonormal, then the design matrix of the least squares problem is well-conditioned.
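A toy NumPy sketch of this discretized least squares estimator (not the library's implementation; the model, the sample size and the orthonormal Hermite basis of the standard Gaussian measure are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)

# Toy scalar model h acting on the standardized variable Z ~ N(0, 1)
def h(z):
    return np.sin(z) + 0.5 * z**2

# Orthonormal (probabilists') Hermite basis, first four elements
basis = [
    lambda z: np.ones_like(z),                  # psi_0 = 1
    lambda z: z,                                # psi_1
    lambda z: (z**2 - 1.0) / np.sqrt(2.0),      # psi_2
    lambda z: (z**3 - 3.0 * z) / np.sqrt(6.0),  # psi_3
]

# i.i.d. standardized sample and output observations
n = 1000
z = rng.standard_normal(n)
y = h(z)

# Design matrix Psi_{jk} = psi_k(z_j) and least squares solution a_hat
Psi = np.column_stack([psi(z) for psi in basis])
a_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)
print(a_hat)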
The problem can be equivalently solved using the scalar product (see [dahlquist2008] theorem 4.5.13 page 454):

$$a_k = \langle h, \psi_k \rangle_{L^2_\mu} \tag{5}$$

for $k \in \mathcal{K}$. These equations express the coefficients of the orthogonal projection of the function $h$ onto the vector space spanned by the orthonormal functions in the basis. Since the definition of the scalar product is based on an expectation, computing the coefficients amounts to approximating an integral, e.g. using a quadrature rule.

Equation (5) means that each coefficient $a_k$ is the scalar product of the model with the $k$-th element of the orthonormal basis $(\psi_k)_{k \geq 0}$.
For more details on the polynomial chaos expansion (PCE) based on quadrature, see the IntegrationStrategy class.
Let us discretize the solution of the problem based on the scalar product. This can be done by considering a quadrature rule that approximates the integral. Let $n$ be the sample size. Let $(z^{(1)}, \ldots, z^{(n)})$ be the nodes of the quadrature rule and let $(w_1, \ldots, w_n)$ be the weights. The quadrature rule is:

$$\hat{a}_k = \sum_{j = 1}^{n} w_j \, h\left( z^{(j)} \right) \psi_k\left( z^{(j)} \right)$$

for $k \in \mathcal{K}$.
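A minimal sketch of this quadrature estimate, assuming the GaussProductExperiment class and its generateWithWeights method to produce the nodes and weights; the measure, the toy model and the single basis element $\psi_2$ are arbitrary choices made for brevity:

import numpy as np
import openturns as ot

# Standardized measure mu: standard Gaussian in dimension 1 (arbitrary choice)
mu = ot.Normal(0.0, 1.0)

# Toy model of the standardized variable and the orthonormal Hermite element psi_2
def h(z):
    return np.sin(z) + 0.5 * z**2

def psi_2(z):
    return (z**2 - 1.0) / np.sqrt(2.0)

# Gauss quadrature rule with 10 nodes for the measure mu
experiment = ot.GaussProductExperiment(mu, [10])
nodes, weights = experiment.generateWithWeights()

# a_2 ~ sum_j w_j h(z_j) psi_2(z_j)
a_2 = sum(w * h(z[0]) * psi_2(z[0]) for z, w in zip(nodes, weights))
print(a_2)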
Several algorithms are available to compute the coefficients $(a_k)_{k \in \mathcal{K}}$:
see IntegrationExpansion for an algorithm based on quadrature,
see LeastSquaresExpansion for an algorithm based on the least squares problem,
see FunctionalChaosAlgorithm for an algorithm that can manage both methods.
The two methods to define the coefficients of the expansion are equivalent: the solutions of equations (4) and (5) produce the same coefficients $(a_k)_{k \in \mathcal{K}}$. This is different when we estimate these coefficients based on a sample. In this discretized framework, the solutions of the two methods can be different. It can be shown, however, that the limits of the two estimators are equal when the sample size tends to infinity (see [lemaitre2010] eq. 3.48 page 66). Moreover, the two discretized methods are equivalent if the sample points satisfy an empirical orthogonality condition (see [lemaitre2010] eq. 3.49 page 66).
A step-by-step method¶
Three steps are required in order to create a functional chaos algorithm:
define the multivariate orthonormal basis;
truncate the multivariate orthonormal basis;
evaluate the coefficients.
These steps are presented in more detail below.
Step 1 - Define the multivariate orthonormal basis: the multivariate orthonormal basis is built as the tensor product of orthonormal univariate families.
The univariate bases may be:
polynomials: the associated distribution can be continuous or discrete. Note that it is possible to build the polynomial family orthonormal to any arbitrary univariate distribution under some conditions. For more details on this basis, see StandardDistributionPolynomialFactory;
Haar wavelets: they approximate functions with discontinuities. For more details on this basis, see HaarWaveletFactory;
Fourier series: for more details on this basis, see FourierSeriesFactory.
Furthermore, the numbering of the multivariate orthonormal basis $(\psi_k)_{k \geq 0}$ is given by an enumerate function, which defines a way to generate the collection of polynomial degrees used for the univariate polynomials: an enumerate function represents a bijection $\mathbb{N} \rightarrow \mathbb{N}^{n_X}$, mapping a global index $k$ to a multi-index of marginal degrees. See LinearEnumerateFunction or HyperbolicAnisotropicEnumerateFunction for more details on this topic.
Step 2 - Truncate the multivariate orthonormal basis: a strategy must be chosen for the selection of the different terms of the multivariate basis. The selected terms are gathered in the subset $\mathcal{K}$. For information about the possible strategies, see FixedStrategy and CleaningStrategy.
Step 3 - Evaluate the coefficients: a ProjectionStrategy must be chosen for the estimation of the coefficients $(a_k)_{k \in \mathcal{K}}$.
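The three steps can be sketched as follows; the model, the input distribution, the total degree and the sample size are arbitrary choices, and the OrthogonalProductPolynomialFactory class and the getStrataCumulatedCardinal method are assumed to build the tensorized basis and to count the terms up to a given total degree.

import openturns as ot

# Model g and input distribution mu_X (arbitrary illustration choices)
g = ot.SymbolicFunction(["x1", "x2"], ["sin(x1) + 0.5 * x2^2"])
distribution = ot.ComposedDistribution([ot.Uniform(-1.0, 1.0), ot.Normal(0.0, 1.0)])

# Step 1 - multivariate orthonormal basis: tensor product of univariate families
polyColl = [
    ot.StandardDistributionPolynomialFactory(distribution.getMarginal(i))
    for i in range(distribution.getDimension())
]
enumerateFunction = ot.LinearEnumerateFunction(distribution.getDimension())
basis = ot.OrthogonalProductPolynomialFactory(polyColl, enumerateFunction)

# Step 2 - truncation: keep all terms up to total degree 3
totalDegree = 3
basisSize = enumerateFunction.getStrataCumulatedCardinal(totalDegree)
adaptiveStrategy = ot.FixedStrategy(basis, basisSize)

# Step 3 - estimate the coefficients by least squares on a Monte Carlo sample
n = 200
inputSample = distribution.getSample(n)
outputSample = g(inputSample)
projectionStrategy = ot.LeastSquaresStrategy()
algo = ot.FunctionalChaosAlgorithm(
    inputSample, outputSample, distribution, adaptiveStrategy, projectionStrategy
)
algo.run()
result = algo.getResult()
print(result.getCoefficients())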
The meta model¶
The meta model of $g$ can be defined using the isoprobabilistic transformation $T$:

$$\tilde{g} = \tilde{h} \circ T. \tag{6}$$
More details are available on these topics:
See StandardDistributionPolynomialFactory for more details on the available constructions of the truncated multivariate orthogonal basis.
See FunctionalChaosAlgorithm for more details on the computation of the coefficients.
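Continuing the hypothetical step-by-step sketch above (the result and g objects come from it), the meta model of equation (6) is assumed to be returned by getMetaModel with the transformation already composed, so that it can be evaluated directly in the physical input space:

# Meta model g_tilde = h_tilde o T, evaluated in the physical input space
metamodel = result.getMetaModel()
x = [0.5, -0.2]
print("model     :", g(x))
print("meta model:", metamodel(x))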
There are many ways to use the functional chaos expansion. In the next two sections, we present two examples:
using the expansion as a random vector generator,
performing the sensitivity analysis of the expansion.
Using the expansion as a random vector generator¶
The approximation $\tilde{g}$ can be used to build an efficient random generator of $Y$ based on the random vector $Z$, using the equation:

$$\tilde{Y} = \tilde{h}(Z).$$

This equation can be used to simulate independent random observations from the PCE. This can be done by first simulating independent observations from the distribution of the standardized random vector $Z$, then pushing these observations forward through the expansion.
See the FunctionalChaosRandomVector class for more details on this topic.
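A short sketch, reusing the hypothetical result object from the step-by-step example; the getSample method of the resulting random vector is assumed to perform the simulation described above:

import openturns as ot

# Random generator Y_tilde = h_tilde(Z) built from the chaos result
outputRandomVector = ot.FunctionalChaosRandomVector(result)
sample = outputRandomVector.getSample(1000)
print(sample.computeMean())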
Sensitivity analysis¶
Assume that the input random vector $X$ has independent marginals and that the basis is computed using the tensor product of univariate orthonormal functions. In that case, the Sobol' indices can easily be deduced from the coefficients $(a_k)_{k \in \mathcal{K}}$.
Please see FunctionalChaosSobolIndices for more details on this topic.
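A short sketch, again reusing the hypothetical result object from the step-by-step example; the getSobolIndex and getSobolTotalIndex methods return the first-order and total indices of each input:

import openturns as ot

# Sobol' indices deduced from the chaos coefficients
chaosSI = ot.FunctionalChaosSobolIndices(result)
for i in range(2):
    print(
        "input", i,
        "first order:", chaosSI.getSobolIndex(i),
        "total:", chaosSI.getSobolTotalIndex(i),
    )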
Polynomial chaos expansion for independent variables¶
The library enables one to build the meta model known as the polynomial chaos expansion, based on an orthonormal basis of polynomials. See Polynomial chaos basis for more details on the polynomial chaos expansion.
Other chaos expansions for independent variables¶
While the polynomial chaos expansion is a classical method, the functions in the basis do not necessarily have to be polynomials: provided the functions are orthogonal with respect to the measure $\mu$, most of the theory still holds.
The library enables one to use the Haar wavelet functions or the Fourier series as an orthonormal basis with respect to each margin $\mu_i$.
The Haar wavelet basis is orthonormal with respect to the $\mathcal{U}(0, 1)$ measure (see HaarWaveletFactory) and the Fourier series basis is orthonormal with respect to the $\mathcal{U}(-\pi, \pi)$ measure (see FourierSeriesFactory).
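A possible sketch of such a non-polynomial basis, assuming the default constructors of these factories and the OrthogonalProductFunctionFactory class to tensorize them:

import openturns as ot

# Univariate orthonormal function families: Haar wavelets and Fourier series
haar = ot.HaarWaveletFactory()
fourier = ot.FourierSeriesFactory()
print(haar.getMeasure())     # reference measure of the Haar wavelet family
print(fourier.getMeasure())  # reference measure of the Fourier series family

# Tensorized multivariate basis built from the two univariate families
basis = ot.OrthogonalProductFunctionFactory([haar, fourier])
print(basis.build(0))  # first element of the multivariate basis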
Some functional chaos expansions for dependent variables¶
If the components of the input random vector $X$ are not independent, we can use an iso-probabilistic transformation to map $X$ into a standardized random vector $Z$ with independent components.
Whatever the dependency in the standardized random vector $Z$, the following multivariate functions are orthonormal with respect to $\mu$:

$$\Psi_{\alpha}(z) = \left( \prod_{i=1}^{n_X} \pi^{(i)}_{\alpha_i}(z_i) \right) \sqrt{\frac{\prod_{i=1}^{n_X} \mu_i(z_i)}{\mu(z)}}$$

where $\alpha = (\alpha_1, \ldots, \alpha_{n_X})$ is a multi-index, $\mu_i$ is the $i$-th marginal of $\mu$ and $\pi^{(i)}_{\alpha_i}$ is the degree-$\alpha_i$ member of the orthonormal polynomial family for the $i$-th marginal. If the random vector $Z$ has a non-trivial dependency, the previous functions are not necessarily polynomials. Notice that:

$$\frac{\mu(z)}{\prod_{i=1}^{n_X} \mu_i(z_i)} = c(z) \tag{7}$$

where $c$ is the density of the copula of $Z$.
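A possible sketch, assuming the SoizeGhanemFactory class builds this basis for a given (possibly dependent) measure; the correlated bivariate Gaussian measure below is an arbitrary choice:

import openturns as ot

# Standardized measure mu with dependent components (arbitrary illustration)
R = ot.CorrelationMatrix(2)
R[0, 1] = 0.5
mu = ot.Normal([0.0] * 2, [1.0] * 2, R)

# Multivariate functions orthonormal with respect to mu
basis = ot.SoizeGhanemFactory(mu)
for k in range(3):
    print(basis.build(k))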
Link with classical deterministic polynomial approximation¶
In a deterministic setting (i.e. when the input parameters are considered to be deterministic), it is common practice to substitute the model function with a polynomial approximation over its whole domain of definition. Actually, this approach is equivalent to:
regarding the input parameters as uniform random variables,
expanding any quantity of interest provided by the model onto a PC expansion made of Legendre polynomials.