Functional Chaos Expansion

Introduction

Given the joint probability density function (PDF) \mu_{\vect{X}}(\vect{x}) of the input random vector \vect{X}, one seeks the joint PDF of the output random vector \vect{Y} = g(\vect{X}). This may be achieved using Monte Carlo (MC) simulation (see Monte Carlo simulation). However, the MC method may require a large number of model evaluations, i.e. a high computational cost, to obtain accurate results.

A possible solution to overcome this problem is to project the model g onto a suitable functional space, such as the Hilbert space L^2(\mu_{\vect{X}}) of functions that are square-integrable with respect to \mu_{\vect{X}}. More precisely, we may consider an expansion of the model onto an orthonormal basis of L^2(\mu_{\vect{X}}). Examples of such expansions include wavelet expansions and polynomial expansions.

The principles of the construction of a functional chaos expansion are described in the sequel.

Model

We consider the output random vector:

\vect{Y} = g(\vect{X})

where g: \Rset^{n_X} \rightarrow \Rset^{n_Y} is the model, \vect{X} is the input random vector whose distribution is \mu_{\vect{X}}, n_X \in \Nset is the input dimension and n_Y \in \Nset is the output dimension. We assume that \vect{Y} has finite variance, i.e. g \in L^2(\mu_{\vect{X}}).

When n_Y > 1, the functional chaos algorithm is used on each marginal of \vect{Y}, using the same multivariate orthonormal basis for all the marginals. Thus, the method is detailed here for a scalar output Y and g: \Rset^{n_X} \rightarrow \Rset.

Iso-probabilistic transformation

Let T: \Rset^{n_X} \rightarrow \Rset^{n_X} be an iso-probabilistic transformation (see Isoprobabilistic transformations) such that \vect{Z} = T(\vect{X}) \sim \mu_{\vect{Z}}, where \mu_{\vect{Z}} is the distribution of the standardized random vector \vect{Z}. This distribution is called the measure in what follows. As we will see below, it induces the scalar product with respect to which the functional basis is orthonormal. Let h be the function defined by the equation:

h = g \circ T^{-1}.

Therefore h \in L^2\left(\mu_{\vect{Z}}\right).
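In practice, the transformation T can be built from the input distribution and the standardized measure. Below is a minimal sketch with the openturns Python module; the correlated bivariate Normal input is a hypothetical example, and ot.DistributionTransformation is one possible way to obtain T:

    import openturns as ot

    # Hypothetical input distribution: a correlated bivariate Normal
    R = ot.CorrelationMatrix(2)
    R[0, 1] = 0.7
    mu_X = ot.Normal([1.0, 2.0], [2.0, 3.0], R)

    # Standardized measure: independent standard Normal components
    mu_Z = ot.Normal(2)

    # T maps realizations of mu_X to realizations of mu_Z
    T = ot.DistributionTransformation(mu_X, mu_Z)
    z = T(mu_X.getRealization())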

Hilbert space

We introduce the scalar product:

\scalarproduct{h_1}{h_2}_{L^2\left(\mu_{\vect{Z}}\right)}
= \Expect{h_1(\vect{Z}) h_2(\vect{Z})}

for any h_1, h_2 \in L^2\left(\mu_{\vect{Z}}\right). For a continuous random vector \vect{Z}, the scalar product is:

\scalarproduct{h_1}{h_2}_{L^2\left(\mu_{\vect{Z}}\right)}
= \int h_1(\vect{z}) h_2(\vect{z})\, \mu_{\vect{Z}}(\vect{z}) d\vect{z}.

For a discrete random vector \vect{Z}, the scalar product is:

\scalarproduct{h_1}{h_2}_{L^2\left(\mu_{\vect{Z}}\right)}
= \sum_\vect{z} h_1(\vect{z}) h_2(\vect{z})\, \Prob{\vect{Z} = \vect{z}}.

The associated norm is defined by:

\|h\|^2_{L^2(\mu_{\vect{Z}})}
= \Expect{\left(h(\vect{Z})\right)^2}

for any h \in L^2\left(\mu_{\vect{Z}}\right). Based on this scalar product, the functional space L^2\left(\mu_{\vect{Z}}\right) is a Hilbert space.
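Since the scalar product is defined as an expectation, it can be estimated by Monte Carlo sampling. A minimal sketch, assuming a one-dimensional standard Normal measure and two hypothetical square-integrable functions:

    import numpy as np

    rng = np.random.default_rng(0)
    z = rng.standard_normal(100_000)  # sample from mu_Z = N(0, 1)

    h1 = lambda t: t ** 2             # hypothetical example functions
    h2 = lambda t: np.cos(t)

    # <h1, h2> = E[h1(Z) h2(Z)] and ||h1||^2 = E[h1(Z)^2], by sample means
    dot = np.mean(h1(z) * h2(z))
    norm2_h1 = np.mean(h1(z) ** 2)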

Orthonormal basis

In this section, we introduce an orthonormal basis of the previous Hilbert space. Let \left(\psi_k : \Rset^{n_X} \rightarrow \Rset\right)_{k \geq 0} be a set of functions. This set is orthonormal with respect to \mu_{\vect{Z}} if:

(1)\scalarproduct{\psi_k}{\psi_{\ell}}_{L^2\left(\mu_{\vect{Z}}\right)} = \delta_{k,\ell}

for any k, \ell \geq 0 where \delta_{k, \ell} is the Kronecker symbol:

\delta_{k, \ell}
=
\begin{cases}
1 & \textrm{ if } k = \ell, \\
0 & \textrm{otherwise.}
\end{cases}

See StandardDistributionPolynomialFactory for more details on the available orthonormal bases.

In the library, we choose a basis \left(\psi_k\right)_{k \geq 0} which is orthonormal with respect to \mu_{\vect{Z}}, so that the equation (1) is satisfied. Furthermore, we require that the first element be:

(2)\psi_0 = 1

The orthogonality of the functions implies:

\scalarproduct{\psi_{i}}{\psi_{0}}_{L^2\left(\mu_{\vect{Z}}\right)} = 0

for any non-zero i. The equation (2) implies:

\Expect{\psi_{i}(\vect{Z})} = \Expect{\psi_{i}(\vect{Z})\psi_{0}(\vect{Z})}
= 0

for any i\neq 0.
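As an illustration of equation (1), the sketch below checks the orthonormality of the first Hermite polynomials with respect to the standard Normal measure, using a Gauss quadrature rule; it assumes that ot.HermiteFactory builds the orthonormal family:

    import openturns as ot

    factory = ot.HermiteFactory()  # orthonormal w.r.t. the standard Normal
    experiment = ot.GaussProductExperiment(ot.Normal(), [10])
    nodes, weights = experiment.generateWithWeights()

    for k in range(3):
        for ell in range(3):
            psi_k, psi_ell = factory.build(k), factory.build(ell)
            dot = sum(w * psi_k(node[0]) * psi_ell(node[0])
                      for node, w in zip(nodes, weights))
            print(k, ell, round(dot, 10))  # ~ 1 if k == ell, ~ 0 otherwise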

Functional chaos expansion

The functional chaos expansion of h is (see [lemaitre2010] page 39):

h = \sum_{k \geq 0} a_k \psi_k

where \left(a_k \in \Rset\right)_{k\geq 0} is a set of coefficients. We cannot compute an infinite set of coefficients: we can only compute a finite subset of these. The truncated functional chaos expansion is:

\widetilde{h} =  \sum_{k = 0}^{P} a_k \psi_k

where P \in \Nset. Thus \widetilde{h} is represented by a finite subset of coefficients (a_k)_{k = 0, ..., P} in a truncated basis \left(\psi_k\right)_{k = 0, ..., P}.

A specific choice of P can be made using an enumeration rule, as presented in Chaos basis enumeration strategies. If the number of coefficients, P + 1, is too large, this can lead to overfitting. This may happen, e.g., if the chosen total polynomial degree is too large. In order to limit this effect, one method is to select the coefficients which best predict the output, as presented in Sparse least squares metamodel.
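As an illustration, the sketch below prints the multi-indices selected by a linear enumeration rule in dimension 2, keeping all terms of total degree at most 2, which gives P + 1 = 6 coefficients (assuming the openturns module):

    import openturns as ot

    enum = ot.LinearEnumerateFunction(2)       # bijection from N to N^2
    size = enum.getStrataCumulatedCardinal(2)  # number of terms of degree <= 2
    for k in range(size):
        print(k, enum(k))                      # k-th multi-index (alpha_1, alpha_2)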

Convergence of the expansion

In this section, we introduce the conditions which ensure that the expansion converges to the function.

The orthonormal expansion of any function h \in L^2\left(\mu_{\vect{Z}}\right) converges in norm to h, i.e.:

\lim_{P \rightarrow \infty} \left\|h -
\sum_{k = 0}^{P} a_k \psi_k\right\|_{L^2\left(\mu_{\vect{Z}}\right)} = 0

if and only if the basis \left(\psi_k\right)_{k \geq 0} is a complete orthonormal system (see [sullivan2015], page 139, [dahlquist2008], theorem 4.5.16 page 456 and [rudin1987], section 4.24 page 85). In this case, the closure of the vector space spanned by the orthogonal functions is equal to the whole set of square integrable functions with respect to \mu_{\vect{Z}}:

(3)\overline{\operatorname{span}\left(\left(\psi_k\right)_{k \geq 0}\right)} = L^2\left(\mu_{\vect{Z}}\right).

There are known sufficient conditions which ensure this property. For example, if the support of \mu_{\vect{Z}} is bounded, then the orthonormal polynomial basis is a complete orthonormal system.

There exist infinite sets of orthonormal polynomials which are not complete, e.g. the polynomials orthonormal with respect to the log-normal distribution (see [ernst2012]). In this case, the expansion may not converge to the function. Nevertheless, even without this guarantee, the meta model built using the basis \left(\psi_k\right)_{k \in \{0, ..., P\}} may be a good approximation of h.

Define and estimate the coefficients

In this section, we review two equivalent methods to define the coefficients of the expansion:

  • using a least squares problem,

  • using integration.

Both methods can be introduced and then discretized using a sample.

The vector of coefficients is the solution of the linear least-squares problem:

(4)\vect{a}^\star  = \argmin_{\vect{a} \in \Rset^{P + 1}}
 \left\| h - \sum_{k = 0}^{P} a_k \psi_k \right\|^2_{L^2\left(\mu_{\vect{Z}}\right)}.

The equation (4) means that the coefficients (a_k)_{k = 0, ..., P} minimize the quadratic error between the model and the functional approximation. For more details on the approximation based on least squares, see the LeastSquaresStrategy class.

Let us discretize the solution of the linear least squares problem. Let n \in \Nset be the sample size. Let \{\vect{x}^{(j)} \in \Rset^{n_X}\}_{j = 1, ..., n} be an i.i.d. sample from the random vector \vect{X}. Let \{\vect{z}^{(j)} = T\left(\vect{x}^{(j)}\right)\}_{j = 1, ..., n} be the standardized input sample. Let \{y^{(j)} = g\left(\vect{x}^{(j)}\right)\}_{j = 1, ..., n} be the corresponding output sample. Let \vect{y} = \Tr{(y^{(1)}, ..., y^{(n)})} \in \Rset^n be the vector of output observations of the model. Let \mat{\Psi} \in \Rset^{n \times (P + 1)} be the design matrix, defined by:

\mat{\Psi}_{jk} = \psi_k\left(\vect{z}^{(j)}\right)

for j = 1, ..., n and k = 0, ..., P. Assume that the design matrix has full rank. The discretized linear least squares problem is:

\widehat{\vect{a}} = \argmin_{\vect{a} \in \Rset^{P + 1}}
\left\| \vect{y} - \mat{\Psi} \vect{a} \right\|^2_2.

The solution is:

\widehat{\vect{a}}
= \left(\Tr{\mat{\Psi}} \mat{\Psi}\right)^{-1} \Tr{\mat{\Psi}} \vect{y}.

The choice of basis has a major impact on the conditioning of the least-squares problem (4). Indeed, if the basis \left(\psi_k\right)_{k \in \{0, ..., P\}} is orthonormal, then the design matrix of the least squares problem tends to be well-conditioned: for an i.i.d. sample, the Gram matrix \Tr{\mat{\Psi}} \mat{\Psi} approximates n times the identity matrix.
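A minimal numpy sketch of this discretized least squares problem, assuming a one-dimensional standard Normal Z, the normalized probabilists' Hermite basis, and a hypothetical function h:

    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermevander

    rng = np.random.default_rng(1)
    n, P = 1000, 4
    z = rng.standard_normal(n)   # standardized input sample
    y = np.exp(z / 2.0)          # output sample h(z) (hypothetical function)

    # Design matrix: psi_k = He_k / sqrt(k!) is orthonormal w.r.t. N(0, 1)
    Psi = hermevander(z, P) / np.sqrt([factorial(k) for k in range(P + 1)])

    # Solve min ||y - Psi a||_2 (more stable than forming the normal equations)
    a_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)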

The problem can be equivalently solved using the scalar product (see [dahlquist2008] theorem 4.5.13 page 454):

(5)a_k^\star = \scalarproduct{h}{\psi_k}_{L^2\left(\mu_{\vect{Z}}\right)}

for k = 0, ..., P. These equations express the coefficients of the orthogonal projection of the function h onto the vector space spanned by the orthogonal functions in the basis. Since the definition of the scalar product is based on an expectation, estimating each coefficient amounts to approximating an integral using a quadrature rule.

The equation (5) means that each coefficient a_k is the scalar product of the model with the k-th element of the orthonormal basis \left(\psi_k\right)_{k \geq 0}. For more details on the PCE based on quadrature, see the IntegrationStrategy class.

Let us discretize the solution of the problem based on the scalar product. This can be done by considering a quadrature rule that makes it possible to approximate the integral. Let n \in \Nset be the sample size. Let \{\vect{z}^{(j)} \in \Rset^{n_X}\}_{j = 1, ..., n} be the nodes of the quadrature rule and let \{w^{(j)} \in \Rset\}_{j = 1, ..., n} be the weights. The quadrature rule is:

\widehat{a}_k = \sum_{j = 1}^n w^{(j)} h\left(\vect{z}^{(j)}\right)
\psi_k\left(\vect{z}^{(j)}\right)

for k = 0, ..., P.
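A minimal numpy sketch of this quadrature rule, with Gauss-Hermite nodes for a one-dimensional standard Normal Z and the same normalized Hermite basis (the function h is a hypothetical example):

    import numpy as np
    from math import factorial, sqrt, pi
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    P, n = 4, 20
    nodes, weights = hermegauss(n)      # Gauss nodes for weight exp(-z^2 / 2)
    weights = weights / sqrt(2.0 * pi)  # renormalize: weights now sum to 1

    h_values = np.exp(nodes / 2.0)      # h evaluated at the quadrature nodes

    for k in range(P + 1):
        coef = np.zeros(k + 1)
        coef[k] = 1.0 / sqrt(factorial(k))  # psi_k = He_k / sqrt(k!)
        psi_k = hermeval(nodes, coef)
        a_k = np.sum(weights * h_values * psi_k)  # a_k ~ E[h(Z) psi_k(Z)]
        print(k, a_k)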

Several algorithms are available to compute the coefficients (a_k)_{k = 0, ..., P}: see the LeastSquaresStrategy and IntegrationStrategy classes.

The two methods to define the coefficients of the expansion are equivalent: the solutions of the equations (4) and (5) produce the same coefficients (a_k)_{k = 0, ..., P}. This is different when we estimate these coefficients based on a sample: in this discretized framework, the solutions of the two methods can differ. It can be shown, however, that the two estimators converge to the same limit when the sample size tends to infinity (see [lemaitre2010] eq. 3.48 page 66). Moreover, the two discretized methods are equivalent if the sample points satisfy an empirical orthogonality condition (see [lemaitre2010] eq. 3.49 page 66).

A step-by-step method

Three steps are required in order to create a functional chaos algorithm:

  • define the multivariate orthonormal basis;

  • truncate the multivariate orthonormal basis;

  • evaluate the coefficients.

These steps are presented in more detail below.

Step 1 - Define the multivariate orthonormal basis: the multivariate orthonormal basis \left(\psi_k\right)_{k \geq 0} is built as the tensor product of orthonormal univariate families.

The univariate bases may be:

  • polynomials: the associated distribution \mu_i can be continuous or discrete. Note that it is possible, under some conditions, to build the polynomial family orthonormal with respect to any arbitrary univariate distribution \mu_i. For more details on this basis, see StandardDistributionPolynomialFactory;

  • Haar wavelets: they approximate functions with discontinuities. For details on this basis, see HaarWaveletFactory;

  • Fourier series: for more details on this basis, see FourierSeriesFactory.

Furthermore, the numbering of the multivariate orthonormal basis \left(\psi_k\right)_{k \geq 0} is given by an enumerate function which defines a way to generate the collection of polynomial degrees used for the univariate polynomials: an enumerate function represents a bijection \Nset \rightarrow \Nset^{n_X}. See LinearEnumerateFunction or HyperbolicAnisotropicEnumerateFunction for more details on this topic.

Step 2 - Truncate the multivariate orthonormal basis: a strategy must be chosen for the selection of the different terms of the multivariate basis. The selected terms are gathered in the subset \{0, ..., P\}. For information about the possible strategies, see FixedStrategy and CleaningStrategy.

Step 3 - Evaluate the coefficients: a projection strategy must be chosen for the estimation of the coefficients \left(a_k\right)_{k = 0, ..., P}.
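The sketch below strings these three steps together with the openturns classes mentioned above; the model g and the input distribution are hypothetical placeholders:

    import openturns as ot

    # Hypothetical model and input distribution
    g = ot.SymbolicFunction(["x1", "x2"], ["sin(x1) + x1 * x2"])
    mu_X = ot.ComposedDistribution([ot.Normal(), ot.Uniform(-1.0, 1.0)])

    # Step 1: multivariate basis as a tensor product of univariate families
    basis = ot.OrthogonalProductPolynomialFactory(
        [ot.StandardDistributionPolynomialFactory(mu_X.getMarginal(i))
         for i in range(2)])

    # Step 2: truncation, keeping all terms of total degree <= 4
    size = basis.getEnumerateFunction().getStrataCumulatedCardinal(4)
    adaptive = ot.FixedStrategy(basis, size)

    # Step 3: estimate the coefficients by least squares on a sample
    x_sample = mu_X.getSample(200)
    algo = ot.FunctionalChaosAlgorithm(
        x_sample, g(x_sample), mu_X, adaptive, ot.LeastSquaresStrategy())
    algo.run()
    result = algo.getResult()
    metamodel = result.getMetaModel()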

The meta model

The meta model of g can be defined using the isoprobabilistic transformation T:

(6)\widetilde{g} = \widetilde{h} \circ T.

There are many ways to use the functional chaos expansion. In the next two sections, we present two examples:

  • using the expansion as a random vector generator,

  • performing the sensitivity analysis of the expansion.

Using the expansion as a random vector generator

The approximation \widetilde{h} can be used to build an efficient random generator of Y, based on the standardized random vector \vect{Z}, using the equation:

\widetilde{Y} = \widetilde{h}(\vect{Z}).

This equation can be used to simulate independent random observations from the PCE. This can be done by first simulating independent observations from the distribution of the standardized random vector \vect{Z}, then by pushing forward these observations through the expansion. See the FunctionalChaosRandomVector class for more details on this topic.
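A sketch, reusing the FunctionalChaosResult object named result from the step-by-step sketch above:

    import openturns as ot

    # 'result' is a FunctionalChaosResult, e.g. built as in the sketch above
    pce_vector = ot.FunctionalChaosRandomVector(result)
    sample = pce_vector.getSample(10_000)  # no call to the original model g
    print(pce_vector.getMean())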

Sensitivity analysis

Assume that the input random vector has independent marginals and that the basis \left(\psi_k\right)_{k \geq 0} is computed using the tensor product of univariate orthonormal functions. In that case, the Sobol’ indices can easily be deduced from the coefficients \left(a_k\right)_{k = 0, ..., P}. Please see FunctionalChaosSobolIndices for more details on this topic.
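A sketch, again reusing the result object from above and assuming two independent input marginals:

    import openturns as ot

    sobol = ot.FunctionalChaosSobolIndices(result)
    for i in range(2):
        # First-order and total Sobol' indices of input i
        print(i, sobol.getSobolIndex(i), sobol.getSobolTotalIndex(i))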

Polynomial chaos expansion for independent variables

The library enables one to build the meta model called polynomial chaos expansion based on an orthonormal basis of polynomials. See Polynomial chaos basis for more details on polynomial chaos expansion.

Other chaos expansions for independent variables

While the polynomial chaos expansion is a classical method, the functions in the basis do not necessarily have to be polynomials: provided the functions are orthonormal with respect to the measure \mu_{\vect{Z}}, most of the theory still holds. The library enables one to use the Haar wavelet functions or the Fourier series as an orthonormal basis with respect to each margin \mu_i. The Haar wavelet basis is orthonormal with respect to the \cU(0,1) measure (see HaarWaveletFactory) and the Fourier series basis is orthonormal with respect to the \cU(-\pi, \pi) measure (see FourierSeriesFactory).
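For instance, the orthonormality of the first Haar wavelets under the \cU(0,1) measure can be checked by Monte Carlo; a sketch, assuming ot.HaarWaveletFactory builds the orthonormal family:

    import openturns as ot

    factory = ot.HaarWaveletFactory()  # orthonormal w.r.t. U(0, 1)
    phi_1, phi_2 = factory.build(1), factory.build(2)
    z = ot.Uniform(0.0, 1.0).getSample(10_000)
    # Monte Carlo estimates: ~ 0 (orthogonality) and ~ 1 (unit norm)
    dot = sum(phi_1(p[0]) * phi_2(p[0]) for p in z) / len(z)
    norm2 = sum(phi_1(p[0]) ** 2 for p in z) / len(z)
    print(dot, norm2)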

Some functional chaos expansions for dependent variables

If the components of the input random vector \vect{X} are not independent, we can use an iso-probabilistic transformation to map \vect{X} into \vect{Z} with independent components.

Whatever the dependency in the standardized random vector \vect{Z}, the following multivariate functions are orthonormal with respect to \mu_{\vect{Z}}:

\Psi_{\idx}(\vect{z})
= \left( \dfrac{\mu_{Z_1}(z_1) \cdots \mu_{Z_{n_X}}(z_{n_X})}{\mu_{\vect{Z}}(\vect{z})} \right)^{\frac{1}{2}}\;
\prod_{i=1}^{n_X} \pi^{(i)}_{\alpha_{i}}(z_{i})

where \mu_{Z_i} is the i-th marginal of \mu_{\vect{Z}} and \pi^{(i)}_{\alpha_{i}} is the orthonormal polynomial of degree \alpha_i associated with the i-th marginal. If the random vector \vect{Z} has a non-trivial dependence structure, the previous functions are not necessarily polynomials. Notice that:

(7)\dfrac{\mu_{Z_1}(z_1) \cdots \mu_{Z_{n_X}}(z_{n_X})}{\mu_{\vect{Z}}(\vect{z})}
 = \dfrac{1}{c(\vect{z})}

where c is the density of the copula of \vect{Z}, and c(\vect{z}) stands for its value at the marginal ranks \left(F_{Z_1}(z_1), ..., F_{Z_{n_X}}(z_{n_X})\right), F_{Z_i} being the CDF of the i-th marginal.
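A sketch of the weight in equation (7), assuming a bivariate standardized vector with a hypothetical Normal copula; note that the copula density is evaluated at the marginal ranks:

    import openturns as ot

    # Hypothetical standardized vector Z with dependent components
    R = ot.CorrelationMatrix(2)
    R[0, 1] = 0.5
    copula = ot.NormalCopula(R)
    mu_Z = ot.ComposedDistribution([ot.Normal(), ot.Normal()], copula)

    z = mu_Z.getRealization()
    u = [mu_Z.getMarginal(i).computeCDF(z[i]) for i in range(2)]
    weight = 1.0 / copula.computePDF(u) ** 0.5  # sqrt(1 / c), as in (7)
    print(z, weight)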