Polynomial chaos exploitation

The goal of this example is to create a polynomial chaos expansion using the FunctionalChaosAlgorithm class and to explore the methods of the FunctionalChaosResult class. In order to understand these results, let us review some notation associated with the polynomial chaos expansion.

We first introduce the physical input, output and model:

  • the random vector \vect{X} is the physical input random vector,

  • the output random vector \vect{Y} is the output of the physical model,

  • the physical model g is a function of the physical input:

\vect{Y} = g(\vect{X}).

Then we introduce the iso-probabilistic transformation:

  • the random vector \vect{Z} is the standardized input random vector,

  • the transformation T is the iso-probabilistic transformation mapping from the physical input to the standardized input:

\vect{Z} = T(\vect{X}),

  • the standard distribution \mu of the standardized random vector \vect{Z}.
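For independent Gaussian physical inputs, the iso-probabilistic transformation reduces to a componentwise standardization. The following is a minimal pure-Python sketch of this idea (not the OpenTURNS implementation; the means and standard deviations are illustrative assumptions):

```python
# Iso-probabilistic transformation for independent Gaussian marginals:
# each component is standardized, Z_i = (X_i - mu_i) / sigma_i.
# The means and standard deviations below are assumed for this sketch.
mu = [1.0, -2.0]
sigma = [0.5, 3.0]

def T(x_point):
    """Map the physical point x to the standardized point z."""
    return [(xi - m) / s for xi, m, s in zip(x_point, mu, sigma)]

def T_inv(z_point):
    """Inverse transformation, mapping z back to the physical space."""
    return [zi * s + m for zi, m, s in zip(z_point, mu, sigma)]

z_point = T([2.0, 4.0])        # standardized coordinates
x_back = T_inv(z_point)        # recovers the physical point
```

In the general (dependent or non-Gaussian) case, T is a genuine iso-probabilistic transformation such as the Nataf or Rosenblatt transformation, which OpenTURNS builds automatically.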

We expand the model on the multivariate basis:

  • the full (i.e. unselected) multivariate basis is \left(\Psi_k\right)_{k \in \mathbb{N}},

  • each polynomial \Psi_k of the full multivariate basis is characterized by its composition, i.e. the multi-index of the degrees of its marginal polynomials,

  • the full set of coefficients of the polynomial expansion is the set (\vect{a}_k)_{k \in \mathbb{N}},

  • the composed model h is a function of the standardized random vector \vect{Z}:

\vect{Y} = h(\vect{Z}) = g \circ T^{-1}(\vect{Z})

Then we use a model selection method (e.g. from the LARS class):

  • the set \mathcal{K} \subseteq \mathbb{N} is the set of indices corresponding to the result of the selection method,

  • the truncated (i.e. selected) multivariate basis is \left(\Psi_k\right)_{k \in \mathcal{K}},

  • the selected set of coefficients of the polynomial expansion is the set (\vect{a}_k)_{k \in \mathcal{K}},

  • the composed meta model \hat{h} is the function of the standardized variables based on the truncated multivariate basis \left(\Psi_k\right)_{k \in \mathcal{K}},

  • the meta model is a function of the physical input:

\vect{Y} = \hat{g} (\vect{X}) = \hat{h} \circ T(\vect{X}).

Based on the previous definitions, the composed model is:

h(\vect{Z}) =  \sum_{k \in \mathbb{N}} \vect{a}_k \Psi_k(\vect{Z}),

the composed metamodel is:

\hat{h}(\vect{Z}) = \sum_{k \in \mathcal{K}} \vect{a}_k \Psi_k(\vect{Z}),

and the metamodel is:

\hat{g}(\vect{X}) = \sum_{k \in \mathcal{K}} \vect{a}_k \Psi_k \circ T(\vect{X}).

The three previous mathematical functions are implemented as instances of the Function class.
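To make the expansion concrete, the sum \sum_k a_k \Psi_k(z) can be evaluated by hand in a one-dimensional case. This is a pure-Python sketch (not the OpenTURNS implementation), using the probabilists' Hermite polynomials normalized to unit norm under the standard normal measure, which is the basis associated with Gaussian inputs:

```python
import math

def psi(k, z):
    """Normalized probabilists' Hermite polynomial psi_k(z) = He_k(z) / sqrt(k!)."""
    he = [1.0, z, z * z - 1.0, z ** 3 - 3.0 * z][k]  # He_k(z), k <= 3
    return he / math.sqrt(math.factorial(k))

def h_hat(coeffs, z):
    """Evaluate the truncated expansion sum_k a_k * psi_k(z)."""
    return sum(a * psi(k, z) for k, a in enumerate(coeffs))

# With coefficients [1, 2, 0, 0], the expansion is h(z) = 1 + 2 z.
value = h_hat([1.0, 2.0, 0.0, 0.0], 0.5)  # = 2.0
```

In OpenTURNS, this evaluation is wrapped in a Function object, so the metamodel can be called like any other model.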

Create the polynomial chaos expansion

import openturns as ot

ot.Log.Show(ot.Log.NONE)

Prepare some input and output samples. We define a model which has two inputs and two outputs. Then we define a Normal input random vector with independent marginals, and we generate a sample from the input random vector. Finally, we evaluate the output sample from the model.

ot.RandomGenerator.SetSeed(0)
dimension = 2
input_names = ["x1", "x2"]
formulas = ["cos(x1 + x2)", "(x2 + 1) * exp(x1 - 2 * x2)"]
model = ot.SymbolicFunction(input_names, formulas)
distribution = ot.Normal(dimension)
x = distribution.getSample(30)
y = model(x)

Create a functional chaos algorithm.

algo = ot.FunctionalChaosAlgorithm(x, y)

The previous constructor is the simplest since it has only two input arguments. In this case, the algorithm has to retrieve the distribution from the x sample, which can be difficult, especially if the sample size is small. Notice, however, that the input distribution is known in our simple case. This is why we update the previous script and give the input distribution as a third input argument of the constructor.

algo = ot.FunctionalChaosAlgorithm(x, y, distribution)
algo.run()

Get the result.

result = algo.getResult()
result
FunctionalChaosResult
  • input dimension: 2
  • output dimension: 2
  • distribution dimension: 2
  • transformation: 2 -> 2
  • inverse transformation: 2 -> 2
  • orthogonal basis dimension: 2
  • indices size: 30
  • relative errors: [5.14784e-27,3.47237e-24]
  • residuals: [7.34243e-15,1.02577e-12]
Index Multi-index Coeff.#0 Coeff.#1
0 [0,0] 4.865336 -402.7622
1 [1,0] 4.948702 -771.5069
2 [0,1] 1.688011 124.3656
3 [2,0] 35.55641 -3034.59
4 [0,2] -0.1645904 -143.8753
5 [3,0] 17.53419 -2890.676
6 [0,3] 6.069326 459.9361
7 [1,1] -0.2800594 1.707412
8 [4,0] 93.80168 -7922.15
9 [0,4] 0.2280537 -274.1107
10 [5,0] 24.33692 -4443.708
11 [0,5] 9.984109 689.8605
12 [2,1] 0.4935883 -13.02656
13 [1,2] -0.6411503 0.7475312
14 [6,0] 116.69 -9821.913
15 [0,6] 0.3746894 -228.1956
16 [7,0] 15.20941 -3140.008
17 [0,7] 7.393779 488.6255
18 [3,1] 0.1247836 -1.845836
19 [1,3] 0.4412629 4.137734
20 [2,2] 0.4787299 -0.101163
21 [8,0] 70.84271 -5952.307
22 [0,8] 0.2514189 -71.43391
23 [4,1] 0.4113195 -10.99826
24 [1,4] -0.1839944 -2.613044
25 [9,0] 3.607702 -847.7772
26 [0,9] 2.085063 133.6158
27 [3,2] -0.3527807 5.155384
28 [2,3] -0.197786 4.449799
29 [10,0] 16.92172 -1425.889


Get the coefficients

In the next cells, we observe several methods of the result object. First, we get the polynomial chaos coefficients.

result.getCoefficients()
Index v0 v1
0 4.865336 -402.7622
1 4.948702 -771.5069
2 1.688011 124.3656
3 35.55641 -3034.59
4 -0.1645904 -143.8753
5 17.53419 -2890.676
6 6.069326 459.9361
7 -0.2800594 1.707412
8 93.80168 -7922.15
9 0.2280537 -274.1107
10 24.33692 -4443.708
11 9.984109 689.8605
12 0.4935883 -13.02656
13 -0.6411503 0.7475312
14 116.69 -9821.913
15 0.3746894 -228.1956
16 15.20941 -3140.008
17 7.393779 488.6255
18 0.1247836 -1.845836
19 0.4412629 4.137734
20 0.4787299 -0.101163
21 70.84271 -5952.307
22 0.2514189 -71.43391
23 0.4113195 -10.99826
24 -0.1839944 -2.613044
25 3.607702 -847.7772
26 2.085063 133.6158
27 -0.3527807 5.155384
28 -0.197786 4.449799
29 16.92172 -1425.889


The coefficients associated with the i-th basis term, with one coefficient per output marginal.

i = 1
result.getCoefficients()[i]
class=Point name=Unnamed dimension=2 values=[4.9487,-771.507]
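A common use of the coefficients is to see which basis terms dominate a given output marginal. The following pure-Python sketch ranks the first six basis terms of output marginal #0 by absolute coefficient value, using the values printed in the table above:

```python
# Coefficients of output marginal #0 for the first six basis terms,
# taken from the coefficient table printed above.
coeffs0 = [4.865336, 4.948702, 1.688011, 35.55641, -0.1645904, 17.53419]

# Rank the basis-term indices by decreasing absolute coefficient value.
ranking = sorted(range(len(coeffs0)), key=lambda k: -abs(coeffs0[k]))
# The dominant term is index 3, i.e. the multi-index [2,0].
```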


Get the indices of the selected polynomials.

subsetK = result.getIndices()
subsetK
[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]


Get the composition of the polynomials of the truncated multivariate basis.

for i in range(subsetK.getSize()):
    print(
        "Polynomial number ",
        i,
        " in truncated basis <-> polynomial number ",
        subsetK[i],
        " = ",
        ot.LinearEnumerateFunction(dimension)(subsetK[i]),
        " in complete basis",
    )
Polynomial number  0  in truncated basis <-> polynomial number  0  =  [0,0]  in complete basis
Polynomial number  1  in truncated basis <-> polynomial number  1  =  [1,0]  in complete basis
Polynomial number  2  in truncated basis <-> polynomial number  2  =  [0,1]  in complete basis
Polynomial number  3  in truncated basis <-> polynomial number  3  =  [2,0]  in complete basis
Polynomial number  4  in truncated basis <-> polynomial number  4  =  [1,1]  in complete basis
Polynomial number  5  in truncated basis <-> polynomial number  5  =  [0,2]  in complete basis
Polynomial number  6  in truncated basis <-> polynomial number  6  =  [3,0]  in complete basis
Polynomial number  7  in truncated basis <-> polynomial number  7  =  [2,1]  in complete basis
Polynomial number  8  in truncated basis <-> polynomial number  8  =  [1,2]  in complete basis
Polynomial number  9  in truncated basis <-> polynomial number  9  =  [0,3]  in complete basis
Polynomial number  10  in truncated basis <-> polynomial number  10  =  [4,0]  in complete basis
Polynomial number  11  in truncated basis <-> polynomial number  11  =  [3,1]  in complete basis
Polynomial number  12  in truncated basis <-> polynomial number  12  =  [2,2]  in complete basis
Polynomial number  13  in truncated basis <-> polynomial number  13  =  [1,3]  in complete basis
Polynomial number  14  in truncated basis <-> polynomial number  14  =  [0,4]  in complete basis
Polynomial number  15  in truncated basis <-> polynomial number  15  =  [5,0]  in complete basis
Polynomial number  16  in truncated basis <-> polynomial number  16  =  [4,1]  in complete basis
Polynomial number  17  in truncated basis <-> polynomial number  17  =  [3,2]  in complete basis
Polynomial number  18  in truncated basis <-> polynomial number  18  =  [2,3]  in complete basis
Polynomial number  19  in truncated basis <-> polynomial number  19  =  [1,4]  in complete basis
Polynomial number  20  in truncated basis <-> polynomial number  20  =  [0,5]  in complete basis
Polynomial number  21  in truncated basis <-> polynomial number  21  =  [6,0]  in complete basis
Polynomial number  22  in truncated basis <-> polynomial number  22  =  [5,1]  in complete basis
Polynomial number  23  in truncated basis <-> polynomial number  23  =  [4,2]  in complete basis
Polynomial number  24  in truncated basis <-> polynomial number  24  =  [3,3]  in complete basis
Polynomial number  25  in truncated basis <-> polynomial number  25  =  [2,4]  in complete basis
Polynomial number  26  in truncated basis <-> polynomial number  26  =  [1,5]  in complete basis
Polynomial number  27  in truncated basis <-> polynomial number  27  =  [0,6]  in complete basis
Polynomial number  28  in truncated basis <-> polynomial number  28  =  [7,0]  in complete basis
Polynomial number  29  in truncated basis <-> polynomial number  29  =  [6,1]  in complete basis
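The LinearEnumerateFunction orders multi-indices by increasing total degree. For dimension 2, this ordering can be reproduced with a short pure-Python sketch (an illustration of the ordering, not the OpenTURNS implementation):

```python
def linear_enumerate_2d(index):
    """Return the 2D multi-index of rank `index`, ordered by increasing
    total degree; within a given degree, the first component decreases."""
    degree, start = 0, 0
    # Degree d contains d + 1 multi-indices; find the degree block of `index`.
    while start + degree + 1 <= index:
        start += degree + 1
        degree += 1
    i = index - start  # position within the degree block
    return [degree - i, i]

# Ranks 0..5 map to [0,0], [1,0], [0,1], [2,0], [1,1], [0,2],
# matching the complete-basis column printed above.
```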

Get the basis

Get the reduced (i.e. truncated) multivariate basis as a collection of Function objects.

reduced = result.getReducedBasis()

Get the orthogonal basis.

orthgBasis = result.getOrthogonalBasis()

Get the standard distribution \mu of the standardized random vector \vect{Z}.

orthgBasis.getMeasure()
JointDistribution
  • name=JointDistribution
  • dimension: 2
  • description=[X0,X1]
  • copula: IndependentCopula(dimension = 2)
Index Variable Distribution
0 X0 Normal(mu = 0, sigma = 1)
1 X1 Normal(mu = 0, sigma = 1)
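The orthogonal basis is orthonormal with respect to this standard measure. For standard normal marginals, the univariate polynomials are the normalized probabilists' Hermite polynomials, and their orthonormality can be checked numerically with a simple quadrature. This pure-Python sketch uses a midpoint rule (an illustration, not the OpenTURNS machinery):

```python
import math

def phi(z):
    """Standard normal probability density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def psi(k, z):
    """Normalized probabilists' Hermite polynomial He_k(z) / sqrt(k!)."""
    he = [1.0, z, z * z - 1.0][k]
    return he / math.sqrt(math.factorial(k))

def inner(m, n, n_pts=16000, bound=8.0):
    """Approximate E[psi_m(Z) psi_n(Z)] for Z ~ N(0,1) by a midpoint rule."""
    step = 2.0 * bound / n_pts
    total = 0.0
    for i in range(n_pts):
        zm = -bound + (i + 0.5) * step
        total += psi(m, zm) * psi(n, zm) * phi(zm) * step
    return total
```

Numerically, inner(m, n) is close to 1 when m = n and close to 0 otherwise, which is the orthonormality property the chaos expansion relies on.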


Get the metamodel

Get the composed metamodel \hat{h}, which is the surrogate of the model expressed as a function of the standardized random vector \vect{Z} within the reduced polynomial basis.

result.getComposedMetaModel()
DualLinearCombinationEvaluation
  • Input dimension = 2
  • Input description = [x0,x1]
  • Output dimension = 2
  • Size = 30
Coefficient Function
[4.86534,-402.762] 1
[4.9487,-771.507] x0
[1.68801,124.366] x1
[35.5564,-3034.59] -0.707107 + 0.707107 × x02
[-0.16459,-143.875] -0.707107 + 0.707107 × x12
[17.5342,-2890.68] -1.22474 × x0 + 0.408248 × x03
[6.06933,459.936] -1.22474 × x1 + 0.408248 × x13
[-0.280059,1.70741] x0 × x1
[93.8017,-7922.15] 0.612372 - 1.22474 × x02 + 0.204124 × x04
[0.228054,-274.111] 0.612372 - 1.22474 × x12 + 0.204124 × x14
[24.3369,-4443.71] 1.36931 × x0 - 0.912871 × x03 + 0.0912871 × x05
[9.98411,689.86] 1.36931 × x1 - 0.912871 × x13 + 0.0912871 × x15
[0.493588,-13.0266] (-0.707107 + 0.707107 × x02) × x1
[-0.64115,0.747531] x0 × (-0.707107 + 0.707107 × x12)
[116.69,-9821.91] -0.559017 + 1.67705 × x02 - 0.559017 × x04 + 0.0372678 × x06
[0.374689,-228.196] -0.559017 + 1.67705 × x12 - 0.559017 × x14 + 0.0372678 × x16
[15.2094,-3140.01] -1.47902 × x0 + 1.47902 × x03 - 0.295804 × x05 + 0.0140859 × x07
[7.39378,488.626] -1.47902 × x1 + 1.47902 × x13 - 0.295804 × x15 + 0.0140859 × x17
[0.124784,-1.84584] (-1.22474 × x0 + 0.408248 × x03) × x1
[0.441263,4.13773] x0 × (-1.22474 × x1 + 0.408248 × x13)
[0.47873,-0.101163] (-0.707107 + 0.707107 × x02) × (-0.707107 + 0.707107 × x12)
[70.8427,-5952.31] 0.522913 - 2.09165 × x02 + 1.04583 × x04 - 0.139443 × x06 + 0.00498012 × x08
[0.251419,-71.4339] 0.522913 - 2.09165 × x12 + 1.04583 × x14 - 0.139443 × x16 + 0.00498012 × x18
[0.41132,-10.9983] (0.612372 - 1.22474 × x02 + 0.204124 × x04) × x1
[-0.183994,-2.61304] x0 × (0.612372 - 1.22474 × x12 + 0.204124 × x14)
[3.6077,-847.777] 1.56874 × x0 - 2.09165 × x03 + 0.627495 × x05 - 0.0597614 × x07 + 0.00166004 × x09
[2.08506,133.616] 1.56874 × x1 - 2.09165 × x13 + 0.627495 × x15 - 0.0597614 × x17 + 0.00166004 × x19
[-0.352781,5.15538] (-1.22474 × x0 + 0.408248 × x03) × (-0.707107 + 0.707107 × x12)
[-0.197786,4.4498] (-0.707107 + 0.707107 × x02) × (-1.22474 × x1 + 0.408248 × x13)
[16.9217,-1425.89] -0.496078 + 2.48039 × x02 - 1.65359 × x04 + 0.330719 × x06 - 0.0236228 × x08 + 0.000524951 × x010


Get the metamodel \hat{g}, which is the composition of the composed metamodel \hat{h} with the iso-probabilistic transformation T.

result.getMetaModel()
DualLinearCombinationEvaluation
  • Input dimension = 2
  • Input description = [X0,X1]
  • Output dimension = 2
  • Size = 30
Coefficient Function
[4.86534,-402.762] 1
[4.9487,-771.507] x0
[1.68801,124.366] x1
[35.5564,-3034.59] -0.707107 + 0.707107 × x02
[-0.16459,-143.875] -0.707107 + 0.707107 × x12
[17.5342,-2890.68] -1.22474 × x0 + 0.408248 × x03
[6.06933,459.936] -1.22474 × x1 + 0.408248 × x13
[-0.280059,1.70741] x0 × x1
[93.8017,-7922.15] 0.612372 - 1.22474 × x02 + 0.204124 × x04
[0.228054,-274.111] 0.612372 - 1.22474 × x12 + 0.204124 × x14
[24.3369,-4443.71] 1.36931 × x0 - 0.912871 × x03 + 0.0912871 × x05
[9.98411,689.86] 1.36931 × x1 - 0.912871 × x13 + 0.0912871 × x15
[0.493588,-13.0266] (-0.707107 + 0.707107 × x02) × x1
[-0.64115,0.747531] x0 × (-0.707107 + 0.707107 × x12)
[116.69,-9821.91] -0.559017 + 1.67705 × x02 - 0.559017 × x04 + 0.0372678 × x06
[0.374689,-228.196] -0.559017 + 1.67705 × x12 - 0.559017 × x14 + 0.0372678 × x16
[15.2094,-3140.01] -1.47902 × x0 + 1.47902 × x03 - 0.295804 × x05 + 0.0140859 × x07
[7.39378,488.626] -1.47902 × x1 + 1.47902 × x13 - 0.295804 × x15 + 0.0140859 × x17
[0.124784,-1.84584] (-1.22474 × x0 + 0.408248 × x03) × x1
[0.441263,4.13773] x0 × (-1.22474 × x1 + 0.408248 × x13)
[0.47873,-0.101163] (-0.707107 + 0.707107 × x02) × (-0.707107 + 0.707107 × x12)
[70.8427,-5952.31] 0.522913 - 2.09165 × x02 + 1.04583 × x04 - 0.139443 × x06 + 0.00498012 × x08
[0.251419,-71.4339] 0.522913 - 2.09165 × x12 + 1.04583 × x14 - 0.139443 × x16 + 0.00498012 × x18
[0.41132,-10.9983] (0.612372 - 1.22474 × x02 + 0.204124 × x04) × x1
[-0.183994,-2.61304] x0 × (0.612372 - 1.22474 × x12 + 0.204124 × x14)
[3.6077,-847.777] 1.56874 × x0 - 2.09165 × x03 + 0.627495 × x05 - 0.0597614 × x07 + 0.00166004 × x09
[2.08506,133.616] 1.56874 × x1 - 2.09165 × x13 + 0.627495 × x15 - 0.0597614 × x17 + 0.00166004 × x19
[-0.352781,5.15538] (-1.22474 × x0 + 0.408248 × x03) × (-0.707107 + 0.707107 × x12)
[-0.197786,4.4498] (-0.707107 + 0.707107 × x02) × (-1.22474 × x1 + 0.408248 × x13)
[16.9217,-1425.89] -0.496078 + 2.48039 × x02 - 1.65359 × x04 + 0.330719 × x06 - 0.0236228 × x08 + 0.000524951 × x010


Get the projection method

Get the projection strategy: this is the method used to estimate the coefficients, i.e. regression or integration.

algo.getProjectionStrategy()
ProjectionStrategyImplementation
  • coefficients: dimension=30
  • residual: 1.02577e-12
  • relative error: 3.47237e-24
  • measure: Distribution
  • weighted experiment: WeightedExperiment
  • input sample: size= 30 x dimension= 2
  • output sample: size= 30 x dimension= 2
  • weights: dimension= 30
  • design: size= 30
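In the regression case, the coefficients solve a least-squares problem. The following pure-Python sketch illustrates this on a one-dimensional case with the two-term basis {1, z}, solving the 2x2 normal equations by hand (an illustration only; the sample points and model are assumptions):

```python
# Least-squares estimate of the coefficients a in y ≈ a0 * 1 + a1 * z,
# solving the 2x2 normal equations (Psi^T Psi) a = Psi^T y by hand.
z = [-1.0, 0.0, 1.0, 2.0]
y = [2.0 + 3.0 * zi for zi in z]           # data from y = 2 + 3 z (assumed)

# Entries of Psi^T Psi and Psi^T y for the basis psi_0(z) = 1, psi_1(z) = z.
s0 = float(len(z))                         # sum of 1 * 1
s1 = sum(z)                                # sum of 1 * z
s2 = sum(zi * zi for zi in z)              # sum of z * z
b0 = sum(y)                                # sum of 1 * y
b1 = sum(zi * yi for zi, yi in zip(z, y))  # sum of z * y

det = s0 * s2 - s1 * s1
a0 = (s2 * b0 - s1 * b1) / det             # recovers a0 = 2
a1 = (s0 * b1 - s1 * b0) / det             # recovers a1 = 3
```

OpenTURNS solves the same kind of problem, with the design matrix built from the selected multivariate basis evaluated on the standardized input sample.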