Polynomial chaos exploitation

The goal of this example is to create a polynomial chaos expansion using the FunctionalChaosAlgorithm class and to explore the methods of the FunctionalChaosResult class. In order to understand the different results, let us first review the notation associated with the polynomial chaos expansion.

We first introduce the physical input, output and model:

  • the random vector \vect{X} is the physical input random vector,

  • the output random vector \vect{Y} is the output of the physical model,

  • the physical model g is a function of the physical input:

\vect{Y} = g(\vect{X}).

Then we introduce the iso-probabilistic transformation:

  • the random vector \vect{Z} is the standardized input random vector,

  • the transformation T is the iso-probabilistic transformation mapping the physical input to the standardized input (see the sketch after this list):

\vect{Z} = T(\vect{X}),

  • the standard distribution \mu of the standardized random vector \vect{Z}.
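
In OpenTURNS, this transformation can be built explicitly with the DistributionTransformation class. The following is a minimal sketch with a hypothetical non-standard input distribution; it is not needed in the rest of this example, where the FunctionalChaosAlgorithm class manages T internally.

import openturns as ot

# Minimal sketch: map a hypothetical physical input distribution to its standard space.
physical_distribution = ot.Normal([1.0, 2.0], [3.0, 4.0], ot.CorrelationMatrix(2))
standard_distribution = physical_distribution.getStandardDistribution()
T = ot.DistributionTransformation(physical_distribution, standard_distribution)
z = T([1.0, 2.0])  # standardized counterpart of a physical point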

We expand the model on the multivariate basis:

  • the full (i.e. unselected) multivariate basis is \left(\Psi_k\right)_{k \in \mathbb{N}},

  • each multivariate polynomial \Psi_k is a product of univariate orthogonal polynomials, whose composition is described by a multi-index gathering the marginal degrees,

  • the full set of coefficients of the polynomial expansion is the set (\vect{a}_k)_{k \in \mathbb{N}},

  • the composed model h is a function of the standardized random vector \vect{Z}:

\vect{Y} = h(\vect{Z}) = g \circ T^{-1}(\vect{Z}).

Then we use a model selection method (e.g. from the LARS class):

  • the set \mathcal{K} \subseteq \mathbb{N} is the set of indices corresponding to the result of the selection method,

  • the truncated (i.e. selected) multivariate basis is \left(\Psi_k\right)_{k \in \mathcal{K}},

  • the selected set of coefficients of the polynomial expansion is the set (\vect{a}_k)_{k \in \mathcal{K}},

  • the composed metamodel \hat{h} is the function of the standardized variables built on the truncated multivariate basis \left(\Psi_k\right)_{k \in \mathcal{K}},

  • the metamodel is a function of the physical input:

\vect{Y} = \hat{g} (\vect{X}) = \hat{h} \circ T(\vect{X}).

Based on the previous definitions, the composed model is:

h(\vect{Z}) =  \sum_{k \in \mathbb{N}} \vect{a}_k \Psi_k(\vect{Z}),

the composed metamodel is:

\hat{h}(\vect{Z}) = \sum_{k \in \mathcal{K}} \vect{a}_k \Psi_k(\vect{Z}),

and the metamodel is:

\hat{g}(\vect{X}) = \sum_{k \in \mathcal{K}} \vect{a}_k \Psi_k \circ T(\vect{X}).

The three previous mathematical functions are implemented as instances of the Function class.
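
As a preview, here is a minimal sketch showing how to retrieve them, assuming the FunctionalChaosResult object named result built below:

h = result.getComposedModel()  # the composed model h
h_hat = result.getComposedMetaModel()  # the composed metamodel
g_hat = result.getMetaModel()  # the metamodel, in the physical space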

Create the polynomial chaos expansion

import openturns as ot

ot.Log.Show(ot.Log.NONE)

Prepare some input and output samples. We define a model which has two inputs and two outputs. Then we define a normal input random vector with independent marginals and generate a sample from it. Finally, we evaluate the model on the input sample to get the output sample.

ot.RandomGenerator.SetSeed(0)
dimension = 2  # number of inputs
input_names = ["x1", "x2"]
formulas = ["cos(x1 + x2)", "(x2 + 1) * exp(x1 - 2 * x2)"]
# The model has two inputs and two outputs
model = ot.SymbolicFunction(input_names, formulas)
# Standard bivariate normal input with independent marginals
distribution = ot.Normal(dimension)
x = distribution.getSample(30)
y = model(x)

Create a functional chaos algorithm.

algo = ot.FunctionalChaosAlgorithm(x, y)

The previous constructor is the simplest, since it has only two input arguments. In this case, the algorithm has to infer the distribution from the x sample, which can be difficult, especially when the sample size is small. Notice, however, that the input distribution is known in our simple case. This is why we update the previous script and pass the input distribution as a third input argument of the constructor.

algo = ot.FunctionalChaosAlgorithm(x, y, distribution)
algo.run()
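
For finer control, the constructor also accepts explicit adaptive and projection strategies. The following sketch, which is not used in the rest of this example, shows one way to spell everything out with a tensorized Hermite basis truncated at total degree 5:

# Sketch of the fully-specified constructor (not used below).
multivariateBasis = ot.OrthogonalProductPolynomialFactory([ot.HermiteFactory()] * dimension)
enumerateFunction = multivariateBasis.getEnumerateFunction()
basisSize = enumerateFunction.getStrataCumulatedCardinal(5)  # keep total degree <= 5
adaptiveStrategy = ot.FixedStrategy(multivariateBasis, basisSize)
projectionStrategy = ot.LeastSquaresStrategy()
algo_full = ot.FunctionalChaosAlgorithm(x, y, distribution, adaptiveStrategy, projectionStrategy)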

Get the result.

result = algo.getResult()
result
FunctionalChaosResult
  • input dimension: 2
  • output dimension: 2
  • distribution dimension: 2
  • transformation: 2 -> 2
  • inverse transformation: 2 -> 2
  • orthogonal basis dimension: 2
  • indices size: 33
  • relative errors: [1.67645e-29,3.00023e-28]
  • residuals: [4.19008e-16,9.53483e-15]
Index Multi-index Coeff.#0 Coeff.#1
0 [0,0] 0.003044029 1.453744
1 [1,0] -0.7662159 -8.213236
2 [0,1] -0.2309922 -5.143131
3 [2,0] -1.382805 12.49095
4 [0,2] -1.064142 -3.434068
5 [3,0] -2.159387 -25.72925
6 [0,3] -1.016437 -37.68955
7 [1,1] -0.3250412 8.681302
8 [4,0] -0.9074029 22.5756
9 [0,4] -1.625116 -9.308155
10 [5,0] -2.167086 -49.31879
11 [0,5] -2.250736 -66.24227
12 [2,1] -0.173514 3.055715
13 [1,2] 0.1128078 -13.65439
14 [6,0] 1.276335 14.73564
15 [0,6] -1.21493 -38.304
16 [7,0] -0.717338 -40.98679
17 [0,7] -2.132556 -43.57289
18 [3,1] -0.136081 1.656592
19 [1,3] 0.4362043 13.44593
20 [2,2] 0.4283277 -0.5028248
21 [8,0] 2.264666 -1.318424
22 [0,8] -0.03839479 -49.45262
23 [4,1] -0.1157121 2.091355
24 [1,4] -0.02997813 -7.302498
25 [9,0] -0.005127433 -12.17123
26 [0,9] -0.7380268 -9.394012
27 [3,2] 0.1536367 -1.629815
28 [2,3] 0.02009974 -0.1488457
29 [10,0] 0.8816912 -3.421535
30 [0,10] 0.181939 -19.66619
31 [5,1] -0.2689637 1.728556
32 [1,5] 0.04365453 4.732682


Get the coefficients

In the next cells, we explore several methods of the result object. First, we get the polynomial chaos coefficients.

result.getCoefficients()
      v0            v1
0     0.003044029   1.453744
1     -0.7662159    -8.213236
2     -0.2309922    -5.143131
3     -1.382805     12.49095
4     -1.064142     -3.434068
5     -2.159387     -25.72925
6     -1.016437     -37.68955
7     -0.3250412    8.681302
8     -0.9074029    22.5756
9     -1.625116     -9.308155
10    -2.167086     -49.31879
11    -2.250736     -66.24227
12    -0.173514     3.055715
13    0.1128078     -13.65439
14    1.276335      14.73564
15    -1.21493      -38.304
16    -0.717338     -40.98679
17    -2.132556     -43.57289
18    -0.136081     1.656592
19    0.4362043     13.44593
20    0.4283277     -0.5028248
21    2.264666      -1.318424
22    -0.03839479   -49.45262
23    -0.1157121    2.091355
24    -0.02997813   -7.302498
25    -0.005127433  -12.17123
26    -0.7380268    -9.394012
27    0.1536367     -1.629815
28    0.02009974    -0.1488457
29    0.8816912     -3.421535
30    0.181939      -19.66619
31    -0.2689637    1.728556
32    0.04365453    4.732682


Extract the coefficients associated with the i-th multivariate basis polynomial: the resulting Point contains one coefficient per output marginal.

i = 1
result.getCoefficients()[i]
class=Point name=Unnamed dimension=2 values=[-0.766216,-8.21324]
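
Conversely, the coefficients of a single output marginal are a column of the Sample. A minimal sketch:

# Sketch: all the coefficients of output marginal 0, as a Sample with one column.
coefficients_output_0 = result.getCoefficients().getMarginal(0)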


Get the indices of the selected polynomials.

subsetK = result.getIndices()
subsetK
[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32]


Get the composition of the polynomials of the truncated multivariate basis.

for i in range(subsetK.getSize()):
    print(
        "Polynomial number ",
        i,
        " in truncated basis <-> polynomial number ",
        subsetK[i],
        " = ",
        ot.LinearEnumerateFunction(dimension)(subsetK[i]),
        " in complete basis",
    )
Polynomial number  0  in truncated basis <-> polynomial number  0  =  [0,0]  in complete basis
Polynomial number  1  in truncated basis <-> polynomial number  1  =  [1,0]  in complete basis
Polynomial number  2  in truncated basis <-> polynomial number  2  =  [0,1]  in complete basis
Polynomial number  3  in truncated basis <-> polynomial number  3  =  [2,0]  in complete basis
Polynomial number  4  in truncated basis <-> polynomial number  4  =  [1,1]  in complete basis
Polynomial number  5  in truncated basis <-> polynomial number  5  =  [0,2]  in complete basis
Polynomial number  6  in truncated basis <-> polynomial number  6  =  [3,0]  in complete basis
Polynomial number  7  in truncated basis <-> polynomial number  7  =  [2,1]  in complete basis
Polynomial number  8  in truncated basis <-> polynomial number  8  =  [1,2]  in complete basis
Polynomial number  9  in truncated basis <-> polynomial number  9  =  [0,3]  in complete basis
Polynomial number  10  in truncated basis <-> polynomial number  10  =  [4,0]  in complete basis
Polynomial number  11  in truncated basis <-> polynomial number  11  =  [3,1]  in complete basis
Polynomial number  12  in truncated basis <-> polynomial number  12  =  [2,2]  in complete basis
Polynomial number  13  in truncated basis <-> polynomial number  13  =  [1,3]  in complete basis
Polynomial number  14  in truncated basis <-> polynomial number  14  =  [0,4]  in complete basis
Polynomial number  15  in truncated basis <-> polynomial number  15  =  [5,0]  in complete basis
Polynomial number  16  in truncated basis <-> polynomial number  16  =  [4,1]  in complete basis
Polynomial number  17  in truncated basis <-> polynomial number  17  =  [3,2]  in complete basis
Polynomial number  18  in truncated basis <-> polynomial number  18  =  [2,3]  in complete basis
Polynomial number  19  in truncated basis <-> polynomial number  19  =  [1,4]  in complete basis
Polynomial number  20  in truncated basis <-> polynomial number  20  =  [0,5]  in complete basis
Polynomial number  21  in truncated basis <-> polynomial number  21  =  [6,0]  in complete basis
Polynomial number  22  in truncated basis <-> polynomial number  22  =  [5,1]  in complete basis
Polynomial number  23  in truncated basis <-> polynomial number  23  =  [4,2]  in complete basis
Polynomial number  24  in truncated basis <-> polynomial number  24  =  [3,3]  in complete basis
Polynomial number  25  in truncated basis <-> polynomial number  25  =  [2,4]  in complete basis
Polynomial number  26  in truncated basis <-> polynomial number  26  =  [1,5]  in complete basis
Polynomial number  27  in truncated basis <-> polynomial number  27  =  [0,6]  in complete basis
Polynomial number  28  in truncated basis <-> polynomial number  28  =  [7,0]  in complete basis
Polynomial number  29  in truncated basis <-> polynomial number  29  =  [6,1]  in complete basis
Polynomial number  30  in truncated basis <-> polynomial number  30  =  [5,2]  in complete basis
Polynomial number  31  in truncated basis <-> polynomial number  31  =  [4,3]  in complete basis
Polynomial number  32  in truncated basis <-> polynomial number  32  =  [3,4]  in complete basis
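
The enumeration also works the other way around: given a multi-index, the EnumerateFunction returns its rank in the complete basis. A minimal sketch with the linear enumeration used above:

enumerateFunction = ot.LinearEnumerateFunction(dimension)
# Sketch: rank of the multi-index [1, 1] in the complete basis.
print(enumerateFunction.inverse([1, 1]))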

Get the basis

Get the reduced (i.e. truncated) multivariate basis as a collection of Function objects.

reduced = result.getReducedBasis()
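
Each member of the reduced basis is a Function that can be evaluated directly. A minimal sketch:

# Sketch: inspect and evaluate the first reduced-basis polynomial.
print(reduced.getSize())
psi_0 = reduced.build(0)  # the constant polynomial, equal to 1
print(psi_0([0.5, -1.0]))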

Get the orthogonal basis.

orthgBasis = result.getOrthogonalBasis()

Get the standard distribution \mu of the standardized random vector \vect{Z}.

orthgBasis.getMeasure()
ComposedDistribution
  • name=ComposedDistribution
  • dimension: 2
  • description=[X0,X1]
  • copula: IndependentCopula(dimension = 2)
Index Variable Distribution
0 X0 Normal(mu = 0, sigma = 1)
1 X1 Normal(mu = 0, sigma = 1)
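
In this example the physical input is already standard normal, so the standard distribution coincides with the input distribution and the transformation T essentially acts as the identity. A minimal sketch to check this:

# Sketch: the transformation T should leave a standardized point (almost) unchanged.
T = result.getTransformation()
print(T([0.5, -1.0]))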


Get the metamodel

Get the composed metamodel \hat{h}, which is the function of the standardized random vector \vect{Z} expanded onto the reduced polynomial basis.

result.getComposedMetaModel()
DualLinearCombinationEvaluation
  • Input dimension = 2
  • Input description = [x0,x1]
  • Output dimension = 2
  • Size = 33
Coefficient Function
[0.00304403,1.45374] 1
[-0.766216,-8.21324] x0
[-0.230992,-5.14313] x1
[-1.3828,12.4909] -0.707107 + 0.707107 × x0^2
[-1.06414,-3.43407] -0.707107 + 0.707107 × x1^2
[-2.15939,-25.7292] -1.22474 × x0 + 0.408248 × x0^3
[-1.01644,-37.6896] -1.22474 × x1 + 0.408248 × x1^3
[-0.325041,8.6813] x0 × x1
[-0.907403,22.5756] 0.612372 - 1.22474 × x0^2 + 0.204124 × x0^4
[-1.62512,-9.30815] 0.612372 - 1.22474 × x1^2 + 0.204124 × x1^4
[-2.16709,-49.3188] 1.36931 × x0 - 0.912871 × x0^3 + 0.0912871 × x0^5
[-2.25074,-66.2423] 1.36931 × x1 - 0.912871 × x1^3 + 0.0912871 × x1^5
[-0.173514,3.05571] (-0.707107 + 0.707107 × x0^2) × x1
[0.112808,-13.6544] x0 × (-0.707107 + 0.707107 × x1^2)
[1.27634,14.7356] -0.559017 + 1.67705 × x0^2 - 0.559017 × x0^4 + 0.0372678 × x0^6
[-1.21493,-38.304] -0.559017 + 1.67705 × x1^2 - 0.559017 × x1^4 + 0.0372678 × x1^6
[-0.717338,-40.9868] -1.47902 × x0 + 1.47902 × x0^3 - 0.295804 × x0^5 + 0.0140859 × x0^7
[-2.13256,-43.5729] -1.47902 × x1 + 1.47902 × x1^3 - 0.295804 × x1^5 + 0.0140859 × x1^7
[-0.136081,1.65659] (-1.22474 × x0 + 0.408248 × x0^3) × x1
[0.436204,13.4459] x0 × (-1.22474 × x1 + 0.408248 × x1^3)
[0.428328,-0.502825] (-0.707107 + 0.707107 × x0^2) × (-0.707107 + 0.707107 × x1^2)
[2.26467,-1.31842] 0.522913 - 2.09165 × x0^2 + 1.04583 × x0^4 - 0.139443 × x0^6 + 0.00498012 × x0^8
[-0.0383948,-49.4526] 0.522913 - 2.09165 × x1^2 + 1.04583 × x1^4 - 0.139443 × x1^6 + 0.00498012 × x1^8
[-0.115712,2.09135] (0.612372 - 1.22474 × x0^2 + 0.204124 × x0^4) × x1
[-0.0299781,-7.3025] x0 × (0.612372 - 1.22474 × x1^2 + 0.204124 × x1^4)
[-0.00512743,-12.1712] 1.56874 × x0 - 2.09165 × x0^3 + 0.627495 × x0^5 - 0.0597614 × x0^7 + 0.00166004 × x0^9
[-0.738027,-9.39401] 1.56874 × x1 - 2.09165 × x1^3 + 0.627495 × x1^5 - 0.0597614 × x1^7 + 0.00166004 × x1^9
[0.153637,-1.62981] (-1.22474 × x0 + 0.408248 × x0^3) × (-0.707107 + 0.707107 × x1^2)
[0.0200997,-0.148846] (-0.707107 + 0.707107 × x0^2) × (-1.22474 × x1 + 0.408248 × x1^3)
[0.881691,-3.42154] -0.496078 + 2.48039 × x0^2 - 1.65359 × x0^4 + 0.330719 × x0^6 - 0.0236228 × x0^8 + 0.000524951 × x0^10
[0.181939,-19.6662] -0.496078 + 2.48039 × x1^2 - 1.65359 × x1^4 + 0.330719 × x1^6 - 0.0236228 × x1^8 + 0.000524951 × x1^10
[-0.268964,1.72856] (1.36931 × x0 - 0.912871 × x0^3 + 0.0912871 × x0^5) × x1
[0.0436545,4.73268] x0 × (1.36931 × x1 - 0.912871 × x1^3 + 0.0912871 × x1^5)
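
Like any Function, the composed metamodel can be evaluated at a point of the standardized space. A minimal sketch with an arbitrary test point:

# Sketch: evaluate the composed metamodel at a standardized point.
composedMetamodel = result.getComposedMetaModel()
z = [0.5, -1.0]  # arbitrary point in the standardized space
print(composedMetamodel(z))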


Get the metamodel \hat{g}, which is the composed metamodel \hat{h} composed with the iso-probabilistic transformation T.

result.getMetaModel()
DualLinearCombinationEvaluation
  • Input dimension = 2
  • Input description = [X0,X1]
  • Output dimension = 2
  • Size = 33
Coefficient Function
[0.00304403,1.45374] 1
[-0.766216,-8.21324] x0
[-0.230992,-5.14313] x1
[-1.3828,12.4909] -0.707107 + 0.707107 × x0^2
[-1.06414,-3.43407] -0.707107 + 0.707107 × x1^2
[-2.15939,-25.7292] -1.22474 × x0 + 0.408248 × x0^3
[-1.01644,-37.6896] -1.22474 × x1 + 0.408248 × x1^3
[-0.325041,8.6813] x0 × x1
[-0.907403,22.5756] 0.612372 - 1.22474 × x0^2 + 0.204124 × x0^4
[-1.62512,-9.30815] 0.612372 - 1.22474 × x1^2 + 0.204124 × x1^4
[-2.16709,-49.3188] 1.36931 × x0 - 0.912871 × x0^3 + 0.0912871 × x0^5
[-2.25074,-66.2423] 1.36931 × x1 - 0.912871 × x1^3 + 0.0912871 × x1^5
[-0.173514,3.05571] (-0.707107 + 0.707107 × x0^2) × x1
[0.112808,-13.6544] x0 × (-0.707107 + 0.707107 × x1^2)
[1.27634,14.7356] -0.559017 + 1.67705 × x0^2 - 0.559017 × x0^4 + 0.0372678 × x0^6
[-1.21493,-38.304] -0.559017 + 1.67705 × x1^2 - 0.559017 × x1^4 + 0.0372678 × x1^6
[-0.717338,-40.9868] -1.47902 × x0 + 1.47902 × x0^3 - 0.295804 × x0^5 + 0.0140859 × x0^7
[-2.13256,-43.5729] -1.47902 × x1 + 1.47902 × x1^3 - 0.295804 × x1^5 + 0.0140859 × x1^7
[-0.136081,1.65659] (-1.22474 × x0 + 0.408248 × x0^3) × x1
[0.436204,13.4459] x0 × (-1.22474 × x1 + 0.408248 × x1^3)
[0.428328,-0.502825] (-0.707107 + 0.707107 × x0^2) × (-0.707107 + 0.707107 × x1^2)
[2.26467,-1.31842] 0.522913 - 2.09165 × x0^2 + 1.04583 × x0^4 - 0.139443 × x0^6 + 0.00498012 × x0^8
[-0.0383948,-49.4526] 0.522913 - 2.09165 × x1^2 + 1.04583 × x1^4 - 0.139443 × x1^6 + 0.00498012 × x1^8
[-0.115712,2.09135] (0.612372 - 1.22474 × x0^2 + 0.204124 × x0^4) × x1
[-0.0299781,-7.3025] x0 × (0.612372 - 1.22474 × x1^2 + 0.204124 × x1^4)
[-0.00512743,-12.1712] 1.56874 × x0 - 2.09165 × x0^3 + 0.627495 × x0^5 - 0.0597614 × x0^7 + 0.00166004 × x0^9
[-0.738027,-9.39401] 1.56874 × x1 - 2.09165 × x1^3 + 0.627495 × x1^5 - 0.0597614 × x1^7 + 0.00166004 × x1^9
[0.153637,-1.62981] (-1.22474 × x0 + 0.408248 × x0^3) × (-0.707107 + 0.707107 × x1^2)
[0.0200997,-0.148846] (-0.707107 + 0.707107 × x0^2) × (-1.22474 × x1 + 0.408248 × x1^3)
[0.881691,-3.42154] -0.496078 + 2.48039 × x0^2 - 1.65359 × x0^4 + 0.330719 × x0^6 - 0.0236228 × x0^8 + 0.000524951 × x0^10
[0.181939,-19.6662] -0.496078 + 2.48039 × x1^2 - 1.65359 × x1^4 + 0.330719 × x1^6 - 0.0236228 × x1^8 + 0.000524951 × x1^10
[-0.268964,1.72856] (1.36931 × x0 - 0.912871 × x0^3 + 0.0912871 × x0^5) × x1
[0.0436545,4.73268] x0 × (1.36931 × x1 - 0.912871 × x1^3 + 0.0912871 × x1^5)
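
To get a rough feeling of the quality of the expansion, the metamodel can be compared with the model on a fresh input sample. A minimal sketch, not a substitute for a proper validation study:

# Sketch: compare model and metamodel predictions on new input points.
metamodel = result.getMetaModel()
x_test = distribution.getSample(5)
print(model(x_test) - metamodel(x_test))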


Get the projection method

Get the projection strategy: this is the method used to estimate the coefficients, i.e. regression or integration.

algo.getProjectionStrategy()
ProjectionStrategyImplementation
  • coefficients: dimension=33
  • residual: 9.53483e-15
  • relative error: 3.00023e-28
  • measure: Distribution
  • weighted experiment: WeightedExperiment
  • input sample: size= 30 x dimension= 2
  • output sample: size= 30 x dimension= 2
  • weights: dimension= 30
  • design: size= 30
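
The projection strategy may also be chosen explicitly when building the algorithm. For instance, a sparse expansion could be obtained with a least-squares strategy driven by LARS-based model selection. A minimal sketch, not run in this example:

# Sketch: least-squares projection strategy with LARS model selection.
selectionFactory = ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut())
sparseProjectionStrategy = ot.LeastSquaresStrategy(selectionFactory)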