Polynomial chaos exploitation

The goal of this example is to create a polynomial chaos expansion using the FunctionalChaosAlgorithm class and to explore the various methods of the FunctionalChaosResult class. In order to understand the different results, let us first review the notation associated with the polynomial chaos expansion.

We first introduce the physical input, output and model:

  • the random vector \vect{X} is the physical input random vector,

  • the output random vector \vect{Y} is the output of the physical model,

  • the physical model g is a function of the physical input:

\vect{Y} = g(\vect{X}).

Then we introduce the iso-probabilistic transformation:

  • the random vector \vect{Z} is the standardized input random vector,

  • the transformation T is the iso-probabilistic transformation mapping from the physical input to the standardized input:

\vect{Z} = T(\vect{X}),

  • the standard distribution \mu of the standardized random vector \vect{Z}.

We expand the model on the multivariate basis:

  • the full (i.e. unselected) multivariate basis is \left(\Psi_k\right)_{k \in \mathbb{N}},

  • the composition (i.e. the multi-index of marginal degrees) of each polynomial \Psi_k of the multivariate basis,

  • the full set of coefficients of the polynomial expansion is the set (\vect{a}_k)_{k \in \mathbb{N}},

  • the composed model h is a function of the standardized random vector \vect{Z}:

\vect{Y} = h(\vect{Z}) = g \circ T^{-1}(\vect{Z})

Then we use a model selection method (e.g. from the LARS class):

  • the set \mathcal{K} \subseteq \mathbb{N} is the set of indices corresponding to the result of the selection method,

  • the truncated (i.e. selected) multivariate basis is \left(\Psi_k\right)_{k \in \mathcal{K}},

  • the selected set of coefficients of the polynomial expansion is the set (\vect{a}_k)_{k \in \mathcal{K}},

  • the composed meta model \hat{h} is the function of the standardized variables based on the truncated multivariate basis \left(\Psi_k\right)_{k \in \mathcal{K}},

  • the meta model is a function of the physical input:

\vect{Y} = \hat{g} (\vect{X}) = \hat{h} \circ T(\vect{X}).

Based on the previous definitions, the composed model is:

h(\vect{Z}) =  \sum_{k \in \mathbb{N}} \vect{a}_k \Psi_k(\vect{Z}),

the composed metamodel is:

\hat{h}(\vect{Z}) = \sum_{k \in \mathcal{K}} \vect{a}_k \Psi_k(\vect{Z}),

and the metamodel is:

\hat{g}(\vect{X}) = \sum_{k \in \mathcal{K}} \vect{a}_k \Psi_k \circ T(\vect{X}).

The three previous mathematical functions are implemented as instances of the Function class.

In this example, we create a global approximation of the model response using a functional chaos expansion and expose the associated results:

  • the composed model h: \vect{Z} \longrightarrow \vect{Y} = g \circ T^{-1}(\vect{Z}), which is the model of the standardized variables \vect{Z}. We have \displaystyle h = \sum_{k \in \mathbb{N}} \vect{a}_k \Psi_k,

  • the coefficients of the polynomial approximation: (\vect{a}_k)_{k \in \mathcal{K}},

  • the composed meta model \hat{h}, which is the model of the standardized variables restricted to the truncated multivariate basis (\Psi_k)_{k \in \mathcal{K}}. We have \displaystyle \hat{h} = \sum_{k \in \mathcal{K}} \vect{a}_k \Psi_k,

  • the meta model \displaystyle \hat{g} : \vect{X} \longrightarrow \vect{Y} = \hat{h} \circ T(\vect{X}), which is the polynomial chaos approximation as a Function. We have \displaystyle \hat{g} = \sum_{k \in \mathcal{K}} \vect{a}_k \Psi_k \circ T,

  • the truncated multivariate basis (\Psi_k)_{k \in \mathcal{K}},

  • the set of indices \mathcal{K},

  • the composition (i.e. the multi-index) of each polynomial \Psi_k of the truncated multivariate basis,

  • the distribution \mu of the standardized variables \vect{Z}.

import openturns as ot

ot.Log.Show(ot.Log.NONE)

Prepare some input and output samples. We define a model which has two inputs and two outputs. Then we define a normal input random vector with independent marginals, and we generate a sample from the input random vector. Finally, we evaluate the output sample from the model.

ot.RandomGenerator.SetSeed(0)
dimension = 2
input_names = ["x1", "x2"]
formulas = ["cos(x1 + x2)", "(x2 + 1) * exp(x1 - 2 * x2)"]
model = ot.SymbolicFunction(input_names, formulas)
distribution = ot.Normal(dimension)
x = distribution.getSample(30)
y = model(x)

Create a functional chaos algorithm.

algo = ot.FunctionalChaosAlgorithm(x, y)

The previous constructor is the simplest since it has only two input arguments. In this case, the algorithm must infer the input distribution from the x sample, which can be difficult, especially when the sample size is small. Notice, however, that the input distribution is known in our simple case. This is why we update the previous script and pass the input distribution as a third argument of the constructor.

algo = ot.FunctionalChaosAlgorithm(x, y, distribution)
algo.run()
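
As a side note (not part of the original example), recent OpenTURNS versions expose the static method FunctionalChaosAlgorithm.BuildDistribution, which can be called directly to inspect the distribution that the two-argument constructor would otherwise infer from the sample. The snippet below is a minimal sketch; the availability of this method depends on the OpenTURNS version.

# Sketch: inspect the distribution inferred from the input sample alone.
# BuildDistribution may not exist in older OpenTURNS releases.
inferred_distribution = ot.FunctionalChaosAlgorithm.BuildDistribution(x)
print(inferred_distribution)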

Get the result.

result = algo.getResult()
result
  • input dimension: 2
  • output dimension: 2
  • distribution dimension: 2
  • transformation: 2 -> 2
  • inverse transformation: 2 -> 2
  • orthogonal basis dimension: 2
  • indices size: 33
  • relative errors: [1.67645e-29,3.00023e-28]
  • residuals: [4.19008e-16,9.53483e-15]
Index Multi-index Coeff.#0 Coeff.#1
0 [0,0] 0.003044029 1.453744
1 [1,0] -0.7662159 -8.213236
2 [0,1] -0.2309922 -5.143131
3 [2,0] -1.382805 12.49095
4 [0,2] -1.064142 -3.434068
5 [3,0] -2.159387 -25.72925
6 [0,3] -1.016437 -37.68955
7 [1,1] -0.3250412 8.681302
8 [4,0] -0.9074029 22.5756
9 [0,4] -1.625116 -9.308155
10 [5,0] -2.167086 -49.31879
11 [0,5] -2.250736 -66.24227
12 [2,1] -0.173514 3.055715
13 [1,2] 0.1128078 -13.65439
14 [6,0] 1.276335 14.73564
15 [0,6] -1.21493 -38.304
16 [7,0] -0.717338 -40.98679
17 [0,7] -2.132556 -43.57289
18 [3,1] -0.136081 1.656592
19 [1,3] 0.4362043 13.44593
20 [2,2] 0.4283277 -0.5028248
21 [8,0] 2.264666 -1.318424
22 [0,8] -0.03839479 -49.45262
23 [4,1] -0.1157121 2.091355
24 [1,4] -0.02997813 -7.302498
25 [9,0] -0.005127433 -12.17123
26 [0,9] -0.7380268 -9.394012
27 [3,2] 0.1536367 -1.629815
28 [2,3] 0.02009974 -0.1488457
29 [10,0] 0.8816912 -3.421535
30 [0,10] 0.181939 -19.66619
31 [5,1] -0.2689637 1.728556
32 [1,5] 0.04365453 4.732682
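
The relative errors and residuals shown in the summary above can also be retrieved programmatically. This short snippet is an addition for illustration; both accessors belong to the FunctionalChaosResult class.

print(result.getResiduals())
print(result.getRelativeErrors())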


In the next cells, we observe several methods of the result object. First, we get the polynomial chaos coefficients.

result.getCoefficients()
        v0            v1
 0      0.003044029   1.453744
 1     -0.7662159    -8.213236
 2     -0.2309922    -5.143131
 3     -1.382805     12.49095
 4     -1.064142     -3.434068
 5     -2.159387     -25.72925
 6     -1.016437     -37.68955
 7     -0.3250412     8.681302
 8     -0.9074029    22.5756
 9     -1.625116     -9.308155
10     -2.167086     -49.31879
11     -2.250736     -66.24227
12     -0.173514      3.055715
13      0.1128078    -13.65439
14      1.276335     14.73564
15     -1.21493      -38.304
16     -0.717338     -40.98679
17     -2.132556     -43.57289
18     -0.136081      1.656592
19      0.4362043    13.44593
20      0.4283277    -0.5028248
21      2.264666     -1.318424
22     -0.03839479   -49.45262
23     -0.1157121     2.091355
24     -0.02997813   -7.302498
25     -0.005127433  -12.17123
26     -0.7380268    -9.394012
27      0.1536367    -1.629815
28      0.02009974   -0.1488457
29      0.8816912    -3.421535
30      0.181939     -19.66619
31     -0.2689637     1.728556
32      0.04365453    4.732682


Get the coefficients associated with the i-th element of the basis: the result is a point whose components are the coefficients of that basis term for each output marginal.

i = 1
result.getCoefficients()[i]

[-0.766216,-8.21324]
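
If instead we want all the coefficients of a single output marginal, we can extract the corresponding component of the coefficients sample. This short sketch is added for illustration and simply uses the getMarginal method of Sample.

coefficients = result.getCoefficients()
# Coefficients of the first output marginal (column v0 of the table above).
coefficients_y0 = coefficients.getMarginal(0)
print(coefficients_y0[0:5])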



Get the indices of the selected polynomials.

subsetK = result.getIndices()
subsetK

[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32]#33



Get the composition of the polynomials of the truncated multivariate basis.

for i in range(subsetK.getSize()):
    print(
        "Polynomial number ",
        i,
        " in truncated basis <-> polynomial number ",
        subsetK[i],
        " = ",
        ot.LinearEnumerateFunction(dimension)(subsetK[i]),
        " in complete basis",
    )
Polynomial number  0  in truncated basis <-> polynomial number  0  =  [0,0]  in complete basis
Polynomial number  1  in truncated basis <-> polynomial number  1  =  [1,0]  in complete basis
Polynomial number  2  in truncated basis <-> polynomial number  2  =  [0,1]  in complete basis
Polynomial number  3  in truncated basis <-> polynomial number  3  =  [2,0]  in complete basis
Polynomial number  4  in truncated basis <-> polynomial number  4  =  [1,1]  in complete basis
Polynomial number  5  in truncated basis <-> polynomial number  5  =  [0,2]  in complete basis
Polynomial number  6  in truncated basis <-> polynomial number  6  =  [3,0]  in complete basis
Polynomial number  7  in truncated basis <-> polynomial number  7  =  [2,1]  in complete basis
Polynomial number  8  in truncated basis <-> polynomial number  8  =  [1,2]  in complete basis
Polynomial number  9  in truncated basis <-> polynomial number  9  =  [0,3]  in complete basis
Polynomial number  10  in truncated basis <-> polynomial number  10  =  [4,0]  in complete basis
Polynomial number  11  in truncated basis <-> polynomial number  11  =  [3,1]  in complete basis
Polynomial number  12  in truncated basis <-> polynomial number  12  =  [2,2]  in complete basis
Polynomial number  13  in truncated basis <-> polynomial number  13  =  [1,3]  in complete basis
Polynomial number  14  in truncated basis <-> polynomial number  14  =  [0,4]  in complete basis
Polynomial number  15  in truncated basis <-> polynomial number  15  =  [5,0]  in complete basis
Polynomial number  16  in truncated basis <-> polynomial number  16  =  [4,1]  in complete basis
Polynomial number  17  in truncated basis <-> polynomial number  17  =  [3,2]  in complete basis
Polynomial number  18  in truncated basis <-> polynomial number  18  =  [2,3]  in complete basis
Polynomial number  19  in truncated basis <-> polynomial number  19  =  [1,4]  in complete basis
Polynomial number  20  in truncated basis <-> polynomial number  20  =  [0,5]  in complete basis
Polynomial number  21  in truncated basis <-> polynomial number  21  =  [6,0]  in complete basis
Polynomial number  22  in truncated basis <-> polynomial number  22  =  [5,1]  in complete basis
Polynomial number  23  in truncated basis <-> polynomial number  23  =  [4,2]  in complete basis
Polynomial number  24  in truncated basis <-> polynomial number  24  =  [3,3]  in complete basis
Polynomial number  25  in truncated basis <-> polynomial number  25  =  [2,4]  in complete basis
Polynomial number  26  in truncated basis <-> polynomial number  26  =  [1,5]  in complete basis
Polynomial number  27  in truncated basis <-> polynomial number  27  =  [0,6]  in complete basis
Polynomial number  28  in truncated basis <-> polynomial number  28  =  [7,0]  in complete basis
Polynomial number  29  in truncated basis <-> polynomial number  29  =  [6,1]  in complete basis
Polynomial number  30  in truncated basis <-> polynomial number  30  =  [5,2]  in complete basis
Polynomial number  31  in truncated basis <-> polynomial number  31  =  [4,3]  in complete basis
Polynomial number  32  in truncated basis <-> polynomial number  32  =  [3,4]  in complete basis

Get the reduced (i.e. selected) multivariate basis as a collection of Function objects.

reduced = result.getReducedBasis()
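
For illustration (not part of the original example), each element of this collection is a Function that can be printed or evaluated, assuming the returned collection supports Python indexing; the evaluation point below is an arbitrary point of the standardized space.

print(reduced[0])               # the constant term of the basis
print(reduced[1])               # the second selected basis function
print(reduced[1]([0.5, -0.5]))  # its value at an arbitrary standardized point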

Get the orthogonal basis.

orthgBasis = result.getOrthogonalBasis()

Get the standard distribution \mu of the standardized random vector \vect{Z}.

orthgBasis.getMeasure()

ComposedDistribution(Normal(mu = 0, sigma = 1), Normal(mu = 0, sigma = 1), IndependentCopula(dimension = 2))
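
As a numerical aside (added for illustration), we can check that the basis polynomials are orthonormal with respect to \mu: over a large sample drawn from \mu, the empirical mean of \Psi_1^2 should be close to 1 and the empirical mean of \Psi_1 \Psi_2 close to 0. We assume here that the build method of the orthogonal basis returns the k-th polynomial as a Function.

import numpy as np

sample_z = orthgBasis.getMeasure().getSample(100000)
psi_1 = orthgBasis.build(1)  # degree-1 polynomial in the first standardized variable
psi_2 = orthgBasis.build(2)  # degree-1 polynomial in the second standardized variable
values_1 = np.array(psi_1(sample_z)).flatten()
values_2 = np.array(psi_2(sample_z)).flatten()
print(np.mean(values_1 * values_1))  # close to 1 (normalization)
print(np.mean(values_1 * values_2))  # close to 0 (orthogonality)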



Get the composed meta model \hat{h}, which is the model of the standardized random vector \vect{Z} within the reduced polynomial basis.

result.getComposedMetaModel()

[0.00304403,1.45374] + [-0.766216,-8.21324] * x0 + [-0.230992,-5.14313] * x1 + [-1.3828,12.4909] * (-0.707107 + 0.707107 * x0^2) + [-1.06414,-3.43407] * (-0.707107 + 0.707107 * x1^2) + [-2.15939,-25.7292] * (-1.22474 * x0 + 0.408248 * x0^3) + [-1.01644,-37.6896] * (-1.22474 * x1 + 0.408248 * x1^3) + [-0.325041,8.6813] * ((x0) * (x1)) + [-0.907403,22.5756] * (0.612372 - 1.22474 * x0^2 + 0.204124 * x0^4) + [-1.62512,-9.30815] * (0.612372 - 1.22474 * x1^2 + 0.204124 * x1^4) + [-2.16709,-49.3188] * (1.36931 * x0 - 0.912871 * x0^3 + 0.0912871 * x0^5) + [-2.25074,-66.2423] * (1.36931 * x1 - 0.912871 * x1^3 + 0.0912871 * x1^5) + [-0.173514,3.05571] * ((-0.707107 + 0.707107 * x0^2) * (x1)) + [0.112808,-13.6544] * ((x0) * (-0.707107 + 0.707107 * x1^2)) + [1.27634,14.7356] * (-0.559017 + 1.67705 * x0^2 - 0.559017 * x0^4 + 0.0372678 * x0^6) + [-1.21493,-38.304] * (-0.559017 + 1.67705 * x1^2 - 0.559017 * x1^4 + 0.0372678 * x1^6) + [-0.717338,-40.9868] * (-1.47902 * x0 + 1.47902 * x0^3 - 0.295804 * x0^5 + 0.0140859 * x0^7) + [-2.13256,-43.5729] * (-1.47902 * x1 + 1.47902 * x1^3 - 0.295804 * x1^5 + 0.0140859 * x1^7) + [-0.136081,1.65659] * ((-1.22474 * x0 + 0.408248 * x0^3) * (x1)) + [0.436204,13.4459] * ((x0) * (-1.22474 * x1 + 0.408248 * x1^3)) + [0.428328,-0.502825] * ((-0.707107 + 0.707107 * x0^2) * (-0.707107 + 0.707107 * x1^2)) + [2.26467,-1.31842] * (0.522913 - 2.09165 * x0^2 + 1.04583 * x0^4 - 0.139443 * x0^6 + 0.00498012 * x0^8) + [-0.0383948,-49.4526] * (0.522913 - 2.09165 * x1^2 + 1.04583 * x1^4 - 0.139443 * x1^6 + 0.00498012 * x1^8) + [-0.115712,2.09135] * ((0.612372 - 1.22474 * x0^2 + 0.204124 * x0^4) * (x1)) + [-0.0299781,-7.3025] * ((x0) * (0.612372 - 1.22474 * x1^2 + 0.204124 * x1^4)) + [-0.00512743,-12.1712] * (1.56874 * x0 - 2.09165 * x0^3 + 0.627495 * x0^5 - 0.0597614 * x0^7 + 0.00166004 * x0^9) + [-0.738027,-9.39401] * (1.56874 * x1 - 2.09165 * x1^3 + 0.627495 * x1^5 - 0.0597614 * x1^7 + 0.00166004 * x1^9) + [0.153637,-1.62981] * ((-1.22474 * x0 + 0.408248 * x0^3) * (-0.707107 + 0.707107 * x1^2)) + [0.0200997,-0.148846] * ((-0.707107 + 0.707107 * x0^2) * (-1.22474 * x1 + 0.408248 * x1^3)) + [0.881691,-3.42154] * (-0.496078 + 2.48039 * x0^2 - 1.65359 * x0^4 + 0.330719 * x0^6 - 0.0236228 * x0^8 + 0.000524951 * x0^10) + [0.181939,-19.6662] * (-0.496078 + 2.48039 * x1^2 - 1.65359 * x1^4 + 0.330719 * x1^6 - 0.0236228 * x1^8 + 0.000524951 * x1^10) + [-0.268964,1.72856] * ((1.36931 * x0 - 0.912871 * x0^3 + 0.0912871 * x0^5) * (x1)) + [0.0436545,4.73268] * ((x0) * (1.36931 * x1 - 0.912871 * x1^3 + 0.0912871 * x1^5))
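
The composed meta model can be evaluated like any other Function on a point of the standardized space. The check below is added for illustration and uses an arbitrary training point; note that in this example T is the identity, since the physical input is already a standard normal vector.

composedMetamodel = result.getComposedMetaModel()
transformation = result.getTransformation()  # here T is the identity map
x_point = x[0]
print(composedMetamodel(transformation(x_point)))

This evaluates \hat{h}(T(\vect{x})), i.e. the value that the meta model \hat{g}, introduced next, returns at the physical point \vect{x}.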



Get the meta model \hat{g}, which is the composition of the composed meta model \hat{h} with the iso-probabilistic transformation T.

result.getMetaModel()

([0.00304403,1.45374] + [-0.766216,-8.21324] * x0 + [-0.230992,-5.14313] * x1 + [-1.3828,12.4909] * (-0.707107 + 0.707107 * x0^2) + [-1.06414,-3.43407] * (-0.707107 + 0.707107 * x1^2) + [-2.15939,-25.7292] * (-1.22474 * x0 + 0.408248 * x0^3) + [-1.01644,-37.6896] * (-1.22474 * x1 + 0.408248 * x1^3) + [-0.325041,8.6813] * ((x0) * (x1)) + [-0.907403,22.5756] * (0.612372 - 1.22474 * x0^2 + 0.204124 * x0^4) + [-1.62512,-9.30815] * (0.612372 - 1.22474 * x1^2 + 0.204124 * x1^4) + [-2.16709,-49.3188] * (1.36931 * x0 - 0.912871 * x0^3 + 0.0912871 * x0^5) + [-2.25074,-66.2423] * (1.36931 * x1 - 0.912871 * x1^3 + 0.0912871 * x1^5) + [-0.173514,3.05571] * ((-0.707107 + 0.707107 * x0^2) * (x1)) + [0.112808,-13.6544] * ((x0) * (-0.707107 + 0.707107 * x1^2)) + [1.27634,14.7356] * (-0.559017 + 1.67705 * x0^2 - 0.559017 * x0^4 + 0.0372678 * x0^6) + [-1.21493,-38.304] * (-0.559017 + 1.67705 * x1^2 - 0.559017 * x1^4 + 0.0372678 * x1^6) + [-0.717338,-40.9868] * (-1.47902 * x0 + 1.47902 * x0^3 - 0.295804 * x0^5 + 0.0140859 * x0^7) + [-2.13256,-43.5729] * (-1.47902 * x1 + 1.47902 * x1^3 - 0.295804 * x1^5 + 0.0140859 * x1^7) + [-0.136081,1.65659] * ((-1.22474 * x0 + 0.408248 * x0^3) * (x1)) + [0.436204,13.4459] * ((x0) * (-1.22474 * x1 + 0.408248 * x1^3)) + [0.428328,-0.502825] * ((-0.707107 + 0.707107 * x0^2) * (-0.707107 + 0.707107 * x1^2)) + [2.26467,-1.31842] * (0.522913 - 2.09165 * x0^2 + 1.04583 * x0^4 - 0.139443 * x0^6 + 0.00498012 * x0^8) + [-0.0383948,-49.4526] * (0.522913 - 2.09165 * x1^2 + 1.04583 * x1^4 - 0.139443 * x1^6 + 0.00498012 * x1^8) + [-0.115712,2.09135] * ((0.612372 - 1.22474 * x0^2 + 0.204124 * x0^4) * (x1)) + [-0.0299781,-7.3025] * ((x0) * (0.612372 - 1.22474 * x1^2 + 0.204124 * x1^4)) + [-0.00512743,-12.1712] * (1.56874 * x0 - 2.09165 * x0^3 + 0.627495 * x0^5 - 0.0597614 * x0^7 + 0.00166004 * x0^9) + [-0.738027,-9.39401] * (1.56874 * x1 - 2.09165 * x1^3 + 0.627495 * x1^5 - 0.0597614 * x1^7 + 0.00166004 * x1^9) + [0.153637,-1.62981] * ((-1.22474 * x0 + 0.408248 * x0^3) * (-0.707107 + 0.707107 * x1^2)) + [0.0200997,-0.148846] * ((-0.707107 + 0.707107 * x0^2) * (-1.22474 * x1 + 0.408248 * x1^3)) + [0.881691,-3.42154] * (-0.496078 + 2.48039 * x0^2 - 1.65359 * x0^4 + 0.330719 * x0^6 - 0.0236228 * x0^8 + 0.000524951 * x0^10) + [0.181939,-19.6662] * (-0.496078 + 2.48039 * x1^2 - 1.65359 * x1^4 + 0.330719 * x1^6 - 0.0236228 * x1^8 + 0.000524951 * x1^10) + [-0.268964,1.72856] * ((1.36931 * x0 - 0.912871 * x0^3 + 0.0912871 * x0^5) * (x1)) + [0.0436545,4.73268] * ((x0) * (1.36931 * x1 - 0.912871 * x1^3 + 0.0912871 * x1^5)))o((| y0 = [x0]->[x0]
| y1 = [x1]->[x1]
)o(class=LinearEvaluation name=Unnamed center=[0,0] constant=[0,0] linear=[[ 1 0 ]
[ 0 1 ]]))
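
As a quick sanity check (added for illustration), we can compare the model and the metamodel on a few training points: the very small residuals reported above suggest a near-exact fit on the training data, while the accuracy on new points would require a separate validation study.

metamodel = result.getMetaModel()
x_check = x[0:5]  # a few points from the training sample
print(model(x_check))
print(metamodel(x_check))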



Get the projection strategy: this is the method used to estimate the coefficients, i.e. least-squares regression or integration. A sketch showing how to set this strategy explicitly is given after the output below.

algo.getProjectionStrategy()

class=ProjectionStrategy implementation=class=LeastSquaresStrategy experiment=class=FixedExperiment name=Unnamed sample=class=Sample name=Normal implementation=class=SampleImplementation name=Normal size=30 dimension=2 description=[X0,X1] data=[[0.608202,-1.26617],[-0.438266,1.20548],[-2.18139,0.350042],[-0.355007,1.43725],[0.810668,0.793156],[-0.470526,0.261018],[-2.29006,-1.28289],[-1.31178,-0.0907838],[0.995793,-0.139453],[-0.560206,0.44549],[0.322925,0.445785],[-1.03808,-0.856712],[0.473617,-0.125498],[0.351418,1.78236],[0.0702074,-0.781366],[-0.721533,-0.241223],[-1.78796,0.40136],[1.36783,1.00434],[0.741548,-0.0436123],[0.539345,0.29995],[0.407717,-0.485112],[-0.382992,-0.752817],[0.257926,1.96876],[-0.671291,1.85579],[0.0521593,0.790446],[0.716353,-0.743622],[0.184356,-1.53073],[0.655027,0.538071],[1.73821,-0.958722],[0.377922,-0.181004]] weights=class=Point name=Unnamed dimension=30 values=[0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333,0.0333333]
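
To make these choices explicit instead of relying on the defaults, an adaptive strategy and a projection strategy can be passed to the constructor. The sketch below (added for illustration) uses a fixed tensor-product Hermite basis truncated at total degree 5 and a least-squares projection; the degree and the basis are illustrative choices, not the settings used above.

multivariateBasis = ot.OrthogonalProductPolynomialFactory([ot.HermiteFactory()] * dimension)
totalDegree = 5  # illustrative truncation degree
enumerateFunction = multivariateBasis.getEnumerateFunction()
basisSize = enumerateFunction.getStrataCumulatedCardinal(totalDegree)
adaptiveStrategy = ot.FixedStrategy(multivariateBasis, basisSize)
projectionStrategy = ot.LeastSquaresStrategy()
algo2 = ot.FunctionalChaosAlgorithm(x, y, distribution, adaptiveStrategy, projectionStrategy)
algo2.run()
print(algo2.getResult().getResiduals())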