.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_meta_modeling/polynomial_chaos_metamodel/plot_functional_chaos.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end `
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_meta_modeling_polynomial_chaos_metamodel_plot_functional_chaos.py:


Create a polynomial chaos metamodel from a data set
===================================================

.. GENERATED FROM PYTHON SOURCE LINES 7-12

In this example, we create a polynomial chaos expansion (PCE) using
a data set.
More precisely, given a pair of input and output samples,
we create a PCE without the knowledge of the input distribution.
In this example, we use a relatively small sample size equal to 80.

.. GENERATED FROM PYTHON SOURCE LINES 14-35

In this example we create a global approximation of a model response using
polynomial chaos expansion.

Let :math:`\vect{g}` be the function defined by:

.. math::
    \vect{g}(\vect{x}) = \Tr{\left(\cos(x_1 + x_2), (x_2 + 1) e^{x_1}\right)}

for any :math:`\vect{x} \in \Rset^2`.

We assume that

.. math::
    X_1 \sim \mathcal{N}(0,1) \textrm{ and } X_2 \sim \mathcal{N}(0,1)

and that :math:`X_1` and :math:`X_2` are independent.

An interesting point in this example is that the output is multivariate.
This is why we are going to use the `getMarginal` method in the script
in order to select the output marginal that we want to manage.

.. GENERATED FROM PYTHON SOURCE LINES 37-39

Simulate input and output samples
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. GENERATED FROM PYTHON SOURCE LINES 41-47

.. code-block:: Python

    import openturns as ot
    import openturns.viewer as viewer
    from matplotlib import pylab as plt

    ot.Log.Show(ot.Log.NONE)

.. GENERATED FROM PYTHON SOURCE LINES 48-49

We first create the function `model`.

.. GENERATED FROM PYTHON SOURCE LINES 51-58

.. code-block:: Python

    ot.RandomGenerator.SetSeed(0)
    input_names = ["x1", "x2"]
    formulas = ["cos(x1 + x2)", "(x2 + 1) * exp(x1)"]
    model = ot.SymbolicFunction(input_names, formulas)
    inputDimension = model.getInputDimension()
    outputDimension = model.getOutputDimension()

.. GENERATED FROM PYTHON SOURCE LINES 59-61

Then we create a sample `inputSample` and compute the corresponding output
sample `outputSample`.

.. GENERATED FROM PYTHON SOURCE LINES 63-68

.. code-block:: Python

    distribution = ot.Normal(inputDimension)
    samplesize = 80
    inputSample = distribution.getSample(samplesize)
    outputSample = model(inputSample)

.. GENERATED FROM PYTHON SOURCE LINES 69-71

Create the PCE
~~~~~~~~~~~~~~

.. GENERATED FROM PYTHON SOURCE LINES 73-83

Create a functional chaos model.
The algorithm needs to fit a distribution on the input sample.
To do this, the algorithm in :class:`~openturns.FunctionalChaosAlgorithm`
uses the :class:`~openturns.FunctionalChaosAlgorithm.BuildDistribution`
static method to fit the distribution to the input sample.
Please read :doc:`Fit a distribution from an input sample `
for an example of this method.
The algorithm does this automatically using the Lilliefors test.
In order to make the algorithm a little faster, we reduce the
value of the sample size used in the Lilliefors test.

.. GENERATED FROM PYTHON SOURCE LINES 85-87

.. code-block:: Python

    ot.ResourceMap.SetAsUnsignedInteger("FittingTest-LillieforsMaximumSamplingSize", 50)

.. GENERATED FROM PYTHON SOURCE LINES 88-91

The main topic of this example is to introduce the next constructor of
:class:`~openturns.FunctionalChaosAlgorithm`.
Notice that the only input arguments are the input and output samples.

.. GENERATED FROM PYTHON SOURCE LINES 91-96

.. code-block:: Python

    algo = ot.FunctionalChaosAlgorithm(inputSample, outputSample)
    algo.run()
    result = algo.getResult()
    result

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    FunctionalChaosResult

    Index  Multi-index  Coeff.#0      Coeff.#1
    0      [0,0]         0.4157338     1.766267
    1      [1,0]         0.2310582     2.017161
    2      [0,1]         0.06674082    1.837368
    3      [2,0]         0.09816137    1.821379
    4      [0,2]        -0.3076637    -0.01464018
    5      [3,0]         0.3951189     1.146608
    6      [0,3]         0.07087581    0.0002386309
    7      [1,1]         0.04728438    1.983242
    8      [4,0]         0.4122558     0.7798714
    9      [0,4]         0.1268529     0.001559385
    10     [5,0]         0.2841537     0.4215949
    11     [0,5]        -0.008199683  -0.002441298
    12     [2,1]         0.05345879    1.437201
    13     [1,2]         0.09992143   -0.02049864
    14     [6,0]         0.19985       0.3551065
    15     [0,6]         0.1080172     0.01664983
    16     [7,0]         0.1479776     0.173851
    17     [0,7]        -0.03474477   -0.00292527
    18     [3,1]         0.2354754     0.7409979
    19     [1,3]         0.6043447     0.04656466
    20     [2,2]         0.4434066    -0.004206286
    21     [8,0]         0.08286844    0.1384384
    22     [0,8]         0.1204243     0.01601914
    23     [4,1]         0.01628032    0.2432426
    24     [1,4]         0.01408954   -0.006884768
    25     [9,0]         0.04082984    0.04282058
    26     [0,9]        -0.009678954   0.0001552316
    27     [3,2]         0.1005659    -0.002783204
    28     [2,3]         0.04742904   -0.002838578
    29     [10,0]        0.01375835    0.02651859
    30     [0,10]        0.05126546    0.006360932
    31     [5,1]        -0.06583213    0.0634149
    32     [1,5]         0.09951267    0.02014581

.. GENERATED FROM PYTHON SOURCE LINES 97-102

Not all coefficients are selected in this PCE.
Indeed, the default constructor of :class:`~openturns.FunctionalChaosAlgorithm`
creates a sparse PCE.
Please read :doc:`Create a full or sparse polynomial chaos expansion `
for more details on this topic.

.. GENERATED FROM PYTHON SOURCE LINES 104-105

Get the metamodel.

.. GENERATED FROM PYTHON SOURCE LINES 105-107

.. code-block:: Python

    metamodel = result.getMetaModel()

.. GENERATED FROM PYTHON SOURCE LINES 108-111

Plot the second output of our model depending on :math:`x_2` with :math:`x_1=0.5`.
In order to do this, we create a `ParametricFunction` and set the value of :math:`x_1`.
Then we use the `getMarginal` method to extract the second output (whose index is equal to 1).

.. GENERATED FROM PYTHON SOURCE LINES 113-131

.. code-block:: Python

    x1index = 0
    x1value = 0.5
    x2min = -3.0
    x2max = 3.0
    outputIndex = 1
    metamodelParametric = ot.ParametricFunction(metamodel, [x1index], [x1value])
    graph = metamodelParametric.getMarginal(outputIndex).draw(x2min, x2max)
    graph.setLegends(["Metamodel"])
    modelParametric = ot.ParametricFunction(model, [x1index], [x1value])
    curve = modelParametric.getMarginal(outputIndex).draw(x2min, x2max).getDrawable(0)
    curve.setColor("red")
    curve.setLegend("Model")
    graph.add(curve)
    graph.setLegendPosition("lower right")
    graph.setXTitle("X2")
    graph.setTitle("Metamodel Validation, output #%d" % (outputIndex))
    view = viewer.View(graph)

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_001.png
   :alt: Metamodel Validation, output #1
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 132-136

We see that the metamodel approximately fits the model, except perhaps for
extreme values of :math:`x_2`.
However, there is a better way of globally validating the metamodel,
using the :class:`~openturns.MetaModelValidation`
on a validation design of experiments.

.. GENERATED FROM PYTHON SOURCE LINES 138-143

.. code-block:: Python

    n_valid = 100
    inputTest = distribution.getSample(n_valid)
    outputTest = model(inputTest)

.. GENERATED FROM PYTHON SOURCE LINES 144-145

Plot the corresponding validation graphics.

.. GENERATED FROM PYTHON SOURCE LINES 147-154

.. code-block:: Python

    metamodelPredictions = metamodel(inputTest)
    val = ot.MetaModelValidation(outputTest, metamodelPredictions)
    r2Score = val.computeR2Score()
    graph = val.drawValidation()
    graph.setTitle("Metamodel validation R2=" + str(r2Score))
    view = viewer.View(graph)

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_002.png
   :alt: Metamodel validation R2=[-3.50512,0.969793]
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 155-159

The coefficient of determination is not extremely satisfactory for the
first output, but it would be sufficient for a central dispersion study.
The second output has a much more satisfactory :math:`R^2`: only one
single extreme point is far from the diagonal of the graphics.

.. GENERATED FROM PYTHON SOURCE LINES 161-163

Compute and print Sobol' indices
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. GENERATED FROM PYTHON SOURCE LINES 165-168

.. code-block:: Python

    chaosSI = ot.FunctionalChaosSobolIndices(result)
    chaosSI

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    FunctionalChaosSobolIndices

    Output marginal: 0

    Input  Variable  Sobol' index  Total index
    0      X0        0.400231      0.888617
    1      X1        0.111383      0.599769

    Index  Multi-index  Part of variance
    19     [1,3]        0.270497
    20     [2,2]        0.145612
    8      [4,0]        0.125871
    5      [3,0]        0.115624
    4      [0,2]        0.070105
    10     [5,0]        0.059800
    18     [3,1]        0.041066
    1      [1,0]        0.039540
    14     [6,0]        0.029580
    16     [7,0]        0.016218
    9      [0,4]        0.011918
    22     [0,8]        0.010740

    Output marginal: 1

    Input  Variable  Sobol' index  Total index
    0      X0        0.491712      0.828208
    1      X1        0.171792      0.508288

    Index  Multi-index  Part of variance
    1      [1,0]        0.207009
    7      [1,1]        0.200105
    2      [0,1]        0.171751
    3      [2,0]        0.168775
    12     [2,1]        0.105085
    5      [3,0]        0.066886
    8      [4,0]        0.030942
    18     [3,1]        0.027935

.. GENERATED FROM PYTHON SOURCE LINES 169-177

Let us analyze the results of this global sensitivity analysis.

* We see that the first output involves significant multi-indices with
  higher marginal degree.
* For the second output, the first variable is especially significant,
  with a significant contribution of the interactions.
  The contribution of the interactions is very significant in this model.

.. GENERATED FROM PYTHON SOURCE LINES 179-180

Draw Sobol' indices.

.. GENERATED FROM PYTHON SOURCE LINES 182-186

.. code-block:: Python

    sensitivityAnalysis = ot.FunctionalChaosSobolIndices(result)
    first_order = [sensitivityAnalysis.getSobolIndex(i) for i in range(inputDimension)]
    total_order = [sensitivityAnalysis.getSobolTotalIndex(i) for i in range(inputDimension)]

.. GENERATED FROM PYTHON SOURCE LINES 187-192

.. code-block:: Python

    input_names = model.getInputDescription()
    graph = ot.SobolIndicesAlgorithm.DrawSobolIndices(input_names, first_order, total_order)
    graph.setLegendPosition("center")
    view = viewer.View(graph)

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_003.png
   :alt: Sobol' indices
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 193-208

Testing the sensitivity to the degree
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

With the specific constructor of :class:`~openturns.FunctionalChaosAlgorithm`
that we use, the `FunctionalChaosAlgorithm-MaximumTotalDegree` key
in the `ResourceMap` configures the maximum degree explored by
the algorithm.
This degree is a trade-off.

* If the maximum degree is too low, the polynomial may miss some
  coefficients so that the quality is lower than possible.
* If the maximum degree is too large, the number of coefficients to explore
  is too large, so that the coefficients might be poorly estimated.
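To see why a large maximum degree inflates the number of coefficients, recall the standard
combinatorial fact that the full polynomial basis of total degree at most :math:`p` in
dimension :math:`d` has :math:`\binom{d+p}{d}` terms; with only 80 observations, estimating
all of them by least squares quickly becomes ill-posed. A quick check (not part of the
original script):

```python
from math import comb

# Size of the full total-degree basis in dimension d = 2,
# for maximum total degrees p = 1, ..., 10.
d = 2
basis_sizes = {p: comb(d + p, d) for p in range(1, 11)}
for p, size in basis_sizes.items():
    print(p, size)
```

At the default maximum total degree 10, this gives 66 candidate coefficients in
dimension 2, close to the sample size of 80, which motivates the sparse selection.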
This is why the following `for` loop explores various degrees to see
how the predictivity of the metamodel depends on the maximum degree.

.. GENERATED FROM PYTHON SOURCE LINES 210-211

The default value of this parameter is 10.

.. GENERATED FROM PYTHON SOURCE LINES 213-215

.. code-block:: Python

    ot.ResourceMap.GetAsUnsignedInteger("FunctionalChaosAlgorithm-MaximumTotalDegree")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    10

.. GENERATED FROM PYTHON SOURCE LINES 216-217

This is why we explore the values from 1 to 10.

.. GENERATED FROM PYTHON SOURCE LINES 219-237

.. code-block:: Python

    maximumDegree = 11
    degrees = range(1, maximumDegree)
    r2Score = ot.Sample(len(degrees), outputDimension)
    for maximumDegree in degrees:
        ot.ResourceMap.SetAsUnsignedInteger(
            "FunctionalChaosAlgorithm-MaximumTotalDegree", maximumDegree
        )
        print("Maximum total degree =", maximumDegree)
        algo = ot.FunctionalChaosAlgorithm(inputSample, outputSample)
        algo.run()
        result = algo.getResult()
        metamodel = result.getMetaModel()
        metamodelPredictions = metamodel(inputTest)
        val = ot.MetaModelValidation(outputTest, metamodelPredictions)
        r2ScoreLocal = val.computeR2Score()
        r2ScoreLocal = [max(0.0, r2ScoreLocal[i]) for i in range(outputDimension)]
        r2Score[maximumDegree - degrees[0]] = r2ScoreLocal

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Maximum total degree = 1
    Maximum total degree = 2
    Maximum total degree = 3
    Maximum total degree = 4
    Maximum total degree = 5
    Maximum total degree = 6
    Maximum total degree = 7
    Maximum total degree = 8
    Maximum total degree = 9
    Maximum total degree = 10

.. GENERATED FROM PYTHON SOURCE LINES 238-253
.. code-block:: Python

    graph = ot.Graph("Predictivity", "Total degree", "R2", True)
    cloud = ot.Cloud([[d] for d in degrees], r2Score[:, 0])
    cloud.setLegend("Output #0")
    cloud.setPointStyle("bullet")
    graph.add(cloud)
    cloud = ot.Cloud([[d] for d in degrees], r2Score[:, 1])
    cloud.setLegend("Output #1")
    cloud.setPointStyle("diamond")
    graph.add(cloud)
    graph.setLegendPosition("upper left")
    graph.setLegendCorner([1.0, 1.0])
    view = viewer.View(graph)
    plt.subplots_adjust(right=0.7)
    plt.show()

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_004.png
   :alt: Predictivity
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_004.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 254-275

We see that a low total degree is not sufficient to describe the first
output with a good :math:`R^2` score.
However, the coefficient of determination can drop when the total degree
increases.
The :math:`R^2` score of the second output seems to be much less
satisfactory: a little more work would be required to improve the metamodel.

In this situation, the following methods may be used.

* Since the distribution of the input is known, we may want to give
  this information to the :class:`~openturns.FunctionalChaosAlgorithm`.
  This prevents the algorithm from trying to fit the input distribution
  that best fits the data.
* We may want to customize the `adaptiveStrategy` by selecting an
  enumerate function that best fits this particular example.
  In this specific example, however, the interactions play a great role,
  so that the linear enumerate function may provide better results than
  the hyperbolic rule.
* We may want to customize the `projectionStrategy` by selecting a method
  to compute the coefficients which improves the estimation.
  For example, it might be interesting to try an integration rule
  instead of the least squares method.
  Notice that a specific design of experiments is required in this case.

.. GENERATED FROM PYTHON SOURCE LINES 277-278

Reset default settings

.. GENERATED FROM PYTHON SOURCE LINES 278-279

.. code-block:: Python

    ot.ResourceMap.Reload()

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 10.219 seconds)

.. _sphx_glr_download_auto_meta_modeling_polynomial_chaos_metamodel_plot_functional_chaos.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_functional_chaos.ipynb `

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_functional_chaos.py `

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_functional_chaos.zip `