.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_meta_modeling/polynomial_chaos_metamodel/plot_functional_chaos.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_meta_modeling_polynomial_chaos_metamodel_plot_functional_chaos.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_meta_modeling_polynomial_chaos_metamodel_plot_functional_chaos.py:


Create a polynomial chaos metamodel
===================================

.. GENERATED FROM PYTHON SOURCE LINES 6-25

In this example we create a global approximation of a model response using
functional chaos. Let :math:`g` be the function defined by:

.. math::
    g(\mathbf{x}) = \left[\cos(x_1 + x_2), (x_2 + 1) e^{x_1}\right]

for any :math:`\mathbf{x} \in \mathbb{R}^2`.

We assume that

.. math::
    X_1 \sim \mathcal{N}(0, 1) \textrm{ and } X_2 \sim \mathcal{N}(0, 1)

and that :math:`X_1` and :math:`X_2` are independent.

An interesting point in this example is that the output is multivariate.
This is why we use the `getMarginal` method in the script in order to select
the output marginal that we want to work with.

.. GENERATED FROM PYTHON SOURCE LINES 27-33

.. code-block:: Python

    import openturns as ot
    import openturns.viewer as viewer
    from matplotlib import pylab as plt

    ot.Log.Show(ot.Log.NONE)

.. GENERATED FROM PYTHON SOURCE LINES 34-35

We first create the function `model`.

.. GENERATED FROM PYTHON SOURCE LINES 37-43

.. code-block:: Python

    ot.RandomGenerator.SetSeed(0)
    dimension = 2
    input_names = ["x1", "x2"]
    formulas = ["cos(x1 + x2)", "(x2 + 1) * exp(x1)"]
    model = ot.SymbolicFunction(input_names, formulas)

.. GENERATED FROM PYTHON SOURCE LINES 44-45

Then we create a sample `inputSample` and compute the corresponding output
sample `outputSample`.

.. GENERATED FROM PYTHON SOURCE LINES 47-52

.. code-block:: Python

    distribution = ot.Normal(dimension)
    samplesize = 80
    inputSample = distribution.getSample(samplesize)
    outputSample = model(inputSample)

.. GENERATED FROM PYTHON SOURCE LINES 53-55

Create a functional chaos model.
First, we need to fit a distribution on the input sample. We can do this
automatically with the Lilliefors test.

.. GENERATED FROM PYTHON SOURCE LINES 57-59

.. code-block:: Python

    ot.ResourceMap.SetAsUnsignedInteger("FittingTest-LillieforsMaximumSamplingSize", 100)

.. GENERATED FROM PYTHON SOURCE LINES 60-65

.. code-block:: Python

    algo = ot.FunctionalChaosAlgorithm(inputSample, outputSample)
    algo.run()
    result = algo.getResult()
    metamodel = result.getMetaModel()

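Optionally, we can take a quick look at what the algorithm selected. The
following is a minimal sketch that only relies on standard accessors of the
`FunctionalChaosResult` object; the printed values depend on the run.

.. code-block:: Python

    # Sketch: inspect the functional chaos result (values depend on the run).
    basis_indices = result.getIndices()      # ranks of the selected basis terms
    coefficients = result.getCoefficients()  # one column per output marginal
    print("Number of selected basis terms:", len(basis_indices))
    print("Dimension of the coefficients sample:", coefficients.getDimension())
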
.. GENERATED FROM PYTHON SOURCE LINES 66-69

Plot the second output of our model as a function of :math:`x_2`, with :math:`x_1 = 0.5`.
In order to do this, we create a `ParametricFunction` and set the value of :math:`x_1`.
Then we use the `getMarginal` method to extract the second output (whose index is equal to 1).

.. GENERATED FROM PYTHON SOURCE LINES 71-89

.. code-block:: Python

    x1index = 0
    x1value = 0.5
    x2min = -3.0
    x2max = 3.0
    outputIndex = 1
    metamodelParametric = ot.ParametricFunction(metamodel, [x1index], [x1value])
    graph = metamodelParametric.getMarginal(outputIndex).draw(x2min, x2max)
    graph.setLegends(["Metamodel"])
    modelParametric = ot.ParametricFunction(model, [x1index], [x1value])
    curve = modelParametric.getMarginal(outputIndex).draw(x2min, x2max).getDrawable(0)
    curve.setColor("red")
    curve.setLegend("Model")
    graph.add(curve)
    graph.setLegendPosition("lower right")
    graph.setXTitle("X2")
    graph.setTitle("Metamodel Validation, output #%d" % (outputIndex))
    view = viewer.View(graph)

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_001.png
   :alt: Metamodel Validation, output #1
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 90-92

We see that the metamodel fits the model reasonably well, except perhaps for
extreme values of :math:`x_2`. However, there is a better way of globally
validating the metamodel, using the `MetaModelValidation` class on a
validation design of experiments.

.. GENERATED FROM PYTHON SOURCE LINES 94-99

.. code-block:: Python

    n_valid = 100
    inputTest = distribution.getSample(n_valid)
    outputTest = model(inputTest)

.. GENERATED FROM PYTHON SOURCE LINES 100-101

Plot the corresponding validation graphics.

.. GENERATED FROM PYTHON SOURCE LINES 103-109

.. code-block:: Python

    val = ot.MetaModelValidation(inputTest, outputTest, metamodel)
    Q2 = val.computePredictivityFactor()
    graph = val.drawValidation()
    graph.setTitle("Metamodel validation Q2=" + str(Q2))
    view = viewer.View(graph)

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_002.png
   :alt: Metamodel validation Q2=[-3.50512,0.969793]
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 110-112

The coefficient of predictivity of the first output is not satisfactory
(it is even negative here). The second output has a much more satisfactory Q2:
only a single extreme point is far from the diagonal of the graph.

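As a reminder, the predictivity coefficient compares the prediction errors of
the metamodel on the validation sample to the variance of the output. Assuming
the usual definition (which is, up to implementation details, what
`MetaModelValidation` computes for each output marginal):

.. math::
    Q^2 = 1 - \frac{\sum_{i=1}^{n} \left( y_i - \tilde{y}_i \right)^2}{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2}

where :math:`y_i` are the validation outputs, :math:`\tilde{y}_i` the
corresponding metamodel predictions and :math:`\bar{y}` the sample mean of the
validation outputs. A value close to 1 indicates an accurate metamodel, while a
negative value means that the metamodel predicts the output worse than its
empirical mean would.
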
.. GENERATED FROM PYTHON SOURCE LINES 114-116

Compute and print Sobol' indices
--------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 118-121

.. code-block:: Python

    chaosSI = ot.FunctionalChaosSobolIndices(result)
    print(chaosSI)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    FunctionalChaosSobolIndices
    - input dimension=2
    - output dimension=2
    - basis size=33
    - mean=[0.415734,1.76627]
    - std-dev=[1.16199,4.4335]

    Marginal: 0
    | Index | Multi-index   | Variance part |
    |-------|---------------|---------------|
    |    19 | [1,3]         | 0.270497      |
    |    20 | [2,2]         | 0.145612      |
    |     8 | [4,0]         | 0.125871      |
    |     5 | [3,0]         | 0.115624      |
    |     4 | [0,2]         | 0.0701045     |
    |    10 | [5,0]         | 0.0597999     |
    |    18 | [3,1]         | 0.0410662     |
    |     1 | [1,0]         | 0.03954       |
    |    14 | [6,0]         | 0.0295803     |
    |    16 | [7,0]         | 0.0162176     |
    |     9 | [0,4]         | 0.0119177     |
    |    22 | [0,8]         | 0.0107404     |

    | Input | Name          | Sobol' index  | Total index   |
    |-------|---------------|---------------|---------------|
    |     0 | X0            | 0.400231      | 0.888617      |
    |     1 | X1            | 0.111383      | 0.599769      |

    Marginal: 1
    | Index | Multi-index   | Variance part |
    |-------|---------------|---------------|
    |     1 | [1,0]         | 0.207009      |
    |     7 | [1,1]         | 0.200105      |
    |     2 | [0,1]         | 0.171751      |
    |     3 | [2,0]         | 0.168775      |
    |    12 | [2,1]         | 0.105085      |
    |     5 | [3,0]         | 0.0668862     |
    |     8 | [4,0]         | 0.0309423     |
    |    18 | [3,1]         | 0.0279345     |

    | Input | Name          | Sobol' index  | Total index   |
    |-------|---------------|---------------|---------------|
    |     0 | X0            | 0.491712      | 0.828208      |
    |     1 | X1            | 0.171792      | 0.508288      |

.. GENERATED FROM PYTHON SOURCE LINES 122-127

Let us analyse the results of this global sensitivity analysis.

* We see that the most significant multi-indices of the first output have a
  total degree equal to 4 (for instance [1,3], [2,2] and [4,0]). The
  contribution of the interactions is very significant for this output.
* The second output involves multi-indices with total degrees from 1 to 4,
  with a significant contribution of the interaction multi-indices [1,1] and
  [2,1]. The first variable is especially significant, with a significant
  contribution of the interactions.

.. GENERATED FROM PYTHON SOURCE LINES 129-130

Draw Sobol' indices.

.. GENERATED FROM PYTHON SOURCE LINES 132-136

.. code-block:: Python

    sensitivityAnalysis = ot.FunctionalChaosSobolIndices(result)
    first_order = [sensitivityAnalysis.getSobolIndex(i) for i in range(dimension)]
    total_order = [sensitivityAnalysis.getSobolTotalIndex(i) for i in range(dimension)]

.. GENERATED FROM PYTHON SOURCE LINES 137-142

.. code-block:: Python

    input_names = model.getInputDescription()
    graph = ot.SobolIndicesAlgorithm.DrawSobolIndices(input_names, first_order, total_order)
    graph.setLegendPosition("center")
    view = viewer.View(graph)

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_003.png
   :alt: Sobol' indices
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_003.png
   :class: sphx-glr-single-img

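The graph above shows the indices of the first output only, since
`getSobolIndex` and `getSobolTotalIndex` were called with their default output
marginal. Here is a minimal sketch for the second output, assuming the
optional second argument of these methods that selects the output marginal
(otherwise, the values can be read from the tables printed above).

.. code-block:: Python

    # Sketch: first order and total Sobol' indices for the second output marginal.
    outputMarginal = 1
    first_order_1 = [
        sensitivityAnalysis.getSobolIndex(i, outputMarginal) for i in range(dimension)
    ]
    total_order_1 = [
        sensitivityAnalysis.getSobolTotalIndex(i, outputMarginal) for i in range(dimension)
    ]
    graph = ot.SobolIndicesAlgorithm.DrawSobolIndices(input_names, first_order_1, total_order_1)
    graph.setTitle("Sobol' indices, output #%d" % outputMarginal)
    view = viewer.View(graph)
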
.. GENERATED FROM PYTHON SOURCE LINES 143-153

Testing the sensitivity to the degree
-------------------------------------

With the specific constructor of `FunctionalChaosAlgorithm` that we use, the
`FunctionalChaosAlgorithm-MaximumTotalDegree` key of the `ResourceMap`
configures the maximum total degree explored by the algorithm.
This degree is a trade-off.

* If the maximum degree is too low, the polynomial misses some terms, so the
  quality of the metamodel is lower than it could be.
* If the maximum degree is too large, the number of coefficients to estimate is
  too large compared to the sample size, so the coefficients might be poorly
  estimated.

This is why the following `for` loop explores several maximum degrees in order
to assess the sensitivity of the metamodel predictivity to this parameter.

.. GENERATED FROM PYTHON SOURCE LINES 155-156

The default value of this parameter is 10.

.. GENERATED FROM PYTHON SOURCE LINES 158-160

.. code-block:: Python

    ot.ResourceMap.GetAsUnsignedInteger("FunctionalChaosAlgorithm-MaximumTotalDegree")

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    10

.. GENERATED FROM PYTHON SOURCE LINES 161-162

This is why we explore maximum total degrees from 5 to 11.

.. GENERATED FROM PYTHON SOURCE LINES 164-181

.. code-block:: Python

    degrees = range(5, 12)
    q2 = ot.Sample(len(degrees), 2)
    for maximumDegree in degrees:
        ot.ResourceMap.SetAsUnsignedInteger(
            "FunctionalChaosAlgorithm-MaximumTotalDegree", maximumDegree
        )
        print("Maximum total degree =", maximumDegree)
        algo = ot.FunctionalChaosAlgorithm(inputSample, outputSample)
        algo.run()
        result = algo.getResult()
        metamodel = result.getMetaModel()
        for outputIndex in range(2):
            val = ot.MetaModelValidation(
                inputTest, outputTest[:, outputIndex], metamodel.getMarginal(outputIndex)
            )
            q2[maximumDegree - degrees[0], outputIndex] = val.computePredictivityFactor()[0]

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Maximum total degree = 5
    Maximum total degree = 6
    Maximum total degree = 7
    Maximum total degree = 8
    Maximum total degree = 9
    Maximum total degree = 10
    Maximum total degree = 11

.. GENERATED FROM PYTHON SOURCE LINES 182-196

.. code-block:: Python

    graph = ot.Graph("Predictivity", "Total degree", "Q2", True)
    cloud = ot.Cloud([[d] for d in degrees], q2[:, 0])
    cloud.setLegend("Output #0")
    cloud.setPointStyle("bullet")
    graph.add(cloud)
    cloud = ot.Cloud([[d] for d in degrees], q2[:, 1])
    cloud.setLegend("Output #1")
    cloud.setColor("red")
    cloud.setPointStyle("bullet")
    graph.add(cloud)
    graph.setLegendPosition("upper right")
    view = viewer.View(graph)
    plt.show()

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_004.png
   :alt: Predictivity
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_functional_chaos_004.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 197-209

We see that a total degree lower than 9 is not sufficient to describe the first
output with good predictivity. However, the coefficient of predictivity can drop
again when the maximum total degree becomes too large, because too many
coefficients must then be estimated from the same sample.
The predictivity of the second output seems much less satisfactory: a little
more work would be required to improve the metamodel.
In this situation, the following methods may be used (a sketch combining the
first two suggestions follows the list).

* Since the distribution of the input is known, we may want to give this
  information to the `FunctionalChaosAlgorithm`. This prevents the algorithm
  from fitting a distribution to the input sample.
* We may want to customize the `adaptiveStrategy` by selecting an enumerate
  function suited to this particular situation. In this specific example, the
  interactions play a great role, so the linear enumerate function may provide
  better results than the hyperbolic rule.
* We may want to customize the `projectionStrategy` by selecting a method to
  compute the coefficients which improves the estimation. Given that the
  function is symbolic and fast, it might be interesting to try an integration
  rule instead of the least squares method.

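As an illustration of the first two suggestions, here is a minimal sketch,
assuming the `FunctionalChaosAlgorithm` constructor that takes the input
distribution together with explicit adaptive and projection strategies; the
Hermite basis and the fixed truncation below are only one possible choice.

.. code-block:: Python

    # Sketch: give the known input distribution and explicit strategies to the
    # algorithm, instead of letting it fit a distribution on the input sample.
    maximumTotalDegree = 10
    enumerateFunction = ot.LinearEnumerateFunction(dimension)
    # Hermite polynomials are orthonormal with respect to the standard normal inputs.
    productBasis = ot.OrthogonalProductPolynomialFactory(
        [ot.HermiteFactory()] * dimension, enumerateFunction
    )
    basisSize = enumerateFunction.getStrataCumulatedCardinal(maximumTotalDegree)
    adaptiveStrategy = ot.FixedStrategy(productBasis, basisSize)
    projectionStrategy = ot.LeastSquaresStrategy()
    algo = ot.FunctionalChaosAlgorithm(
        inputSample, outputSample, distribution, adaptiveStrategy, projectionStrategy
    )
    algo.run()

Since the distribution is given explicitly, the Lilliefors fitting step is not
needed and the basis is exactly orthonormal with respect to the Gaussian inputs.
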
.. GENERATED FROM PYTHON SOURCE LINES 211-212

Reset the default settings.

.. GENERATED FROM PYTHON SOURCE LINES 212-213

.. code-block:: Python

    ot.ResourceMap.Reload()

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 7.975 seconds)


.. _sphx_glr_download_auto_meta_modeling_polynomial_chaos_metamodel_plot_functional_chaos.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_functional_chaos.ipynb <plot_functional_chaos.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_functional_chaos.py <plot_functional_chaos.py>`