.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_surrogate_modeling/polynomial_chaos/plot_chaos_ishigami_dependent_input.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end ` to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_surrogate_modeling_polynomial_chaos_plot_chaos_ishigami_dependent_input.py:

Create a FCE for dependent inputs: transformation vs domination
===============================================================

.. GENERATED FROM PYTHON SOURCE LINES 7-18

In this example, we create a functional chaos expansion for the
:ref:`Ishigami function` when the input distribution has dependent marginals.
Refer to :ref:`functional_chaos` to learn more about functional chaos
expansion.

We provide one input sample and one output sample of the Ishigami function.
We build two meta models:

- Meta model 1: we use an isoprobabilistic transformation that maps the input
  distribution to another one with independent marginals.
- Meta model 2: we use the domination method: the basis of the projection
  space is not orthonormal to the input distribution.

.. GENERATED FROM PYTHON SOURCE LINES 21-23

Define the Ishigami model
-------------------------

.. GENERATED FROM PYTHON SOURCE LINES 25-29

.. code-block:: Python

    from openturns.usecases import ishigami_function
    import openturns as ot
    import openturns.viewer as otv

.. GENERATED FROM PYTHON SOURCE LINES 30-33

We load the Ishigami model. The `IshigamiModel` data class contains the input
distribution :math:`\mu_{\inputRV}` of the random vector
:math:`\vect{X}=(X_1, X_2, X_3)` in `im.distribution`.

.. GENERATED FROM PYTHON SOURCE LINES 33-36

.. code-block:: Python

    im = ishigami_function.IshigamiModel()
    input_names = im.distribution.getDescription()
.. GENERATED FROM PYTHON SOURCE LINES 37-44

We want to introduce some dependence between the components. That is why we
have to change the input distribution stored in the Ishigami model (which has
independent components). We use a copula that links :math:`(X_1,X_2)` with a
:class:`~openturns.ClaytonCopula`. The last component :math:`X_3` is
independent of :math:`(X_1,X_2)`. The final 3d copula is a
:class:`~openturns.BlockIndependentCopula`. We keep the initial marginal
distributions.

.. GENERATED FROM PYTHON SOURCE LINES 44-49

.. code-block:: Python

    copula = ot.BlockIndependentCopula([ot.ClaytonCopula(1.5), ot.IndependentCopula(1)])
    input_dist = ot.JointDistribution(
        [im.distribution.getMarginal(i) for i in range(im.dim)], copula
    )

.. GENERATED FROM PYTHON SOURCE LINES 50-53

We generate an input sample from the input distribution and we compute the
output sample from the Ishigami function that is contained in `im.model`.

.. GENERATED FROM PYTHON SOURCE LINES 53-57

.. code-block:: Python

    sampleSize = 1000
    inputTrain = input_dist.getSample(sampleSize)
    outputTrain = im.model(inputTrain)

.. GENERATED FROM PYTHON SOURCE LINES 58-59

We display the relationships between the outputs and the inputs.

.. GENERATED FROM PYTHON SOURCE LINES 59-62

.. code-block:: Python

    grid = ot.VisualTest.DrawPairsXY(inputTrain, outputTrain)
    view = otv.View(grid, figure_kw={"figsize": (12.0, 4.0)})

.. image-sg:: /auto_surrogate_modeling/polynomial_chaos/images/sphx_glr_plot_chaos_ishigami_dependent_input_001.svg
   :alt: plot chaos ishigami dependent input
   :srcset: /auto_surrogate_modeling/polynomial_chaos/images/sphx_glr_plot_chaos_ishigami_dependent_input_001.svg
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 63-64

We draw the histogram of the output values.

.. GENERATED FROM PYTHON SOURCE LINES 64-69

.. code-block:: Python

    graph = ot.HistogramFactory().build(outputTrain).drawPDF()
    graph.setTitle("Ishigami outputs")
    graph.setXTitle("y")
    view = otv.View(graph)
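As a quick sanity check (not part of the original script), we can verify the
dependence structure of `input_dist`: the Spearman correlation should be
positive between :math:`X_1` and :math:`X_2`, and zero for every pair
involving :math:`X_3`.

.. code-block:: Python

    import openturns as ot
    from openturns.usecases import ishigami_function

    im = ishigami_function.IshigamiModel()
    copula = ot.BlockIndependentCopula([ot.ClaytonCopula(1.5), ot.IndependentCopula(1)])
    input_dist = ot.JointDistribution(
        [im.distribution.getMarginal(i) for i in range(im.dim)], copula
    )
    # The Clayton copula induces positive dependence between X1 and X2,
    # while X3 remains independent of the first block.
    rho = input_dist.getSpearmanCorrelation()
    print(rho)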
.. image-sg:: /auto_surrogate_modeling/polynomial_chaos/images/sphx_glr_plot_chaos_ishigami_dependent_input_002.svg
   :alt: Ishigami outputs
   :srcset: /auto_surrogate_modeling/polynomial_chaos/images/sphx_glr_plot_chaos_ishigami_dependent_input_002.svg
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 70-119

Meta model 1: Transformation method
-----------------------------------

As the input distribution has dependent marginals, we can use an
:ref:`isoprobabilistic transformation ` that maps the input distribution to
another one with independent marginals. This new distribution is not
specified, therefore the default adaptive strategy and the default projection
strategy are used.

The default adaptive strategy is defined as follows:

- a basis is built as the tensor product of the univariate polynomial
  families, each orthonormal to the standard representative of the
  corresponding input marginal distribution. This basis is therefore
  orthonormal to the distribution :math:`\tilde{\mu}` defined as the tensor
  product of the standard representatives of the input marginal
  distributions;
- the enumerate function is chosen according to the
  `FunctionalChaosAlgorithm-QNorm` parameter of the
  :class:`~openturns.ResourceMap`: if this parameter is equal to 1, then the
  :class:`~openturns.LinearEnumerateFunction` class is used, otherwise the
  :class:`~openturns.HyperbolicAnisotropicEnumerateFunction` class is used.
  Since the default value of this key is 0.5, the
  :class:`~openturns.HyperbolicAnisotropicEnumerateFunction` class is used;
- the first elements of the basis are used to build the approximation space.
  The number of elements is computed from the total degree (using the
  enumerate function of the basis) specified as the default value of the
  `FunctionalChaosAlgorithm-MaximumTotalDegree` key of the
  :class:`~openturns.ResourceMap`. The default value being 10, the polynomial
  approximation space is generated by the polynomials of the basis with
  maximum total degree equal to 10.
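The keys mentioned above can be inspected, and overridden before creating the
algorithm, through the :class:`~openturns.ResourceMap`. The following sketch
is not part of the original script; it only reads the defaults, changes the
maximum total degree, then restores it.

.. code-block:: Python

    import openturns as ot

    # Inspect the keys that drive the default adaptive strategy
    q_norm = ot.ResourceMap.GetAsScalar("FunctionalChaosAlgorithm-QNorm")
    max_degree = ot.ResourceMap.GetAsUnsignedInteger(
        "FunctionalChaosAlgorithm-MaximumTotalDegree"
    )
    print(q_norm, max_degree)
    # A lower maximum total degree yields a smaller approximation space;
    # remember to restore the default afterwards.
    ot.ResourceMap.SetAsUnsignedInteger("FunctionalChaosAlgorithm-MaximumTotalDegree", 5)
    ot.ResourceMap.SetAsUnsignedInteger(
        "FunctionalChaosAlgorithm-MaximumTotalDegree", max_degree
    )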
The :class:`~openturns.FunctionalChaosAlgorithm` class uses an
isoprobabilistic transformation :math:`T` that maps :math:`\mu_{\inputRV}`
into :math:`\tilde{\mu}`. We denote :math:`\vect{U} = T(\inputRV)`. In this
new :math:`\vect{U}`-space, the basis built by the adaptive strategy is
orthonormal to the distribution :math:`\tilde{\mu}` of the new random vector
:math:`\vect{U}`. The meta model of the transformed Ishigami model
:math:`\model \circ T^{-1}` is built.

The projection strategy is not specified either: we use the least-squares
strategy with no model selection if the key `FunctionalChaosAlgorithm-Sparse`
of the :class:`~openturns.ResourceMap` is *False*, and with model selection
otherwise, using the selection algorithm specified by the key
`FunctionalChaosAlgorithm-FittingAlgorithm`. Considering the default values
of the keys, we use a :ref:`least-squares strategy ` with no model selection.

Then, the meta model built in the :math:`\vect{U}`-space is finally composed
with the isoprobabilistic transformation to get the meta model of the
Ishigami function in the initial :math:`\vect{X}`-space. Note that the final
meta model is projected on a basis which is not polynomial, due to the action
of the isoprobabilistic transformation, which is not affine.

.. GENERATED FROM PYTHON SOURCE LINES 121-122

We create the functional chaos algorithm.

.. GENERATED FROM PYTHON SOURCE LINES 122-126

.. code-block:: Python

    chaos_algo = ot.FunctionalChaosAlgorithm(inputTrain, outputTrain, input_dist)
    chaos_algo.setUseDomination(False)
    chaos_algo.run()

.. GENERATED FROM PYTHON SOURCE LINES 127-128

We get the result and the resulting meta model.

.. GENERATED FROM PYTHON SOURCE LINES 128-131

.. code-block:: Python

    chaos_result = chaos_algo.getResult()
    metamodel = chaos_result.getMetaModel()

.. GENERATED FROM PYTHON SOURCE LINES 132-133

In order to validate the meta model, we generate a test sample.

.. GENERATED FROM PYTHON SOURCE LINES 133-138
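To see the transformation :math:`T` at work, we can extract it from the
distribution with :meth:`~openturns.Distribution.getIsoProbabilisticTransformation`.
The following sketch (not part of the original script) maps the mean point
from the :math:`\vect{X}`-space to the :math:`\vect{U}`-space and back.

.. code-block:: Python

    import openturns as ot
    from openturns.usecases import ishigami_function

    im = ishigami_function.IshigamiModel()
    copula = ot.BlockIndependentCopula([ot.ClaytonCopula(1.5), ot.IndependentCopula(1)])
    input_dist = ot.JointDistribution(
        [im.distribution.getMarginal(i) for i in range(im.dim)], copula
    )
    # T maps the dependent X-space into a standard space with
    # independent components; T_inv maps it back.
    T = input_dist.getIsoProbabilisticTransformation()
    T_inv = input_dist.getInverseIsoProbabilisticTransformation()
    x = input_dist.getMean()
    u = T(x)
    x_back = T_inv(u)
    print(x, u, x_back)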
.. code-block:: Python

    n_valid = 1000
    inputTest = input_dist.getSample(n_valid)
    outputTest = im.model(inputTest)
    metamodel_predictions = metamodel(inputTest)

.. GENERATED FROM PYTHON SOURCE LINES 139-140

We draw the validation graph and we get the :math:`R^2` score: the meta model
is of poor quality.

.. GENERATED FROM PYTHON SOURCE LINES 140-148

.. code-block:: Python

    val = ot.MetaModelValidation(outputTest, metamodel_predictions)
    r2Score = val.computeR2Score()[0]
    print(f"r2Score with Transformation method = {r2Score:.6f}")
    graph = val.drawValidation()
    graph.setTitle(f"R2={r2Score * 100:.2f}%, use domination = false")
    view = otv.View(graph)

.. image-sg:: /auto_surrogate_modeling/polynomial_chaos/images/sphx_glr_plot_chaos_ishigami_dependent_input_003.svg
   :alt: R2=64.71%, use domination = false
   :srcset: /auto_surrogate_modeling/polynomial_chaos/images/sphx_glr_plot_chaos_ishigami_dependent_input_003.svg
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    r2Score with Transformation method = 0.647130

.. GENERATED FROM PYTHON SOURCE LINES 149-157

Meta model 2: Domination method
-------------------------------

Now, we want to use the domination method, which means that the basis created
by the adaptive strategy is directly used to project the model. This basis is
not orthonormal to :math:`\mu_{\inputRV}`. We use the
:meth:`~openturns.FunctionalChaosAlgorithm.setUseDomination` method. We
follow the same steps as before, up to the validation graph.

.. GENERATED FROM PYTHON SOURCE LINES 157-170
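Before running the domination method, it may help to recall why projecting on
a non-orthonormal basis is still well-posed: the least-squares problem only
requires the design matrix to have full column rank. Here is a toy 1d
illustration (not the algorithm's actual implementation) using the
non-orthonormal monomial basis :math:`\{1, x, x^2\}` on exactly quadratic
data.

.. code-block:: Python

    import openturns as ot

    # Design matrix of the monomial basis {1, x, x^2} on a small grid
    xs = [i / 10.0 for i in range(11)]
    ys = [2.0 + 3.0 * x * x for x in xs]
    A = ot.Matrix([[1.0, x, x * x] for x in xs])
    b = ot.Point(ys)
    # Rectangular system solved in the least-squares sense: the projection
    # succeeds even though the basis is not orthonormal.
    coeffs = A.solveLinearSystem(b)
    print(coeffs)  # coefficients recover (2, 0, 3) up to rounding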
.. code-block:: Python

    chaos_algo.setUseDomination(True)
    chaos_algo.run()
    chaos_result = chaos_algo.getResult()
    metamodel_dom = chaos_result.getMetaModel()
    metamodel_dom_predictions = metamodel_dom(inputTest)
    val = ot.MetaModelValidation(outputTest, metamodel_dom_predictions)
    r2Score = val.computeR2Score()[0]
    print(f"r2Score with Domination method = {r2Score:.6f}")
    graph = val.drawValidation()
    graph.setTitle(f"R2={r2Score * 100:.2f}%, use domination = true")
    view = otv.View(graph)

.. image-sg:: /auto_surrogate_modeling/polynomial_chaos/images/sphx_glr_plot_chaos_ishigami_dependent_input_004.svg
   :alt: R2=98.95%, use domination = true
   :srcset: /auto_surrogate_modeling/polynomial_chaos/images/sphx_glr_plot_chaos_ishigami_dependent_input_004.svg
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    r2Score with Domination method = 0.989526

.. GENERATED FROM PYTHON SOURCE LINES 171-181

We can see that the meta model obtained with the domination method is much
better than the meta model obtained with the Transformation method, even
though the multivariate basis of the approximation space is not orthonormal
to the input distribution :math:`\mu_{\inputRV}`. This can be explained by
the fact that the multivariate tensorized basis used by the domination method
is able to capture the tensorized structure of the Ishigami model. This is
not the case for the Transformation method, which uses a basis in the
:math:`\vect{X}`-space which is not tensorized, due to the action of the
isoprobabilistic transformation.

.. GENERATED FROM PYTHON SOURCE LINES 183-184

Display all figures

.. GENERATED FROM PYTHON SOURCE LINES 184-185

.. code-block:: Python

    otv.View.ShowAll()

.. _sphx_glr_download_auto_surrogate_modeling_polynomial_chaos_plot_chaos_ishigami_dependent_input.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example
        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_chaos_ishigami_dependent_input.ipynb `

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_chaos_ishigami_dependent_input.py `

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: plot_chaos_ishigami_dependent_input.zip `