.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_meta_modeling/low_rank_tensors_metamodel/plot_tensor_cantilever_beam.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here ` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_meta_modeling_low_rank_tensors_metamodel_plot_tensor_cantilever_beam.py:


Tensor approximation of the cantilever beam model
=================================================

.. GENERATED FROM PYTHON SOURCE LINES 6-9

In this example, we create a low-rank approximation of the cantilever beam model in the canonical tensor format.
In order to fit the hyperparameters of the approximation, we use a design of experiments whose size is 10000.

.. GENERATED FROM PYTHON SOURCE LINES 11-49

We consider a cantilever beam defined by its Young's modulus :math:`E`, its length :math:`L` and its section modulus :math:`I`.
One end of the cantilever beam is built into a wall and we apply a concentrated bending load :math:`F` at the other end of the beam, resulting in a deviation :math:`Y`.

.. figure:: ../../_static/beam.png
    :align: center
    :width: 25%

    The beam geometry

**Inputs**

* :math:`E` : Young's modulus (Pa), Beta(r = 0.9, t = 3.5, a = :math:`2.5\times 10^7`, :math:`b = 5\times 10^7`)
* :math:`F` : Loading (N), Lognormal(:math:`\mu_F=30 \times 10^3`, :math:`\sigma_F=9\times 10^3`, shift= :math:`15 \times 10^3`)
* :math:`L` : Length of beam (cm), Uniform(min=250.0, max=260.0)
* :math:`I` : Moment of inertia (cm^4), Beta(r = 2.5, t = 4.0, a = 310, b = 450).

In the previous list, :math:`\mu_F=E(F)` and :math:`\sigma_F=\sqrt{V(F)}` are the mean and the standard deviation of :math:`F`.

We assume that the random variables E, F, L and I are dependent and associated with a Gaussian copula whose correlation matrix is:

.. math::
    R = \begin{pmatrix}
        1 & 0 & 0 & 0 \\
        0 & 1 & 0 & 0 \\
        0 & 0 & 1 & -0.2 \\
        0 & 0 & -0.2 & 1
        \end{pmatrix}

In other words, we consider that the variables L and I are negatively correlated: when the length L increases, the moment of inertia I decreases.

**Output**

The vertical displacement at the free end of the cantilever beam is:

.. math::
    Y = \dfrac{F\, L^3}{3 \, E \, I}

.. GENERATED FROM PYTHON SOURCE LINES 51-53

Definition of the model
-----------------------

.. GENERATED FROM PYTHON SOURCE LINES 55-60

.. code-block:: default

    import openturns as ot
    import openturns.viewer as viewer
    from matplotlib import pylab as plt

    ot.Log.Show(ot.Log.NONE)

.. GENERATED FROM PYTHON SOURCE LINES 61-62

We define the symbolic function which evaluates the output Y depending on the inputs E, F, L and I.

.. GENERATED FROM PYTHON SOURCE LINES 64-66

.. code-block:: default

    model = ot.SymbolicFunction(["E", "F", "L", "I"], ["F*L^3/(3*E*I)"])

.. GENERATED FROM PYTHON SOURCE LINES 67-68

Then we define the distribution of the input random vector.

.. GENERATED FROM PYTHON SOURCE LINES 70-71

Young's modulus E

.. GENERATED FROM PYTHON SOURCE LINES 71-84

.. code-block:: default

    E = ot.Beta(0.9, 3.5, 2.5e7, 5.0e7)  # in N/m^2
    E.setDescription("E")
    # Load F
    F = ot.LogNormal()  # in N
    F.setParameter(ot.LogNormalMuSigma()([30.e3, 9e3, 15.e3]))
    F.setDescription("F")
    # Length L
    L = ot.Uniform(250., 260.)  # in cm
    L.setDescription("L")
    # Moment of inertia I
    I = ot.Beta(2.5, 4, 310, 450)  # in cm^4
    I.setDescription("I")
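As a quick sanity check (an addition to this example; the helper name `meanPoint` is chosen here for illustration), we can evaluate the model at the mean of each marginal and verify that the deviation has the expected order of magnitude:

.. code-block:: default

    # Sanity check (not in the original example): evaluate the model at the
    # mean of each marginal; a deviation of roughly ten centimeters is expected.
    meanPoint = [E.getMean()[0], F.getMean()[0], L.getMean()[0], I.getMean()[0]]
    print(model(meanPoint))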
.. GENERATED FROM PYTHON SOURCE LINES 85-86

Finally, we define the dependency using a `NormalCopula`.

.. GENERATED FROM PYTHON SOURCE LINES 88-90

.. code-block:: default

    R = ot.CorrelationMatrix(4)
    R[2, 3] = -0.2  # negative correlation between L and I
    myCopula = ot.NormalCopula(R)
    myDistribution = ot.ComposedDistribution([E, F, L, I], myCopula)

.. GENERATED FROM PYTHON SOURCE LINES 91-93

Create the design of experiments
--------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 95-96

We consider a simple Monte-Carlo sampling as a design of experiments.
Therefore, we generate an input sample using the `getSample` method of the distribution, then evaluate the output using the `model` function.

.. GENERATED FROM PYTHON SOURCE LINES 98-102

.. code-block:: default

    sampleSize_train = 10000
    X_train = myDistribution.getSample(sampleSize_train)
    Y_train = model(X_train)

.. GENERATED FROM PYTHON SOURCE LINES 103-104

The following figure presents the distribution of the vertical deviations Y on the training sample.
We observe that large deviations occur less often.

.. GENERATED FROM PYTHON SOURCE LINES 106-112

.. code-block:: default

    histo = ot.HistogramFactory().build(Y_train).drawPDF()
    histo.setXTitle("Vertical deviation (cm)")
    histo.setTitle("Distribution of the vertical deviation")
    histo.setLegends([""])
    view = viewer.View(histo)

.. image-sg:: /auto_meta_modeling/low_rank_tensors_metamodel/images/sphx_glr_plot_tensor_cantilever_beam_001.png
   :alt: Distribution of the vertical deviation
   :srcset: /auto_meta_modeling/low_rank_tensors_metamodel/images/sphx_glr_plot_tensor_cantilever_beam_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 113-115

Create the metamodel
--------------------

.. GENERATED FROM PYTHON SOURCE LINES 117-133

We recall that the metamodel writes as:

.. math::
    f(X_1, \dots, X_d) = \sum_{i=1}^m \prod_{j=1}^d v_j^{(i)} (x_j), \quad \forall x \in \mathbb{R}^d

with:

.. math::
    v_j^{(i)} (x_j) = \sum_{k=1}^{n_j} \beta_{j,k}^{(i)} \phi_{j,k} (x_j)

We must define:

- the family of univariate functions :math:`\phi_j`; we choose the basis orthogonal with respect to the marginal distribution measures,
- the maximal rank :math:`m`; here the value is set to 1,
- the marginal degrees :math:`n_j`; here we set the degrees to [4, 15, 3, 2].

.. GENERATED FROM PYTHON SOURCE LINES 135-141

.. code-block:: default

    factoryCollection = [ot.OrthogonalUniVariatePolynomialFunctionFactory(
        ot.StandardDistributionPolynomialFactory(_)) for _ in [E, F, L, I]]
    functionFactory = ot.OrthogonalProductFunctionFactory(factoryCollection)
    nk = [4, 15, 3, 2]
    maxRank = 1

.. GENERATED FROM PYTHON SOURCE LINES 142-143

Finally, we launch the algorithm:

.. GENERATED FROM PYTHON SOURCE LINES 145-151

.. code-block:: default

    algo = ot.TensorApproximationAlgorithm(
        X_train, Y_train, myDistribution, functionFactory, nk, maxRank)
    algo.run()
    result = algo.getResult()
    metamodel = result.getMetaModel()

.. GENERATED FROM PYTHON SOURCE LINES 152-155

The `run` method has optimized the hyperparameters of the metamodel (the :math:`\beta` coefficients).
We can then print the estimated coefficients with nested loops over the inputs, the ranks and the marginal degrees.

.. GENERATED FROM PYTHON SOURCE LINES 157-164

.. code-block:: default

    tensor = result.getTensor()
    for j in range(myDistribution.getDimension()):
        print("j =", j)
        for i in range(maxRank):
            for k in range(nk[j]):
                print(tensor.getCoefficients(i, j)[k])

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    j = 0
    719143561570.0574
    -91980985108.83176
    14004062532.542831
    -2251093571.136048
    j = 1
    -6.30626822462971e-07
    -4.657488165602593e-05
    -0.0014397588549081527
    -0.020694132091859732
    -0.1379607207157905
    -0.4127554617426917
    -0.6049683006384088
    -0.5400697556011282
    -0.3463264941831661
    -0.16847279507584792
    -0.06228708783634975
    -0.01707825278652062
    -0.003290787015845679
    -0.00039922265235602014
    -2.3008762793297084e-05
    j = 2
    0.9994255123575291
    0.03389022239783041
    0.00031316883026268885
    j = 3
    0.997736720661584
    -0.06724162582410002
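Before the formal validation presented in the next section, a quick spot check (an addition to this example; the name `xSpot` is chosen here for illustration) compares the model and the metamodel on a single random input point; the two predictions should be close:

.. code-block:: default

    # Spot check (not in the original example): the model and the metamodel
    # should return nearly identical deviations on a random input point.
    xSpot = myDistribution.getSample(1)
    print("model     :", model(xSpot))
    print("metamodel :", metamodel(xSpot))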
.. GENERATED FROM PYTHON SOURCE LINES 165-167

Validate the metamodel
----------------------

.. GENERATED FROM PYTHON SOURCE LINES 169-170

We finally want to validate the tensor metamodel.
This is why we generate a validation sample whose size is equal to 200 and evaluate the output of the model on this sample.

.. GENERATED FROM PYTHON SOURCE LINES 172-176

.. code-block:: default

    sampleSize_test = 200
    X_test = myDistribution.getSample(sampleSize_test)
    Y_test = model(X_test)

.. GENERATED FROM PYTHON SOURCE LINES 177-178

The `MetaModelValidation` class makes the validation easy. To create it, we use the validation samples and the metamodel.

.. GENERATED FROM PYTHON SOURCE LINES 180-182

.. code-block:: default

    val = ot.MetaModelValidation(X_test, Y_test, metamodel)

.. GENERATED FROM PYTHON SOURCE LINES 183-184

The `computePredictivityFactor` method computes the Q2 factor.

.. GENERATED FROM PYTHON SOURCE LINES 186-189

.. code-block:: default

    Q2 = val.computePredictivityFactor()[0]
    Q2

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    0.9997215947699597

.. GENERATED FROM PYTHON SOURCE LINES 190-191

Since the Q2 is larger than 95%, we can say that the quality of the metamodel is acceptable.

.. GENERATED FROM PYTHON SOURCE LINES 193-194

The residuals are the differences between the model and the metamodel outputs.

.. GENERATED FROM PYTHON SOURCE LINES 196-200

.. code-block:: default

    r = val.getResidualSample()
    graph = ot.HistogramFactory().build(r).drawPDF()
    view = viewer.View(graph)

.. image-sg:: /auto_meta_modeling/low_rank_tensors_metamodel/images/sphx_glr_plot_tensor_cantilever_beam_002.png
   :alt: y0 PDF
   :srcset: /auto_meta_modeling/low_rank_tensors_metamodel/images/sphx_glr_plot_tensor_cantilever_beam_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 201-202

We observe that the negative residuals occur with nearly the same frequency as the positive residuals: this is a first sign of good quality.
Furthermore, the residuals are most of the time contained in the [-1, 1] interval, which is a sign of quality given the amplitude of the output (approximately from 5 to 25 cm).

.. GENERATED FROM PYTHON SOURCE LINES 204-205

The `drawValidation` method allows one to compare the observed outputs with the metamodel outputs.

.. GENERATED FROM PYTHON SOURCE LINES 207-212

.. code-block:: default

    graph = val.drawValidation()
    graph.setTitle("Q2 = %.2f%%" % (100*Q2))
    view = viewer.View(graph)
    plt.show()

.. image-sg:: /auto_meta_modeling/low_rank_tensors_metamodel/images/sphx_glr_plot_tensor_cantilever_beam_003.png
   :alt: Q2 = 99.97%
   :srcset: /auto_meta_modeling/low_rank_tensors_metamodel/images/sphx_glr_plot_tensor_cantilever_beam_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 213-216

We observe that the metamodel predictions are close to the model outputs, since most red points are close to the diagonal.
However, when we consider extreme deviations (i.e. less than 10 cm or larger than 20 cm), the quality is less obvious.
Given that the tensor metamodel quality is sensitive to the design of experiments, it might be interesting to consider a Latin Hypercube Sampling (LHS) design to further improve the prediction quality, as sketched below.
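The following sketch (an addition to this example) uses the `LHSExperiment` class to build such a design and re-trains the tensor approximation on it; the names `X_lhs`, `Y_lhs` and `metamodel_lhs` are chosen here for illustration:

.. code-block:: default

    # Sketch (not in the original example): replace the Monte-Carlo design
    # with an LHS design of the same size and re-run the algorithm.
    lhs = ot.LHSExperiment(myDistribution, sampleSize_train)
    X_lhs = lhs.generate()
    Y_lhs = model(X_lhs)
    algo_lhs = ot.TensorApproximationAlgorithm(
        X_lhs, Y_lhs, myDistribution, functionFactory, nk, maxRank)
    algo_lhs.run()
    metamodel_lhs = algo_lhs.getResult().getMetaModel()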
.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 2.169 seconds)


.. _sphx_glr_download_auto_meta_modeling_low_rank_tensors_metamodel_plot_tensor_cantilever_beam.py:


.. only:: html

  .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example


    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_tensor_cantilever_beam.py `

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_tensor_cantilever_beam.ipynb `


.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_