.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_calibration/least_squares_and_gaussian_calibration/plot_calibration_withoutobservedinputs.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_calibration_least_squares_and_gaussian_calibration_plot_calibration_withoutobservedinputs.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_calibration_least_squares_and_gaussian_calibration_plot_calibration_withoutobservedinputs.py:


Calibration without observed inputs
===================================

.. GENERATED FROM PYTHON SOURCE LINES 8-15

The goal of this example is to present the calibration of a parametric model
which does not have observed inputs.
We are going to see how to use the :class:`~openturns.Sample` class and
create an empty sample.
Indeed, this is mandatory in order to fit within the calibration framework
that is used in the library.
In this example there are, however, several outputs to be calibrated.

.. GENERATED FROM PYTHON SOURCE LINES 15-19

.. code-block:: Python

    import openturns as ot
    import openturns.viewer as otv

.. GENERATED FROM PYTHON SOURCE LINES 20-34

The vector of parameters is
:math:`\boldsymbol{\theta} = (a, b, c)^T \in \mathbb{R}^3`.
This model is linear in :math:`\boldsymbol{\theta}` and identifiable.
It is derived from the equation:

.. math::
    y(x) = a + b x + c x^2

at :math:`x = -1.0, -0.6, -0.2, 0.2, 0.6, 1.0`.
In the model, however, the abscissas are fixed and constant.
Therefore, the parametric model has 3 parameters, no input and 6 outputs
:math:`y_1, ..., y_6`.
This produces a set of 5 observations for each output, leading to a total of
5 (observations per output) * 6 (outputs) = 30 observations.

.. GENERATED FROM PYTHON SOURCE LINES 36-50
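The six formulas used in the next block follow directly from substituting each
abscissa into :math:`y(x) = a + b x + c x^2`. As a small illustration in plain
Python (independent of the library, names of our own choosing), the expression
strings could be generated from the abscissas rather than written out by hand:

```python
# Build the six expression strings "a + x * b + x^2 * c" from the abscissas.
# This reproduces the hard-coded formulas of the SymbolicFunction below.
abscissas = [-1.0, -0.6, -0.2, 0.2, 0.6, 1.0]
formulas = [f"a + {x} * b + {round(x * x, 2)} * c" for x in abscissas]
for formula in formulas:
    print(formula)
```

The `round(..., 2)` call keeps the squared abscissas free of floating-point
representation noise such as `0.04000000000000001`.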
.. code-block:: Python

    g = ot.SymbolicFunction(
        ["a", "b", "c"],
        [
            "a + -1.0 * b + 1.0 * c",
            "a + -0.6 * b + 0.36 * c",
            "a + -0.2 * b + 0.04 * c",
            "a + 0.2 * b + 0.04 * c",
            "a + 0.6 * b + 0.36 * c",
            "a + 1.0 * b + 1.0 * c",
        ],
    )
    outputDimension = g.getOutputDimension()
    print(outputDimension)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    6

.. GENERATED FROM PYTHON SOURCE LINES 51-52

We set the true value of the parameters.

.. GENERATED FROM PYTHON SOURCE LINES 54-57

.. code-block:: Python

    trueParameter = ot.Point([12.0, 7.0, -8.0])
    print(trueParameter)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [12,7,-8]

.. GENERATED FROM PYTHON SOURCE LINES 58-60

In order to generate the observed outputs, we create a distribution in
dimension 3, with Dirac (i.e. constant) marginals.

.. GENERATED FROM PYTHON SOURCE LINES 62-66

.. code-block:: Python

    inputRandomVector = ot.ComposedDistribution(
        [ot.Dirac(theta) for theta in trueParameter]
    )

.. GENERATED FROM PYTHON SOURCE LINES 67-68

The candidate value is chosen to be different from the true parameter value.

.. GENERATED FROM PYTHON SOURCE LINES 70-74

.. code-block:: Python

    candidate = ot.Point([8.0, 9.0, -6.0])
    calibratedIndices = [0, 1, 2]
    model = ot.ParametricFunction(g, calibratedIndices, candidate)

.. GENERATED FROM PYTHON SOURCE LINES 75-78

We consider a multivariate Gaussian noise with zero mean, standard deviation
equal to 1.0 and independent copula.
The independent copula implies that the errors of the 6 outputs are independent.

.. GENERATED FROM PYTHON SOURCE LINES 80-87

.. code-block:: Python

    outputObservationNoiseSigma = 1.0
    meanNoise = ot.Point(outputDimension)
    covarianceNoise = [outputObservationNoiseSigma] * outputDimension
    R = ot.IdentityMatrix(outputDimension)
    observationOutputNoise = ot.Normal(meanNoise, covarianceNoise, R)
    print(observationOutputNoise)

.. rst-class:: sphx-glr-script-out
.. code-block:: none

    Normal(mu = [0,0,0,0,0,0], sigma = [1,1,1,1,1,1], R = 6x6
    [[ 1 0 0 0 0 0 ]
     [ 0 1 0 0 0 0 ]
     [ 0 0 1 0 0 0 ]
     [ 0 0 0 1 0 0 ]
     [ 0 0 0 0 1 0 ]
     [ 0 0 0 0 0 1 ]])

.. GENERATED FROM PYTHON SOURCE LINES 88-91

Finally, we generate the outputs by evaluating the exact outputs of the
function and adding the noise.
We use a sample with size equal to 5.

.. GENERATED FROM PYTHON SOURCE LINES 93-101

.. code-block:: Python

    size = 5
    # Generate exact outputs
    inputSample = inputRandomVector.getSample(size)
    outputStress = g(inputSample)
    # Add noise
    sampleNoise = observationOutputNoise.getSample(size)
    outputObservations = outputStress + sampleNoise

.. GENERATED FROM PYTHON SOURCE LINES 102-104

Now is the important part of this script: there are no input observations.
This is why we create a sample of dimension equal to 0.

.. GENERATED FROM PYTHON SOURCE LINES 106-108

.. code-block:: Python

    inputObservations = ot.Sample(0, 0)

.. GENERATED FROM PYTHON SOURCE LINES 109-110

We are now ready to perform the calibration.

.. GENERATED FROM PYTHON SOURCE LINES 112-121

.. code-block:: Python

    algo = ot.LinearLeastSquaresCalibration(
        model, inputObservations, outputObservations, candidate, "SVD"
    )
    algo.run()
    calibrationResult = algo.getResult()
    parameterMAP = calibrationResult.getParameterMAP()
    print(parameterMAP)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [11.9223,7.04246,-8.2184]

.. GENERATED FROM PYTHON SOURCE LINES 122-124

We observe that the estimated parameter is relatively close to the true
parameter value.

.. GENERATED FROM PYTHON SOURCE LINES 126-129

.. code-block:: Python

    print(parameterMAP - trueParameter)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [-0.0776552,0.0424638,-0.218396]

.. GENERATED FROM PYTHON SOURCE LINES 130-135

Graphical validation
--------------------

We now validate the calculation by drawing the exact function and comparing
it to the function with estimated parameters.

.. GENERATED FROM PYTHON SOURCE LINES 137-148
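Before drawing anything, it may help to see what the calibration above solves.
Because the model is linear in the parameters, the least-squares estimate is
the solution of the normal equations for the design matrix built from the
fixed abscissas. The sketch below is a library-independent illustration in
plain Python (all names are our own, and we use noiseless data, so the true
parameters are recovered exactly up to floating-point error):

```python
# Sketch of the linear least-squares problem behind the calibration:
# minimize ||y - X theta||^2 with X the design matrix at the fixed abscissas.
abscissas = [-1.0, -0.6, -0.2, 0.2, 0.6, 1.0]
true_theta = [12.0, 7.0, -8.0]  # (a, b, c), as in the example

# Design matrix: one row [1, x, x^2] per output.
X = [[1.0, x, x * x] for x in abscissas]
# Noiseless observations y = X theta.
y = [sum(X[i][j] * true_theta[j] for j in range(3)) for i in range(6)]

# Normal equations: (X^T X) theta = X^T y.
XtX = [[sum(X[i][r] * X[i][c] for i in range(6)) for c in range(3)] for r in range(3)]
Xty = [sum(X[i][r] * y[i] for i in range(6)) for r in range(3)]

# Solve the 3x3 system by Gauss-Jordan elimination with partial pivoting.
A = [row[:] + [b] for row, b in zip(XtX, Xty)]
for col in range(3):
    pivot = max(range(col, 3), key=lambda r: abs(A[r][col]))
    A[col], A[pivot] = A[pivot], A[col]
    for r in range(3):
        if r != col:
            factor = A[r][col] / A[col][col]
            A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
theta_hat = [A[r][3] / A[r][r] for r in range(3)]
print(theta_hat)  # close to [12.0, 7.0, -8.0] up to rounding
```

With noisy observations, as in the example, the same equations are solved on
the stacked 30 observations, which is why the estimate is close to, but not
exactly equal to, the true parameter.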
.. code-block:: Python

    fullModel = ot.SymbolicFunction(["a", "b", "c", "x"], ["a + b * x + c * x^2"])
    parameterIndices = [0, 1, 2]
    trueFunction = ot.ParametricFunction(fullModel, parameterIndices, trueParameter)
    print(trueFunction)
    beforeCalibrationFunction = ot.ParametricFunction(
        fullModel, parameterIndices, candidate
    )
    print(beforeCalibrationFunction)
    calibratedFunction = ot.ParametricFunction(fullModel, parameterIndices, parameterMAP)
    print(calibratedFunction)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    ParametricEvaluation([a,b,c,x]->[a + b * x + c * x^2], parameters positions=[0,1,2], parameters=[a : 12, b : 7, c : -8], input positions=[3])
    ParametricEvaluation([a,b,c,x]->[a + b * x + c * x^2], parameters positions=[0,1,2], parameters=[a : 8, b : 9, c : -6], input positions=[3])
    ParametricEvaluation([a,b,c,x]->[a + b * x + c * x^2], parameters positions=[0,1,2], parameters=[a : 11.9223, b : 7.04246, c : -8.2184], input positions=[3])

.. GENERATED FROM PYTHON SOURCE LINES 149-152

In order to validate the calibration, we compare the parametric function with
true parameters at given abscissas with the parametric function with
calibrated parameters.

.. GENERATED FROM PYTHON SOURCE LINES 154-192
.. code-block:: Python

    abscissas = [-1.0, -0.6, -0.2, 0.2, 0.6, 1.0]
    xmin = min(abscissas)
    xmax = max(abscissas)
    npoints = 50
    palette = ot.Drawable.BuildDefaultPalette(4)
    graph = ot.Graph("Calibration without observations", "x", "y", True, "lower right")
    # True model
    curve = trueFunction.draw(xmin, xmax, npoints).getDrawable(0)
    curve.setLineStyle("dotted")
    curve.setLegend("True model")
    curve.setColor(palette[0])
    graph.add(curve)
    # Before calibration
    curve = beforeCalibrationFunction.draw(xmin, xmax, npoints).getDrawable(0)
    curve.setLegend("Model before calibration")
    curve.setColor(palette[1])
    curve.setLineStyle(ot.ResourceMap.GetAsString("CalibrationResult-PriorLineStyle"))
    graph.add(curve)
    # After calibration
    curve = calibratedFunction.draw(xmin, xmax, npoints).getDrawable(0)
    curve.setLegend("Model after calibration")
    curve.setColor(palette[2])
    curve.setLineStyle(ot.ResourceMap.GetAsString("CalibrationResult-PosteriorLineStyle"))
    graph.add(curve)
    # Observations
    for i in range(outputDimension):
        cloud = ot.Cloud(ot.Sample([[abscissas[i]]] * size), outputObservations[:, i])
        cloud.setColor(palette[3])
        cloud.setPointStyle(
            ot.ResourceMap.GetAsString("CalibrationResult-ObservationPointStyle")
        )
        if i == 0:
            cloud.setLegend("Observations")
        graph.add(cloud)
    view = otv.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_withoutobservedinputs_001.png
    :alt: Calibration without observations
    :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_withoutobservedinputs_001.png
    :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 193-201

We notice that the calibration produces a good fit to the data.
Still, there is a small discrepancy between the true model and the model
after calibration, but this discrepancy is very small.
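The size of that discrepancy can be checked numerically. The sketch below uses
plain Python and the parameter values printed earlier in this example
(copied by hand, so they are rounded to the displayed precision) to evaluate
the largest pointwise gap between the true and calibrated polynomials on
:math:`[-1, 1]`:

```python
# Quantify the gap between the true and calibrated polynomials on [-1, 1],
# using the (rounded) parameter values printed earlier in this example.
true_theta = [12.0, 7.0, -8.0]
map_theta = [11.9223, 7.04246, -8.2184]  # parameterMAP from the run above


def poly(theta, x):
    """Evaluate a + b x + c x^2 for theta = (a, b, c)."""
    a, b, c = theta
    return a + b * x + c * x * x


grid = [-1.0 + 2.0 * i / 100 for i in range(101)]
max_gap = max(abs(poly(true_theta, x) - poly(map_theta, x)) for x in grid)
print(max_gap)  # about 0.34, reached near x = -1
```

This is small compared to the output range of the model on the same interval
and to the noise standard deviation of 1.0.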
Since the model is linear with respect to the parameters :math:`a`,
:math:`b`, :math:`c`, the :class:`~openturns.LinearLeastSquaresCalibration`
class performs well.
If the model were nonlinear with respect to :math:`a`, :math:`b`, :math:`c`,
then the linear least squares method would not work that well and the
parameters would be estimated with less accuracy.

.. GENERATED FROM PYTHON SOURCE LINES 201-203

.. code-block:: Python

    otv.View.ShowAll()

.. _sphx_glr_download_auto_calibration_least_squares_and_gaussian_calibration_plot_calibration_withoutobservedinputs.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_calibration_withoutobservedinputs.ipynb <plot_calibration_withoutobservedinputs.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_calibration_withoutobservedinputs.py <plot_calibration_withoutobservedinputs.py>`