.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_calibration/least_squares_and_gaussian_calibration/plot_calibration_flooding.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here ` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_calibration_least_squares_and_gaussian_calibration_plot_calibration_flooding.py:


Calibration of the flooding model
=================================

.. GENERATED FROM PYTHON SOURCE LINES 6-70

In this example we are interested in the calibration of the :ref:`flooding model `.
We calibrate the parameters of a flooding model where only the difference between the downstream and upstream riverbed levels can be calibrated.
This example shows how to manage the lack of identifiability in a calibration problem.

Parameters to calibrate
-----------------------

The vector of parameters to calibrate is:

.. math::
   \theta = (K_s, Z_v, Z_m).

The variables to calibrate are :math:`(K_s, Z_v, Z_m)` and are set to the following values:

.. math::
   K_s = 30, \qquad Z_v = 50, \qquad Z_m = 55.

Observations
------------

In this section, we describe the statistical model associated with the :math:`n` observations.
The errors on the water heights follow a normal distribution with zero mean and a standard deviation equal to:

.. math::
   \sigma = 0.1.

Therefore, the observed water heights are:

.. math::
   H_i = G(Q_i, K_s, Z_v, Z_m) + \epsilon_i

for :math:`i=1,...,n` where

.. math::
   \epsilon_i \sim \mathcal{N}(0, \sigma^2)

and we make the hypothesis that the observation errors are independent.
We consider a sample size equal to:

.. math::
   n = 100.

The observations are the couples :math:`\{(Q_i, H_i)\}_{i=1,...,n}`, i.e. each observation is a couple made of the flowrate and the corresponding river height.

Variables
---------

- Q: Input. Observed.
- Ks, Zv, Zm: Inputs. Calibrated.
- H: Output. Observed.

Analysis
--------

In the description of the :ref:`flooding model`, we see that only one parameter can be identified.
Hence, calibrating this model requires some regularization.
We return to this topic when analyzing the singular values of the Jacobian matrix.

.. GENERATED FROM PYTHON SOURCE LINES 72-74

Generate the observations
-------------------------

.. GENERATED FROM PYTHON SOURCE LINES 76-84

.. code-block:: default

    from openturns.usecases import flood_model
    from matplotlib import pylab as plt
    import openturns.viewer as viewer
    import numpy as np
    import openturns as ot
    ot.ResourceMap.SetAsUnsignedInteger('Normal-SmallDimension', 1)
    ot.Log.Show(ot.Log.NONE)

.. GENERATED FROM PYTHON SOURCE LINES 85-86

We load the flooding use case:

.. GENERATED FROM PYTHON SOURCE LINES 86-88

.. code-block:: default

    fm = flood_model.FloodModel()
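Before wrapping this model into an OpenTURNS function, it can help to check the order of magnitude of the water height given by the closed-form expression :math:`H = \left(\frac{Q}{K_s B \sqrt{(Z_m - Z_v)/L}}\right)^{3/5}` at the true parameter values.
The following snippet is a minimal sketch (plain NumPy, not part of the original study); the flowrate value of 1000 m3/s is an arbitrary choice used only for illustration.

.. code-block:: default

    import numpy as np

    # Physical constants of the flooding model (see the model description)
    L = 5.0e3  # length of the river reach (m)
    B = 300.0  # river width (m)
    # True parameter values used to generate the observations
    K_s, Z_v, Z_m = 30.0, 50.0, 55.0
    Q = 1000.0  # arbitrary flowrate (m3/s), for illustration only
    alpha = (Z_m - Z_v) / L  # riverbed slope
    H = (Q / (K_s * B * np.sqrt(alpha))) ** (3.0 / 5.0)
    print(H)  # about 2.1 m for this flowrate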
.. GENERATED FROM PYTHON SOURCE LINES 89-97

We define the model :math:`g` which has 4 inputs and one output, H.
The nonlinear least squares method does not take bounds on the parameters into account.
Therefore, we make sure that the output can be computed whatever the inputs.
The model fails in two situations:

* if :math:`K_s \leq 0`,
* if :math:`Z_m - Z_v < 0`.

In these cases, we return an infinite value, so that the optimization algorithm does not get trapped.

.. GENERATED FROM PYTHON SOURCE LINES 99-113

.. code-block:: default


    def functionFlooding(X):
        L = 5.0e3
        B = 300.0
        Q, K_s, Z_v, Z_m = X
        alpha = (Z_m - Z_v) / L
        if alpha < 0.0 or K_s <= 0.0:
            H = np.inf
        else:
            H = (Q / (K_s * B * np.sqrt(alpha))) ** (3.0 / 5.0)
        return [H]

.. GENERATED FROM PYTHON SOURCE LINES 114-118

.. code-block:: default

    g = ot.PythonFunction(4, 1, functionFlooding)
    g = ot.MemoizeFunction(g)
    g.setOutputDescription(["H (m)"])

.. GENERATED FROM PYTHON SOURCE LINES 119-120

We load the input distribution for :math:`Q`:

.. GENERATED FROM PYTHON SOURCE LINES 122-125

.. code-block:: default

    Q = fm.Q
    print(Q)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    TruncatedDistribution(Gumbel(beta = 558, gamma = 1013), bounds = [0, (19000.8) +inf[)

.. GENERATED FROM PYTHON SOURCE LINES 126-127

Set the parameters to be calibrated.

.. GENERATED FROM PYTHON SOURCE LINES 129-136

.. code-block:: default

    K_s = ot.Dirac(30.0)
    Z_v = ot.Dirac(50.0)
    Z_m = ot.Dirac(55.0)
    K_s.setDescription(["Ks (m^(1/3)/s)"])
    Z_v.setDescription(["Zv (m)"])
    Z_m.setDescription(["Zm (m)"])

.. GENERATED FROM PYTHON SOURCE LINES 137-138

Create the joint input distribution.

.. GENERATED FROM PYTHON SOURCE LINES 140-142

.. code-block:: default

    inputRandomVector = ot.ComposedDistribution([Q, K_s, Z_v, Z_m])

.. GENERATED FROM PYTHON SOURCE LINES 143-144

Create a Monte-Carlo sample of the output H.

.. GENERATED FROM PYTHON SOURCE LINES 146-150

.. code-block:: default

    nbobs = 100
    inputSample = inputRandomVector.getSample(nbobs)
    outputH = g(inputSample)

.. GENERATED FROM PYTHON SOURCE LINES 151-152

Observe the distribution of the output H.

.. GENERATED FROM PYTHON SOURCE LINES 154-157

.. code-block:: default

    graph = ot.HistogramFactory().build(outputH).drawPDF()
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_001.png
   :alt: H (m) PDF
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 158-159

Generate the observation noise and add it to the output of the model.

.. GENERATED FROM PYTHON SOURCE LINES 161-166

.. code-block:: default

    sigmaObservationNoiseH = 0.1  # (m)
    noiseH = ot.Normal(0., sigmaObservationNoiseH)
    sampleNoiseH = noiseH.getSample(nbobs)
    Hobs = outputH + sampleNoiseH

.. GENERATED FROM PYTHON SOURCE LINES 167-168

Plot the H observations versus the Q observations.

.. GENERATED FROM PYTHON SOURCE LINES 170-172

.. code-block:: default

    Qobs = inputSample[:, 0]

.. GENERATED FROM PYTHON SOURCE LINES 173-178

.. code-block:: default

    graph = ot.Graph("Observations", "Q (m3/s)", "H (m)", True)
    cloud = ot.Cloud(Qobs, Hobs)
    graph.add(cloud)
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_002.png
   :alt: Observations
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 179-181

Setting the calibration parameters
----------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 183-184

Define the reference values of the :math:`\theta` parameter.
In the Bayesian framework, this is called the mean of the *prior* normal distribution.
In the data assimilation framework, this is called the *background*.

.. GENERATED FROM PYTHON SOURCE LINES 186-191

.. code-block:: default

    KsInitial = 20.
    ZvInitial = 49.
    ZmInitial = 51.
    thetaPrior = [KsInitial, ZvInitial, ZmInitial]

.. GENERATED FROM PYTHON SOURCE LINES 192-193

The following statement creates the calibrated function from the model.
The calibrated parameters :math:`K_s`, :math:`Z_v`, :math:`Z_m` are at indices 1, 2, 3 in the input arguments of the model.

.. GENERATED FROM PYTHON SOURCE LINES 195-198

.. code-block:: default

    calibratedIndices = [1, 2, 3]
    mycf = ot.ParametricFunction(g, calibratedIndices, thetaPrior)
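To see what the parametric function does (a quick check, not part of the original study), we can print the frozen parameter values and evaluate the function on a single flowrate: once the indices 1, 2 and 3 are frozen, `mycf` is a function of :math:`Q` only.
The flowrate value below is an arbitrary choice for illustration.

.. code-block:: default

    # mycf is a function of Q only; (Ks, Zv, Zm) now play the role of a parameter.
    print(mycf.getParameter())             # the reference point thetaPrior
    print(mycf.getParameterDescription())  # names of the frozen inputs
    print(mycf([1000.0]))                  # water height for an arbitrary flowrate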
.. GENERATED FROM PYTHON SOURCE LINES 199-201

Calibration with linear least squares
-------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 203-204

The `LinearLeastSquaresCalibration` class performs the linear least squares calibration by linearizing the model in the neighbourhood of the reference point.

.. GENERATED FROM PYTHON SOURCE LINES 206-208

.. code-block:: default

    algo = ot.LinearLeastSquaresCalibration(mycf, Qobs, Hobs, thetaPrior, "SVD")

.. GENERATED FROM PYTHON SOURCE LINES 209-210

The `run` method computes the solution of the problem.

.. GENERATED FROM PYTHON SOURCE LINES 212-214

.. code-block:: default

    algo.run()

.. GENERATED FROM PYTHON SOURCE LINES 215-217

.. code-block:: default

    calibrationResult = algo.getResult()

.. GENERATED FROM PYTHON SOURCE LINES 218-219

The `getParameterMAP` method returns the maximum of the posterior distribution of :math:`\theta`.

.. GENERATED FROM PYTHON SOURCE LINES 221-224

.. code-block:: default

    thetaStar = calibrationResult.getParameterMAP()
    print(thetaStar)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [9.19069e+08,1.52807e+23,1.52807e+23]

.. GENERATED FROM PYTHON SOURCE LINES 225-226

In this case, the optimum is very far from the reference value of :math:`\theta`: the calibrated values are many orders of magnitude too large to be physically meaningful.
In fact, there is an identification problem because the Jacobian matrix is rank-degenerate.

.. GENERATED FROM PYTHON SOURCE LINES 228-230

Diagnostic of the identification issue
--------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 232-235

In this section, we show how to diagnose the identification problem.

The `getParameterPosterior` method returns the posterior normal distribution of :math:`\theta`.

.. GENERATED FROM PYTHON SOURCE LINES 237-240

.. code-block:: default

    distributionPosterior = calibrationResult.getParameterPosterior()
    print(distributionPosterior)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Normal(mu = [9.19069e+08,1.52807e+23,1.52807e+23], sigma = [2.18636e+25,1.71881e+31,1.71881e+31], R = [[  1            1.88703e-24 -1.88703e-24 ]
     [  1.88703e-24  1            1           ]
     [ -1.88703e-24  1            1           ]])

.. GENERATED FROM PYTHON SOURCE LINES 241-244

We see that the diagonal of the posterior covariance matrix is huge.

Let us compute a 95% confidence interval for the solution :math:`\theta^\star`.

.. GENERATED FROM PYTHON SOURCE LINES 246-249

.. code-block:: default

    print(distributionPosterior.computeBilateralConfidenceIntervalWithMarginalProbability(
        0.95)[0])

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [-4.89311e+25, 4.89311e+25]
    [-3.84673e+31, 3.84673e+31]
    [-3.84673e+31, 3.84673e+31]

.. GENERATED FROM PYTHON SOURCE LINES 250-251

The confidence interval is *very* large.
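Another way to look at the same pathology is to inspect the posterior marginal standard deviations and the correlation matrix directly; this is a small sketch based on the generic `Distribution` accessors.
Huge standard deviations together with a perfectly correlated :math:`(Z_v, Z_m)` pair are the signature of a non-identifiable problem.

.. code-block:: default

    # Marginal standard deviations of the posterior: huge values.
    print(distributionPosterior.getStandardDeviation())
    # Correlation matrix: Zv and Zm are perfectly correlated with each other.
    print(distributionPosterior.getCorrelation())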
In order to clarify the situation, we compute the Jacobian matrix of the model at the candidate point.

.. GENERATED FROM PYTHON SOURCE LINES 253-260

.. code-block:: default

    mycf.setParameter(thetaPrior)
    thetaDim = len(thetaPrior)
    jacobianMatrix = ot.Matrix(nbobs, thetaDim)
    for i in range(nbobs):
        jacobianMatrix[i, :] = mycf.parameterGradient(Qobs[i]).transpose()
    print(jacobianMatrix[0:5, :])

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    5x3
    [[ -0.123861   0.619303  -0.619303  ]
     [ -0.178446   0.892231  -0.892231  ]
     [ -0.0557111  0.278556  -0.278556  ]
     [ -0.188611   0.943055  -0.943055  ]
     [ -0.113529   0.567643  -0.567643  ]]

.. GENERATED FROM PYTHON SOURCE LINES 261-262

The rank of the problem can be seen from the singular values of the Jacobian matrix.

.. GENERATED FROM PYTHON SOURCE LINES 264-266

.. code-block:: default

    print(jacobianMatrix.computeSingularValues())

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [9.02765,1.53625e-10,5.37508e-25]

.. GENERATED FROM PYTHON SOURCE LINES 267-283

We can see that there are two singular values which are relatively close to zero.
This explains why the Jacobian matrix is close to being rank-degenerate.
Moreover, this allows one to compute the actual dimensionality of the problem.
The algorithm we use computes the singular values in descending order.
Moreover, by definition, the singular values are nonnegative.
We see that the first singular value is close to :math:`10`
and the others are very close to :math:`0` in comparison.
This implies that the (numerical) rank of the Jacobian matrix is 1,
even if there are 3 parameters.

Hence, only one parameter can be identified, be it :math:`K_s`, :math:`Z_v` or :math:`Z_m`.
The choice of the particular parameter to identify is free.
However, in hydraulic studies, the parameter :math:`K_s` is classically calibrated,
while :math:`Z_v` and :math:`Z_m` are left constant.
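As a complement, the numerical rank can be computed explicitly by counting the singular values that are not negligible with respect to the largest one.
This is a minimal sketch; the tolerance value is an arbitrary choice made for this illustration.

.. code-block:: default

    # Numerical rank of the Jacobian, i.e. the number of identifiable directions.
    singularValues = jacobianMatrix.computeSingularValues()
    tolerance = 1.0e-8  # arbitrary relative threshold for this illustration
    numericalRank = sum(1 for s in singularValues if s > tolerance * singularValues[0])
    print(numericalRank)  # expected to be 1 here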
.. GENERATED FROM PYTHON SOURCE LINES 285-287

Conclusion of the linear least squares calibration
--------------------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 289-293

There are several methods to solve the problem.

* Given that the problem is not identifiable, we can use some regularization method.
  Two methods are provided in the library: the Gaussian linear least squares method (`GaussianLinearCalibration`)
  and the Gaussian nonlinear least squares method (`GaussianNonLinearCalibration`).
* We can change the problem, replacing it with a problem which is identifiable.
  In the flooding model, we can view :math:`Z_v` and :math:`Z_m` as constants and calibrate :math:`K_s` only.

.. GENERATED FROM PYTHON SOURCE LINES 295-297

Calibration with nonlinear least squares
----------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 299-300

The `NonLinearLeastSquaresCalibration` class performs the nonlinear least squares calibration by minimizing the squared Euclidean norm between the predictions and the observations.

.. GENERATED FROM PYTHON SOURCE LINES 302-304

.. code-block:: default

    algo = ot.NonLinearLeastSquaresCalibration(mycf, Qobs, Hobs, thetaPrior)

.. GENERATED FROM PYTHON SOURCE LINES 305-306

The `run` method computes the solution of the problem.

.. GENERATED FROM PYTHON SOURCE LINES 308-310

.. code-block:: default

    algo.run()

.. GENERATED FROM PYTHON SOURCE LINES 311-313

.. code-block:: default

    calibrationResult = algo.getResult()

.. GENERATED FROM PYTHON SOURCE LINES 314-316

Analysis of the results
-----------------------

.. GENERATED FROM PYTHON SOURCE LINES 318-319

The `getParameterMAP` method returns the maximum of the posterior distribution of :math:`\theta`.

.. GENERATED FROM PYTHON SOURCE LINES 321-324

.. code-block:: default

    thetaMAP = calibrationResult.getParameterMAP()
    print(thetaMAP)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [27.7266,47.0401,52.9599]

.. GENERATED FROM PYTHON SOURCE LINES 325-328

We can compute a 95% confidence interval of the parameter :math:`\theta^\star`.
This confidence interval is based on a bootstrap sample of size 100
(as long as the value of the `ResourceMap` key "NonLinearLeastSquaresCalibration-BootstrapSize" is unchanged).
It reflects the sensitivity of the optimum to the variability in the observations.

.. GENERATED FROM PYTHON SOURCE LINES 330-334

.. code-block:: default

    thetaPosterior = calibrationResult.getParameterPosterior()
    print(thetaPosterior.computeBilateralConfidenceIntervalWithMarginalProbability(
        0.95)[0])

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [27.5629, 27.8728]
    [46.9777, 47.11]
    [52.89, 53.0223]

.. GENERATED FROM PYTHON SOURCE LINES 335-336

In this case, all three parameters are estimated with narrow bootstrap confidence intervals.
Keep in mind that these intervals only measure the sensitivity of the optimum to the noise in the observations;
they do not reveal the identifiability issue: the estimates of :math:`Z_v` and :math:`Z_m` remain far from the true values (50 and 55), even though their intervals are small.

.. GENERATED FROM PYTHON SOURCE LINES 338-342

.. code-block:: default

    graph = calibrationResult.drawObservationsVsInputs()
    graph.setLegendPosition("topleft")
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_003.png
   :alt: plot calibration flooding
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 343-344

We see that there is a good fit after calibration, since the predictions after calibration (i.e. the green crosses) are close to the observations (i.e. the blue crosses).

.. GENERATED FROM PYTHON SOURCE LINES 346-349

.. code-block:: default

    graph = calibrationResult.drawObservationsVsPredictions()
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_004.png
   :alt: plot calibration flooding
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_004.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 350-351

We see that there is a much better fit after calibration, since the predictions are close to the diagonal of the graph.

.. GENERATED FROM PYTHON SOURCE LINES 353-356

.. code-block:: default

    observationError = calibrationResult.getObservationsError()
    print(observationError)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Normal(mu = 0.000199528, sigma = 0.0981944)

.. GENERATED FROM PYTHON SOURCE LINES 357-358

We can see that the estimated observation error has a mean close to zero and a standard deviation close to 0.1, consistent with the noise used to generate the observations.

.. GENERATED FROM PYTHON SOURCE LINES 360-364

.. code-block:: default

    graph = calibrationResult.drawResiduals()
    graph.setLegendPosition("topleft")
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_005.png
   :alt: , Residual analysis
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_005.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 365-368

The analysis of the residuals shows that the distribution is centered on zero and symmetric.
This indicates that the calibration performed well.
Moreover, the distribution of the residuals is close to being Gaussian.
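To complement this visual check with a quantitative one, we could test the normality of the residuals.
The following is a sketch, not part of the original study: it recomputes residuals as the difference between the observations and the predictions at the calibrated parameters, then applies the Anderson-Darling normality test from the library.

.. code-block:: default

    # Recompute the residuals at the calibrated parameters and test their normality.
    mycf.setParameter(thetaMAP)
    residuals = Hobs - mycf(Qobs)
    testResult = ot.NormalityTest.AndersonDarlingNormal(residuals)
    print(testResult.getPValue())  # a large p-value is consistent with Gaussian residuals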
.. GENERATED FROM PYTHON SOURCE LINES 370-373

.. code-block:: default

    graph = calibrationResult.drawParameterDistributions()
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_006.png
   :alt: plot calibration flooding
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_006.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 374-376

Gaussian linear calibration
---------------------------

.. GENERATED FROM PYTHON SOURCE LINES 378-379

The assumed standard deviation of the observation error.

.. GENERATED FROM PYTHON SOURCE LINES 381-383

.. code-block:: default

    sigmaH = 0.5  # (m)

.. GENERATED FROM PYTHON SOURCE LINES 384-385

Define the covariance matrix of the observation error on the output H of the model.

.. GENERATED FROM PYTHON SOURCE LINES 387-390

.. code-block:: default

    errorCovariance = ot.CovarianceMatrix(1)
    errorCovariance[0, 0] = sigmaH**2

.. GENERATED FROM PYTHON SOURCE LINES 391-392

Define the covariance matrix of the parameters :math:`\theta` to calibrate.

.. GENERATED FROM PYTHON SOURCE LINES 394-398

.. code-block:: default

    sigmaKs = 5.
    sigmaZv = 1.
    sigmaZm = 1.

.. GENERATED FROM PYTHON SOURCE LINES 399-405

.. code-block:: default

    sigma = ot.CovarianceMatrix(3)
    sigma[0, 0] = sigmaKs**2
    sigma[1, 1] = sigmaZv**2
    sigma[2, 2] = sigmaZm**2
    print(sigma)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [[ 25  0  0 ]
     [  0  1  0 ]
     [  0  0  1 ]]

.. GENERATED FROM PYTHON SOURCE LINES 406-407

The `GaussianLinearCalibration` class performs Gaussian linear calibration by linearizing the model in the neighbourhood of the prior.

.. GENERATED FROM PYTHON SOURCE LINES 409-412

.. code-block:: default

    algo = ot.GaussianLinearCalibration(
        mycf, Qobs, Hobs, thetaPrior, sigma, errorCovariance, "SVD")

.. GENERATED FROM PYTHON SOURCE LINES 413-414

The `run` method computes the solution of the problem.

.. GENERATED FROM PYTHON SOURCE LINES 416-418

.. code-block:: default

    algo.run()

.. GENERATED FROM PYTHON SOURCE LINES 419-421

.. code-block:: default

    calibrationResult = algo.getResult()

.. GENERATED FROM PYTHON SOURCE LINES 422-424

Analysis of the results
-----------------------

.. GENERATED FROM PYTHON SOURCE LINES 426-427

The `getParameterMAP` method returns the maximum of the posterior distribution of :math:`\theta`.

.. GENERATED FROM PYTHON SOURCE LINES 429-432

.. code-block:: default

    thetaStar = calibrationResult.getParameterMAP()
    print(thetaStar)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [24.5061,48.0988,51.9012]

.. GENERATED FROM PYTHON SOURCE LINES 433-437

.. code-block:: default

    graph = calibrationResult.drawObservationsVsInputs()
    graph.setLegendPosition("topleft")
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_007.png
   :alt: plot calibration flooding
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_007.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 438-439

We see that the output of the model after calibration is closer to the observations.
However, there is still a distance from the outputs of the model to the observations.
This indicates that this method, with these settings, does not calibrate the model satisfactorily.
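To put a number on the remaining misfit (a sketch, not part of the original study), we can evaluate the model at the Gaussian linear MAP estimate and compare the root mean squared prediction error with the 0.1 m observation noise.

.. code-block:: default

    # Quantify the residual misfit after the Gaussian linear calibration.
    mycf.setParameter(calibrationResult.getParameterMAP())
    predictions = mycf(Qobs)
    rmse = np.sqrt(np.mean(np.array(Hobs - predictions) ** 2))
    print(rmse)  # expected to be well above the 0.1 m observation noise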
.. GENERATED FROM PYTHON SOURCE LINES 441-444

.. code-block:: default

    graph = calibrationResult.drawObservationsVsPredictions()
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_008.png
   :alt: plot calibration flooding
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_008.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 445-446

In this case, the fit is better after calibration, but not perfect.
Indeed, the cloud of points after calibration is not centered on the diagonal.

.. GENERATED FROM PYTHON SOURCE LINES 448-452

.. code-block:: default

    graph = calibrationResult.drawResiduals()
    graph.setLegendPosition("topleft")
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_009.png
   :alt: , Residual analysis
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_009.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 453-454

We see that the distribution of the residuals is not centered on zero: the mean residual is approximately :math:`-0.5`,
which means that the predictions are, on average, larger than the observations.
This confirms that this method does not calibrate the model satisfactorily in this particular case.

.. GENERATED FROM PYTHON SOURCE LINES 456-457

The `getParameterPosterior` method returns the posterior normal distribution of :math:`\theta`.

.. GENERATED FROM PYTHON SOURCE LINES 459-462

.. code-block:: default

    distributionPosterior = calibrationResult.getParameterPosterior()
    print(distributionPosterior)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Normal(mu = [24.5061,48.0988,51.9012], sigma = [4.08461,0.816921,0.816921], R = [[  1         0.49844  -0.49844  ]
     [  0.49844   1         0.49844  ]
     [ -0.49844   0.49844   1        ]])

.. GENERATED FROM PYTHON SOURCE LINES 463-464

We can compute a 95% confidence interval of the parameter :math:`\theta^\star`.

.. GENERATED FROM PYTHON SOURCE LINES 466-469

.. code-block:: default

    print(distributionPosterior.computeBilateralConfidenceIntervalWithMarginalProbability(
        0.95)[0])

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [14.9368, 34.0755]
    [46.1849, 50.0126]
    [49.9874, 53.8151]

.. GENERATED FROM PYTHON SOURCE LINES 470-471

We see that there is a large uncertainty on the value of the parameter :math:`K_s`, which can be as small as :math:`15` and as large as :math:`34`.

.. GENERATED FROM PYTHON SOURCE LINES 473-474

We can compare the prior and posterior distributions of the marginals of :math:`\theta`.

.. GENERATED FROM PYTHON SOURCE LINES 476-479

.. code-block:: default

    graph = calibrationResult.drawParameterDistributions()
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_010.png
   :alt: plot calibration flooding
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_010.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 480-481

The two distributions are different, which shows that the calibration is sensitive to the observations
(if it were not, the two distributions would be superimposed).
Moreover, the two distributions remain quite close, which implies that the prior distribution has played a role in the calibration
(otherwise the two distributions would be completely different, indicating that only the observations were taken into account).
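The influence of the prior can also be quantified.
The following sketch compares the prior and posterior marginal standard deviations, parameter by parameter, to show how much each parameter has been informed by the observations.

.. code-block:: default

    # Compare prior and posterior standard deviations, parameter by parameter.
    priorStd = [sigmaKs, sigmaZv, sigmaZm]
    posteriorStd = distributionPosterior.getStandardDeviation()
    for name, before, after in zip(["Ks", "Zv", "Zm"], priorStd, posteriorStd):
        print(name, before, after)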
.. GENERATED FROM PYTHON SOURCE LINES 483-485

Gaussian nonlinear calibration
------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 487-488

The `GaussianNonLinearCalibration` class performs Gaussian nonlinear calibration.

.. GENERATED FROM PYTHON SOURCE LINES 490-493

.. code-block:: default

    algo = ot.GaussianNonLinearCalibration(
        mycf, Qobs, Hobs, thetaPrior, sigma, errorCovariance)

.. GENERATED FROM PYTHON SOURCE LINES 494-495

The `run` method computes the solution of the problem.

.. GENERATED FROM PYTHON SOURCE LINES 497-499

.. code-block:: default

    algo.run()

.. GENERATED FROM PYTHON SOURCE LINES 500-502

.. code-block:: default

    calibrationResult = algo.getResult()

.. GENERATED FROM PYTHON SOURCE LINES 503-505

Analysis of the results
-----------------------

.. GENERATED FROM PYTHON SOURCE LINES 507-508

The `getParameterMAP` method returns the maximum of the posterior distribution of :math:`\theta`.

.. GENERATED FROM PYTHON SOURCE LINES 510-513

.. code-block:: default

    thetaStar = calibrationResult.getParameterMAP()
    print(thetaStar)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [30.553,47.6305,52.3695]

.. GENERATED FROM PYTHON SOURCE LINES 514-518

.. code-block:: default

    graph = calibrationResult.drawObservationsVsInputs()
    graph.setLegendPosition("topleft")
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_011.png
   :alt: plot calibration flooding
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_011.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 519-520

We see that the output of the model after calibration goes through the middle of the observations: the calibration seems correct.

.. GENERATED FROM PYTHON SOURCE LINES 522-525

.. code-block:: default

    graph = calibrationResult.drawObservationsVsPredictions()
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_012.png
   :alt: plot calibration flooding
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_012.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 526-527

The fit is excellent after calibration.
Indeed, the cloud of points after calibration is on the diagonal.

.. GENERATED FROM PYTHON SOURCE LINES 529-532

.. code-block:: default

    graph = calibrationResult.drawResiduals()
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_013.png
   :alt: , Residual analysis
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_013.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 533-534

We see that the histogram of the residuals is centered on zero.
This confirms that the calibration performed correctly.

.. GENERATED FROM PYTHON SOURCE LINES 536-537

The `getParameterPosterior` method returns the posterior normal distribution of :math:`\theta`.

.. GENERATED FROM PYTHON SOURCE LINES 539-541

.. code-block:: default

    distributionPosterior = calibrationResult.getParameterPosterior()
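Note that, by default, this posterior distribution is estimated from a bootstrap sample of MAP estimates.
The size of that sample is controlled by a `ResourceMap` key, which can be checked as follows (a sketch); this mechanism is discussed further at the end of the example.

.. code-block:: default

    # Bootstrap sample size used to build the posterior distribution (default: 100).
    print(ot.ResourceMap.GetAsUnsignedInteger(
        "GaussianNonLinearCalibration-BootstrapSize"))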
.. GENERATED FROM PYTHON SOURCE LINES 542-543

We can compute a 95% confidence interval of the parameter :math:`\theta^\star`.

.. GENERATED FROM PYTHON SOURCE LINES 545-548

.. code-block:: default

    print(distributionPosterior.computeBilateralConfidenceIntervalWithMarginalProbability(
        0.95)[0])

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [30.5364, 31.0721]
    [47.5642, 47.6325]
    [52.3675, 52.4358]

.. GENERATED FROM PYTHON SOURCE LINES 549-550

We see that the uncertainty on the value of all the parameters is small.

.. GENERATED FROM PYTHON SOURCE LINES 552-553

We can compare the prior and posterior distributions of the marginals of :math:`\theta`.

.. GENERATED FROM PYTHON SOURCE LINES 555-558

.. code-block:: default

    graph = calibrationResult.drawParameterDistributions()
    view = viewer.View(graph)

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_014.png
   :alt: plot calibration flooding
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_014.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 559-560

The two distributions are very different, with a spiky posterior distribution.
This shows that the calibration is very sensitive to the observations.

.. GENERATED FROM PYTHON SOURCE LINES 562-571

Tuning the posterior distribution estimation
--------------------------------------------

The "GaussianNonLinearCalibration-BootstrapSize" key controls the posterior distribution estimation.

* If "GaussianNonLinearCalibration-BootstrapSize" > 0 (by default it is equal to 100),
  then a bootstrap resampling algorithm is used to estimate the dispersion of the MAP estimator.
  This allows one to see the variability of the estimator with respect to the finite observation sample.
* If "GaussianNonLinearCalibration-BootstrapSize" is zero, then the Gaussian linear calibration
  estimator is used (i.e. the `GaussianLinearCalibration` class) at the optimum.
  This is called the Laplace approximation.

We must configure the key before creating the object (otherwise changing the parameter does not change the result).

.. GENERATED FROM PYTHON SOURCE LINES 573-576

.. code-block:: default

    ot.ResourceMap.SetAsUnsignedInteger(
        "GaussianNonLinearCalibration-BootstrapSize", 0)

.. GENERATED FROM PYTHON SOURCE LINES 577-580

.. code-block:: default

    algo = ot.GaussianNonLinearCalibration(
        mycf, Qobs, Hobs, thetaPrior, sigma, errorCovariance)

.. GENERATED FROM PYTHON SOURCE LINES 581-583

.. code-block:: default

    algo.run()

.. GENERATED FROM PYTHON SOURCE LINES 584-586

.. code-block:: default

    calibrationResult = algo.getResult()

.. GENERATED FROM PYTHON SOURCE LINES 587-591

.. code-block:: default

    graph = calibrationResult.drawParameterDistributions()
    viewer = viewer.View(graph)
    plt.show()

.. image-sg:: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_015.png
   :alt: plot calibration flooding
   :srcset: /auto_calibration/least_squares_and_gaussian_calibration/images/sphx_glr_plot_calibration_flooding_015.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 592-593

As we can see, this does not change the posterior distribution much: it remains spiky.


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes  4.839 seconds)


.. _sphx_glr_download_auto_calibration_least_squares_and_gaussian_calibration_plot_calibration_flooding.py:


.. only :: html

 .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example
  .. container:: sphx-glr-download sphx-glr-download-python

     :download:`Download Python source code: plot_calibration_flooding.py `

  .. container:: sphx-glr-download sphx-glr-download-jupyter

     :download:`Download Jupyter notebook: plot_calibration_flooding.ipynb `

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_