.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_meta_modeling/kriging_metamodel/plot_kriging_hyperparameters_optimization.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_auto_meta_modeling_kriging_metamodel_plot_kriging_hyperparameters_optimization.py>` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_meta_modeling_kriging_metamodel_plot_kriging_hyperparameters_optimization.py:


Kriging: configure the optimization solver
==========================================

.. GENERATED FROM PYTHON SOURCE LINES 6-31

The goal of this example is to show how to fine-tune the optimization solver used to estimate the hyperparameters of the covariance model of the kriging metamodel.

Introduction
------------

In a kriging metamodel, several types of parameters are estimated from the data.

* The parameters :math:`{\bf \beta}` associated with the deterministic trend. These parameters are computed with linear least squares.
* The parameters of the covariance model. The covariance model has two types of parameters.

  * The amplitude parameter :math:`\sigma^2` is estimated from the data. If the output dimension is equal to one, this parameter is estimated with the analytic variance estimator which maximizes the likelihood. Otherwise, if the output dimension is greater than one or if the analytical estimator is disabled, this parameter is estimated by numerical optimization.
  * The other parameters :math:`{\bf \theta}\in\mathbb{R}^d`, where :math:`d` is the spatial dimension of the covariance model. Often, the parameter :math:`{\bf \theta}` is a scale parameter. This step involves an optimization algorithm.

All these parameters are estimated with the `GeneralLinearModelAlgorithm` class.

The estimation of the :math:`{\bf \theta}` parameters is the step with the highest CPU cost. Moreover, the maximization of the likelihood may be difficult, e.g. because of many local maxima or even the non-convergence of the optimization algorithm. In these cases, it may be useful to fine-tune the optimization algorithm so that its convergence is, hopefully, improved.

Furthermore, there are several situations in which the optimization can be initialized or completely bypassed. Suppose, for example, that we have already created an initial kriging metamodel with :math:`N` points and we want to add a single new point.

* It might be interesting to initialize the optimization algorithm with the optimum found for the previous kriging metamodel: this may reduce the number of iterations required to maximize the likelihood.
* We may as well completely bypass the optimization step: if the previous covariance model was correctly estimated, updating the parameters may not significantly improve the estimates.

This is why the goal of this example is to see how to configure the optimization of the hyperparameters of a kriging metamodel.

.. GENERATED FROM PYTHON SOURCE LINES 33-35

Definition of the model
-----------------------

.. GENERATED FROM PYTHON SOURCE LINES 37-40

.. code-block:: default

    import openturns as ot
    ot.Log.Show(ot.Log.NONE)

.. GENERATED FROM PYTHON SOURCE LINES 41-42

We define the symbolic function which evaluates the output Y depending on the inputs E, F, L and I.

.. GENERATED FROM PYTHON SOURCE LINES 44-46

.. code-block:: default

    model = ot.SymbolicFunction(["E", "F", "L", "I"], ["F*L^3/(3*E*I)"])
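As a quick sanity check, we can evaluate the model at a single point. This check is not part of the original script, and the input values below are arbitrary nominal values chosen for illustration.

.. code-block:: default

    # Illustrative check: evaluate the beam deflection at a nominal point.
    # The values below are arbitrary and only for demonstration.
    x = ot.Point([3.0e7, 3.0e4, 255.0, 400.0])  # E, F, L, I
    print(model(x))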
.. GENERATED FROM PYTHON SOURCE LINES 47-48

Then we define the distribution of the input random vector.

.. GENERATED FROM PYTHON SOURCE LINES 50-51

Young's modulus E

.. GENERATED FROM PYTHON SOURCE LINES 51-64

.. code-block:: default

    E = ot.Beta(0.9, 2.27, 2.5e7, 5.0e7)  # in N/m^2
    E.setDescription("E")
    # Load F
    F = ot.LogNormal()  # in N
    F.setParameter(ot.LogNormalMuSigma()([30.e3, 9e3, 15.e3]))
    F.setDescription("F")
    # Length L
    L = ot.Uniform(250., 260.)  # in cm
    L.setDescription("L")
    # Moment of inertia I
    I = ot.Beta(2.5, 1.5, 310, 450)  # in cm^4
    I.setDescription("I")

.. GENERATED FROM PYTHON SOURCE LINES 65-66

Finally, we define the dependency using a `NormalCopula`.

.. GENERATED FROM PYTHON SOURCE LINES 68-75

.. code-block:: default

    dim = 4  # number of inputs
    R = ot.CorrelationMatrix(dim)
    R[2, 3] = -0.2
    myCopula = ot.NormalCopula(
        ot.NormalCopula.GetCorrelationFromSpearmanCorrelation(R))
    myDistribution = ot.ComposedDistribution([E, F, L, I], myCopula)

.. GENERATED FROM PYTHON SOURCE LINES 76-78

Create the design of experiments
--------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 80-81

We consider a simple Monte-Carlo design of experiments: we generate an input sample with the `getSample` method of the distribution, then evaluate the output with the `model` function.

.. GENERATED FROM PYTHON SOURCE LINES 83-87

.. code-block:: default

    sampleSize_train = 10
    X_train = myDistribution.getSample(sampleSize_train)
    Y_train = model(X_train)

.. GENERATED FROM PYTHON SOURCE LINES 88-90

Create the metamodel
--------------------

.. GENERATED FROM PYTHON SOURCE LINES 92-93

In order to create the kriging metamodel, we first select a constant trend with the `ConstantBasisFactory` class. Then we use a squared exponential covariance model. Finally, we use the `KrigingAlgorithm` class to create the kriging metamodel, taking the training sample, the covariance model and the trend basis as input arguments.

.. GENERATED FROM PYTHON SOURCE LINES 95-103

.. code-block:: default

    dimension = myDistribution.getDimension()
    basis = ot.ConstantBasisFactory(dimension).build()
    covarianceModel = ot.SquaredExponential([1.]*dimension, [1.0])
    algo = ot.KrigingAlgorithm(X_train, Y_train, covarianceModel, basis)
    algo.run()
    result = algo.getResult()
    krigingMetamodel = result.getMetaModel()

.. GENERATED FROM PYTHON SOURCE LINES 104-107

The `run` method has optimized the hyperparameters of the metamodel.
We can then print the constant trend of the metamodel, which has been estimated with the least squares method.

.. GENERATED FROM PYTHON SOURCE LINES 109-111

.. code-block:: default

    result.getTrendCoefficients()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [class=Point name=Unnamed dimension=1 values=[13.3786]]

.. GENERATED FROM PYTHON SOURCE LINES 112-113

We can also print the hyperparameters of the covariance model, which have been estimated by maximizing the likelihood.

.. GENERATED FROM PYTHON SOURCE LINES 115-118

.. code-block:: default

    basic_covariance_model = result.getCovarianceModel()
    basic_covariance_model

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    SquaredExponential(scale=[1,1,1,1], amplitude=[5.08434])



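Beyond the string representation, the hyperparameters can be queried individually through the standard `CovarianceModel` accessors. A short illustration, not part of the original script:

.. code-block:: default

    # Inspect the optimized hyperparameters individually.
    print(basic_covariance_model.getScale())          # the theta parameters
    print(basic_covariance_model.getAmplitude())      # the sigma parameter
    print(basic_covariance_model.getFullParameter())  # scale, amplitude and extra parameters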
.. GENERATED FROM PYTHON SOURCE LINES 119-121

Get the optimizer algorithm
---------------------------

.. GENERATED FROM PYTHON SOURCE LINES 123-124

The `getOptimizationAlgorithm` method returns the optimization algorithm used to optimize the :math:`{\bf \theta}` parameters of the `SquaredExponential` covariance model.

.. GENERATED FROM PYTHON SOURCE LINES 126-128

.. code-block:: default

    solver = algo.getOptimizationAlgorithm()

.. GENERATED FROM PYTHON SOURCE LINES 129-130

Get the default optimizer.

.. GENERATED FROM PYTHON SOURCE LINES 132-135

.. code-block:: default

    solverImplementation = solver.getImplementation()
    solverImplementation.getClassName()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    'TNC'

.. GENERATED FROM PYTHON SOURCE LINES 136-137

The `getOptimizationBounds` method returns the bounds. The dimension of these bounds corresponds to the spatial dimension of the covariance model. In the metamodeling context, this corresponds to the input dimension of the model.

.. GENERATED FROM PYTHON SOURCE LINES 139-142

.. code-block:: default

    bounds = algo.getOptimizationBounds()
    bounds.getDimension()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    4

.. GENERATED FROM PYTHON SOURCE LINES 143-146

.. code-block:: default

    lbounds = bounds.getLowerBound()
    lbounds

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [0.01,0.01,0.01,0.01]



.. GENERATED FROM PYTHON SOURCE LINES 147-150

.. code-block:: default

    ubounds = bounds.getUpperBound()
    ubounds

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [2.27497e+07,81879.1,18.2918,178.813]



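If the default TNC solver is not satisfactory, another solver can be substituted before calling `run`. A minimal sketch, assuming your OpenTURNS build includes NLopt support; the choice of the `LN_COBYLA` algorithm is purely illustrative:

.. code-block:: default

    # Hedged sketch: configure a different optimization solver before run().
    # ot.NLopt requires an OpenTURNS build with NLopt support.
    other_solver = ot.NLopt("LN_COBYLA")
    other_solver.setMaximumIterationNumber(1000)
    algo.setOptimizationAlgorithm(other_solver)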
.. GENERATED FROM PYTHON SOURCE LINES 151-152

The `getOptimizeParameters` method returns `True` if these parameters are to be optimized.

.. GENERATED FROM PYTHON SOURCE LINES 154-156

.. code-block:: default

    algo.getOptimizeParameters()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    True

.. GENERATED FROM PYTHON SOURCE LINES 157-159

Configure the starting point of the optimization
------------------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 161-162

The starting point of the optimization is based on the parameters of the covariance model. In the following example, we configure the parameters of the covariance model to the arbitrary values `[12., 34., 56., 78.]`.

.. GENERATED FROM PYTHON SOURCE LINES 164-167

.. code-block:: default

    covarianceModel = ot.SquaredExponential([12., 34., 56., 78.], [1.0])
    algo = ot.KrigingAlgorithm(X_train, Y_train, covarianceModel, basis)

.. GENERATED FROM PYTHON SOURCE LINES 168-170

.. code-block:: default

    algo.run()

.. GENERATED FROM PYTHON SOURCE LINES 171-174

.. code-block:: default

    result = algo.getResult()
    result.getCovarianceModel()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    SquaredExponential(scale=[12,34,18.2918,78], amplitude=[5.08434])



.. GENERATED FROM PYTHON SOURCE LINES 175-176

In order to see the difference with the default optimization, we print the previously optimized covariance model.

.. GENERATED FROM PYTHON SOURCE LINES 178-180

.. code-block:: default

    basic_covariance_model

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    SquaredExponential(scale=[1,1,1,1], amplitude=[5.08434])



.. GENERATED FROM PYTHON SOURCE LINES 181-182

We observe that, in this case, this does not change the values of the parameters much.

.. GENERATED FROM PYTHON SOURCE LINES 184-186

Disabling the optimization
--------------------------

.. GENERATED FROM PYTHON SOURCE LINES 188-189

It is sometimes useful to completely disable the optimization of the parameters. In order to see the effect of this, we first initialize the parameters of the covariance model with the arbitrary values `[12., 34., 56., 78.]`.

.. GENERATED FROM PYTHON SOURCE LINES 191-194

.. code-block:: default

    covarianceModel = ot.SquaredExponential([12., 34., 56., 78.], [91.0])
    algo = ot.KrigingAlgorithm(X_train, Y_train, covarianceModel, basis)

.. GENERATED FROM PYTHON SOURCE LINES 195-196

The `setOptimizeParameters` method can be used to disable the optimization of the parameters.

.. GENERATED FROM PYTHON SOURCE LINES 198-200

.. code-block:: default

    algo.setOptimizeParameters(False)

.. GENERATED FROM PYTHON SOURCE LINES 201-202

Then we run the algorithm and get the result.

.. GENERATED FROM PYTHON SOURCE LINES 204-207

.. code-block:: default

    algo.run()
    result = algo.getResult()

.. GENERATED FROM PYTHON SOURCE LINES 208-209

We observe that the covariance model is unchanged.

.. GENERATED FROM PYTHON SOURCE LINES 211-213

.. code-block:: default

    result.getCovarianceModel()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    SquaredExponential(scale=[12,34,56,78], amplitude=[91])



.. GENERATED FROM PYTHON SOURCE LINES 214-215

The trend, however, is still optimized, using linear least squares.

.. GENERATED FROM PYTHON SOURCE LINES 217-219

.. code-block:: default

    result.getTrendCoefficients()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [class=Point name=Unnamed dimension=1 values=[13.3786]]

.. GENERATED FROM PYTHON SOURCE LINES 220-224

Reuse the parameters from a previous optimization
-------------------------------------------------

In this example, we show how to reuse the optimized parameters of a previous kriging metamodel to configure a new one. Furthermore, we disable the optimization so that the parameters of the covariance model are not updated. This makes the process of adding new points very fast: it improves the quality of the metamodel by enriching the design of experiments without paying the price of updating the covariance model.

.. GENERATED FROM PYTHON SOURCE LINES 226-227

Step 1: Run a first kriging algorithm.

.. GENERATED FROM PYTHON SOURCE LINES 229-238

.. code-block:: default

    dimension = myDistribution.getDimension()
    basis = ot.ConstantBasisFactory(dimension).build()
    covarianceModel = ot.SquaredExponential([1.]*dimension, [1.0])
    algo = ot.KrigingAlgorithm(X_train, Y_train, covarianceModel, basis)
    algo.run()
    result = algo.getResult()
    covarianceModel = result.getCovarianceModel()
    covarianceModel

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    SquaredExponential(scale=[1,1,1,1], amplitude=[5.08434])



.. GENERATED FROM PYTHON SOURCE LINES 239-240

Step 2: Create new points and add them to the previous training design.

.. GENERATED FROM PYTHON SOURCE LINES 242-245

.. code-block:: default

    X_new = myDistribution.getSample(20)
    Y_new = model(X_new)

.. GENERATED FROM PYTHON SOURCE LINES 246-249

.. code-block:: default

    X_train.add(X_new)
    X_train.getSize()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    30

.. GENERATED FROM PYTHON SOURCE LINES 250-253

.. code-block:: default

    Y_train.add(Y_new)
    Y_train.getSize()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    30

.. GENERATED FROM PYTHON SOURCE LINES 254-255

Step 3: Create an updated kriging metamodel, using the new points with the old covariance parameters.

.. GENERATED FROM PYTHON SOURCE LINES 257-265

.. code-block:: default

    algo = ot.KrigingAlgorithm(X_train, Y_train, covarianceModel, basis)
    algo.setOptimizeParameters(False)
    algo.run()
    result = algo.getResult()
    notUpdatedCovarianceModel = result.getCovarianceModel()
    notUpdatedCovarianceModel

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    SquaredExponential(scale=[1,1,1,1], amplitude=[5.08434])



.. GENERATED FROM PYTHON SOURCE LINES 266-275

The following helper function prints the change in each parameter of the covariance model.

.. code-block:: default

    def printCovarianceParameterChange(covarianceModel1, covarianceModel2):
        parameters1 = covarianceModel1.getFullParameter()
        parameters2 = covarianceModel2.getFullParameter()
        for i in range(parameters1.getDimension()):
            deltai = abs(parameters1[i] - parameters2[i])
            print("Change in the parameter #%d = %s" % (i, deltai))
        return

.. GENERATED FROM PYTHON SOURCE LINES 276-278

.. code-block:: default

    printCovarianceParameterChange(covarianceModel, notUpdatedCovarianceModel)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Change in the parameter #0 = 0.0
    Change in the parameter #1 = 0.0
    Change in the parameter #2 = 0.0
    Change in the parameter #3 = 0.0
    Change in the parameter #4 = 0.0

.. GENERATED FROM PYTHON SOURCE LINES 279-280

We see that the parameters did not change *at all*: disabling the optimization keeps the covariance model constant. In a practical algorithm, we may, for example, add a block of 10 new points before updating the parameters of the covariance model. At that point, we may reuse the previous covariance model so that the optimization starts from a better point than the default parameter values. This will reduce the cost of the optimization, as the sketch below illustrates.
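A minimal sketch of this batch strategy. It is illustrative only and not part of the original script; the number of batches and the batch size are arbitrary.

.. code-block:: default

    # Illustrative sketch: enrich the design in batches and re-optimize,
    # starting each optimization from the previously estimated model.
    covModel = ot.SquaredExponential([1.0] * dimension, [1.0])
    for batch in range(3):  # arbitrary number of batches
        X_batch = myDistribution.getSample(10)
        Y_batch = model(X_batch)
        X_train.add(X_batch)
        Y_train.add(Y_batch)
        algo = ot.KrigingAlgorithm(X_train, Y_train, covModel, basis)
        algo.run()  # the optimization starts from the previous optimum
        covModel = algo.getResult().getCovarianceModel()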
.. GENERATED FROM PYTHON SOURCE LINES 282-284

Configure the optimization solver
---------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 286-287

The following example checks the robustness of the optimization of the kriging algorithm with respect to the optimization of the likelihood function used to estimate the covariance model parameters. We use a `MultiStart` algorithm in order to avoid being trapped in a local minimum. Furthermore, we generate the starting points with an `LHSExperiment`, which guarantees that they fill the space.

.. GENERATED FROM PYTHON SOURCE LINES 289-293

.. code-block:: default

    sampleSize_train = 10
    X_train = myDistribution.getSample(sampleSize_train)
    Y_train = model(X_train)

.. GENERATED FROM PYTHON SOURCE LINES 294-295

First, we create a multivariate distribution based on independent `Uniform` marginals which have the bounds required by the covariance model.

.. GENERATED FROM PYTHON SOURCE LINES 297-302

.. code-block:: default

    distributions = ot.DistributionCollection()
    for i in range(dim):
        distributions.add(ot.Uniform(lbounds[i], ubounds[i]))
    boundedDistribution = ot.ComposedDistribution(distributions)

.. GENERATED FROM PYTHON SOURCE LINES 303-304

We generate a Latin Hypercube Sampling (LHS) design made of 25 points in the sample space. This LHS is optimized so as to fill the space.

.. GENERATED FROM PYTHON SOURCE LINES 306-317

.. code-block:: default

    K = 25  # design size
    LHS = ot.LHSExperiment(boundedDistribution, K)
    LHS.setAlwaysShuffle(True)
    SA_profile = ot.GeometricProfile(10., 0.95, 20000)
    LHS_optimization_algo = ot.SimulatedAnnealingLHS(
        LHS, ot.SpaceFillingC2(), SA_profile)
    LHS_optimization_algo.generate()
    LHS_design = LHS_optimization_algo.getResult()
    starting_points = LHS_design.getOptimalDesign()
    starting_points.getSize()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    25

.. GENERATED FROM PYTHON SOURCE LINES 318-319

We can check that the minimum and maximum of the sample correspond to the bounds of the design of experiments.

.. GENERATED FROM PYTHON SOURCE LINES 321-323

.. code-block:: default

    lbounds, ubounds

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    (class=Point name=Unnamed dimension=4 values=[0.01,0.01,0.01,0.01], class=Point name=Unnamed dimension=4 values=[2.27497e+07,81879.1,18.2918,178.813])

.. GENERATED FROM PYTHON SOURCE LINES 324-326

.. code-block:: default

    starting_points.getMin(), starting_points.getMax()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    (class=Point name=Unnamed dimension=4 values=[8192.53,1897.95,0.606147,5.1413], class=Point name=Unnamed dimension=4 values=[2.26853e+07,81252.5,17.7015,177.663])

.. GENERATED FROM PYTHON SOURCE LINES 327-328

Then we create a `MultiStart` algorithm based on the LHS starting points.

.. GENERATED FROM PYTHON SOURCE LINES 330-333

.. code-block:: default

    solver.setMaximumIterationNumber(10000)
    multiStartSolver = ot.MultiStart(solver, starting_points)

.. GENERATED FROM PYTHON SOURCE LINES 334-335

Finally, we configure the optimization algorithm so as to use the `MultiStart` algorithm.

.. GENERATED FROM PYTHON SOURCE LINES 337-342

.. code-block:: default

    algo = ot.KrigingAlgorithm(X_train, Y_train, covarianceModel, basis)
    algo.setOptimizationBounds(bounds)
    algo.setOptimizationAlgorithm(multiStartSolver)
    algo.run()

.. GENERATED FROM PYTHON SOURCE LINES 343-346

.. code-block:: default

    result = algo.getResult()  # refresh the result after this new run
    finetune_covariance_model = result.getCovarianceModel()
    finetune_covariance_model

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    SquaredExponential(scale=[1,1,1,1], amplitude=[5.08434])



.. GENERATED FROM PYTHON SOURCE LINES 347-350

.. code-block:: default

    printCovarianceParameterChange(
        finetune_covariance_model, basic_covariance_model)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Change in the parameter #0 = 0.0
    Change in the parameter #1 = 0.0
    Change in the parameter #2 = 0.0
    Change in the parameter #3 = 0.0
    Change in the parameter #4 = 0.0

.. GENERATED FROM PYTHON SOURCE LINES 351-352

We see that there are no changes in the estimated parameters. This shows that the first optimization of the parameters worked fine.

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 0.267 seconds)

.. _sphx_glr_download_auto_meta_modeling_kriging_metamodel_plot_kriging_hyperparameters_optimization.py:

.. only:: html

  .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_kriging_hyperparameters_optimization.py <plot_kriging_hyperparameters_optimization.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_kriging_hyperparameters_optimization.ipynb <plot_kriging_hyperparameters_optimization.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_