.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_reliability_sensitivity/design_of_experiments/plot_lola_voronoi.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_reliability_sensitivity_design_of_experiments_plot_lola_voronoi.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_reliability_sensitivity_design_of_experiments_plot_lola_voronoi.py:


LOLA-Voronoi sequential design of experiment
============================================

.. GENERATED FROM PYTHON SOURCE LINES 7-11

The LOLA-Voronoi sequential experiment generates an optimized design of experiments
that yields better function approximations by taking both unexplored regions and
gradient values into account.
It is well suited to building a design of experiments for a metamodel: we will compare
the LOLA-Voronoi design against a Sobol' design as learning points for a polynomial
chaos metamodel.

.. GENERATED FROM PYTHON SOURCE LINES 13-17

.. code-block:: Python

    import openturns as ot
    import openturns.experimental as otexp
    import openturns.viewer as otv

.. GENERATED FROM PYTHON SOURCE LINES 18-19

Let's use Franke's bivariate function.

.. GENERATED FROM PYTHON SOURCE LINES 19-31

.. code-block:: Python

    dim = 2
    f1 = ot.SymbolicFunction(
        ["a0", "a1"],
        [
            "3 / 4 * exp(-1 / 4 * (((9 * a0 - 2) ^ 2) + ((9 * a1 - 2) ^ 2))) + 3 / 4 * exp(-1 / 49 * "
            "((9 * a0 + 1) ^ 2) - 1 / 10 * (9 * a1 + 1) ^ 2) + 1 / 2 * exp(-1 / 4 * (((9 * a0 - 7) ^ 2) "
            "+ (9 * a1 - 3) ^ 2)) - 1 / 5 * exp(-((9 * a0 - 4) ^ 2) - ((9 * a1 + 1) ^ 2))"
        ],
    )
    print(f1([0.5, 0.5]))
    distribution = ot.JointDistribution([ot.Uniform(0.0, 1.0)] * 2)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [0.112312]

.. GENERATED FROM PYTHON SOURCE LINES 32-33

Plot the function.

.. GENERATED FROM PYTHON SOURCE LINES 33-44

.. code-block:: Python

    ot.ResourceMap.SetAsString("Contour-DefaultColorMapNorm", "rank")
    graph = f1.draw(
        distribution.getRange().getLowerBound(), distribution.getRange().getUpperBound()
    )
    contour = graph.getDrawable(0)
    contour.setLegend("model")
    graph.setTitle("Model")
    graph.setXTitle("x1")
    graph.setYTitle("x2")
    _ = otv.View(graph, square_axes=True)

.. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_001.svg
   :alt: Model
   :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_001.svg
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 45-46

Plot the Hessian norm.

.. GENERATED FROM PYTHON SOURCE LINES 46-64

.. code-block:: Python

    def pyHessianNorm(X):
        h = f1.hessian(X).getSheet(0)
        h.squareElements()
        s = h.computeSumElements()
        return [s**0.5]


    hessNorm = ot.PythonFunction(f1.getInputDimension(), 1, pyHessianNorm)
    graph = hessNorm.draw(
        distribution.getRange().getLowerBound(), distribution.getRange().getUpperBound()
    )
    graph.setTitle("Hessian norm")
    graph.setXTitle("x1")
    graph.setYTitle("x2")
    _ = otv.View(graph, square_axes=True)

.. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_002.svg
   :alt: Hessian norm
   :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_002.svg
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 65-66

Let's define an initial design of experiments.

.. GENERATED FROM PYTHON SOURCE LINES 66-70

.. code-block:: Python

    N = 50
    x0 = ot.LowDiscrepancyExperiment(ot.HaltonSequence(), distribution, N).generate()
    y0 = f1(x0)
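
Note that other space-filling experiments could provide the initial sample as well.
As a minimal optional sketch (not used in the rest of this example, and with an
illustrative variable name ``x0_lhs``), a Latin hypercube sample of the same size
could be generated instead of the Halton design:

.. code-block:: Python

    # Optional sketch: an alternative initial design based on a Latin hypercube sample.
    # The remainder of the example keeps the Halton design generated above.
    x0_lhs = ot.LHSExperiment(distribution, N).generate()
    print(x0_lhs.getSize())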

.. GENERATED FROM PYTHON SOURCE LINES 71-72

Plot the initial input sample.

.. GENERATED FROM PYTHON SOURCE LINES 72-81

.. code-block:: Python

    graph = ot.Graph(f"Initial points N={N}", "x1", "x2", True)
    initial = ot.Cloud(x0)
    initial.setPointStyle("fcircle")
    initial.setColor("blue")
    initial.setLegend(f"initial ({len(x0)})")
    graph.add(initial)
    graph.add(contour)
    _ = otv.View(graph, square_axes=True)

.. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_003.svg
   :alt: Initial points N=50
   :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_003.svg
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 82-83

Instantiate the algorithm from the initial DOE and the distribution.

.. GENERATED FROM PYTHON SOURCE LINES 83-85

.. code-block:: Python

    algo = otexp.LOLAVoronoi(x0, y0, distribution)

.. GENERATED FROM PYTHON SOURCE LINES 86-87

Iteratively generate new samples: add 50 points, in 10 blocks of 5 points.

.. GENERATED FROM PYTHON SOURCE LINES 87-118

.. code-block:: Python

    inc = 5
    contour = contour.getImplementation()
    contour.setColorBarPosition("")  # hide color bar
    for i in range(10):
        graph = ot.Graph("", "x1", "x2", True)
        graph.setLegendPosition("upper left")
        graph.setLegendFontSize(8)
        graph.add(contour)
        graph.add(initial)

        if i > 0:
            previous = ot.Cloud(algo.getInputSample()[len(x0) : N])
            previous.setPointStyle("fcircle")
            previous.setColor("red")
            previous.setLegend(f"previous iterations ({N - len(x0)})")
            graph.add(previous)

        x = algo.generate(inc)
        y = f1(x)
        algo.update(x, y)
        N = algo.getGenerationIndices()[-1]

        current = ot.Cloud(x)
        current.setPointStyle("fcircle")
        current.setColor("orange")
        current.setLegend(f"current iteration ({inc})")
        graph.add(current)

        graph.setTitle(f"LOLA-Voronoi iteration #{i + 1} N={N}")
        otv.View(graph, square_axes=True)

.. rst-class:: sphx-glr-horizontal

    *

      .. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_004.svg
         :alt: LOLA-Voronoi iteration #1 N=55
         :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_004.svg
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_005.svg
         :alt: LOLA-Voronoi iteration #2 N=60
         :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_005.svg
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_006.svg
         :alt: LOLA-Voronoi iteration #3 N=65
         :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_006.svg
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_007.svg
         :alt: LOLA-Voronoi iteration #4 N=70
         :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_007.svg
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_008.svg
         :alt: LOLA-Voronoi iteration #5 N=75
         :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_008.svg
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_009.svg
         :alt: LOLA-Voronoi iteration #6 N=80
         :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_009.svg
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_010.svg
         :alt: LOLA-Voronoi iteration #7 N=85
         :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_010.svg
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_011.svg
         :alt: LOLA-Voronoi iteration #8 N=90
         :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_011.svg
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_012.svg
         :alt: LOLA-Voronoi iteration #9 N=95
         :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_012.svg
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_013.svg
         :alt: LOLA-Voronoi iteration #10 N=100
         :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_013.svg
         :class: sphx-glr-multi-img
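
As an optional rough check (a sketch that is not part of the original script, with
illustrative variable names ``xAll`` and ``xAdded``), we can evaluate the Hessian-norm
function defined earlier on the initial points and on the adaptively added points;
a larger average for the added points would indicate that the algorithm indeed favours
regions with strong gradient variations.

.. code-block:: Python

    # Optional sketch: compare the average Hessian norm at the initial Halton points
    # and at the points added by LOLA-Voronoi.
    xAll = algo.getInputSample()
    xAdded = xAll[len(x0) : xAll.getSize()]
    print("mean Hessian norm, initial points:", hessNorm(x0).computeMean())
    print("mean Hessian norm, added points:", hessNorm(xAdded).computeMean())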

.. GENERATED FROM PYTHON SOURCE LINES 119-120

Let's compare metamodels built from the LOLA-Voronoi samples and from another design.

.. GENERATED FROM PYTHON SOURCE LINES 120-135

.. code-block:: Python

    xLola, yLola = algo.getInputSample(), algo.getOutputSample()
    learnSize = xLola.getSize()


    def runMetaModel(x, y, tag):
        algo = ot.LeastSquaresExpansion(x, y, distribution)
        algo.run()
        metamodel = algo.getResult().getMetaModel()
        yPred = metamodel(xRef)
        validation = ot.MetaModelValidation(yRef, yPred)
        mse = validation.computeMeanSquaredError()
        maxerr = (yRef - yPred).asPoint().normInf()
        print(f"{tag} mse={mse} r2={validation.computeR2Score()} maxerr={maxerr:.3f}")

.. GENERATED FROM PYTHON SOURCE LINES 136-137

Generate a large validation sample by Monte Carlo.

.. GENERATED FROM PYTHON SOURCE LINES 137-141

.. code-block:: Python

    nRef = int(1e6)
    xRef = distribution.getSample(nRef)
    yRef = f1(xRef)

.. GENERATED FROM PYTHON SOURCE LINES 142-143

Build a metamodel from Sobol' samples.

.. GENERATED FROM PYTHON SOURCE LINES 143-149

.. code-block:: Python

    xSobol = ot.LowDiscrepancyExperiment(
        ot.SobolSequence(), distribution, learnSize
    ).generate()
    ySobol = f1(xSobol)
    runMetaModel(xSobol, ySobol, "Sobol")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Sobol mse=[0.00400156] r2=[0.94908] maxerr=1.257

.. GENERATED FROM PYTHON SOURCE LINES 150-153

Build a metamodel from the full LOLA-Voronoi sample.
We observe that the error metrics (MSE, R2) and the maximum error of the metamodel
built on the LOLA-Voronoi design are slightly better than those obtained with the
Sobol' experiment.

.. GENERATED FROM PYTHON SOURCE LINES 153-155

.. code-block:: Python

    runMetaModel(xLola, yLola, "LOLA")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    LOLA mse=[0.00299108] r2=[0.961939] maxerr=0.650
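
As an optional extra baseline (a sketch, not part of the original comparison, using
the illustrative names ``xMC`` and ``yMC``), the same fit could be run on a plain
Monte Carlo design of identical size; the resulting metrics depend on the state of
the random generator.

.. code-block:: Python

    # Optional sketch: a plain Monte Carlo design of the same size as an extra baseline.
    # The error metrics printed here vary with the random generator state.
    xMC = distribution.getSample(learnSize)
    yMC = f1(xMC)
    runMetaModel(xMC, yMC, "MonteCarlo")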

.. GENERATED FROM PYTHON SOURCE LINES 156-157

Define a function to plot the different scores.

.. GENERATED FROM PYTHON SOURCE LINES 157-173

.. code-block:: Python

    def drawScore(score, tag):
        f = ot.DatabaseFunction(xLola, score)
        lb = distribution.getRange().getLowerBound()
        ub = distribution.getRange().getUpperBound()
        graph = f.draw(lb, ub)
        final = ot.Cloud(xLola)
        final.setPointStyle("fcircle")
        graph.add(final)
        graph.setTitle(f"{tag} score")
        graph.setXTitle("x1")
        graph.setYTitle("x2")
        otv.View(graph, square_axes=True)

.. GENERATED FROM PYTHON SOURCE LINES 174-175

Plot the Voronoi score: it highlights the unexplored areas near the top and right borders.

.. GENERATED FROM PYTHON SOURCE LINES 175-179

.. code-block:: Python

    algo.generate(inc)  # triggers score update of the last batch
    drawScore(algo.getVoronoiScore(), "Voronoi")

.. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_014.svg
   :alt: Voronoi score
   :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_014.svg
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 180-181

Plot the LOLA score: it highlights regions with high gradient variations.

.. GENERATED FROM PYTHON SOURCE LINES 181-184

.. code-block:: Python

    drawScore(algo.getLOLAScore(), "LOLA")

.. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_015.svg
   :alt: LOLA score
   :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_015.svg
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 185-186

Plot the hybrid score: it exposes regions with medium-intensity gradient variations
that are left to explore.

.. GENERATED FROM PYTHON SOURCE LINES 186-189

.. code-block:: Python

    drawScore(algo.getHybridScore(), "hybrid")

.. image-sg:: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_016.svg
   :alt: hybrid score
   :srcset: /auto_reliability_sensitivity/design_of_experiments/images/sphx_glr_plot_lola_voronoi_016.svg
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 190-191

Show all plots.

.. GENERATED FROM PYTHON SOURCE LINES 191-192

.. code-block:: Python

    otv.View.ShowAll()

.. _sphx_glr_download_auto_reliability_sensitivity_design_of_experiments_plot_lola_voronoi.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_lola_voronoi.ipynb <plot_lola_voronoi.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_lola_voronoi.py <plot_lola_voronoi.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_lola_voronoi.zip <plot_lola_voronoi.zip>`