.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_meta_modeling/polynomial_chaos_metamodel/plot_chaos_cv.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_meta_modeling_polynomial_chaos_metamodel_plot_chaos_cv.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_meta_modeling_polynomial_chaos_metamodel_plot_chaos_cv.py:


Polynomial chaos expansion cross-validation
===========================================

.. GENERATED FROM PYTHON SOURCE LINES 7-34

Introduction
------------

In this example, we show how to perform the cross-validation of the
:ref:`Ishigami model` using polynomial chaos expansion.
More precisely, we use the methods suggested in [muller2016]_, chapter 5, page 257.
We assume that a dataset is given and we create a metamodel using this data.
Once done, we want to assess the quality of the metamodel using the data we have.
Another example of this method is presented in
:doc:`/auto_numerical_methods/general_methods/plot_pce_design`.

In this example, we compare two methods:

- split the data into two subsets, one for training and one for testing,
- use K-Fold validation.

The split of the data is performed by the `compute_R2_score_by_splitting` function below.
It uses 75% of the data to estimate the coefficients of the metamodel (this is the
training step) and uses the remaining 25% of the data to estimate the :math:`R^2` score
(this is the validation step).
To do this, we use the `split` method of the :class:`~openturns.Sample`.

The K-Fold validation is performed by the `compute_R2_score_by_kfold` function below.
It uses the K-Fold method with :math:`k = 5`.
The code uses the :class:`~openturns.KFoldSplitter` class, which computes the
appropriate indices.
Similar results can be obtained with :class:`~openturns.LeaveOneOutSplitter`,
at a higher cost.
This cross-validation method is used here to see which polynomial degree
leads to an accurate metamodel of the Ishigami test function.

.. GENERATED FROM PYTHON SOURCE LINES 36-40

.. code-block:: Python

    import openturns as ot
    import openturns.viewer as otv
    from openturns.usecases import ishigami_function

.. GENERATED FROM PYTHON SOURCE LINES 41-43

Define helper functions
-----------------------

.. GENERATED FROM PYTHON SOURCE LINES 45-47

The next function creates a polynomial chaos expansion from
a training data set and a total degree.

.. GENERATED FROM PYTHON SOURCE LINES 49-97

.. code-block:: Python


    def compute_sparse_least_squares_chaos(
        inputTrain, outputTrain, multivariateBasis, totalDegree, distribution
    ):
        """
        Create a sparse polynomial chaos based on least squares.

        * Uses the enumerate rule in multivariateBasis.
        * Uses the LeastSquaresStrategy to compute the coefficients based on
          least squares.
        * Uses LeastSquaresMetaModelSelectionFactory to use the LARS selection method.
        * Uses FixedStrategy in order to keep all the coefficients that the
          LARS method selected.

        Parameters
        ----------
        inputTrain : Sample
            The input design of experiments.
        outputTrain : Sample
            The output design of experiments.
        multivariateBasis : OrthogonalProductPolynomialFactory
            The multivariate orthogonal polynomial basis.
        totalDegree : int
            The total degree of the chaos polynomial.
        distribution : Distribution
            The distribution of the input variable.

        Returns
        -------
        result : FunctionalChaosResult
            The estimated polynomial chaos.
""" selectionAlgorithm = ot.LeastSquaresMetaModelSelectionFactory() projectionStrategy = ot.LeastSquaresStrategy( inputTrain, outputTrain, selectionAlgorithm ) enumerateFunction = multivariateBasis.getEnumerateFunction() multivariateBasisSize = enumerateFunction.getBasisSizeFromTotalDegree(totalDegree) adaptiveStrategy = ot.FixedStrategy(multivariateBasis, multivariateBasisSize) chaosAlgo = ot.FunctionalChaosAlgorithm( inputTrain, outputTrain, distribution, adaptiveStrategy, projectionStrategy ) chaosAlgo.run() result = chaosAlgo.getResult() return result .. GENERATED FROM PYTHON SOURCE LINES 98-100 The next function computes the :math:`R^2` score by splitting the data set into a training set and a test set. .. GENERATED FROM PYTHON SOURCE LINES 103-155 .. code-block:: Python def compute_R2_score_by_splitting( inputSample, outputSample, multivariateBasis, totalDegree, distribution, split_fraction=0.75, ): """ Compute R2 score by splitting into train/test sets. Parameters ---------- inputSample : Sample(size, input_dimension) The X dataset. outputSample : Sample(size, output_dimension) The Y dataset. multivariateBasis : multivariateBasis The multivariate chaos multivariateBasis. totalDegree : int The total degree of the chaos polynomial. distribution : Distribution. The distribution of the input variable. split_fraction : float, in (0, 1) The proportion of the sample used in the training. Returns ------- r2Score : float The R2 score. """ training_sample_size = inputSample.getSize() inputSampleTrain = ot.Sample(inputSample) # Make a copy outputSampleTrain = ot.Sample(outputSample) split_index = int(split_fraction * training_sample_size) inputSampleTest = inputSampleTrain.split(split_index) outputSampleTest = outputSampleTrain.split(split_index) chaosResult = compute_sparse_least_squares_chaos( inputSampleTrain, outputSampleTrain, multivariateBasis, totalDegree, distribution, ) metamodel = chaosResult.getMetaModel() metamodelPredictions = metamodel(inputSampleTest) val = ot.MetaModelValidation(outputSampleTest, metamodelPredictions) r2Score = val.computeR2Score() return r2Score .. GENERATED FROM PYTHON SOURCE LINES 156-157 The next function computes the mean squared error by K-Fold. .. GENERATED FROM PYTHON SOURCE LINES 160-223 .. code-block:: Python def computeMSENaiveKFold( inputSample, outputSample, multivariateBasis, totalDegree, distribution, kParameter=5, ): """ Compute mean squared error by (naive) KFold. Parameters ---------- inputSample : Sample(size, input_dimension) The inputSample dataset. outputSample : Sample(size, output_dimension) The outputSample dataset. multivariateBasis : multivariateBasis The multivariate chaos multivariateBasis. totalDegree : int The total degree of the chaos polynomial. distribution : Distribution. The distribution of the input variable. kParameter : int, in (2, sampleSize) The parameter K. Returns ------- mse : Point(output_dimension) The mean squared error. 
""" # sampleSize = inputSample.getSize() outputDimension = outputSample.getDimension() splitter = ot.KFoldSplitter(sampleSize, kParameter) squaredResiduals = ot.Sample(sampleSize, outputDimension) for indicesTrain, indicesTest in splitter: inputSampleTrain, inputSampleTest = ( inputSample[indicesTrain], inputSample[indicesTest], ) outputSampleTrain, outputSampleTest = ( outputSample[indicesTrain], outputSample[indicesTest], ) chaosResultKFold = compute_sparse_least_squares_chaos( inputSampleTrain, outputSampleTrain, multivariateBasis, totalDegree, distribution, ) metamodelKFold = chaosResultKFold.getMetaModel() predictionsKFold = metamodelKFold(inputSampleTest) residualsKFold = outputSampleTest - predictionsKFold foldSize = indicesTest.getSize() for j in range(outputDimension): for i in range(foldSize): squaredResiduals[indicesTest[i], j] = residualsKFold[i, j] ** 2 mse = squaredResiduals.computeMean() return mse .. GENERATED FROM PYTHON SOURCE LINES 224-225 The next function computes the :math:`R^2` score by K-Fold. .. GENERATED FROM PYTHON SOURCE LINES 225-329 .. code-block:: Python def compute_R2_score_by_kfold( inputSample, outputSample, multivariateBasis, totalDegree, distribution, kParameter=5, ): """ Compute R2 score by KFold. Parameters ---------- inputSample : Sample(size, input_dimension) The X dataset. outputSample : Sample(size, output_dimension) The Y dataset. multivariateBasis : multivariateBasis The multivariate chaos multivariateBasis. totalDegree : int The total degree of the chaos polynomial. distribution : Distribution. The distribution of the input variable. kParameter : int The parameter K. Returns ------- r2Score : float The R2 score. """ # mse = computeMSENaiveKFold( inputSample, outputSample, multivariateBasis, totalDegree, distribution, kParameter, ) sampleVariance = outputSample.computeCentralMoment(2) outputDimension = outputSample.getDimension() r2Score = ot.Point(outputDimension) for i in range(outputDimension): r2Score[i] = 1.0 - mse[i] / sampleVariance[i] return r2Score """ Compute mean squared error by (naive) KFold. Parameters ---------- inputSample : Sample(size, input_dimension) The inputSample dataset. outputSample : Sample(size, output_dimension) The outputSample dataset. multivariateBasis : multivariateBasis The multivariate chaos multivariateBasis. totalDegree : int The total degree of the chaos polynomial. distribution : Distribution. The distribution of the input variable. kParameter : int, in (2, sampleSize) The parameter K. Returns ------- mse : Point(output_dimension) The mean squared error. """ # sampleSize = inputSample.getSize() outputDimension = outputSample.getDimension() splitter = ot.KFoldSplitter(sampleSize, kParameter) squaredResiduals = ot.Sample(sampleSize, outputDimension) for indicesTrain, indicesTest in splitter: inputSampleTrain, inputSampleTest = ( inputSample[indicesTrain], inputSample[indicesTest], ) outputSampleTrain, outputSampleTest = ( outputSample[indicesTrain], outputSample[indicesTest], ) chaosResultKFold = compute_sparse_least_squares_chaos( inputSampleTrain, outputSampleTrain, multivariateBasis, totalDegree, distribution, ) metamodelKFold = chaosResultKFold.getMetaModel() predictionsKFold = metamodelKFold(inputSampleTest) residualsKFold = outputSampleTest - predictionsKFold foldSize = indicesTest.getSize() for j in range(outputDimension): for i in range(foldSize): squaredResiduals[indicesTest[i], j] = residualsKFold[i, j] ** 2 mse = squaredResiduals.computeMean() return mse .. 

.. GENERATED FROM PYTHON SOURCE LINES 330-332

Define the training data set
----------------------------

.. GENERATED FROM PYTHON SOURCE LINES 334-335

We start by generating the input and output samples. We use a sample size equal to 500.

.. GENERATED FROM PYTHON SOURCE LINES 338-347

.. code-block:: Python

    im = ishigami_function.IshigamiModel()
    im.inputDistribution.setDescription(["X0", "X1", "X2"])
    im.model.setOutputDescription(["$Y$"])
    ot.RandomGenerator.SetSeed(0)
    sample_size = 500
    X = im.inputDistribution.getSample(sample_size)
    print("Input sample:")
    X[:5]

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Input sample:
            X0          X1          X2
    0    0.8160385  -1.546951    2.67553
    1    2.405236   -0.4441351  -2.192122
    2   -2.291626   -3.013204    1.30032
    3   -2.937372    0.0734501  -0.7336488
    4   -0.960969   -0.1161257  -0.4336099
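
The Ishigami inputs are independent and uniform on :math:`[-\pi, \pi]`.
The following minimal check, which only reuses the `im` object defined above,
prints the marginals to confirm it.

.. code-block:: Python

    # Quick sanity check: the Ishigami use case defines three independent
    # Uniform(-pi, pi) inputs; print each marginal distribution.
    for i in range(im.inputDistribution.getDimension()):
        print(im.inputDistribution.getMarginal(i))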
.. GENERATED FROM PYTHON SOURCE LINES 348-354

.. code-block:: Python

    Y = im.model(X)
    Y.setDescription(["Y"])
    print("Output sample:")
    Y[:5]

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Output sample:
           Y
    0   11.45723
    1    3.514782
    2   -0.8512832
    3   -0.170983
    4   -0.728672
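
For reference, the Ishigami function which produced these outputs is, with the
input labels used above and the constants :math:`a = 7` and :math:`b = 0.1`
(the defaults of this use case):

.. math::

    Y = \sin(X_0) + a \sin^2(X_1) + b X_2^4 \sin(X_0)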
.. GENERATED FROM PYTHON SOURCE LINES 355-358

We compute a polynomial chaos decomposition with a total degree equal to 5.
In order to reduce the number of coefficients, the `compute_sparse_least_squares_chaos`
function creates a sparse chaos using regression and the LARS method.

.. GENERATED FROM PYTHON SOURCE LINES 360-370

.. code-block:: Python

    dimension = im.inputDistribution.getDimension()
    multivariateBasis = ot.OrthogonalProductPolynomialFactory(
        [im.inputDistribution.getMarginal(i) for i in range(dimension)]
    )
    totalDegree = 5  # Polynomial degree
    result = compute_sparse_least_squares_chaos(
        X, Y, multivariateBasis, totalDegree, im.inputDistribution
    )
    result

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    FunctionalChaosResult
      • input dimension: 3
      • output dimension: 1
      • distribution dimension: 3
      • transformation: 3 -> 3
      • inverse transformation: 3 -> 3
      • orthogonal basis dimension: 3
      • indices size: 12
      • relative errors: [0.000326285]
      • residuals: [0.0644485]

    Index  Multi-index  Coeff.
        0  [0,0,0]       3.661918
        1  [1,0,0]       1.599859
        2  [1,0,1]      -0.1517826
        3  [0,2,0]      -0.616015
        4  [0,0,2]      -0.1429231
        5  [3,0,0]      -1.21482
        6  [1,0,2]       1.421431
        7  [0,4,0]      -1.983213
        8  [5,0,0]       0.161996
        9  [4,1,0]       0.1837313
       10  [3,0,2]      -1.165708
       11  [1,0,4]       0.3744418
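
The summary reports an indices size of 12, which should be compared with the
:math:`\binom{5+3}{3} = 56` terms of the full total-degree-5 basis: the LARS
selection kept roughly one coefficient out of five. The following sketch checks
this programmatically; it reuses the objects defined above and assumes the
`getIndices` accessor of the result.

.. code-block:: Python

    # Compare the number of coefficients kept by the LARS selection with
    # the size of the full total-degree basis.
    enumerateFunction = multivariateBasis.getEnumerateFunction()
    full_basis_size = enumerateFunction.getBasisSizeFromTotalDegree(totalDegree)
    selected_size = result.getIndices().getSize()
    print(f"Kept {selected_size} coefficients out of {full_basis_size}")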
.. GENERATED FROM PYTHON SOURCE LINES 371-372

Get the metamodel.

.. GENERATED FROM PYTHON SOURCE LINES 374-376

.. code-block:: Python

    metamodel = result.getMetaModel()

.. GENERATED FROM PYTHON SOURCE LINES 377-379

Validate the metamodel from a test set
--------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 381-383

In order to validate our polynomial chaos, we generate an extra pair of
input / output samples and use the :class:`~openturns.MetaModelValidation` class.

.. GENERATED FROM PYTHON SOURCE LINES 383-394

.. code-block:: Python

    test_sample_size = 200  # Size of the validation design of experiments
    inputTest = im.inputDistribution.getSample(test_sample_size)
    outputTest = im.model(inputTest)
    metamodelPredictions = metamodel(inputTest)
    validation = ot.MetaModelValidation(outputTest, metamodelPredictions)
    r2Score = validation.computeR2Score()[0]
    graph = validation.drawValidation()
    graph.setTitle("R2=%.2f, n=%d" % (r2Score, test_sample_size))
    view = otv.View(graph)

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_chaos_cv_001.png
   :alt: R2=0.84, n=200
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_chaos_cv_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 395-413

The plot shows that the score is relatively high and might be satisfactory
for some purposes.
There are however several issues with the previous procedure:

- It may happen that the data in the validation sample with size 200 is more
  difficult to fit than the data in the training dataset.
  In this case, the :math:`R^2` score may be pessimistic.
- It may happen that the data in the validation sample with size 200 is less
  difficult to fit than the data in the training dataset.
  In this case, the :math:`R^2` score may be optimistic.
- We may not be able to generate an extra dataset for validation.
  In this case, a part of the original dataset should be used for validation.
- The polynomial degree may not be appropriate for this data.
- The dataset may be ordered in some way, so that the split is somewhat arbitrary.
  One solution to circumvent this is to randomly shuffle the data.
  This can be done using the :class:`~openturns.KPermutationsDistribution`.

The K-Fold validation aims at solving some of these issues, so that all the
available data is used in order to estimate the :math:`R^2` score.

.. GENERATED FROM PYTHON SOURCE LINES 415-417

Compute the R2 score from a test set
------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 419-420

In the following script, we compute the :math:`R^2` score associated with each
polynomial degree from 1 to 10.

.. GENERATED FROM PYTHON SOURCE LINES 420-434

.. code-block:: Python

    split_fraction = 0.75
    print(f"Split cross-validation, with {100 * split_fraction:.0f}% for training")
    degree_max = 10
    degree_list = list(range(1, 1 + degree_max))
    n_degrees = len(degree_list)
    scoreSampleSplit = ot.Sample(len(degree_list), 1)
    for i in range(n_degrees):
        totalDegree = degree_list[i]
        scoreSampleSplit[i] = compute_R2_score_by_splitting(
            X, Y, multivariateBasis, totalDegree, im.inputDistribution, split_fraction
        )
        print(f"Degree = {totalDegree}, score = {scoreSampleSplit[i, 0]:.4f}")

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Split cross-validation, with 75% for training
    Degree = 1, score = 0.1861
    Degree = 2, score = 0.1861
    Degree = 3, score = 0.4948
    Degree = 4, score = 0.7338
    Degree = 5, score = 0.8272
    Degree = 6, score = 0.9772
    Degree = 7, score = 0.9889
    Degree = 8, score = 0.9994
    Degree = 9, score = 0.9997
    Degree = 10, score = 1.0000

.. GENERATED FROM PYTHON SOURCE LINES 435-445

.. code-block:: Python

    graph = ot.Graph(
        f"Split CV, {100 * split_fraction:.0f}% for training", "Degree", "$R^2$", True
    )
    cloud = ot.Cloud(ot.Sample.BuildFromPoint(degree_list), scoreSampleSplit)
    cloud.setPointStyle("circle")
    graph.add(cloud)
    boundingBox = ot.Interval([0.0, 0.0], [1 + degree_max, 1.1])
    graph.setBoundingBox(boundingBox)
    view = otv.View(graph, figure_kw={"figsize": (5.0, 4.0)})

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_chaos_cv_002.png
   :alt: Split CV, 75% for training
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_chaos_cv_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 446-448

We see that the polynomial degree may be increased up to degree 7,
after which the :math:`R^2` score does not increase much.

.. GENERATED FROM PYTHON SOURCE LINES 450-457

Compute the R2 score from K-Fold cross-validation
-------------------------------------------------

One limitation of the previous method is that the estimate of the
:math:`R^2` may be sensitive to the particular split of the dataset.
The following script uses 5-Fold cross-validation to estimate the
:math:`R^2` score.

.. GENERATED FROM PYTHON SOURCE LINES 459-469

.. code-block:: Python

    kParameter = 5
    print(f"{kParameter}-Fold cross-validation")
    scoreSampleKFold = ot.Sample(len(degree_list), 1)
    for i in range(n_degrees):
        totalDegree = degree_list[i]
        scoreSampleKFold[i] = compute_R2_score_by_kfold(
            X, Y, multivariateBasis, totalDegree, im.inputDistribution, kParameter
        )
        print(f"Degree = {totalDegree}, score = {scoreSampleKFold[i, 0]:.4f}")

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    5-Fold cross-validation
    Degree = 1, score = 0.1580
    Degree = 2, score = 0.1569
    Degree = 3, score = 0.4546
    Degree = 4, score = 0.7188
    Degree = 5, score = 0.8298
    Degree = 6, score = 0.9809
    Degree = 7, score = 0.9903
    Degree = 8, score = 0.9996
    Degree = 9, score = 0.9998
    Degree = 10, score = 1.0000

.. GENERATED FROM PYTHON SOURCE LINES 470-477

.. code-block:: Python

    graph = ot.Graph(f"{kParameter}-Fold cross-validation", "Degree", "$R^2$", True)
    cloud = ot.Cloud(ot.Sample.BuildFromPoint(degree_list), scoreSampleKFold)
    cloud.setPointStyle("square")
    graph.add(cloud)
    graph.setBoundingBox(boundingBox)
    view = otv.View(graph, figure_kw={"figsize": (5.0, 4.0)})

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_chaos_cv_003.png
   :alt: 5-Fold cross-validation
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_chaos_cv_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 478-479

The conclusion is similar to the previous method.

.. GENERATED FROM PYTHON SOURCE LINES 481-482

Compare the two cross-validation methods.

.. GENERATED FROM PYTHON SOURCE LINES 482-497

.. code-block:: Python

    graph = ot.Graph("CV : split vs K-Fold", "Degree", "$R^2$", True)
    cloud = ot.Cloud(ot.Sample.BuildFromPoint(degree_list), scoreSampleSplit)
    cloud.setPointStyle("circle")
    cloud.setLegend("Split")
    graph.add(cloud)
    cloud = ot.Cloud(ot.Sample.BuildFromPoint(degree_list), scoreSampleKFold)
    cloud.setPointStyle("square")
    cloud.setLegend("K-Fold")
    graph.add(cloud)
    graph.setLegendPosition("topleft")
    graph.setBoundingBox(boundingBox)
    view = otv.View(graph, figure_kw={"figsize": (5.0, 4.0)})
    # sphinx_gallery_thumbnail_number = 4

.. image-sg:: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_chaos_cv_004.png
   :alt: CV : split vs K-Fold
   :srcset: /auto_meta_modeling/polynomial_chaos_metamodel/images/sphx_glr_plot_chaos_cv_004.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 498-512

Conclusion
----------

When we select the best polynomial degree which maximizes the :math:`R^2` score,
the danger is that the validation set is used both for computing the :math:`R^2`
score and to maximize it: hence, the :math:`R^2` score may be optimistic.
In [muller2016]_, chapter 5, page 269, the authors advocate the split of the
dataset into three subsets:

- the training set,
- the validation set,
- the test set.

When selecting the best parameters, the validation set is used.
When estimating the :math:`R^2` score with the best parameters, the test set is used.

.. GENERATED FROM PYTHON SOURCE LINES 514-515

.. code-block:: Python

    otv.View.ShowAll()

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 17.121 seconds)

.. _sphx_glr_download_auto_meta_modeling_polynomial_chaos_metamodel_plot_chaos_cv.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_chaos_cv.ipynb <plot_chaos_cv.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_chaos_cv.py <plot_chaos_cv.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_chaos_cv.zip <plot_chaos_cv.zip>`