Benchmark on a given set of problems

In this example, we show how to loop over the problems in the benchmark. We also show how to run several reliability algorithms on a given problem, so that we can score the methods by the number of correct digits in the estimated probability, or by their performance, that is, the number of correct digits per function evaluation.

import openturns as ot
import numpy as np
import otbenchmark as otb

Browse the reliability problems

We present the BBRC (Black-Box Reliability Challenge) test cases available in the otbenchmark module.

benchmarkProblemList = otb.ReliabilityBenchmarkProblemList()
numberOfProblems = len(benchmarkProblemList)
numberOfProblems
26
# Print the name, the failure probability and the dimension of each problem
for i in range(numberOfProblems):
    problem = benchmarkProblemList[i]
    name = problem.getName()
    pf = problem.getProbability()
    event = problem.getEvent()
    antecedent = event.getAntecedent()
    distribution = antecedent.getDistribution()
    dimension = distribution.getDimension()
    print("#", i, ":", name, " : pf = ", pf, ", dimension=", dimension)
# 0 : RP8  : pf =  0.0007897927545597477 , dimension= 6
# 1 : RP14  : pf =  0.00077285 , dimension= 5
# 2 : RP22  : pf =  0.004207305511299618 , dimension= 2
# 3 : RP24  : pf =  0.00286 , dimension= 2
# 4 : RP25  : pf =  4.148566293759747e-05 , dimension= 2
# 5 : RP28  : pf =  1.4532945550025393e-07 , dimension= 2
# 6 : RP31  : pf =  0.003226681209587691 , dimension= 2
# 7 : RP33  : pf =  0.00257 , dimension= 3
# 8 : RP35  : pf =  0.00347894632 , dimension= 2
# 9 : RP38  : pf =  0.0081 , dimension= 7
# 10 : RP53  : pf =  0.0313 , dimension= 2
# 11 : RP55  : pf =  0.5600144282863704 , dimension= 2
# 12 : RP54  : pf =  0.000998 , dimension= 20
# 13 : RP57  : pf =  0.0284 , dimension= 2
# 14 : RP75  : pf =  0.00981929872154689 , dimension= 2
# 15 : RP89  : pf =  0.00543 , dimension= 2
# 16 : RP107  : pf =  2.92e-07 , dimension= 10
# 17 : RP110  : pf =  3.19e-05 , dimension= 2
# 18 : RP111  : pf =  7.65e-07 , dimension= 2
# 19 : RP63  : pf =  0.000379 , dimension= 100
# 20 : RP91  : pf =  0.000697 , dimension= 5
# 21 : RP60  : pf =  0.0456 , dimension= 5
# 22 : RP77  : pf =  2.87e-07 , dimension= 3
# 23 : Four-branch serial system  : pf =  0.0022227950661944398 , dimension= 2
# 24 : R-S  : pf =  0.07864960352514257 , dimension= 2
# 25 : Axial stressed beam  : pf =  0.02919819462483095 , dimension= 2

We now create the optimization algorithm which searches the design point for the FORM and SORM methods, with stopping criteria on the number of function calls and on the absolute, relative, residual and constraint errors.

maximumEvaluationNumber = 1000
maximumAbsoluteError = 1.0e-3
maximumRelativeError = 1.0e-3
maximumResidualError = 1.0e-3
maximumConstraintError = 1.0e-3
nearestPointAlgorithm = ot.AbdoRackwitz()
nearestPointAlgorithm.setMaximumCallsNumber(maximumEvaluationNumber)
nearestPointAlgorithm.setMaximumAbsoluteError(maximumAbsoluteError)
nearestPointAlgorithm.setMaximumRelativeError(maximumRelativeError)
nearestPointAlgorithm.setMaximumResidualError(maximumResidualError)
nearestPointAlgorithm.setMaximumConstraintError(maximumConstraintError)

The FORM method

problem = otb.ReliabilityProblem8()
metaAlgorithm = otb.ReliabilityBenchmarkMetaAlgorithm(problem)
benchmarkResult = metaAlgorithm.runFORM(nearestPointAlgorithm)
benchmarkResult.summary()
'computedProbability = 0.000659887791408224\nexactProbability = 0.0007897927545597477\nabsoluteError = 0.00012990496315152373\nnumberOfCorrectDigits = 0.7838874012130279\nnumberOfFunctionEvaluations = 8\nnumberOfDigitsPerEvaluation = 0.09798592515162849'
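
For reference, this FORM run can also be reproduced directly with OpenTURNS primitives. The following is a hypothetical sketch, assuming the design point search starts at the mean of the input distribution (the meta-algorithm's actual starting point may differ).

event = problem.getEvent()
distribution = event.getAntecedent().getDistribution()
# Search the design point from the mean, then apply the FORM approximation
algoFORM = ot.FORM(nearestPointAlgorithm, event, distribution.getMean())
algoFORM.run()
print(algoFORM.getResult().getEventProbability())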

The SORM method

benchmarkResult = metaAlgorithm.runSORM(nearestPointAlgorithm)
benchmarkResult.summary()
'computedProbability = 0.0007838036444007651\nexactProbability = 0.0007897927545597477\nabsoluteError = 5.989110158982603e-06\nnumberOfCorrectDigits = 2.120150844037516\nnumberOfFunctionEvaluations = 8\nnumberOfDigitsPerEvaluation = 0.2650188555046895'
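
SORM corrects the FORM estimate with the curvatures of the limit-state surface at the design point, which explains the gain in accuracy here at the same cost of 8 function evaluations. A direct call would look like this sketch, reusing event and distribution from the FORM sketch above:

algoSORM = ot.SORM(nearestPointAlgorithm, event, distribution.getMean())
algoSORM.run()
# Breitung's asymptotic formula is the usual SORM approximation
print(algoSORM.getResult().getEventProbabilityBreitung())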

The LHS method

benchmarkResult = metaAlgorithm.runLHS(maximumOuterSampling=10000)
benchmarkResult.summary()
'computedProbability = 0.0006999999999999989\nexactProbability = 0.0007897927545597477\nabsoluteError = 8.979275455974877e-05\nnumberOfCorrectDigits = 0.9442718507112009\nnumberOfFunctionEvaluations = 10000\nnumberOfDigitsPerEvaluation = 9.442718507112009e-05'
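
Under the hood, an LHS estimator can be assembled from OpenTURNS primitives. The following is one plausible sketch, assuming the meta-algorithm disables the coefficient of variation stopping criterion so that the full budget is spent (which matches the 10000 evaluations reported above):

algoLHS = ot.ProbabilitySimulationAlgorithm(event, ot.LHSExperiment())
algoLHS.setMaximumOuterSampling(10000)
algoLHS.setBlockSize(1)
algoLHS.setMaximumCoefficientOfVariation(0.0)  # assumption: spend the full budget
algoLHS.run()
print(algoLHS.getResult().getProbabilityEstimate())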

The MonteCarloSampling method

benchmarkResult = metaAlgorithm.runMonteCarlo(maximumOuterSampling=10000)
benchmarkResult.summary()
'computedProbability = 0.0006000000000000008\nexactProbability = 0.0007897927545597477\nabsoluteError = 0.00018979275455974687\nnumberOfCorrectDigits = 0.619233516283543\nnumberOfFunctionEvaluations = 10000\nnumberOfDigitsPerEvaluation = 6.192335162835429e-05'
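
The crude Monte Carlo loop is identical, with a MonteCarloExperiment in place of the LHS design; a sketch under the same budget assumption:

algoMC = ot.ProbabilitySimulationAlgorithm(event, ot.MonteCarloExperiment())
algoMC.setMaximumOuterSampling(10000)
algoMC.setMaximumCoefficientOfVariation(0.0)  # assumption: spend the full budget
algoMC.run()
print(algoMC.getResult().getProbabilityEstimate())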

The FORM - Importance Sampling method

benchmarkResult = metaAlgorithm.runFORMImportanceSampling(nearestPointAlgorithm)
benchmarkResult.summary()
'computedProbability = 0.0007687158730375458\nexactProbability = 0.0007897927545597477\nabsoluteError = 2.1076881522201886e-05\nnumberOfCorrectDigits = 1.5737067909975317\nnumberOfFunctionEvaluations = 335\nnumberOfDigitsPerEvaluation = 0.004697632211932931'
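
FORM-Importance Sampling draws standard-space samples from a normal density centered at the FORM design point, so that most samples fall near the failure domain. The following sketch, reusing event and distribution from the FORM sketch, is one plausible implementation, not necessarily the meta-algorithm's:

formAlgo = ot.FORM(nearestPointAlgorithm, event, distribution.getMean())
formAlgo.run()
designPoint = formAlgo.getResult().getStandardSpaceDesignPoint()
dimension = distribution.getDimension()
# Unit-variance normal instrumental density centered at the design point
instrumental = ot.Normal(designPoint, [1.0] * dimension, ot.CorrelationMatrix(dimension))
experiment = ot.ImportanceSamplingExperiment(instrumental)
algoIS = ot.ProbabilitySimulationAlgorithm(ot.StandardEvent(event), experiment)
algoIS.setMaximumCoefficientOfVariation(0.1)
algoIS.run()
print(algoIS.getResult().getProbabilityEstimate())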

The Subset method

benchmarkResult = metaAlgorithm.runSubsetSampling()
benchmarkResult.summary()
'computedProbability = 0.0007543799999999997\nexactProbability = 0.0007897927545597477\nabsoluteError = 3.541275455974798e-05\nnumberOfCorrectDigits = 1.3483534358601093\nnumberOfFunctionEvaluations = 4000\nnumberOfDigitsPerEvaluation = 0.00033708835896502734'
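
Subset sampling estimates the failure probability as a product of larger conditional probabilities computed on a sequence of nested events, which is why it remains accurate for rare events. A direct call is short:

algoSS = ot.SubsetSampling(event)
algoSS.run()
print(algoSS.getResult().getProbabilityEstimate())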

The scores above are based on the number of correct base-10 digits in the computed probability compared to the exact one; dividing it by the number of function evaluations gives the performance of the method. The CompareMethods function below takes a problem as a parameter and returns the probabilities estimated by each method, along with the performance of these methods.
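
As a sanity check, the score can be recomputed by hand. A minimal sketch, assuming the number of correct digits is minus the base-10 logarithm of the relative error (this reproduces the FORM summary above):

import math

computedProbability = 0.000659887791408224  # FORM estimate from above
exactProbability = 0.0007897927545597477
numberOfCorrectDigits = -math.log10(
    abs(computedProbability - exactProbability) / abs(exactProbability)
)
print(numberOfCorrectDigits)  # about 0.78, as in the FORM summary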

def PrintResults(name, benchmarkResult):
    """Print the scores of a benchmark result and return [name, digits, digits per evaluation]."""
    print("------------------------------------------------------------------")
    print(name)
    numberOfDigitsPerEvaluation = (
        benchmarkResult.numberOfCorrectDigits
        / benchmarkResult.numberOfFunctionEvaluations
    )
    print("Estimated probability:", benchmarkResult.computedProbability)
    print("Number of function calls:", benchmarkResult.numberOfFunctionEvaluations)
    print("Number of correct digits=%.1f" % (benchmarkResult.numberOfCorrectDigits))
    print(
        "Performance=%.2e (correct digits/evaluation)" % (numberOfDigitsPerEvaluation)
    )
    return [name, benchmarkResult.numberOfCorrectDigits, numberOfDigitsPerEvaluation]
def CompareMethods(problem, nearestPointAlgorithm, maximumOuterSampling=10000):
    """
    Runs various algorithms on a given problem.
    """
    summaryList = []
    pfReference = problem.getProbability()
    print("Exact probability:", pfReference)
    metaAlgorithm = otb.ReliabilityBenchmarkMetaAlgorithm(problem)
    # SubsetSampling
    benchmarkResult = metaAlgorithm.runSubsetSampling()
    summaryList.append(PrintResults("SubsetSampling", benchmarkResult))
    # FORM
    benchmarkResult = metaAlgorithm.runFORM(nearestPointAlgorithm)
    summaryList.append(PrintResults("FORM", benchmarkResult))
    # SORM
    benchmarkResult = metaAlgorithm.runSORM(nearestPointAlgorithm)
    summaryList.append(PrintResults("SORM", benchmarkResult))
    # FORM - ImportanceSampling
    benchmarkResult = metaAlgorithm.runFORMImportanceSampling(
        nearestPointAlgorithm, maximumOuterSampling=maximumOuterSampling
    )
    summaryList.append(PrintResults("FORM-IS", benchmarkResult))
    # MonteCarloSampling
    benchmarkResult = metaAlgorithm.runMonteCarlo(
        maximumOuterSampling=maximumOuterSampling
    )
    summaryList.append(PrintResults("MonteCarloSampling", benchmarkResult))
    # LHS (note: runLHS() uses its own default sample size here, not maximumOuterSampling)
    benchmarkResult = metaAlgorithm.runLHS()
    summaryList.append(PrintResults("LHS", benchmarkResult))
    # Gather results
    numberOfMethods = len(summaryList)
    correctDigitsList = []
    performanceList = []
    algorithmNames = []
    for i in range(numberOfMethods):
        [name, numberOfCorrectDigits, numberOfDigitsPerEvaluation] = summaryList[i]
        algorithmNames.append(name)
        correctDigitsList.append(numberOfCorrectDigits)
        performanceList.append(numberOfDigitsPerEvaluation)
    print("------------------------------------------------------------------------")
    print("Scoring by number of correct digits")
    indices = np.argsort(correctDigitsList)
    rank = list(indices)
    for i in range(numberOfMethods):
        j = rank[i]
        print("%d : %s (%.1f)" % (j, algorithmNames[j], correctDigitsList[j]))
    print("------------------------------------------------------------------------")
    print("Scoring by performance (digits/evaluation)")
    indices = np.argsort(performanceList)
    rank = list(indices)
    for i in range(len(indices)):
        j = rank[i]
        print("%d : %s (%.1e)" % (j, algorithmNames[j], performanceList[j]))
    return correctDigitsList, performanceList
problem = otb.ReliabilityProblem8()
_ = CompareMethods(problem, nearestPointAlgorithm)
Exact probability: 0.0007897927545597477
------------------------------------------------------------------
SubsetSampling
Estimated probability: 0.0007590000000000005
Number of function calls: 4000
Number of correct digits=1.4
Performance=3.52e-04 (correct digits/evaluation)
------------------------------------------------------------------
FORM
Estimated probability: 0.000659887791408224
Number of function calls: 8
Number of correct digits=0.8
Performance=9.80e-02 (correct digits/evaluation)
------------------------------------------------------------------
SORM
Estimated probability: 0.0007838036444007651
Number of function calls: 8
Number of correct digits=2.1
Performance=2.65e-01 (correct digits/evaluation)
------------------------------------------------------------------
FORM-IS
Estimated probability: 0.0008521464618589394
Number of function calls: 362
Number of correct digits=1.1
Performance=3.05e-03 (correct digits/evaluation)
------------------------------------------------------------------
MonteCarloSampling
Estimated probability: 0.0011000000000000005
Number of function calls: 10000
Number of correct digits=0.4
Performance=4.06e-05 (correct digits/evaluation)
------------------------------------------------------------------
LHS
Estimated probability: 0.001999999999999997
Number of function calls: 1000
Number of correct digits=0.0
Performance=0.00e+00 (correct digits/evaluation)
------------------------------------------------------------------------
Scoring by number of correct digits
5 : LHS (0.0)
4 : MonteCarloSampling (0.4)
1 : FORM (0.8)
3 : FORM-IS (1.1)
0 : SubsetSampling (1.4)
2 : SORM (2.1)
------------------------------------------------------------------------
Scoring by performance (digits/evaluation)
5 : LHS (0.0e+00)
4 : MonteCarloSampling (4.1e-05)
0 : SubsetSampling (3.5e-04)
3 : FORM-IS (3.0e-03)
1 : FORM (9.8e-02)
2 : SORM (2.7e-01)
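
Finally, the comparison can be repeated over the whole benchmark by combining CompareMethods with the loop from the beginning of this example. A sketch (some algorithms may fail on some problems, so a robust script would wrap each run in a try/except block):

for i in range(numberOfProblems):
    problem = benchmarkProblemList[i]
    print("Problem:", problem.getName())
    _ = CompareMethods(problem, nearestPointAlgorithm)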

Remarks

  • We note that the FORM and SORM methods are by far the cheapest, but they do not converge to the exact probability: their error comes from approximating the limit-state surface and cannot be reduced by adding function evaluations.

  • We also notice the effectiveness of the FORM-Importance Sampling method: it is inexpensive and it converges to the exact probability.

  • The convergence of the Monte Carlo method requires a large number of simulations.

  • SubsetSampling converges even when the failure probability is very low.

Total running time of the script: (0 minutes 1.466 seconds)