Benchmark on a given set of problems

In this example, we show how to loop over the problems of the benchmark. We also show how to run several reliability algorithms on a given problem, so that we can score the methods by the number of correct digits in the estimated probability and by their performance, i.e. the number of correct digits per function evaluation.

import openturns as ot
import numpy as np
import otbenchmark as otb

Browse the reliability problems

We list the test cases, mostly taken from the BBRC (Black-Box Reliability Challenge), provided by the otbenchmark module.

benchmarkProblemList = otb.ReliabilityBenchmarkProblemList()
numberOfProblems = len(benchmarkProblemList)
numberOfProblems
26
for i in range(numberOfProblems):
    problem = benchmarkProblemList[i]
    name = problem.getName()
    pf = problem.getProbability()
    event = problem.getEvent()
    antecedent = event.getAntecedent()
    distribution = antecedent.getDistribution()
    dimension = distribution.getDimension()
    print("#", i, ":", name, " : pf = ", pf, ", dimension=", dimension)
# 0 : RP8  : pf =  0.0007897927545597477 , dimension= 6
# 1 : RP14  : pf =  0.00077285 , dimension= 5
# 2 : RP22  : pf =  0.004207305511299618 , dimension= 2
# 3 : RP24  : pf =  0.00286 , dimension= 2
# 4 : RP25  : pf =  4.148566293759747e-05 , dimension= 2
# 5 : RP28  : pf =  1.4532945550025393e-07 , dimension= 2
# 6 : RP31  : pf =  0.003226681209587691 , dimension= 2
# 7 : RP33  : pf =  0.00257 , dimension= 3
# 8 : RP35  : pf =  0.00347894632 , dimension= 2
# 9 : RP38  : pf =  0.0081 , dimension= 7
# 10 : RP53  : pf =  0.0313 , dimension= 2
# 11 : RP55  : pf =  0.5600144282863704 , dimension= 2
# 12 : RP54  : pf =  0.000998 , dimension= 20
# 13 : RP57  : pf =  0.0284 , dimension= 2
# 14 : RP75  : pf =  0.00981929872154689 , dimension= 2
# 15 : RP89  : pf =  0.00543 , dimension= 2
# 16 : RP107  : pf =  2.92e-07 , dimension= 10
# 17 : RP110  : pf =  3.19e-05 , dimension= 2
# 18 : RP111  : pf =  7.65e-07 , dimension= 2
# 19 : RP63  : pf =  0.000379 , dimension= 100
# 20 : RP91  : pf =  0.000697 , dimension= 5
# 21 : RP60  : pf =  0.0456 , dimension= 5
# 22 : RP77  : pf =  2.87e-07 , dimension= 3
# 23 : Four-branch serial system  : pf =  0.0022227950661944398 , dimension= 2
# 24 : R-S  : pf =  0.07864960352514257 , dimension= 2
# 25 : Axial stressed beam  : pf =  0.02919819462483095 , dimension= 2
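
Since benchmarkProblemList is a regular Python list, we can filter it with the accessors used above. For instance, the following sketch selects the two-dimensional problems:

twoDimensionalProblems = [
    problem
    for problem in benchmarkProblemList
    if problem.getEvent().getAntecedent().getDistribution().getDimension() == 2
]
print([problem.getName() for problem in twoDimensionalProblems])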
We now set up the optimization algorithm which searches for the design point; it is used below by the FORM, SORM and FORM - Importance Sampling methods.

maximumEvaluationNumber = 1000
maximumAbsoluteError = 1.0e-3
maximumRelativeError = 1.0e-3
maximumResidualError = 1.0e-3
maximumConstraintError = 1.0e-3
# The Abdo-Rackwitz algorithm solves the nearest point problem.
nearestPointAlgorithm = ot.AbdoRackwitz()
nearestPointAlgorithm.setMaximumCallsNumber(maximumEvaluationNumber)
nearestPointAlgorithm.setMaximumAbsoluteError(maximumAbsoluteError)
nearestPointAlgorithm.setMaximumRelativeError(maximumRelativeError)
nearestPointAlgorithm.setMaximumResidualError(maximumResidualError)
nearestPointAlgorithm.setMaximumConstraintError(maximumConstraintError)

The FORM method

problem = otb.ReliabilityProblem8()
metaAlgorithm = otb.ReliabilityBenchmarkMetaAlgorithm(problem)
benchmarkResult = metaAlgorithm.runFORM(nearestPointAlgorithm)
benchmarkResult.summary()
'computedProbability = 0.000659887791408224\nexactProbability = 0.0007897927545597477\nabsoluteError = 0.00012990496315152373\nnumberOfCorrectDigits = 0.7838874012130279\nnumberOfFunctionEvaluations = 8\nnumberOfDigitsPerEvaluation = 0.09798592515162849'
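
The FORM method approximates the limit state surface by a hyperplane at the design point in the standard space, which explains the very small number of function evaluations. As a sketch, the runFORM call above is assumed to be roughly equivalent to the following direct OpenTURNS code (the internals of the wrapper may differ):

event = problem.getEvent()
distribution = event.getAntecedent().getDistribution()
# Start the search for the design point from the mean of the input distribution
formAlgorithm = ot.FORM(nearestPointAlgorithm, event, distribution.getMean())
formAlgorithm.run()
print(formAlgorithm.getResult().getEventProbability())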

The SORM method

benchmarkResult = metaAlgorithm.runSORM(nearestPointAlgorithm)
benchmarkResult.summary()
'computedProbability = 0.0007838036444007651\nexactProbability = 0.0007897927545597477\nabsoluteError = 5.989110158982603e-06\nnumberOfCorrectDigits = 2.120150844037516\nnumberOfFunctionEvaluations = 8\nnumberOfDigitsPerEvaluation = 0.2650188555046895'

The LHS method

benchmarkResult = metaAlgorithm.runLHS(maximumOuterSampling=10000)
benchmarkResult.summary()
'computedProbability = 0.00029999999999999954\nexactProbability = 0.0007897927545597477\nabsoluteError = 0.0004897927545597482\nnumberOfCorrectDigits = 0.20750078889171192\nnumberOfFunctionEvaluations = 10000\nnumberOfDigitsPerEvaluation = 2.0750078889171193e-05'

The MonteCarloSampling method

benchmarkResult = metaAlgorithm.runMonteCarlo(maximumOuterSampling=10000)
benchmarkResult.summary()
'computedProbability = 0.00039999999999999785\nexactProbability = 0.0007897927545597477\nabsoluteError = 0.00038979275455974983\nnumberOfCorrectDigits = 0.3066793830449656\nnumberOfFunctionEvaluations = 10000\nnumberOfDigitsPerEvaluation = 3.066793830449656e-05'
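
The runMonteCarlo call is assumed to wrap a plain OpenTURNS probability simulation algorithm; a minimal sketch is:

experiment = ot.MonteCarloExperiment()
algorithm = ot.ProbabilitySimulationAlgorithm(problem.getEvent(), experiment)
algorithm.setMaximumOuterSampling(10000)
algorithm.setMaximumCoefficientOfVariation(0.0)  # disable early stopping
algorithm.run()
print(algorithm.getResult().getProbabilityEstimate())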

The FORM - Importance Sampling method

benchmarkResult = metaAlgorithm.runFORMImportanceSampling(nearestPointAlgorithm)
benchmarkResult.summary()
'computedProbability = 0.0007672662944078312\nexactProbability = 0.0007897927545597477\nabsoluteError = 2.2526460151916497e-05\nnumberOfCorrectDigits = 1.5448201939896349\nnumberOfFunctionEvaluations = 393\nnumberOfDigitsPerEvaluation = 0.003930840188268791'
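
This method uses the FORM design point to center an instrumental density, then corrects the FORM approximation by sampling. A minimal sketch of this classical recipe, assuming it is what runFORMImportanceSampling implements:

inputMean = problem.getEvent().getAntecedent().getDistribution().getMean()
formAlgorithm = ot.FORM(nearestPointAlgorithm, problem.getEvent(), inputMean)
formAlgorithm.run()
designPoint = formAlgorithm.getResult().getStandardSpaceDesignPoint()
# Standard normal density centered at the design point, in the standard space
instrumental = ot.Normal(designPoint, ot.CovarianceMatrix(designPoint.getDimension()))
experiment = ot.ImportanceSamplingExperiment(instrumental)
algorithm = ot.ProbabilitySimulationAlgorithm(
    ot.StandardEvent(problem.getEvent()), experiment
)
algorithm.setMaximumOuterSampling(10000)
algorithm.run()
print(algorithm.getResult().getProbabilityEstimate())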

The Subset method

benchmarkResult = metaAlgorithm.runSubsetSampling()
benchmarkResult.summary()
'computedProbability = 0.0005751900000000005\nexactProbability = 0.0007897927545597477\nabsoluteError = 0.0002146027545597472\nnumberOfCorrectDigits = 0.5658778531610409\nnumberOfFunctionEvaluations = 4000\nnumberOfDigitsPerEvaluation = 0.0001414694632902602'
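
Subset sampling estimates a small probability as the product of larger conditional probabilities, estimated on a sequence of nested events. A direct call, assumed equivalent to what runSubsetSampling wraps, looks like:

subsetAlgorithm = ot.SubsetSampling(problem.getEvent())
subsetAlgorithm.run()
print(subsetAlgorithm.getResult().getProbabilityEstimate())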

The PrintResults function below prints the estimated probability, the number of function evaluations, the number of correct base-10 digits in the computed probability with respect to the exact one, and the performance, i.e. the number of correct digits per function evaluation. The CompareMethods function takes a problem as a parameter, runs the various algorithms on it, prints their results and returns the lists of correct digits and performances.
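
The summaries above are consistent with the definition d = -log10(relativeError), where relativeError = |computedProbability - exactProbability| / exactProbability. We can check this on the FORM run:

computed = 0.000659887791408224
exact = 0.0007897927545597477
relativeError = abs(computed - exact) / exact
print(-np.log10(relativeError))  # approximately 0.784, as reported above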

def PrintResults(name, benchmarkResult):
    """Print the results of a method and return [name, digits, performance]."""
    print("------------------------------------------------------------------")
    print(name)
    numberOfDigitsPerEvaluation = (
        benchmarkResult.numberOfCorrectDigits
        / benchmarkResult.numberOfFunctionEvaluations
    )
    print("Estimated probability:", benchmarkResult.computedProbability)
    print("Number of function calls:", benchmarkResult.numberOfFunctionEvaluations)
    print("Number of correct digits=%.1f" % (benchmarkResult.numberOfCorrectDigits))
    print(
        "Performance=%.2e (correct digits/evaluation)" % (numberOfDigitsPerEvaluation)
    )
    return [name, benchmarkResult.numberOfCorrectDigits, numberOfDigitsPerEvaluation]

def CompareMethods(problem, nearestPointAlgorithm, maximumOuterSampling=10000):
    """
    Runs various algorithms on a given problem.
    """
    summaryList = []
    pfReference = problem.getProbability()
    print("Exact probability:", pfReference)
    metaAlgorithm = otb.ReliabilityBenchmarkMetaAlgorithm(problem)
    # SubsetSampling
    benchmarkResult = metaAlgorithm.runSubsetSampling()
    summaryList.append(PrintResults("SubsetSampling", benchmarkResult))
    # FORM
    benchmarkResult = metaAlgorithm.runFORM(nearestPointAlgorithm)
    summaryList.append(PrintResults("FORM", benchmarkResult))
    # SORM
    benchmarkResult = metaAlgorithm.runSORM(nearestPointAlgorithm)
    summaryList.append(PrintResults("SORM", benchmarkResult))
    # FORM - ImportanceSampling
    benchmarkResult = metaAlgorithm.runFORMImportanceSampling(
        nearestPointAlgorithm, maximumOuterSampling=maximumOuterSampling
    )
    summaryList.append(PrintResults("FORM-IS", benchmarkResult))
    # MonteCarloSampling
    benchmarkResult = metaAlgorithm.runMonteCarlo(
        maximumOuterSampling=maximumOuterSampling
    )
    summaryList.append(PrintResults("MonteCarloSampling", benchmarkResult))
    # LHS (uses the default sample size of runLHS, not maximumOuterSampling)
    benchmarkResult = metaAlgorithm.runLHS()
    summaryList.append(PrintResults("LHS", benchmarkResult))
    # Gather results
    numberOfMethods = len(summaryList)
    correctDigitsList = []
    performanceList = []
    algorithmNames = []
    for i in range(numberOfMethods):
        [name, numberOfCorrectDigits, numberOfDigitsPerEvaluation] = summaryList[i]
        algorithmNames.append(name)
        correctDigitsList.append(numberOfCorrectDigits)
        performanceList.append(numberOfDigitsPerEvaluation)
    print("------------------------------------------------------------------------")
    print("Scoring by number of correct digits")
    indices = np.argsort(correctDigitsList)
    rank = list(indices)
    for i in range(numberOfMethods):
        j = rank[i]
        print("%d : %s (%.1f)" % (j, algorithmNames[j], correctDigitsList[j]))
    print("------------------------------------------------------------------------")
    print("Scoring by performance (digits/evaluation)")
    indices = np.argsort(performanceList)
    rank = list(indices)
    for i in range(len(indices)):
        j = rank[i]
        print("%d : %s (%.1e)" % (j, algorithmNames[j], performanceList[j]))
    return correctDigitsList, performanceList

problem = otb.ReliabilityProblem8()
_ = CompareMethods(problem, nearestPointAlgorithm)
Exact probability: 0.0007897927545597477
------------------------------------------------------------------
SubsetSampling
Estimated probability: 0.0011300000000000004
Number of function calls: 3000
Number of correct digits=0.4
Performance=1.22e-04 (correct digits/evaluation)
------------------------------------------------------------------
FORM
Estimated probability: 0.000659887791408224
Number of function calls: 8
Number of correct digits=0.8
Performance=9.80e-02 (correct digits/evaluation)
------------------------------------------------------------------
SORM
Estimated probability: 0.0007838036444007651
Number of function calls: 8
Number of correct digits=2.1
Performance=2.65e-01 (correct digits/evaluation)
------------------------------------------------------------------
FORM-IS
Estimated probability: 0.0007504266628185941
Number of function calls: 453
Number of correct digits=1.3
Performance=2.88e-03 (correct digits/evaluation)
------------------------------------------------------------------
MonteCarloSampling
Estimated probability: 0.0006999999999999989
Number of function calls: 10000
Number of correct digits=0.9
Performance=9.44e-05 (correct digits/evaluation)
------------------------------------------------------------------
LHS
Estimated probability: 0.0
Number of function calls: 1000
Number of correct digits=0.0
Performance=0.00e+00 (correct digits/evaluation)
------------------------------------------------------------------------
Scoring by number of correct digits
5 : LHS (0.0)
0 : SubsetSampling (0.4)
1 : FORM (0.8)
4 : MonteCarloSampling (0.9)
3 : FORM-IS (1.3)
2 : SORM (2.1)
------------------------------------------------------------------------
Scoring by performance (digits/evaluation)
5 : LHS (0.0e+00)
4 : MonteCarloSampling (9.4e-05)
0 : SubsetSampling (1.2e-04)
3 : FORM-IS (2.9e-03)
1 : FORM (9.8e-02)
2 : SORM (2.7e-01)

Remarks

  • We note that the FORM and SORM methods are very fast, but they do not converge to the exact probability: their error comes from the approximation of the limit state surface and cannot be reduced by increasing the number of function evaluations.

  • We also notice the effectiveness of the FORM - Importance Sampling method: it is inexpensive and it converges to the exact probability.

  • The Monte Carlo method converges, but it requires a large number of simulations (see the sample size estimate after these remarks).

  • SubsetSampling converges even when the probability is very low.
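
To quantify the remark on Monte Carlo, the classical estimate of the sample size needed to reach a given coefficient of variation is n = (1 - pf) / (pf * cov^2). On this problem:

pf = 0.0007897927545597477  # exact probability of RP8
cov = 0.1  # target coefficient of variation
n = (1.0 - pf) / (pf * cov**2)
print(n)  # about 1.3e5 evaluations, more than ten times the 10^4 budget used above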

Total running time of the script: (0 minutes 1.461 seconds)