Compute reference probabilities with Monte-Carlo

In this example, we perform a reliability benchmark based on a large Monte-Carlo sample. In order to limit the total elapsed time, we set a maximum duration for each problem. In order to get the best possible accuracy within this time limit, we set the target coefficient of variation to zero, so that the algorithm stops only on the time limit or the maximum sample size.

import otbenchmark as otb
import pandas as pd
import numpy as np
from tqdm import tqdm
import time
problemslist = otb.ReliabilityBenchmarkProblemList()
numberOfProblems = len(problemslist)
numberOfProblems
26
coefficientOfVariation = 0.0
maximumOuterSampling = 10 ** 4  # 10 ** 6 for real
blockSize = 10 ** 0  # 10 ** 4 for real simulations
blockSize
1
confidenceLevel = 0.95
maximumDurationSeconds = 5 * 60.0
totalDurationMinutes = numberOfProblems * maximumDurationSeconds / 60.0
totalDurationMinutes
130.0
model_names = [problemslist[i].getName() for i in range(numberOfProblems)]
metrics = ["PF", "N. function calls", "PMin", "PMax", "C.O.V.", "Digits", "Time (s)"]
resultArray = np.zeros((numberOfProblems, len(metrics)))
for i in tqdm(range(numberOfProblems)):
    startTime = time.time()
    problem = problemslist[i]
    name = problem.getName()
    event = problem.getEvent()
    g = event.getFunction()
    factory = otb.ProbabilitySimulationAlgorithmFactory()
    algo = factory.buildMonteCarlo(problem)
    algo.setMaximumOuterSampling(maximumOuterSampling)
    algo.setBlockSize(blockSize)
    algo.setMaximumCoefficientOfVariation(coefficientOfVariation)
    algo.setMaximumTimeDuration(maximumDurationSeconds)
    initialNumberOfCall = g.getEvaluationCallsNumber()
    algo.run()
    result = algo.getResult()
    numberOfFunctionEvaluations = g.getEvaluationCallsNumber() - initialNumberOfCall
    computedProbability = result.getProbabilityEstimate()
    confidenceLength = result.getConfidenceLength(confidenceLevel)
    pmin = computedProbability - 0.5 * confidenceLength
    pmax = computedProbability + 0.5 * confidenceLength
    cov = result.getCoefficientOfVariation()
    if cov > 0.0:
        expectedDigits = -np.log10(cov) - 1.0
    else:
        expectedDigits = 0.0
    stopTime = time.time()
    elapsedTime = stopTime - startTime
    resultArray[i][0] = computedProbability
    resultArray[i][1] = numberOfFunctionEvaluations
    resultArray[i][2] = pmin
    resultArray[i][3] = pmax
    resultArray[i][4] = cov
    resultArray[i][5] = expectedDigits
    resultArray[i][6] = elapsedTime
df = pd.DataFrame(resultArray, index=model_names, columns=metrics)
# df.to_csv("reliability_compute_reference_proba.csv")
df
PF N. function calls PMin PMax C.O.V. Digits Time (s)
RP8 0.0007 10000.0 0.000182 0.001218 0.377832 -0.577299 0.334805
RP14 0.0012 10000.0 0.000521 0.001879 0.288502 -0.460149 0.087393
RP22 0.0061 10000.0 0.004574 0.007626 0.127646 -0.106006 0.024108
RP24 0.0035 10000.0 0.002343 0.004657 0.168735 -0.227205 0.024113
RP25 0.0000 10000.0 0.000000 0.000000 -1.000000 0.000000 0.026124
RP28 0.0000 10000.0 0.000000 0.000000 -1.000000 0.000000 0.025764
RP31 0.0017 10000.0 0.000893 0.002507 0.242329 -0.384406 0.024328
RP33 0.0031 10000.0 0.002010 0.004190 0.179327 -0.253645 0.028419
RP35 0.0024 10000.0 0.001441 0.003359 0.203879 -0.309373 0.024529
RP38 0.0077 10000.0 0.005987 0.009413 0.113521 -0.055076 0.046808
RP53 0.0309 10000.0 0.027508 0.034292 0.056002 0.251795 0.024597
RP55 0.5597 10000.0 0.549970 0.569430 0.008869 1.052103 0.024402
RP54 0.0012 10000.0 0.000521 0.001879 0.288502 -0.460149 0.083084
RP57 0.0278 10000.0 0.024578 0.031022 0.059136 0.228145 0.024396
RP75 0.0081 10000.0 0.006343 0.009857 0.110660 -0.043991 0.023883
RP89 0.0048 10000.0 0.003445 0.006155 0.143991 -0.158335 0.024247
RP107 0.0000 10000.0 0.000000 0.000000 -1.000000 0.000000 0.063518
RP110 0.0000 10000.0 0.000000 0.000000 -1.000000 0.000000 0.026479
RP111 0.0000 10000.0 0.000000 0.000000 -1.000000 0.000000 0.025741
RP63 0.0006 10000.0 0.000120 0.001080 0.408126 -0.610794 0.467239
RP91 0.0006 10000.0 0.000120 0.001080 0.408126 -0.610794 0.039186
RP60 0.0465 10000.0 0.042373 0.050627 0.045283 0.344066 0.274900
RP77 0.0000 10000.0 0.000000 0.000000 -1.000000 0.000000 0.030447
Four-branch serial system 0.0023 10000.0 0.001361 0.003239 0.208274 -0.318636 0.024592
R-S 0.0768 10000.0 0.071581 0.082019 0.034671 0.460033 0.023778
Axial stressed beam 0.0260 10000.0 0.022881 0.029119 0.061206 0.213207 0.024014


The problems with higher failure probabilities are, as expected, solved more accurately by the Monte-Carlo method. For example, the RP55 problem, which has the highest probability (close to 0.560), is estimated with about one significant digit with the reduced settings used here (see the Digits column). Conversely, the problems with probabilities close to zero are much more difficult to solve. The RP28 problem, whose probability is close to 10^{-7}, has no significant digit.
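The Digits column of the table is derived from the coefficient of variation. A minimal sketch of that convention, matching the `expectedDigits = -np.log10(cov) - 1.0` line in the loop above and checked against the RP55 row:

```python
import numpy as np

def expected_digits(cov):
    # Significant digits estimated from the coefficient of variation;
    # the "- 1.0" is the conservative convention used in the loop above.
    return -np.log10(cov) - 1.0 if cov > 0.0 else 0.0

# RP55 row of the table: C.O.V. = 0.008869 -> Digits close to 1.05
print(round(expected_digits(0.008869), 3))
```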

These results are consistent with the distribution of the Monte-Carlo estimator. Since the number of failures follows a binomial distribution, the variance of the estimator is:

\sigma_{p_f}^2 = \frac{p_f (1-p_f)}{n}

where n is the sample size and p_f is the failure probability. The coefficient of variation is:

CV = \frac{\sigma_{p_f}}{p_f}.

Since we do not know the exact value of p_f, we use its approximation \tilde{p}_f instead (the equations become approximate, but the conclusion is unchanged). This implies:

CV = \sqrt{\frac{1 - p_f}{p_f n}}.
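This formula can be checked against the table above: for RP55, the estimated probability is 0.5597 with n = 10^4 outer iterations, and the formula reproduces the reported C.O.V. of 0.008869. A minimal sketch:

```python
import math

def monte_carlo_cov(pf, n):
    # Coefficient of variation of the Monte-Carlo estimator:
    # CV = sqrt((1 - p_f) / (p_f * n))
    return math.sqrt((1.0 - pf) / (pf * n))

# RP55: p_f ~ 0.5597, n = 10^4 -> CV close to the 0.008869 in the table
print(monte_carlo_cov(0.5597, 10**4))
```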

When p_f \rightarrow 0, we have 1 - p_f \rightarrow 1, which implies:

CV \rightarrow \sqrt{\frac{1}{p_f n}}.

Inverting the previous equation, we get the sample size given the coefficient of variation:

n \approx \frac{1}{p_f CV^2}.

This leads to the rule of thumb that, in order to estimate the probability p_f = 10^{-m}, where m is an integer, with one significant digit (i.e. CV = 10^{-1}), we need a sample size equal to:

n \approx \frac{1}{10^{-m} \, 10^{-2}} = 10^{m+2}.

For example, estimating the probability of the RP28 problem with a single significant digit requires a sample size close to n = 10^9, since the exact p_f \approx 10^{-7}.
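The rule of thumb can be sketched numerically. The helper name below is ours, and the default target CV = 0.1 corresponds to roughly one significant digit:

```python
def required_sample_size(pf, cv=0.1):
    # Sample size needed to reach a target coefficient of variation,
    # using n ~ 1 / (p_f * CV^2), valid for small p_f.
    return 1.0 / (pf * cv**2)

# RP28: p_f ~ 1e-7 with one significant digit (CV ~ 0.1) -> n ~ 1e9
print(required_sample_size(1e-7))
```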

Total running time of the script: (0 minutes 2.112 seconds)