Latin Hypercube Simulation

Let \cD_f = \{\ux \in \Rset^{\inputDim} \space | \space \model(\ux) \leq 0\} denote the failure domain. The goal is to estimate the following probability:

\begin{aligned}
    P_f  & = \int_{\cD_f} f_{\uX}(\ux)d\ux\\
    & = \int_{\Rset^{\inputDim}} \mathbf{1}_{\{\model(\ux) \leq 0 \}}f_{\uX}(\ux)d\ux\\
    & = \Prob {\{\space \model(\uX) \leq 0 \}}
  \end{aligned}

LHS, or Latin Hypercube Sampling, is a sampling method that covers the variation domain of the input variables better than crude Monte Carlo, thanks to a stratified sampling strategy. The method applies only to independent input variables. It is based on dividing the range of each variable into several intervals of equal probability. The sampling proceeds as follows:
  • Step 1  The range of each input variable is stratified into isoprobabilistic cells,

  • Step 2  A cell is uniformly chosen among all the available cells,

  • Step 3  A random number is obtained by inverting the cumulative distribution function locally in the chosen cell,

  • Step 4  All the cells sharing a stratum with the previously chosen cell are removed from the list of available cells.
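For independent inputs, removing every cell that shares a stratum with a chosen cell is equivalent to drawing one independent random permutation of the strata per dimension. The four steps above can be sketched as follows (a minimal illustration; the function name `lhs_sample` and the choice of standard normal marginals are assumptions for the example, not part of the method's definition):

```python
import random
from statistics import NormalDist

def lhs_sample(n, inv_cdfs, seed=0):
    """Draw an n-point Latin Hypercube sample.

    inv_cdfs: one inverse marginal CDF per (independent) input variable.
    Each variable's range is split into n equiprobable strata, and every
    stratum of every variable is hit exactly once (Steps 1-4 above).
    """
    rng = random.Random(seed)
    dim = len(inv_cdfs)
    # One independent random permutation of the strata per dimension:
    # pairing them enforces the "no shared stratum" rule of Step 4.
    perms = [rng.sample(range(n), n) for _ in range(dim)]
    sample = []
    for i in range(n):
        point = []
        for d in range(dim):
            k = perms[d][i]                 # chosen stratum index in [0, n)
            u = (k + rng.random()) / n      # uniform draw inside that stratum
            point.append(inv_cdfs[d](u))    # local CDF inversion (Step 3)
        sample.append(point)
    return sample

# Example with two independent standard normal inputs (assumed marginals)
inv = NormalDist().inv_cdf
points = lhs_sample(100, [inv, inv])
```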

The estimator of the probability of failure with LHS is given by:

\hat{P}_{f,LHS}^\sampleSize = \frac{1}{\sampleSize}\sum_{i=1}^\sampleSize \mathbf{1}_{\{\model(\uX^i) \leq 0 \}}

where the sample \{ \uX^i,i=1 \hdots \sampleSize \} is obtained as described above.
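As an illustration, the estimator can be evaluated on a hypothetical limit-state function g(x) = x_1 + x_2 + 3 with two independent standard normal inputs, for which the exact failure probability is \Phi(-3/\sqrt{2}) \approx 0.017 (the function names and the test case are assumptions made for this sketch):

```python
import random
from statistics import NormalDist

def lhs_failure_probability(g, n, dim, seed=0):
    """Estimate P(g(X) <= 0) from an LHS sample of `dim` independent
    standard normal inputs (an assumed, illustrative input model)."""
    rng = random.Random(seed)
    inv = NormalDist().inv_cdf
    perms = [rng.sample(range(n), n) for _ in range(dim)]
    failures = 0
    for i in range(n):
        # One LHS point: one uniform draw inside each paired stratum
        x = [inv((perms[d][i] + rng.random()) / n) for d in range(dim)]
        failures += g(x) <= 0          # indicator 1_{g(x) <= 0}
    return failures / n

# Hypothetical limit state g(x) = x1 + x2 + 3; exact P_f = Phi(-3/sqrt(2))
p_hat = lhs_failure_probability(lambda x: x[0] + x[1] + 3, 10_000, 2)
```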

One can show that:

\Var{\hat{P}_{f,LHS}^\sampleSize} \leq \frac{\sampleSize}{\sampleSize-1} \, \Var{\hat{P}_{f,MC}^\sampleSize}

where:

  • \Var {\hat{P}_{f,LHS}^\sampleSize} is the variance of the estimator of the probability of exceeding a threshold computed by the LHS technique,

  • \Var {\hat{P}_{f,MC}^\sampleSize} is the variance of the estimator of the probability of exceeding a threshold computed by a crude Monte Carlo method.
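This variance bound can be illustrated empirically by repeating both estimators many times and comparing the empirical variances. The sketch below (the helper name and the half-space limit state g(x) = x_1 + x_2 are assumptions for the example) is expected to show a smaller variance for LHS on this smooth limit state:

```python
import random
from statistics import NormalDist, pvariance

def estimate_pf(n, seed, lhs):
    """One estimate of P(g(X) <= 0) for the hypothetical limit state
    g(x) = x1 + x2 with independent standard normal inputs, using
    either crude Monte Carlo (lhs=False) or LHS (lhs=True)."""
    rng = random.Random(seed)
    inv = NormalDist().inv_cdf
    if lhs:
        # Paired random permutations of the strata, one per dimension
        perms = [rng.sample(range(n), n) for _ in range(2)]
        pts = [[inv((perms[d][i] + rng.random()) / n) for d in range(2)]
               for i in range(n)]
    else:
        pts = [[inv(rng.random()) for _ in range(2)] for _ in range(n)]
    return sum(x[0] + x[1] <= 0 for x in pts) / n

# Empirical variance of each estimator over 200 repetitions of size 100
var_mc = pvariance([estimate_pf(100, s, lhs=False) for s in range(200)])
var_lhs = pvariance([estimate_pf(100, 1000 + s, lhs=True) for s in range(200)])
```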

With the notations

\begin{aligned}
    \mu_\sampleSize &= \frac{1}{\sampleSize}\sum_{i=1}^\sampleSize \mathbf{1}_{\{\model(\ux^i) \leq 0 \}}\\
    \sigma_\sampleSize^2 &= \frac{1}{\sampleSize}\sum_{i=1}^\sampleSize \left(\mathbf{1}_{\{\model(\ux^i) \leq 0 \}} - \mu_\sampleSize\right)^2
  \end{aligned}

the asymptotic confidence interval of order 1-\alpha associated with the estimator \hat{P}_{f,LHS}^\sampleSize is

\left[ \mu_\sampleSize - \frac{q_{1-\alpha / 2} \, \sigma_\sampleSize}{\sqrt{\sampleSize}} \space ; \space \mu_\sampleSize + \frac{q_{1-\alpha / 2} \, \sigma_\sampleSize}{\sqrt{\sampleSize}} \right]

where q_{1-\alpha /2} is the 1-\alpha / 2 quantile of the standard Gaussian distribution \cN(0,1).
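The interval above translates directly into code (a minimal sketch; the helper name `lhs_confidence_interval` is an assumption, while the formula is the one just given):

```python
from statistics import NormalDist

def lhs_confidence_interval(indicators, alpha=0.05):
    """Asymptotic confidence interval of order 1 - alpha for P_f,
    given the 0/1 failure indicators of an LHS sample."""
    n = len(indicators)
    mu = sum(indicators) / n                                      # mu_N
    sigma = (sum((z - mu) ** 2 for z in indicators) / n) ** 0.5   # sigma_N
    q = NormalDist().inv_cdf(1 - alpha / 2)                       # q_{1-alpha/2}
    half_width = q * sigma / n ** 0.5
    return (mu - half_width, mu + half_width)

# Example: 17 failures observed among 1000 LHS points
low, high = lhs_confidence_interval([1] * 17 + [0] * 983)
```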

The LHS estimator is unbiased for P_f (recall that all input variables must be independent).

This method is derived from a more general method called 'Stratified Sampling'.