Quasi Monte Carlo

Let us denote the failure domain by \cD_f = \{\ux \in \Rset^{n} \: | \:  g(\ux,\underline{d}) \leq 0\}. The goal is to estimate the following probability:

     P_f &=& \int_{\cD_f} f_{\uX}(\ux)d\ux\\
     &=& \int_{\Rset^{n}} \mathbf{1}_{\{g(\ux,\underline{d}) \leq 0 \}}f_{\uX}(\ux)d\ux\\
     &=& \Prob {\{\:g(\uX,\underline{d}) \leq 0 \}}

Quasi-Monte Carlo is a technique which approximates the integral above using a low discrepancy sequence \{\vect{x}_1, \hdots, \vect{x}_N\} instead of a randomly generated sequence, as follows:

P_f \approx \frac{1}{N}\,\sum_{i=1}^N  \mathbf{1}_{\cD_f}(\ux_i)

where the points \ux_i are distributed according to the density f_{\uX}, thanks to the inverse CDF technique described below.
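As an illustrative sketch (not part of the original text), the estimator can be demonstrated on a toy limit state g(\ux) = 3 - x_1 - x_2 with independent standard normal inputs, for which P_f = 1 - \Phi(3/\sqrt{2}) \approx 0.017. The use of SciPy's `scipy.stats.qmc` module and all names below are illustrative choices, not prescribed by the text.

```python
import numpy as np
from scipy.stats import norm, qmc

# Toy limit state: failure when g(x) = 3 - x1 - x2 <= 0,
# with X1, X2 independent standard normals.
# Exact value: P_f = 1 - Phi(3 / sqrt(2)) ~ 0.0169
def g(x):
    return 3.0 - x[:, 0] - x[:, 1]

# Sobol' low discrepancy points on [0,1]^2 (scrambled, which avoids the
# initial point 0 that the inverse CDF would map to -infinity)
u = qmc.Sobol(d=2, scramble=True, seed=0).random_base2(m=14)  # N = 2**14

# Inverse CDF transform so the points follow the input density
x = norm.ppf(u)

# Quasi-Monte Carlo estimate of the failure probability
p_f = np.mean(g(x) <= 0.0)
print(p_f)
```

The only randomness here comes from the scrambling of the Sobol' sequence; with a fixed seed the estimate is reproducible.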

In general, the integral of a function f on \Delta = [0,1]^s can be approximated by using some low discrepancy sequence \{\vect{x}_1, \hdots, \vect{x}_N\} as follows:

\int_{\Delta} f(\vect{u})\,d\vect{u} \approx \frac{1}{N}\,\sum_{i=1}^N f(\vect{x}_i).
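As a numerical check of this unit-cube approximation (an illustration added here, with an arbitrarily chosen integrand), one can integrate f(\vect{u}) = u_1 u_2 over [0,1]^2, whose exact value is 1/4, with a Halton sequence from SciPy:

```python
import numpy as np
from scipy.stats import qmc

# Low discrepancy Halton sequence on [0,1]^2
sampler = qmc.Halton(d=2, scramble=False)
x = sampler.random(4096)           # N = 4096 points

# f(u) = u1 * u2; exact integral over [0,1]^2 is 1/4
estimate = np.mean(x[:, 0] * x[:, 1])
print(estimate)                    # close to 0.25
```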

The low discrepancy sequence is generated on \Delta according to the Lebesgue measure and may then be transformed to any measure \mu thanks to the inverse CDF technique, in order to approximate the integral:

    \int_{\Rset^s} f(\vect{u})\,\mu(d\vect{u}) \approx \frac{1}{N}\,\sum_{i=1}^N f(\vect{x}_i).
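A minimal sketch of the inverse CDF transform (added as an illustration, assuming the standard normal as target measure): the quantile function `norm.ppf` maps uniform low discrepancy points to Gaussian-distributed points, which is checked here on E[X^2] = 1.

```python
import numpy as np
from scipy.stats import norm, qmc

# Uniform low discrepancy points on [0,1] (scrambled, which avoids the
# initial point 0 that norm.ppf would map to -infinity)
u = qmc.Halton(d=1, scramble=True, seed=0).random(8192)

# Inverse CDF transform: uniform points -> standard normal measure
x = norm.ppf(u)

# For the standard normal, E[X^2] = 1
estimate = np.mean(x**2)
print(estimate)
```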

As the sequence is not randomly generated, we cannot use the Central Limit Theorem to control the quality of the approximation. This quality is given by the Koksma-Hlawka inequality, which we recall here:

    \left\lvert \frac{1}{N}\sum_{i=1}^N f(\vect{x}_i) - \int_{\Delta} f(\vect{x})\,d\vect{x} \right\rvert \le Var(f)\,D^N(\vect{x}_1, ..., \vect{x}_N)

where Var(f) is the total variation of f in the sense of Hardy and Krause, and D^N(\vect{x}_1, ..., \vect{x}_N) is the discrepancy of the sequence \{\vect{x}_1, \hdots, \vect{x}_N\}.

For Halton, Inverse Halton and Sobol sequences, we have:

    D^N = O\biggl(\frac{\log^s{N}}{N}\biggr)

Thus, asymptotically the error converges in O\biggl(\frac{\log^s{N}}{N}\biggr), instead of O\biggl(\frac{1}{\sqrt{N}}\biggr) for traditional Monte Carlo. The convergence rate depends on the dimension s: the bound improves on Monte Carlo only when N \gg e^s.
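To make the rate comparison concrete, the following sketch (an illustration added here, not part of the original text) compares plain Monte Carlo and a Sobol' sequence on the same smooth integrand in dimension s = 2, where N \gg e^s is easily satisfied:

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
exact = 0.25                                        # integral of u1*u2 over [0,1]^2

for n in (2**8, 2**12, 2**16):
    mc = rng.random((n, 2))                         # pseudo-random points
    sob = qmc.Sobol(d=2, scramble=False).random(n)  # low discrepancy points
    err_mc = abs(np.mean(mc[:, 0] * mc[:, 1]) - exact)
    err_qmc = abs(np.mean(sob[:, 0] * sob[:, 1]) - exact)
    print(n, err_mc, err_qmc)
```

The quasi-Monte Carlo error shrinks markedly faster with N than the Monte Carlo error, consistent with the O(log^s N / N) versus O(1/\sqrt{N}) rates above.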