A low-discrepancy sequence is a sequence with the property that for
all values of $N$, its subsequence $(x_1, \ldots, x_N)$ has a low
discrepancy.
The discrepancy of a sequence is low if the number of points in the
sequence falling into an arbitrary set $B$ is close to proportional to
the measure of $B$, as would happen on average (but not for particular
samples) in the case of a uniform distribution. Specific definitions
of discrepancy differ regarding the choice of $B$ (hyper-spheres,
hypercubes, etc.) and how the discrepancy for every $B$ is computed
(usually normalized) and combined (usually by taking the worst value).
Low-discrepancy sequences are also called quasi-random or sub-random
sequences, due to their common use as a replacement for uniformly
distributed random numbers. The "quasi" modifier is used to denote
more clearly that the values of a low-discrepancy sequence are neither
random nor pseudorandom; such sequences share some properties of
random variables, and in certain applications, such as the quasi-Monte
Carlo method, their lower discrepancy is an important advantage.
At least three methods of numerical integration can be phrased as
follows. Given a set $\{x_1, \ldots, x_N\}$ in the interval $[0,1]$,
approximate the integral of a function $f$ as the average of the function
evaluated at those points:

$$\int_0^1 f(u)\,du \approx \frac{1}{N} \sum_{i=1}^N f(x_i).$$

If the points are chosen as $x_i = i/N$, this is the rectangle
rule.
If the points are chosen to be randomly distributed, this is the
Monte Carlo method.
If the points are chosen as elements of a low-discrepancy sequence,
this is the quasi-Monte Carlo method.
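To make the three rules concrete, here is a minimal sketch in Python, assuming SciPy 1.7 or later for the scipy.stats.qmc module; the integrand and sample size are illustrative choices, not taken from the text:

```python
import numpy as np
from scipy.stats import qmc

f = lambda x: np.exp(x)               # integrand; exact integral on [0,1] is e - 1
N, exact = 1000, np.e - 1

rect = f(np.arange(1, N + 1) / N).mean()              # rectangle rule: x_i = i/N
mc = f(np.random.default_rng(0).random(N)).mean()     # Monte Carlo: i.i.d. uniforms
halton = qmc.Halton(d=1, scramble=False).random(N)    # quasi-Monte Carlo: Halton points
qmc_est = f(halton.ravel()).mean()

for name, est in [("rectangle", rect), ("Monte Carlo", mc), ("quasi-MC", qmc_est)]:
    print(f"{name:12s} error = {abs(est - exact):.2e}")
```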
The discrepancy of a set $P = \{x_1, \ldots, x_N\}$ is
defined, using Niederreiter's notation, as

$$D_N(P) = \sup_{B \in J} \left| \frac{A(B; P)}{N} - \lambda_s(B) \right|$$

where $\lambda_s$ is the $s$-dimensional Lebesgue measure,
$A(B; P)$ is the number of points in $P$ that fall into
$B$, and $J$ is the set of $s$-dimensional intervals or boxes
of the form:

$$\prod_{i=1}^s [a_i, b_i) = \{ x \in \mathbf{R}^s : a_i \le x_i < b_i \}$$

where $0 \le a_i < b_i \le 1$.
The star-discrepancy $D^*_N(P)$ is defined similarly, except that the
supremum is taken over the set $J^*$ of intervals of the form

$$\prod_{i=1}^s [0, u_i)$$

where $u_i$ is in the half-open interval $[0, 1)$.
The two are related by

$$D^*_N \le D_N \le 2^s D^*_N.$$
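In one dimension ($s = 1$), the star-discrepancy of a finite point set can be computed exactly, because the supremum over intervals $[0, u)$ is attained at the sample points. Below is a minimal sketch; the helper name star_discrepancy_1d is ours, not a standard library function:

```python
import numpy as np

def star_discrepancy_1d(points):
    """Exact star-discrepancy of a 1-D point set in [0, 1).

    For sorted points x_(1) <= ... <= x_(N), the supremum over
    intervals [0, u) is attained at the sample points, giving
    D*_N = max_i max( i/N - x_(i), x_(i) - (i-1)/N ).
    """
    x = np.sort(np.asarray(points, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return max(np.max(i / n - x), np.max(x - (i - 1) / n))

print(star_discrepancy_1d([0.5, 0.25, 0.75]))                    # van der Corput, N=3
print(star_discrepancy_1d(np.random.default_rng(0).random(3)))   # random points, N=3
```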
The Koksma-Hlawka inequality shows that the error of such a method
can be bounded by the product of two terms, one of which depends only
on $f$, while the other is the discrepancy of the set
$\{x_1, \ldots, x_N\}$.
Let $\bar{I}^s$ be the $s$-dimensional unit cube,
$\bar{I}^s = [0,1] \times \cdots \times [0,1]$. Let $f$ have
bounded variation $V(f)$ on $\bar{I}^s$ in the sense of Hardy
and Krause. Then for any $x_1, \ldots, x_N$ in
$I^s = [0,1)^s$,

$$\left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar{I}^s} f(u)\,du \right| \le V(f)\, D^*_N(x_1, \ldots, x_N).$$
The Koksma-Hlawka inequality is sharp in the following sense: For any
point set $\{x_1, \ldots, x_N\}$ in $I^s$ and any $\varepsilon
> 0$, there is a function $f$ with bounded
variation and $V(f) = 1$ such that:

$$\left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar{I}^s} f(u)\,du \right| > D^*_N(x_1, \ldots, x_N) - \varepsilon.$$

Therefore, the quality of a numerical integration rule depends only on
the discrepancy $D^*_N(x_1, \ldots, x_N)$.
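The practical effect of this bound can be illustrated numerically. The following sketch, again assuming SciPy's scipy.stats.qmc module, compares plain Monte Carlo with a Sobol' sequence on a smooth integrand whose integral over $[0,1]^s$ is exactly 1 (the test function is our own illustrative choice):

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    # Smooth test integrand: prod_i (pi/2) sin(pi x_i);
    # its integral over [0,1]^s is exactly 1.
    return np.prod(np.pi / 2 * np.sin(np.pi * x), axis=1)

s, m = 4, 12                  # dimension s and sample size N = 2^m
n = 2 ** m

mc = f(np.random.default_rng(0).random((n, s))).mean()    # plain Monte Carlo
sobol = qmc.Sobol(d=s, scramble=False)
qmc_est = f(sobol.random_base2(m)).mean()                 # quasi-Monte Carlo

print(f"MC  error: {abs(mc - 1.0):.2e}")
print(f"QMC error: {abs(qmc_est - 1.0):.2e}")
```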
Constructions of sequences are known, due to Faure, Halton, Hammersley,
Sobol', Niederreiter and van der Corput, such that:

$$D^*_N(x_1, \ldots, x_N) \le C \frac{(\ln N)^s}{N}$$

where $C$ is a certain constant, depending on the sequence.
These sequences are believed to have the best possible order of
convergence. See also: Van der Corput sequence, Halton sequences,
Sobol sequences. In the case of the Haselgrove sequence, we have: for
every $\varepsilon > 0$, there exists a constant $C_\varepsilon > 0$
such that

$$D^*_N(x_1, \ldots, x_N) \le \frac{C_\varepsilon}{N^{1-\varepsilon}},$$

which means a worse asymptotic performance than the previous
sequences, but one that can be interesting for finite sample sizes.
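As a concrete illustration of the simplest of these constructions, here is a short, self-contained sketch of the van der Corput sequence in base 2 (the function name is ours; the radical-inverse construction itself is standard):

```python
def van_der_corput(n, base=2):
    """First n terms of the van der Corput sequence in the given base."""
    seq = []
    for k in range(1, n + 1):
        x, denom, q = 0.0, 1.0, k
        while q > 0:
            denom *= base
            q, digit = divmod(q, base)
            x += digit / denom      # reverse the digits of k across the radix point
        seq.append(x)
    return seq

print(van_der_corput(7))  # [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```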
Remark 1:
If $(x_n)_{n \ge 1}$ is a low-discrepancy sequence, then
$\frac{1}{N} \sum_{i=1}^N \delta_{x_i}$
converges weakly towards the $s$-dimensional
Lebesgue measure on $[0,1]^s$, which guarantees that for all test
functions (continuous and bounded) $\varphi$,
$\frac{1}{N} \sum_{i=1}^N \varphi(x_i)$
converges towards $\int_{[0,1]^s} \varphi \, d\lambda_s$.
We then obtain:

$$\lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^N \varphi(x_i) = \int_{[0,1]^s} \varphi(u)\,du.$$
Be careful: using low-discrepancy sequences instead of randomly
distributed points does not lead to the same control of the variance of
the approximation: in the case of randomly distributed points, this
control is given by the Central Limit Theorem, which provides confidence
intervals. In the case of low-discrepancy sequences, it is given by
the Koksma-Hlawka inequality.
Remark 2:
It is possible to generate low-discrepancy sequences according to
measures other than the Lebesgue measure, by using the inverse CDF
technique. But be careful: the inverse CDF technique
is not the one used in all cases (some distributions are generated
by the rejection method, for example); that is why it is not
recommended, in the general case, to substitute a low-discrepancy
sequence for the uniform random generator.
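As a sketch of the inverse CDF technique, assuming SciPy's scipy.stats.qmc and scipy.stats.norm are available, the following maps Halton points through the standard normal quantile function to obtain a low-discrepancy Gaussian sample:

```python
import numpy as np
from scipy.stats import norm, qmc

# Map Halton points in [0,1)^2 through the standard normal quantile
# function (inverse CDF) to obtain a low-discrepancy Gaussian sample.
u = qmc.Halton(d=2, scramble=False).random(1024)
u = np.clip(u, 1e-12, 1 - 1e-12)      # guard against ppf(0) = -inf
z = norm.ppf(u)
print(z.mean(axis=0), z.std(axis=0))  # close to (0, 0) and (1, 1)
```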
Remark 3:
Low-discrepancy sequences have performance that deteriorates
rapidly with the problem dimension, as the bound on the discrepancy
increases exponentially with the dimension. This behavior is shared
by all low-discrepancy sequences, even though the standard
low-discrepancy sequences do not all exhibit it with the same
intensity. According to the given reference, the following
recommendations can be made:
The Sobol' sequence can be used for dimensions up to several hundred (but
our implementation of the Sobol' sequence is limited to
dimensions less than or equal to 40);
The Halton or reverse Halton sequences should preferably not be used for dimensions greater than 8;
The Faure sequences should preferably not be used for dimensions greater than 25;
The Haselgrove sequences should preferably not be used for dimensions greater than 50.
Low-discrepancy sequences are also called quasi-random or sub-random
sequences, but these names can be confusing, as such sequences are
deterministic and do not have the same statistical properties as
traditional pseudo-random sequences.