ANCOVA

class ANCOVA(*args)

ANalysis of COVAriance method (ANCOVA).

Refer to Sensitivity analysis with correlated inputs.

Parameters:
functionalChaosResult : FunctionalChaosResult

Functional chaos result approximating the model response with uncorrelated inputs.

correlatedInput : 2-d sequence of float

Correlated input sample used to compute the real values of the output; its dimension must be equal to the number of inputs of the model.

Notes

ANCOVA, a variance-based method described in [caniou2012], is a generalization of the ANOVA (ANalysis Of VAriance) decomposition for models with correlated input parameters.

Let us consider a model Y = h(\vect{X}) without making any hypothesis on the dependence structure of \vect{X} = \{X^1, \ldots, X^{n_X} \}, an n_X-dimensional random vector. The covariance decomposition requires a functional decomposition of the model. Thus the model response Y is expanded as a sum of functions of increasing dimension as follows:

h(\vect{X}) = h_0 + \sum_{u\subseteq\{1,\dots,n_X\}} h_u(X_u) \qquad (1)

h_0 is the mean of Y. Each function h_u represents, for any non-empty set u\subseteq\{1, \dots, n_X\}, the combined contribution of the variables X_u to Y.
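
For instance, for a two-dimensional input (n_X = 2), the sum in (1) contains three terms:

h(\vect{X}) = h_0 + h_1(X_1) + h_2(X_2) + h_{\{1,2\}}(X_1, X_2)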

Using the properties of the covariance, the variance of Y can be decomposed into a variance part and a covariance part as follows:

Var[Y]&= Cov\left[h_0 + \sum_{u\subseteq\{1,\dots,n_X\}} h_u(X_u), h_0 + \sum_{u\subseteq\{1,\dots,n_X\}} h_u(X_u)\right] \\
      &= \sum_{u\subseteq\{1,\dots,n_X\}} \left[Var[h_u(X_u)] + Cov[h_u(X_u), \sum_{v\subseteq\{1,\dots,n_X\}, v\cap u=\varnothing} h_v(X_v)]\right]

This variance decomposition makes it possible to define the total part of the variance of Y due to X_u, denoted S_u, as the sum of a physical (or uncorrelated) part and a correlated part:

S_u = \frac{Cov[Y, h_u(X_u)]} {Var[Y]} = S_u^U + S_u^C

where S_u^U is the uncorrelated part of variance of Y due to X_u:

S_u^U = \frac{Var[h_u(X_u)]} {Var[Y]}

and S_u^C is the contribution of the correlation of X_u with the other parameters:

S_u^C = \frac{Cov\left[h_u(X_u), \displaystyle \sum_{v\subseteq\{1,\dots,n_X\}, v\cap u=\varnothing} h_v(X_v)\right]}
             {Var[Y]}
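
Note that, as a direct consequence of the definitions above, the total parts of variance sum to one over all non-empty subsets u, since \sum_{u} h_u(X_u) = Y - h_0:

\sum_{u\subseteq\{1,\dots,n_X\}} S_u = \frac{Cov\left[Y, \displaystyle\sum_{u\subseteq\{1,\dots,n_X\}} h_u(X_u)\right]}{Var[Y]} = \frac{Cov[Y, Y - h_0]}{Var[Y]} = 1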

As computing the indices directly with the numerical model h can be very expensive, [caniou2012] suggests approximating the model response with a polynomial chaos expansion:

Y \simeq \hat{h} = \sum_{j=0}^{P-1} \alpha_j \Psi_j(x)

However, for the sake of computational simplicity, this expansion is constructed considering independent components \{X^1,\dots,X^{n_X}\}. Thus the chaos basis is not orthogonal with respect to the correlated inputs under consideration; it is only used as a metamodel to generate approximate evaluations of the model response and of its summands in (1).
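
As a purely illustrative sketch of this metamodel usage (it reuses the algo, model and CorrelatedInputDistribution objects defined in the Examples section below; correlatedSample, approximateOutput and exactOutput are names introduced only here), the expansion built under the independence assumption can still be evaluated on points drawn from the correlated distribution:

>>> metamodel = algo.getResult().getMetaModel()
>>> # Sample drawn from the correlated joint distribution
>>> correlatedSample = CorrelatedInputDistribution.getSample(10)
>>> approximateOutput = metamodel(correlatedSample)
>>> exactOutput = model(correlatedSample)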

The next step consists in identifying the component functions. For instance, for u = \{1\}:

h_1(X_1) = \sum_{\alpha | \alpha_1 \neq 0, \alpha_{i \neq 1} = 0} y_{\alpha} \Psi_{\alpha}(\vect{X})

where \alpha is the multi-index of degrees associated with the n_X univariate polynomials \psi_i^{\alpha_i}(X_i).
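
For illustration, the following hedged sketch (reusing the algo and productBasis objects from the Examples section below; chaosResult, enumerateFunction, alpha and firstOrderRanks are names introduced only here) selects the ranks of the chaos terms contributing to h_1, i.e. the terms whose multi-index \alpha satisfies \alpha_1 \neq 0 and \alpha_i = 0 for i \neq 1; combining these terms with their coefficients yields the component function:

>>> chaosResult = algo.getResult()
>>> enumerateFunction = productBasis.getEnumerateFunction()
>>> firstOrderRanks = []
>>> for rank in chaosResult.getIndices():
...     # Multi-index of degrees of the basis term of rank 'rank'
...     alpha = enumerateFunction(rank)
...     # Keep the term if only the first component has a nonzero degree
...     if alpha[0] != 0 and sum(alpha) == alpha[0]:
...         firstOrderRanks.append(rank)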

Then the model response Y is evaluated using a sample X=\{x_k, k=1,\dots,N\} drawn from the correlated joint distribution. Finally, the indices are computed using the model response and the component functions identified on the polynomial chaos.

Examples

>>> import openturns as ot
>>> ot.RandomGenerator.SetSeed(0)
>>> # Model and distribution definition
>>> model = ot.SymbolicFunction(['X1','X2'], ['4.*X1 + 5.*X2'])
>>> distribution = ot.ComposedDistribution([ot.Normal()] * 2)
>>> S = ot.CorrelationMatrix(2)
>>> S[1, 0] = 0.3
>>> R = ot.NormalCopula.GetCorrelationFromSpearmanCorrelation(S)
>>> CorrelatedInputDistribution = ot.ComposedDistribution([ot.Normal()] * 2, ot.NormalCopula(R))
>>> sample = CorrelatedInputDistribution.getSample(200)
>>> # Functional chaos computation
>>> productBasis = ot.OrthogonalProductPolynomialFactory([ot.HermiteFactory()] * 2, ot.LinearEnumerateFunction(2))
>>> adaptiveStrategy = ot.FixedStrategy(productBasis, 15)
>>> experiment = ot.MonteCarloExperiment(distribution, 100)
>>> X = experiment.generate()
>>> Y = model(X)
>>> algo = ot.FunctionalChaosAlgorithm(X, Y, distribution, adaptiveStrategy)
>>> algo.run()
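>>> # Compute the ANCOVA indices from the chaos result and the correlated sample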
>>> ancovaResult = ot.ANCOVA(algo.getResult(), sample)
>>> indices = ancovaResult.getIndices()
>>> print(indices)
[0.408398,0.591602]
>>> uncorrelatedIndices = ancovaResult.getUncorrelatedIndices()
>>> print(uncorrelatedIndices)
[0.284905,0.468108]
>>> # Get indices measuring the correlated effects
>>> print(indices - uncorrelatedIndices)
[0.123494,0.123494]

Methods

getIndices([marginalIndex])

Accessor to the ANCOVA indices.

getUncorrelatedIndices([marginalIndex])

Accessor to the ANCOVA indices measuring uncorrelated effects.

__init__(*args)
getIndices(marginalIndex=0)

Accessor to the ANCOVA indices.

Parameters:
marginalIndex : int, 0 \leq i < n, optional

Index of the model’s marginal used to estimate the indices. By default, marginalIndex is equal to 0.

Returns:
indices : Point

List of the ANCOVA indices measuring the contribution of the input variables to the variance of the model. Each index is made up of a physical (uncorrelated) part and a correlated part. The uncorrelated part is obtained with getUncorrelatedIndices(); the effects of the correlation are given by the difference between the getIndices() and getUncorrelatedIndices() lists.

getUncorrelatedIndices(marginalIndex=0)

Accessor to the ANCOVA indices measuring uncorrelated effects.

Parameters:
marginalIndex : int, 0 \leq i < n, optional

Index of the model’s marginal used to estimate the indices. By default, marginalIndex is equal to 0.

Returns:
indices : Point

List of the ANCOVA indices measuring the uncorrelated effects of the inputs. The effects of the correlation are given by the difference between the getIndices() and getUncorrelatedIndices() lists.

Examples using the class

Use the ANCOVA indices