FunctionalChaosResult

class FunctionalChaosResult(*args)

Functional chaos result.

Returned by functional chaos algorithms, see FunctionalChaosAlgorithm.

Parameters:
sampleX : 2-d sequence of float

Input sample of \inputRV \in \Rset^{\inputDim}.

sampleY : 2-d sequence of float

Output sample of \outputRV \in \Rset^{\outputDim}.

distribution : Distribution

Distribution of the random vector \inputRV.

transformation : Function

The function that maps the physical input \inputRV to the standardized input \standardRV.

inverseTransformation : Function

The function that maps the standardized input \standardRV to the physical input \inputRV.

orthogonalBasis : OrthogonalBasis

The multivariate orthogonal basis.

indices : sequence of int

The indices of the selected basis functions within the orthogonal basis.

alpha_k : 2-d sequence of float

The coefficients of the functional chaos expansion.

Psi_k : sequence of Function

The functions of the multivariate basis selected by the algorithm.

residuals : sequence of float, \hat{\vect{r}} \in \Rset^{\outputDim}

For each output component, the residual is the square root of the sum of squared differences between the model and the metamodel, divided by the sample size.

relativeErrors : sequence of float, \widehat{\vect{re}} \in \Rset^{\outputDim}

The relative error is the empirical error divided by the sample variance of the output.

isLeastSquares : bool

True if the expansion is computed using least squares.

isModelSelection : bool

True if the expansion is computed using model selection.

Methods

drawErrorHistory()

Draw the error history.

drawSelectionHistory()

Draw the basis selection history.

getClassName()

Accessor to the object's class name.

getCoefficients()

Get the coefficients.

getCoefficientsHistory()

Accessor to the coefficient values selection history.

getComposedMetaModel()

Get the composed metamodel.

getConditionalExpectation(conditioningIndices)

Get the conditional expectation of the expansion given one vector input.

getDistribution()

Get the input distribution.

getErrorHistory()

The error history accessor.

getIndices()

Get the indices of the final basis.

getIndicesHistory()

The basis indices selection history accessor.

getInputSample()

Accessor to the input sample.

getInverseTransformation()

Get the inverse isoprobabilistic transformation.

getMetaModel()

Accessor to the metamodel.

getName()

Accessor to the object's name.

getOrthogonalBasis()

Get the orthogonal basis.

getOutputSample()

Accessor to the output sample.

getReducedBasis()

Get the reduced basis.

getRelativeErrors()

Accessor to the relative errors.

getResiduals()

Accessor to the residuals.

getSampleResiduals()

Get residuals sample.

getTransformation()

Get the isoprobabilistic transformation.

hasName()

Test if the object is named.

involvesModelSelection()

Get the model selection flag.

isLeastSquares()

Get the least squares flag.

setErrorHistory(errorHistory)

The error history accessor.

setInputSample(sampleX)

Accessor to the input sample.

setInvolvesModelSelection(involvesModelSelection)

Set the model selection flag.

setIsLeastSquares(isLeastSquares)

Set the least squares flag.

setMetaModel(metaModel)

Accessor to the metamodel.

setName(name)

Accessor to the object's name.

setOutputSample(sampleY)

Accessor to the output sample.

setRelativeErrors(relativeErrors)

Accessor to the relative errors.

setResiduals(residuals)

Accessor to the residuals.

setSelectionHistory(indicesHistory, ...)

The basis coefficients and indices accessor.

Notes

Let \sampleSize \in \Nset be the sample size. Let \outputDim \in \Nset be the dimension of the output of the physical model. For any j = 1, ..., \sampleSize and any i = 1, ..., \outputDim, let y_{j, i} \in \Rset be the output of the physical model and let \hat{y}_{j, i} \in \Rset be the output of the metamodel. For any i = 1, ..., \outputDim, let \outputRV_i \in \Rset^\sampleSize be the sample output and let \widehat{\outputRV}_i \in \Rset^\sampleSize be the output predicted by the metamodel. The marginal residual is:

\hat{r}_i = \frac{\sqrt{SS_i}}{\sampleSize}

for i = 1, ..., \outputDim, where SS_i is the marginal sum of squares:

SS_i = \sum_{j = 1}^\sampleSize (y_{j, i} - \hat{y}_{j, i})^2.

The marginal relative error is:

\widehat{re}_i = \frac{SS_i / \sampleSize}{\hat{s}_{Y, i}^2}

for i = 1, ..., \outputDim, where \hat{s}_{Y, i}^2 is the unbiased sample variance of the i-th output.

This structure is created by the run() method of FunctionalChaosAlgorithm and retrieved with its getResult() method.
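
For illustration, the following minimal sketch builds such a result and reads the error indicators defined above. The symbolic model, the distribution and the sample size are arbitrary choices made for this sketch, not part of the class.

>>> import openturns as ot
>>> # Hypothetical physical model, for illustration only
>>> model = ot.SymbolicFunction(['x1', 'x2'], ['x1 * sin(x2)'])
>>> distribution = ot.Normal(2)
>>> ot.RandomGenerator.SetSeed(0)
>>> inputSample = distribution.getSample(100)
>>> outputSample = model(inputSample)
>>> algo = ot.FunctionalChaosAlgorithm(inputSample, outputSample, distribution)
>>> algo.run()
>>> result = algo.getResult()
>>> metamodel = result.getMetaModel()
>>> residuals = result.getResiduals()            # marginal residuals
>>> relativeErrors = result.getRelativeErrors()  # marginal relative errors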

__init__(*args)

drawErrorHistory()

Draw the error history.

This is only available with LARS, and when the output dimension is 1.

Returns:
graph : Graph

The evolution of the error at each selection iteration.

drawSelectionHistory()

Draw the basis selection history.

This is only available with LARS, and when the output dimension is 1.

Returns:
graph : Graph

The evolution of the basis coefficients at each selection iteration.
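
Both drawing methods therefore require a result produced with LARS model selection on a single output. One possible setup is sketched below; the model, basis and truncation degree are illustrative assumptions.

>>> import openturns as ot
>>> # Single-output model, chosen for illustration
>>> model = ot.SymbolicFunction(['x1', 'x2'], ['sin(x1) + x1 * x2'])
>>> distribution = ot.Normal(2)
>>> ot.RandomGenerator.SetSeed(0)
>>> x = distribution.getSample(200)
>>> y = model(x)
>>> basis = ot.OrthogonalProductPolynomialFactory([ot.HermiteFactory()] * 2)
>>> basisSize = basis.getEnumerateFunction().getStrataCumulatedCardinal(5)
>>> adaptiveStrategy = ot.FixedStrategy(basis, basisSize)
>>> # LARS with corrected leave-one-out selects a sparse set of coefficients
>>> selection = ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut())
>>> projectionStrategy = ot.LeastSquaresStrategy(x, y, selection)
>>> algo = ot.FunctionalChaosAlgorithm(x, y, distribution, adaptiveStrategy, projectionStrategy)
>>> algo.run()
>>> result = algo.getResult()
>>> errorGraph = result.drawErrorHistory()          # error at each LARS iteration
>>> selectionGraph = result.drawSelectionHistory()  # coefficient paths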

getClassName()

Accessor to the object's class name.

Returns:
class_name : str

The object class name (object.__class__.__name__).

getCoefficients()

Get the coefficients.

Returns:
coefficients : 2-d sequence of float

Coefficients (\vect{a_k})_{k \in \set{J}^P_s}.

getCoefficientsHistory()

Accessor to the coefficient values selection history.

This is only available with LARS, and when the output dimension is 1.

Returns:
coefficientsHistory : 2-d sequence of float

The coefficients values selection history, for each iteration. Each inner list gives the coefficient values of the basis terms at the i-th iteration.

getComposedMetaModel()

Get the composed metamodel.

The composed metamodel is defined on the standard space \standardInputSpace. It is defined by the equation:

\tilde{h}(\standardReal) =  \sum_{k \in \set{J}^P_s} \vect{a}_k \psi_k(\standardReal)

for any \standardReal \in \standardInputSpace.

Returns:
composedMetamodel : Function

The metamodel in the standard space \standardInputSpace.
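
In other words, the physical-space metamodel returned by getMetaModel() is the composition of this function with the isoprobabilistic transformation. A short sketch, reusing a result built as in the Notes above:

>>> composed = result.getComposedMetaModel()  # defined on the standard space
>>> T = result.getTransformation()            # maps physical to standard space
>>> metamodel = result.getMetaModel()         # defined on the physical space
>>> x = result.getInputSample()[0]
>>> yPhysical = metamodel(x)
>>> yStandard = composed(T(x))  # equal to yPhysical up to rounding errors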

getConditionalExpectation(conditioningIndices)

Get the conditional expectation of the expansion given one vector input.

This method returns the functional chaos result corresponding to the conditional expectation of the output given an input vector. Indeed, the conditional expectation of a polynomial chaos expansion is, again, a polynomial chaos expansion. This is possible only if the marginals of the input distribution are independent. Otherwise, an exception is generated. An example is provided in Conditional expectation of a polynomial chaos expansion.

We consider the notations introduced in Functional Chaos Expansion. Let \inputRV \in \Rset^{\inputDim} be the input and let \vect{u} \subseteq \{1, ..., \inputDim\} be a set of marginal indices. Let \inputRV_{\vect{u}} \in \Rset^{|\vect{u}|} be the vector corresponding to the group of input variables where |\vect{u}| = \operatorname{card}(\vect{u}) is the number of input variables in the group. Let \metaModel(\inputRV) be the polynomial chaos expansion of the physical model \model. This function returns the functional chaos expansion of:

\metaModel_{\vect{u}}\left(\inputReal_{\vect{u}}\right) 
= \Expect{\metaModel(\inputRV) | \inputRV_{\vect{u}} = \inputReal_{\vect{u}}}

for any \inputReal_{\vect{u}} \in \Rset^{|\vect{u}|}.

Mathematical analysis

The mathematical derivation is better described in the standard space \standardInputSpace than in the physical space \physicalInputSpace and this is why we consider the former. Assume that the basis functions \{\psi_{\vect{\alpha}}\}_{\vect{\alpha} \in \set{J}^P} are defined by the tensor product:

\psi_{\vect{\alpha}}(\standardReal)
= \prod_{i = 1}^\inputDim \pi_{\alpha_i}^{(i)}(z_i)

for any \vect{\alpha} \in \set{J}^P and any \standardReal \in \standardInputSpace, where \left\{\pi_k^{(i)}\right\}_{k \geq 0} is the family of orthonormal polynomials associated with the i-th input marginal, \pi_k^{(i)} being of degree k. Assume that the PCE truncated at order P is:

\widetilde{h}(\standardReal) 
= \sum_{\vect{\alpha} \in \set{J}^P} 
a_{\vect{\alpha}} \psi_{\vect{\alpha}}(\standardReal)

for any \standardReal \in \standardInputSpace. Assume that the input marginals \{Z_i\}_{i = 1, ..., \inputDim} are independent. Let \vect{u} \subseteq \{1, ..., \inputDim\} be a group of variables of dimension \operatorname{card}(\vect{u}) \in \Nset. Assume that \standardInputSpace is the Cartesian product of the domain of the components in the group \vect{u} and the domain of the remaining components, i.e. assume that:

\standardInputSpace = \standardInputSpace_{\vect{u}} \times \standardInputSpace_{\overline{\vect{u}}}

where \standardInputSpace_{\vect{u}} \subseteq \Rset^{|\vect{u}|} and \standardInputSpace_{\overline{\vect{u}}} \subseteq \Rset^{|\overline{\vect{u}}|}. Let \widetilde{h}_{\vect{u}}^{\operatorname{ce}} be the conditional expectation of the function \widetilde{h} given \standardReal_{\vect{u}}:

\widetilde{h}_{\vect{u}}^{\operatorname{ce}}(\standardReal_{\vect{u}})
= \mathbb{E}_{\standardRV_{\overline{\vect{u}}}} 
\left[\widetilde{h}\left(\standardRV\right) | \standardRV_{\vect{u}} 
= \standardReal_{\vect{u}}\right]

for any \standardReal_{\vect{u}} \in \standardInputSpace_{\vect{u}}. Let \set{J}_{\vect{u}}^{\operatorname{ce}} \subseteq \set{J}^P be the set of multi-indices whose components are zero for every index not in \vect{u}:

\set{J}_{\vect{u}}^{\operatorname{ce}} 
= \left\{\vect{\alpha} \in \set{J}^P \; | \; 
\alpha_i = 0 \textrm{ if } i \not \in \vect{u}, \; i = 1, ..., \inputDim\right\}.

This set of multi-indices identifies the basis functions that depend on the variables in the group \vect{u}, and on them only. For any \vect{\alpha} \in \set{J}_{\vect{u}}^{\operatorname{ce}}, let \psi_{\vect{\alpha}}^{\operatorname{ce}} be the orthogonal polynomial defined by:

\psi_{\vect{\alpha}}^{\operatorname{ce}}(\standardReal_{\vect{u}})
= 
\begin{cases}
\prod_{\substack{i = 1 \\ i \in \vect{u}}}^\inputDim \pi_{\alpha_i}^{(i)} (z_i) & \textrm{if } \alpha_i = 0 \textrm{ for any } i \not \in \vect{u}, \\
1 & \textrm{if } \alpha_i = 0 \textrm{ for } i = 1, ..., \inputDim, \\
0 & \textrm{otherwise}.
\end{cases}

Therefore:

\widetilde{h}_{\vect{u}}^{\operatorname{ce}}(\standardReal_{\vect{u}})
= \sum_{\vect{\alpha} \in \set{J}_{\vect{u}}^{\operatorname{ce}}}
a_{\vect{\alpha}} \psi_{\vect{\alpha}}^{\operatorname{ce}}(\standardReal_{\vect{u}})

for any \standardReal_{\vect{u}} \in \standardInputSpace_{\vect{u}}. Finally, the conditional expectation of the surrogate model is:

\metaModel_{\vect{u}}\left(\inputReal_{\vect{u}}\right)
= \widetilde{h}_{\vect{u}}^{\operatorname{ce}}(T_{\vect{u}}\left(\inputReal_{\vect{u}}\right))

where \standardRV_{\vect{u}} = T_{\vect{u}}(\inputRV_{\vect{u}}) is the corresponding marginal mapping of the iso-probabilistic mapping \standardRV = T(\inputRV).

Parameters:
conditioningIndices : sequence of int in [0, inputDimension - 1]

The indices \vect{u} of the input random vector to condition.

Returns:
conditionalPCE : FunctionalChaosResult

The functional chaos result of the conditional expectation. Its input dimension is \operatorname{card}(\vect{u}) and its output dimension is \outputDim (i.e. the output dimension is unchanged).
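
A minimal sketch is given below; the model, distribution and sample size are illustrative, and the input marginals must be independent.

>>> import openturns as ot
>>> model = ot.SymbolicFunction(['x1', 'x2'], ['sin(x1) + x1 * x2'])
>>> distribution = ot.Normal(2)  # independent marginals
>>> ot.RandomGenerator.SetSeed(0)
>>> x = distribution.getSample(100)
>>> y = model(x)
>>> algo = ot.FunctionalChaosAlgorithm(x, y, distribution)
>>> algo.run()
>>> result = algo.getResult()
>>> # Expansion of the conditional expectation given X1: a function of x1 only
>>> conditionalPCE = result.getConditionalExpectation([0])
>>> conditionalMetamodel = conditionalPCE.getMetaModel()
>>> conditionalMetamodel.getInputDimension()
1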

getDistribution()

Get the input distribution.

Returns:
distribution : Distribution

Distribution of the input random vector \inputRV.

getErrorHistory()

The error history accessor.

This is only available with LARS, and when the output dimension is 1.

Returns:
errorHistory : sequence of float

The error history.

getIndices()

Get the indices of the final basis.

Returns:
indices : Indices

Indices \set{J}^P_s of the elements of the multivariate basis used in the decomposition. Each integer in this list is an input argument of the EnumerateFunction of the basis. If a model selection method such as LARS is used, these indices are not necessarily contiguous.
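
The multi-index of each selected term can be recovered through the enumerate function of the basis, as sketched below for a previously computed result:

>>> indices = result.getIndices()
>>> enumerateFunction = result.getOrthogonalBasis().getEnumerateFunction()
>>> # Multi-index (marginal polynomial degrees) of each selected basis term
>>> multiIndices = [enumerateFunction(int(k)) for k in indices]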

getIndicesHistory()

The basis indices selection history accessor.

This is only available with LARS, and when the output dimension is 1.

Returns:
indicesHistory : 2-d sequence of int

The basis indices selection history, for each iteration. Each inner list gives the indices of the basis terms at the i-th iteration.

getInputSample()

Accessor to the input sample.

Returns:
inputSample : Sample

The input sample.

getInverseTransformation()

Get the inverse isoprobabilistic transformation.

Returns:
invTransf : Function

T^{-1} such that T(\inputRV) = \standardRV.

getMetaModel()

Accessor to the metamodel.

Returns:
metaModel : Function

Metamodel.

getName()

Accessor to the object’s name.

Returns:
name : str

The name of the object.

getOrthogonalBasis()

Get the orthogonal basis.

Returns:
basis : OrthogonalBasis

Factory of the orthogonal basis.

getOutputSample()

Accessor to the output sample.

Returns:
outputSample : Sample

The output sample.

getReducedBasis()

Get the reduced basis.

Returns:
basis : list of Function

Collection of the functions (\psi_k)_{k\in \set{J}^P_s} used in the decomposition.
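
Together with the coefficients, the reduced basis allows the expansion to be evaluated by hand in the standard space. A sketch, reusing a previously computed result:

>>> coefficients = result.getCoefficients()  # one coefficient vector per basis term
>>> reducedBasis = result.getReducedBasis()
>>> T = result.getTransformation()
>>> z = T(result.getInputSample()[0])        # a point in the standard space
>>> # First output component of the expansion: sum over k of a_k * psi_k(z)
>>> value = sum(coefficients[k][0] * reducedBasis[k](z)[0] for k in range(coefficients.getSize()))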

getRelativeErrors()

Accessor to the relative errors.

Returns:
relativeErrors : Point

The relative errors defined as follows for each output of the model: \displaystyle \frac{\sum_{i=1}^N (y_i - \hat{y_i})^2}{N \Var{\vect{Y}}} with \vect{Y} the vector of the N model’s values y_i and \hat{y_i} the metamodel’s values.

getResiduals()

Accessor to the residuals.

Returns:
residuals : Point

The residual values defined as follows for each output of the model: \displaystyle \frac{\sqrt{\sum_{i=1}^N (y_i - \hat{y_i})^2}}{N} with y_i the N model’s values and \hat{y_i} the metamodel’s values.

getSampleResiduals()

Get residuals sample.

Returns:
residualsSample : Sample

The sample of residuals r_{j, i} = y_{j, i} - \metaModel_i(\vect{x}^{(j)}) for i = 1, ..., \outputDim and j = 1, ..., \sampleSize.
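
The marginal residual \hat{r}_i defined in the Notes can be recovered from this sample, as sketched below for a previously computed result:

>>> import math
>>> residualsSample = result.getSampleResiduals()
>>> n = residualsSample.getSize()
>>> # Marginal residual of the first output: sqrt(sum of squared residuals) / n
>>> ss = sum(point[0] ** 2 for point in residualsSample)
>>> rHat = math.sqrt(ss) / n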

getTransformation()

Get the isoprobabilistic transformation.

Returns:
transformation : Function

Transformation T such that T(\inputRV) = \standardRV.

hasName()

Test if the object is named.

Returns:
hasName : bool

True if the name is not empty.

involvesModelSelection()

Get the model selection flag.

A model selection method can be used to select the coefficients of the decomposition that best predict the output. Model selection can lead to a sparse functional chaos expansion.

Returns:
involvesModelSelection : bool

True if the method involves a model selection method.

isLeastSquares()

Get the least squares flag.

Returns:
isLeastSquares : bool

True if the coefficients were estimated from least squares.

setErrorHistory(errorHistory)

The error history accessor.

Parameters:
errorHistory : sequence of float

The error history.

setInputSample(sampleX)

Accessor to the input sample.

Parameters:
inputSample : Sample

The input sample.

setInvolvesModelSelection(involvesModelSelection)

Set the model selection flag.

A model selection method can be used to select the coefficients of the decomposition that best predict the output. Model selection can lead to a sparse functional chaos expansion.

Parameters:
involvesModelSelection : bool

True if the method involves a model selection method.

setIsLeastSquares(isLeastSquares)

Set the least squares flag.

Parameters:
isLeastSquares : bool

True if the coefficients were estimated from least squares.

setMetaModel(metaModel)

Accessor to the metamodel.

Parameters:
metaModel : Function

Metamodel.

setName(name)

Accessor to the object’s name.

Parameters:
name : str

The name of the object.

setOutputSample(sampleY)

Accessor to the output sample.

Parameters:
outputSample : Sample

The output sample.

setRelativeErrors(relativeErrors)

Accessor to the relative errors.

Parameters:
relativeErrors : sequence of float

The relative errors defined as follows for each output of the model: \displaystyle \frac{\sum_{i=1}^N (y_i - \hat{y_i})^2}{N \Var{\vect{Y}}} with \vect{Y} the vector of the N model’s values y_i and \hat{y_i} the metamodel’s values.

setResiduals(residuals)

Accessor to the residuals.

Parameters:
residuals : sequence of float

The residual values defined as follows for each output of the model: \displaystyle \frac{\sqrt{\sum_{i=1}^N (y_i - \hat{y_i})^2}}{N} with y_i the N model’s values and \hat{y_i} the metamodel’s values.

setSelectionHistory(indicesHistory, coefficientsHistory)

The basis coefficients and indices accessor.

Parameters:
indicesHistory : 2-d sequence of int

The basis indices selection history.

coefficientsHistory : 2-d sequence of float

The coefficient values selection history. Must be of the same size as indicesHistory.

Examples using the class

Polynomial chaos exploitation

Compute grouped indices for the Ishigami function

Validate a polynomial chaos

Create a full or sparse polynomial chaos expansion

Create a polynomial chaos metamodel by integration on the cantilever beam

Create a polynomial chaos metamodel from a data set

Create a polynomial chaos for the Ishigami function: a quick start guide to polynomial chaos

Polynomial chaos is sensitive to the degree

Create a sparse chaos by integration

Compute Sobol' indices confidence intervals

Conditional expectation of a polynomial chaos expansion

Polynomial chaos expansion cross-validation

Metamodel of a field function

Sobol' sensitivity indices from chaos

Use the ANCOVA indices

Example of sensitivity analyses on the wing weight model

Compute leave-one-out error of a polynomial chaos expansion