FunctionalChaosAlgorithm

class FunctionalChaosAlgorithm(*args)

Functional chaos algorithm.

Refer to Functional Chaos Expansion, Least squares polynomial response surface.

Available constructors:

FunctionalChaosAlgorithm(inputSample, outputSample)

FunctionalChaosAlgorithm(inputSample, outputSample, distribution)

FunctionalChaosAlgorithm(inputSample, outputSample, distribution, adaptiveStrategy)

FunctionalChaosAlgorithm(inputSample, outputSample, distribution, adaptiveStrategy, projectionStrategy)

FunctionalChaosAlgorithm(inputSample, weights, outputSample, distribution, adaptiveStrategy)

FunctionalChaosAlgorithm(inputSample, weights, outputSample, distribution, adaptiveStrategy, projectionStrategy)

Parameters:
inputSample, outputSample : 2-d sequence of float

Samples of the input and output random vectors

distribution : Distribution

Distribution of the random vector \vect{X}

adaptiveStrategy : AdaptiveStrategy

Strategy of selection of the different terms of the multivariate basis.

projectionStrategy : ProjectionStrategy

Strategy of evaluation of the coefficients \alpha_k

weights : sequence of float

Weights \omega_i associated with the data points.

The default values are \omega_i = \frac{1}{N}, where N = inputSample.getSize().

Notes

Consider \vect{Y} = g(\vect{X}) with g: \Rset^d \rightarrow \Rset^p, \vect{X} \sim \cL_{\vect{X}} and \vect{Y} with finite variance: g\in L_{\cL_{\vect{X}}}^2(\Rset^d, \Rset^p).

When p>1, the functional chaos algorithm is used on each marginal of \vect{Y}, using the same multivariate orthonormal basis for all the marginals. Thus, the algorithm is detailed here for a scalar output Y and g: \Rset^d \rightarrow \Rset.

Let T: \Rset^d \rightarrow \Rset^d be an isoprobabilistic transformation such that \vect{Z} = T(\vect{X}) \sim \mu. We set f = g \circ T^{-1}; then f \in L_{\mu}^2(\Rset^d, \Rset).

Let (\Psi_k)_{k \in \Nset} be an orthonormal multivariate basis of L^2_{\mu}(\Rset^d,\Rset).

Then the functional chaos decomposition of f is:

f = g\circ T^{-1} = \sum_{k=0}^{\infty} \alpha_k \Psi_k

which can be truncated to a finite subset K \subset \Nset:

\tilde{f} =  \sum_{k \in K} \alpha_k \Psi_k

The approximation \tilde{f} can be used to build an efficient random generator of Y based on the random vector \vect{Z}:

\tilde{Y} = \tilde{f}(\vect{Z})

For more details, see FunctionalChaosRandomVector.

The functional chaos decomposition can be used to build a metamodel of g:

\tilde{g} = \tilde{f} \circ T

If the basis (\Psi_k)_{k \in \Nset} has been obtained by tensorization of univariate orthonormal bases, then the distribution \mu factorizes as \mu = \prod_{i=1}^d \mu_i. In that case only, the Sobol' indices can easily be deduced from the coefficients \alpha_k.

We detail here all the steps required in order to create a functional chaos algorithm.

Step 1 - Construction of the multivariate orthonormal basis: the multivariate orthonormal basis (\Psi_k(\vect{x}))_{k \in \Nset} is built as the tensor product of orthonormal univariate families.

The univariate bases may be:

  • polynomials: the associated distribution \mu_i may be continuous or discrete. Note that, under some conditions, it is possible to build a polynomial family orthonormal with respect to any univariate distribution \mu_i. For more details, see StandardDistributionPolynomialFactory;

  • Haar wavelets: they make it possible to approximate functions with discontinuities. For more details, see HaarWaveletFactory;

  • Fourier series: for more details, see FourierSeriesFactory.

Furthermore, the numbering of the multivariate orthonormal basis (\Psi_k(\vect{z}))_k is given by an enumerate function, which defines a regular way to generate the collection of degrees used for the univariate polynomials: an enumerate function is a bijection \Nset \rightarrow \Nset^d. See LinearEnumerateFunction or HyperbolicAnisotropicEnumerateFunction for more details.

Step 2 - Truncation strategy of the multivariate orthonormal basis: a strategy must be chosen for the selection of the different terms of the multivariate basis. The selected terms are gathered in the subset K.

For more details on the possible strategies, see FixedStrategy and CleaningStrategy.

Step 3 - Evaluation strategy of the coefficients: a strategy must be chosen for the estimation of the coefficients \alpha_k. The vector \vect{\alpha} = (\alpha_k)_{k \in K} is equivalently defined by:

(1)  \vect{\alpha} = \argmin_{\vect{\alpha} \in \Rset^K}\Expect{\left( g \circ T^{-1}(\vect{Z}) - \sum_{k \in K} \alpha_k \Psi_k (\vect{Z})\right)^2}

or

(2)  \alpha_k =  <g \circ T^{-1}(\vect{Z}), \Psi_k (\vect{Z})>_{\mu} = \Expect{  g \circ T^{-1}(\vect{Z}) \Psi_k (\vect{Z}) }

where the mean \Expect{.} is evaluated with respect to the measure \mu.

Relation (1) means that the coefficients (\alpha_k)_{k \in K} minimize the quadratic error between the model and the polynomial approximation. For more details, see LeastSquaresStrategy.

Relation (2) means that \alpha_k is the scalar product of the model with the k-th element of the orthonormal basis (\Psi_k)_{k \in \Nset}. For more details, see IntegrationStrategy.

Examples

Create the model:

>>> import openturns as ot
>>> ot.RandomGenerator.SetSeed(0)
>>> inputDim = 1
>>> model = ot.SymbolicFunction(['x'], ['x*sin(x)'])
>>> distribution = ot.ComposedDistribution([ot.Uniform()]*inputDim)

Build the multivariate orthonormal basis:

>>> polyColl = [0.0]*inputDim
>>> for i in range(distribution.getDimension()):
...     polyColl[i] = ot.StandardDistributionPolynomialFactory(distribution.getMarginal(i))
>>> enumerateFunction = ot.LinearEnumerateFunction(inputDim)
>>> productBasis = ot.OrthogonalProductPolynomialFactory(polyColl, enumerateFunction)

Define the strategy to truncate the multivariate orthonormal basis: we choose all the polynomials of degree <= 4.

>>> degree = 4
>>> indexMax = enumerateFunction.getStrataCumulatedCardinal(degree)
>>> print(indexMax)
5

We keep all the polynomials of degree <= 4 (which corresponds to the 5 first ones):

>>> adaptiveStrategy = ot.FixedStrategy(productBasis, indexMax)

Define the evaluation strategy of the coefficients:

>>> samplingSize = 50
>>> experiment = ot.MonteCarloExperiment(distribution, samplingSize)
>>> X = experiment.generate()
>>> Y = model(X)
>>> projectionStrategy = ot.LeastSquaresStrategy()

Create the chaos algorithm:

>>> algo = ot.FunctionalChaosAlgorithm(X, Y, distribution, adaptiveStrategy,
...                                    projectionStrategy)
>>> algo.run()

Get the result:

>>> functionalChaosResult = algo.getResult()
>>> metamodel = functionalChaosResult.getMetaModel()

Test it:

>>> X = [0.5]
>>> print(model(X))
[0.239713]
>>> print(metamodel(X))
[0.239514]

Methods

BuildDistribution(inputSample)

Recover the distribution, with metamodel performance in mind.

getAdaptiveStrategy()

Get the adaptive strategy.

getClassName()

Accessor to the object's name.

getDistribution()

Accessor to the joint probability density function of the physical input vector.

getId()

Accessor to the object's id.

getInputSample()

Accessor to the input sample.

getMaximumResidual()

Get the maximum residual.

getName()

Accessor to the object's name.

getOutputSample()

Accessor to the output sample.

getProjectionStrategy()

Get the projection strategy.

getResult()

Get the results of the metamodel computation.

getShadowedId()

Accessor to the object's shadowed id.

getVisibility()

Accessor to the object's visibility state.

hasName()

Test if the object is named.

hasVisibleName()

Test if the object has a distinguishable name.

run()

Compute the metamodel.

setDistribution(distribution)

Accessor to the joint probability density function of the physical input vector.

setMaximumResidual(residual)

Set the maximum residual.

setName(name)

Accessor to the object's name.

setProjectionStrategy(projectionStrategy)

Set the projection strategy.

setShadowedId(id)

Accessor to the object's shadowed id.

setVisibility(visible)

Accessor to the object's visibility state.

__init__(*args)
static BuildDistribution(inputSample)

Recover the distribution, with metamodel performance in mind.

For each marginal, find the best 1-d continuous parametric model; otherwise fall back to a nonparametric one.

The selection is done as follows:

  • We start with the list of all parametric models (all factories).

  • For each model, we estimate its parameters when feasible.

  • We then check whether the model is valid, i.e. whether its Kolmogorov p-value exceeds the threshold set by the MetaModelAlgorithm-PValueThreshold ResourceMap key (default: 5%).

  • We rank all valid models and return the one with the best criterion.

For the last step, the criterion may be BIC, AIC or AICC; it is specified through the MetaModelAlgorithm-ModelSelectionCriterion ResourceMap key (default: BIC). Note that if there is no valid candidate, we estimate a nonparametric model (KernelSmoothing or Histogram); the MetaModelAlgorithm-NonParametricModel ResourceMap key selects the preferred one (default: Histogram).

Once each marginal is estimated, we use the Spearman independence test on each pair of components to decide whether to use an independent copula. In case of dependence, we rely on a NormalCopula.

Parameters:
sample : Sample

Input sample.

Returns:
distribution : Distribution

Input distribution.

getAdaptiveStrategy()

Get the adaptive strategy.

Returns:
adaptiveStrategy : AdaptiveStrategy

Strategy of selection of the different terms of the multivariate basis.

getClassName()

Accessor to the object’s name.

Returns:
class_name : str

The object class name (object.__class__.__name__).

getDistribution()

Accessor to the joint probability density function of the physical input vector.

Returns:
distribution : Distribution

Joint probability density function of the physical input vector.

getId()

Accessor to the object’s id.

Returns:
id : int

Internal unique identifier.

getInputSample()

Accessor to the input sample.

Returns:
inputSample : Sample

Input sample of a model evaluated apart.

getMaximumResidual()

Get the maximum residual.

Returns:
residual : float

Residual value needed in the projection strategy.

Default value is 0.

getName()

Accessor to the object’s name.

Returns:
name : str

The name of the object.

getOutputSample()

Accessor to the output sample.

Returns:
outputSample : Sample

Output sample of a model evaluated apart.

getProjectionStrategy()

Get the projection strategy.

Returns:
strategy : ProjectionStrategy

Projection strategy.

Notes

The projection strategy defines how the coefficients \alpha_k of the terms gathered in the subset K are estimated.

getResult()

Get the results of the metamodel computation.

Returns:
result : FunctionalChaosResult

Result structure, created by the method run().

getShadowedId()

Accessor to the object’s shadowed id.

Returns:
id : int

Internal unique identifier.

getVisibility()

Accessor to the object’s visibility state.

Returns:
visible : bool

Visibility flag.

hasName()

Test if the object is named.

Returns:
hasName : bool

True if the name is not empty.

hasVisibleName()

Test if the object has a distinguishable name.

Returns:
hasVisibleName : bool

True if the name is not empty and not the default one.

run()

Compute the metamodel.

Notes

Evaluates the metamodel and stores all the results in a result structure.

setDistribution(distribution)

Accessor to the joint probability density function of the physical input vector.

Parameters:
distribution : Distribution

Joint probability density function of the physical input vector.

setMaximumResidual(residual)

Set the maximum residual.

Parameters:
residual : float

Residual value needed in the projection strategy.

Default value is 0.

setName(name)

Accessor to the object’s name.

Parameters:
name : str

The name of the object.

setProjectionStrategy(projectionStrategy)

Set the projection strategy.

Parameters:
strategy : ProjectionStrategy

Strategy to estimate the coefficients \alpha_k.

setShadowedId(id)

Accessor to the object’s shadowed id.

Parameters:
id : int

Internal unique identifier.

setVisibility(visible)

Accessor to the object’s visibility state.

Parameters:
visible : bool

Visibility flag.

Examples using the class

Mixture of experts

Fit a distribution from an input sample

Polynomial chaos exploitation

Polynomial chaos over database

Compute grouped indices for the Ishigami function

Validate a polynomial chaos

Create a polynomial chaos metamodel by integration on the cantilever beam

Advanced polynomial chaos construction

Create a polynomial chaos metamodel

Create a polynomial chaos for the Ishigami function: a quick start guide to polynomial chaos

Chaos cross-validation

Polynomial chaos is sensitive to the degree

Create a sparse chaos by integration

Compute Sobol' indices confidence intervals

Viscous free fall: metamodel of a field function

Metamodel of a field function

Sobol' sensitivity indices from chaos

Use the ANCOVA indices

Example of sensitivity analyses on the wing weight model