FunctionalChaosAlgorithm

class FunctionalChaosAlgorithm(*args)

Functional chaos algorithm.

Refer to Functional Chaos Expansion, Least squares polynomial response surface.

Available constructors:

FunctionalChaosAlgorithm(inputSample, outputSample)

FunctionalChaosAlgorithm(inputSample, outputSample, distribution, adaptiveStrategy)

FunctionalChaosAlgorithm(inputSample, outputSample, distribution, adaptiveStrategy, projectionStrategy)

FunctionalChaosAlgorithm(model, distribution, adaptiveStrategy)

FunctionalChaosAlgorithm(model, distribution, adaptiveStrategy, projectionStrategy)

FunctionalChaosAlgorithm(inputSample, weights, outputSample, distribution, adaptiveStrategy)

FunctionalChaosAlgorithm(inputSample, weights, outputSample, distribution, adaptiveStrategy, projectionStrategy)

Parameters
inputSample, outputSample : 2-d sequence of float

Samples of the input and output random vectors.

model : Function

Model g such that \vect{Y} = g(\vect{X}).

distribution : Distribution

Distribution of the random vector \vect{X}.

adaptiveStrategy : AdaptiveStrategy

Strategy of selection of the different terms of the multivariate basis.

projectionStrategy : ProjectionStrategy

Strategy of evaluation of the coefficients \alpha_k.

weights : sequence of float

Weights \omega_i associated with the data.

Default values are \omega_i = \frac{1}{N} where N = inputSample.getSize().

Notes

Consider \vect{Y} = g(\vect{X}) with g: \Rset^d \rightarrow \Rset^p, \vect{X} \sim \cL_{\vect{X}} and \vect{Y} with finite variance: g\in L_{\cL_{\vect{X}}}^2(\Rset^d, \Rset^p).

When p>1, the functional chaos algorithm is used on each marginal of \vect{Y}, using the same multivariate orthonormal basis for all the marginals. Thus, the algorithm is detailed here for a scalar output Y and g: \Rset^d \rightarrow \Rset.

Let T: \Rset^d \rightarrow \Rset^d be an isoprobabilistic transformation such that \vect{Z} = T(\vect{X}) \sim \mu. We write f = g \circ T^{-1}; then f \in L_{\mu}^2(\Rset^d, \Rset).
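As a hedged illustration of such a transformation (the LogNormal input and the Uniform measure below are arbitrary assumptions), the mapping T can be built with DistributionTransformation:

>>> import openturns as ot
>>> xDistribution = ot.LogNormal()      # hypothetical distribution of X
>>> mu = ot.Uniform(-1.0, 1.0)          # standardized measure
>>> T = ot.DistributionTransformation(xDistribution, mu)
>>> z = T([1.0])                        # image of x = 1.0 in the standardized space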

Let (\Psi_k)_{k \in \Nset} be an orthonormal multivariate basis of L^2_{\mu}(\Rset^d,\Rset).

Then the functional chaos decomposition of f is:

f = g \circ T^{-1} = \sum_{k=0}^{\infty} \alpha_k \Psi_k

which can be truncated to a finite subset K \subset \Nset:

\tilde{f} = \sum_{k \in K} \alpha_k \Psi_k

The approximation \tilde{f} can be used to build an efficient random generator of Y based on the random vector \vect{Z}. It reads:

\tilde{Y} = \tilde{f}(\vect{Z})

For more details, see FunctionalChaosRandomVector.
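For instance (assuming a FunctionalChaosResult named result, obtained from a previous call to run()), such a generator can be built as follows:

>>> # 'result' is assumed to be a FunctionalChaosResult returned by getResult()
>>> randomVector = ot.FunctionalChaosRandomVector(result)
>>> sample = randomVector.getSample(100)   # realizations of the surrogate output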

The functional chaos decomposition can be used to build a metamodel of g, which reads:

\tilde{g} = \tilde{f} \circ T

If the basis (\Psi_k)_{k \in \Nset} has been obtained by tensorization of univariate orthonormal bases, then the measure \mu factorizes as \mu = \prod_{i=1}^d \mu_i. In that case only, the Sobol indices can easily be deduced from the coefficients \alpha_k.
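A brief sketch (again assuming a FunctionalChaosResult named result built on such a tensorized basis):

>>> sensitivity = ot.FunctionalChaosSobolIndices(result)
>>> s0 = sensitivity.getSobolIndex(0)        # first order Sobol index of input 0
>>> st0 = sensitivity.getSobolTotalIndex(0)  # total Sobol index of input 0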

We detail here all the steps required in order to create a functional chaos algorithm.

Step 1 - Construction of the multivariate orthonormal basis: the multivariate orthonormal basis (\Psi_k(\vect{z}))_{k \in \Nset} is built as the tensor product of orthonormal univariate families.

The univariate bases may be:

  • polynomials: the associated distribution \mu_i is continuous or discrete. Note that it is possible to build the polynomial family orthonormal to any univariate distribution \mu_i under some conditions. For more details, see StandardDistributionPolynomialFactory;

  • Haar wavelets: they make it possible to approximate functions with discontinuities (see the sketch after this list). For more details, see HaarWaveletFactory;

  • Fourier series: for more details, see FourierSeriesFactory.
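A minimal sketch of such a non-polynomial product basis (choosing Haar wavelets for the first input and Fourier series for the second is an arbitrary assumption):

>>> import openturns as ot
>>> functionFamilies = [ot.HaarWaveletFactory(), ot.FourierSeriesFactory()]
>>> mixedBasis = ot.OrthogonalProductFunctionFactory(functionFamilies)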

Furthermore, the numbering of the multivariate orthonormal basis (\Psi_k(\vect{z}))_k is given by an enumerate function, which defines a regular way to generate the collection of degrees used for the univariate polynomials: an enumerate function represents a bijection \Nset \rightarrow \Nset^d. See LinearEnumerateFunction or HyperbolicAnisotropicEnumerateFunction for more details.
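For instance, a minimal sketch of this bijection with LinearEnumerateFunction (the dimension 2 is an arbitrary assumption):

>>> enumerateFunction = ot.LinearEnumerateFunction(2)
>>> multiIndex = enumerateFunction(3)   # multi-index of univariate degrees for the basis term of index 3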

Step 2 - Truncation strategy of the multivariate orthonormal basis: a strategy must be chosen for the selection of the different terms of the multivariate basis. The selected terms are gathered in the subset K.

For more details on the possible strategies, see FixedStrategy, SequentialStrategy and CleaningStrategy.
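As an illustration (assuming an orthonormal basis named productBasis, for instance the OrthogonalProductPolynomialFactory built in the Examples section below; the sizes are arbitrary assumptions):

>>> # fixed truncation: keep the first 15 terms of the basis
>>> fixedStrategy = ot.FixedStrategy(productBasis, 15)
>>> # adaptive truncation: explore up to 100 terms and keep only the most significant ones
>>> cleaningStrategy = ot.CleaningStrategy(productBasis, 100)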

Step 3 - Evaluation strategy of the coefficients: a strategy must be chosen for the estimation of the coefficients \alpha_k. The vector \vect{\alpha} = (\alpha_k)_{k \in K} is equivalently defined by:

(1)  \vect{\alpha} = \argmin_{\vect{\alpha} \in \Rset^K} \Expect{\left( g \circ T^{-1}(\vect{Z}) - \sum_{k \in K} \alpha_k \Psi_k (\vect{Z})\right)^2}

or

(2)  \alpha_k = <g \circ T^{-1}(\vect{Z}), \Psi_k (\vect{Z})>_{\mu} = \Expect{ g \circ T^{-1}(\vect{Z}) \Psi_k (\vect{Z}) }

where the mean \Expect{.} is evaluated with respect to the measure \mu.

Relation (1) means that the coefficients (\alpha_k)_{k \in K} minimize the quadratic error between the model and the polynomial approximation. For more details, see LeastSquaresStrategy.

Relation (2) means that \alpha_k is the scalar product of the model with the k-th element of the orthonormal basis (\Psi_k)_{k \in \Nset}. For more details, see IntegrationStrategy.
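A hedged sketch of both options (the design sizes and the model selection algorithm are arbitrary assumptions):

>>> # relation (1): least squares on a Monte Carlo design of 100 points
>>> leastSquares = ot.LeastSquaresStrategy(ot.MonteCarloExperiment(100))
>>> # sparse variant of (1): least squares with LARS model selection and corrected leave-one-out
>>> sparse = ot.LeastSquaresStrategy(ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut()))
>>> # relation (2): scalar products approximated on a Monte Carlo design of 100 points
>>> integration = ot.IntegrationStrategy(ot.MonteCarloExperiment(100))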

Examples

Create the model:

>>> import openturns as ot
>>> ot.RandomGenerator.SetSeed(0)
>>> inputDim = 1
>>> model = ot.SymbolicFunction(['x'], ['x*sin(x)'])
>>> distribution = ot.ComposedDistribution([ot.Uniform()]*inputDim)

Build the multivariate orthonormal basis:

>>> polyColl = [0.0]*inputDim
>>> for i in range(distribution.getDimension()):
...     polyColl[i] = ot.StandardDistributionPolynomialFactory(distribution.getMarginal(i))
>>> enumerateFunction = ot.LinearEnumerateFunction(inputDim)
>>> productBasis = ot.OrthogonalProductPolynomialFactory(polyColl, enumerateFunction)

Define the strategy to truncate the multivariate orthonormal basis: we choose all the polynomials of degree <= 4.

>>> degree = 4
>>> indexMax = enumerateFunction.getStrataCumulatedCardinal(degree)
>>> print(indexMax)
5

We keep all the polynomials of degree <= 4 (which corresponds to the first 5 terms of the basis):

>>> adaptiveStrategy = ot.FixedStrategy(productBasis, indexMax)

Define the evaluation strategy of the coefficients:

>>> samplingSize = 50
>>> experiment = ot.MonteCarloExperiment(samplingSize)
>>> projectionStrategy = ot.LeastSquaresStrategy(experiment)

Create the Functional Chaos Algorithm:

>>> algo = ot.FunctionalChaosAlgorithm(model, distribution, adaptiveStrategy,
...                                    projectionStrategy)
>>> algo.run()

Get the result:

>>> functionalChaosResult = algo.getResult()
>>> metamodel = functionalChaosResult.getMetaModel()

Test it:

>>> X = [0.5]
>>> print(model(X))
[0.239713]
>>> print(metamodel(X))
[0.239514]
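The constructor based only on (inputSample, outputSample) follows the same pattern; a brief sketch reusing the model and distribution above (the design size is an arbitrary assumption):

>>> X = distribution.getSample(30)
>>> Y = model(X)
>>> algoFromData = ot.FunctionalChaosAlgorithm(X, Y)
>>> algoFromData.run()
>>> metamodelFromData = algoFromData.getResult().getMetaModel()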

Methods

BuildDistribution(inputSample)

Recover the distribution, with metamodel performance in mind.

getAdaptiveStrategy(self)

Get the adaptive strategy.

getClassName(self)

Accessor to the object’s name.

getDistribution(self)

Accessor to the joint probability density function of the physical input vector.

getId(self)

Accessor to the object’s id.

getInputSample(self)

Accessor to the input sample.

getMaximumResidual(self)

Get the maximum residual.

getName(self)

Accessor to the object’s name.

getOutputSample(self)

Accessor to the output sample.

getProjectionStrategy(self)

Get the projection strategy.

getResult(self)

Get the results of the metamodel computation.

getShadowedId(self)

Accessor to the object’s shadowed id.

getVisibility(self)

Accessor to the object’s visibility state.

hasName(self)

Test if the object is named.

hasVisibleName(self)

Test if the object has a distinguishable name.

run(self)

Compute the metamodel.

setDistribution(self, distribution)

Accessor to the joint probability density function of the physical input vector.

setMaximumResidual(self, residual)

Set the maximum residual.

setName(self, name)

Accessor to the object’s name.

setProjectionStrategy(self, projectionStrategy)

Set the projection strategy.

setShadowedId(self, id)

Accessor to the object’s shadowed id.

setVisibility(self, visible)

Accessor to the object’s visibility state.

__init__(self, *args)

Initialize self. See help(type(self)) for accurate signature.

static BuildDistribution(inputSample)

Recover the distribution, with metamodel performance in mind.

For each marginal, find the best 1-d continuous parametric model; if none fits, fall back to KernelSmoothing. For the copula, the Spearman independence test is used on each pair of components to decide whether an independent copula can be used; otherwise a NormalCopula is used.

Parameters
sample : Sample

Input sample.

Returns
distribution : Distribution

Input distribution.
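A minimal sketch of its use (the Normal sample below is an arbitrary assumption):

>>> import openturns as ot
>>> sample = ot.Normal(2).getSample(100)
>>> inferred = ot.FunctionalChaosAlgorithm.BuildDistribution(sample)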

getAdaptiveStrategy(self)

Get the adaptive strategy.

Returns
adaptiveStrategy : AdaptiveStrategy

Strategy of selection of the different terms of the multivariate basis.

getClassName(self)

Accessor to the object’s name.

Returns
class_name : str

The object class name (object.__class__.__name__).

getDistribution(self)

Accessor to the joint probability density function of the physical input vector.

Returns
distribution : Distribution

Joint probability density function of the physical input vector.

getId(self)

Accessor to the object’s id.

Returns
id : int

Internal unique identifier.

getInputSample(self)

Accessor to the input sample.

Returns
inputSample : Sample

Input sample of a model evaluated separately.

getMaximumResidual(self)

Get the maximum residual.

Returns
residual : float

Residual value needed in the projection strategy.

Default value is 0.

getName(self)

Accessor to the object’s name.

Returns
name : str

The name of the object.

getOutputSample(self)

Accessor to the output sample.

Returns
outputSample : Sample

Output sample of a model evaluated separately.

getProjectionStrategy(self)

Get the projection strategy.

Returns
strategy : ProjectionStrategy

Projection strategy.

Notes

The projection strategy defines the way the coefficients \alpha_k of the terms gathered in the subset K are evaluated, for instance by least squares or by integration.

getResult(self)

Get the results of the metamodel computation.

Returns
result : FunctionalChaosResult

Result structure, created by the method run().

getShadowedId(self)

Accessor to the object’s shadowed id.

Returns
id : int

Internal unique identifier.

getVisibility(self)

Accessor to the object’s visibility state.

Returns
visible : bool

Visibility flag.

hasName(self)

Test if the object is named.

Returns
hasName : bool

True if the name is not empty.

hasVisibleName(self)

Test if the object has a distinguishable name.

Returns
hasVisibleName : bool

True if the name is not empty and not the default one.

run(self)

Compute the metamodel.

Notes

Evaluates the metamodel and stores all the results in a result structure.

setDistribution(self, distribution)

Accessor to the joint probability density function of the physical input vector.

Parameters
distribution : Distribution

Joint probability density function of the physical input vector.

setMaximumResidual(self, residual)

Set the maximum residual.

Parameters
residual : float

Residual value needed in the projection strategy.

Default value is 0.

setName(self, name)

Accessor to the object’s name.

Parameters
name : str

The name of the object.

setProjectionStrategy(self, projectionStrategy)

Set the projection strategy.

Parameters
strategy : ProjectionStrategy

Strategy to estimate the coefficients \alpha_k.

setShadowedId(self, id)

Accessor to the object’s shadowed id.

Parameters
id : int

Internal unique identifier.

setVisibility(self, visible)

Accessor to the object’s visibility state.

Parameters
visible : bool

Visibility flag.