ProjectionStrategy

class ProjectionStrategy(*args)

Base class for the evaluation strategies of the approximation coefficients.

Available constructors:

ProjectionStrategy(projectionStrategy)

Parameters:
projectionStrategy : ProjectionStrategy

A projection strategy which is a LeastSquaresStrategy or an IntegrationStrategy.
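A minimal sketch (default constructors, chosen only for illustration): the generic ProjectionStrategy interface can wrap either concrete strategy and forwards all calls to the underlying implementation.

import openturns as ot

# Wrap either concrete strategy in the generic interface.
leastSquares = ot.ProjectionStrategy(ot.LeastSquaresStrategy())
integration = ot.ProjectionStrategy(ot.IntegrationStrategy())
# Calls are forwarded to the underlying implementation.
print(leastSquares.getImplementation().getClassName())  # LeastSquaresStrategy
print(integration.getImplementation().getClassName())   # IntegrationStrategy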

Methods

getClassName()

Accessor to the object's name.

getCoefficients()

Accessor to the coefficients.

getDesignProxy()

Accessor to the design proxy.

getExperiment()

Accessor to the experiments.

getId()

Accessor to the object's id.

getImplementation()

Accessor to the underlying implementation.

getInputSample()

Accessor to the input sample.

getMeasure()

Accessor to the measure.

getName()

Accessor to the object's name.

getOutputSample()

Accessor to the output sample.

getRelativeError()

Accessor to the relative error.

getResidual()

Accessor to the residual.

getWeights()

Accessor to the weights.

involvesModelSelection()

Get the model selection flag.

isLeastSquares()

Get the least squares flag.

setExperiment(weightedExperiment)

Accessor to the design of experiments.

setInputSample(inputSample)

Accessor to the input sample.

setMeasure(measure)

Accessor to the measure.

setName(name)

Accessor to the object's name.

setOutputSample(outputSample)

Accessor to the output sample.

setWeights(weights)

Accessor to the weights.

Notes

Consider \vect{Y} = g(\vect{X}) with g: \Rset^d \rightarrow \Rset^p, \vect{X} \sim \cL_{\vect{X}} and \vect{Y} with finite variance: g\in L_{\cL_{\vect{X}}}^2(\Rset^d, \Rset^p).

The functional chaos expansion approximates \vect{Y} using an isoprobabilistic transformation T and an orthonormal multivariate basis (\Psi_k)_{k \in \Nset} of L^2_{\mu}(\Rset^d,\Rset). See FunctionalChaosAlgorithm to get more details.

The metamodel of g, based on the functional chaos decomposition of f = g \circ T^{-1}, is written as:

\tilde{g} = \sum_{k \in K} \vect{\alpha}_k \Psi_k  \circ T

where K is a non-empty, finite set of indices whose cardinality is denoted by P.

We detail the case where p=1.

The vector \vect{\alpha} = (\alpha_k)_{k \in K} is equivalently defined by:

(1)  \vect{\alpha} = \argmin_{\vect{\alpha} \in \Rset^K} \Expect{ \left( g \circ T^{-1}(\vect{Z}) - \sum_{k \in K} \alpha_k \Psi_k (\vect{Z})\right)^2 }

and:

(2)  \alpha_k = \left\langle g \circ T^{-1}(\vect{Z}), \Psi_k (\vect{Z}) \right\rangle_{\mu} = \Expect{ g \circ T^{-1}(\vect{Z}) \, \Psi_k (\vect{Z}) }

where \vect{Z} = T(\vect{X}) and the mean \Expect{.} is evaluated with respect to the measure \mu.

Relations (1) and (2) correspond to two points of view:

  • relation (1) means that the coefficients (\alpha_k)_{k \in K} minimize the quadratic error between the model and the polynomial approximation. Use LeastSquaresStrategy.

  • relation (2) means that \alpha_k is the scalar product of the model with the k-th element of the orthonormal basis (\Psi_k)_{k \in \Nset}. Use IntegrationStrategy.

In both cases, the mean \Expect{.} is approximated by a linear quadrature formula:

(3)  \Expect{ f(\vect{Z})} \simeq \sum_{i \in I} \omega_i f(\Xi_i)

where f is a function in L^1(\mu).

In the approximation (3), the set I, the points (\Xi_i)_{i \in I} and the weights (\omega_i)_{i \in I} are evaluated from different methods implemented in the WeightedExperiment.
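As an illustration of the quadrature rule (3), the nodes \Xi_i and weights \omega_i can be generated by a WeightedExperiment; the sketch below uses a Gauss product rule and a test function chosen only for this example.

import openturns as ot

# Quadrature nodes and weights from a WeightedExperiment (Gauss product rule).
mu = ot.ComposedDistribution([ot.Uniform(-1.0, 1.0)] * 2)
experiment = ot.GaussProductExperiment(mu, [5, 5])
nodes, weights = experiment.generateWithWeights()

# Approximate E[f(Z)] for the test function f(z) = cos(z1) * z2^2.
f = ot.SymbolicFunction(["z1", "z2"], ["cos(z1) * z2^2"])
values = f(nodes)
approximation = sum(weights[i] * values[i, 0] for i in range(nodes.getSize()))
print(approximation)  # close to the exact value sin(1) / 3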

The convergence criterion used to evaluate the coefficients is based on the residual value defined in the FunctionalChaosAlgorithm.
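The sketch below (the model g, the input distribution, the basis, and the design size are assumptions made for illustration) shows a least squares projection strategy used inside a FunctionalChaosAlgorithm.

import openturns as ot

# Illustrative model and input distribution.
g = ot.SymbolicFunction(["x1", "x2"], ["sin(x1) + x2^2"])
distribution = ot.ComposedDistribution([ot.Uniform(-1.0, 1.0)] * 2)

# Orthonormal polynomial basis adapted to the input measure, truncated at
# total degree 4 (15 terms in dimension 2).
basis = ot.OrthogonalProductPolynomialFactory([ot.LegendreFactory()] * 2)
adaptiveStrategy = ot.FixedStrategy(basis, 15)

# Coefficients evaluated by least squares, i.e. relation (1).
projectionStrategy = ot.LeastSquaresStrategy()

# Training design evaluated on the model.
X = distribution.getSample(200)
Y = g(X)

algo = ot.FunctionalChaosAlgorithm(X, Y, distribution, adaptiveStrategy, projectionStrategy)
algo.run()
result = algo.getResult()
print(result.getCoefficients())  # the coefficients (alpha_k), one row per term
metamodel = result.getMetaModel()
print(metamodel([0.5, 0.5]))     # evaluation of the surrogate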

__init__(*args)
getClassName()

Accessor to the object’s name.

Returns:
class_name : str

The object class name (object.__class__.__name__).

getCoefficients()

Accessor to the coefficients.

Returns:
coef : Point

Coefficients (\alpha_k)_{k \in K}.

getDesignProxy()

Accessor to the design proxy.

Returns:
designProxy : DesignProxy

The design proxy, which gives access to the design matrix.

getExperiment()

Accessor to the experiments.

Returns:
exp : WeightedExperiment

Weighted experiment used to evaluate the coefficients.

getId()

Accessor to the object’s id.

Returns:
id : int

Internal unique identifier.

getImplementation()

Accessor to the underlying implementation.

Returns:
impl : Implementation

A copy of the underlying implementation object.

getInputSample()

Accessor to the input sample.

Returns:
X : Sample

The input sample.

getMeasure()

Accessor to the measure.

Returns:
mu : Distribution

Measure \mu defining the scalar product.

getName()

Accessor to the object’s name.

Returns:
name : str

The name of the object.

getOutputSample()

Accessor to the output sample.

Returns:
Y : Sample

The output sample.

getRelativeError()

Accessor to the relative error.

Returns:
e : float

Relative error.

getResidual()

Accessor to the residual.

Returns:
er : float

Residual error.

getWeights()

Accessor to the weights.

Returns:
w : Point

Weights of the design of experiments.

involvesModelSelection()

Get the model selection flag.

A model selection method can be used to select the coefficients of the decomposition that best predict the output. Model selection can lead to a sparse functional chaos expansion.

Returns:
involvesModelSelection : bool

True if the method involves a model selection method.
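A sketch of how the flag differs, assuming a LARS-based selection factory (the configuration below is only an illustrative choice).

import openturns as ot

# Least squares with LARS model selection versus the default (non-sparse) strategy.
selectionFactory = ot.LeastSquaresMetaModelSelectionFactory(
    ot.LARS(), ot.CorrectedLeaveOneOut()
)
sparseStrategy = ot.LeastSquaresStrategy(selectionFactory)
plainStrategy = ot.LeastSquaresStrategy()
print(sparseStrategy.involvesModelSelection())  # True
print(plainStrategy.involvesModelSelection())   # False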

isLeastSquares()

Get the least squares flag.

There are two methods to compute the coefficients: integration or least squares.

Returns:
isLeastSquares : bool

True if the coefficients are estimated from least squares.
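A short sketch of the flag for each concrete strategy (default constructors, for illustration only).

import openturns as ot

# The flag distinguishes the two evaluation methods of the coefficients.
print(ot.LeastSquaresStrategy().isLeastSquares())  # True
print(ot.IntegrationStrategy().isLeastSquares())   # False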

setExperiment(weightedExperiment)

Accessor to the design of experiments.

Parameters:
exp : WeightedExperiment

Weighted design of experiment.

setInputSample(inputSample)

Accessor to the input sample.

Parameters:
X : Sample

The input sample.

setMeasure(measure)

Accessor to the measure.

Parameters:
m : Distribution

Measure \mu defining the scalar product.

setName(name)

Accessor to the object’s name.

Parameters:
name : str

The name of the object.

setOutputSample(outputSample)

Accessor to the output sample.

Parameters:
Y : Sample

The output sample.

setWeights(weights)

Accessor to the weights.

Parameters:
w : Point

Weights of the design of experiments.