FunctionalChaosRandomVector

class FunctionalChaosRandomVector(*args)

Functional chaos random vector.

Allows one to simulate a random variable through a chaos decomposition and to retrieve its mean and covariance analytically from the chaos coefficients.

Parameters:
functionalChaosResult : FunctionalChaosResult

A result from a functional chaos decomposition.

Notes

This class can be used to get probabilistic properties of a functional chaos expansion or polynomial chaos expansion (PCE). For example, we can get the output mean or the output covariance matrix using the coefficients of the expansion.

Moreover, we can use this class to simulate random observations of the output. We consider the same notations as in the FunctionalChaosAlgorithm class. The functional chaos decomposition of h is:

h = \model \circ T^{-1} = \sum_{k=0}^{\infty} \vect{a}_k \Psi_k

which can be truncated to the finite set \cK \subset \Nset:

\widetilde{h} =  \sum_{k \in \cK} \vect{a}_k \Psi_k.

The approximation \widetilde{h} can be used to build an efficient random generator of Y based on the random vector \standardRV, using the equation:

\widetilde{Y} = \widetilde{h}(\standardRV).

This equation can be used to simulate independent random observations from the PCE. This can be done by simulating independent observations from the distribution of the standardized random vector \standardRV, which are then pushed forward through the expansion.

Examples

First, we create the PCE.

>>> import openturns as ot
>>> ot.RandomGenerator.SetSeed(0)
>>> inputDimension = 1
>>> model = ot.SymbolicFunction(['x'], ['x * sin(x)'])
>>> distribution = ot.ComposedDistribution([ot.Uniform()] * inputDimension)
>>> polyColl = [0.0] * inputDimension
>>> for i in range(distribution.getDimension()):
...     polyColl[i] = ot.StandardDistributionPolynomialFactory(distribution.getMarginal(i))
>>> enumerateFunction = ot.LinearEnumerateFunction(inputDimension)
>>> productBasis = ot.OrthogonalProductPolynomialFactory(polyColl, enumerateFunction)
>>> degree = 4
>>> indexMax = enumerateFunction.getBasisSizeFromTotalDegree(degree)
>>> adaptiveStrategy = ot.FixedStrategy(productBasis, indexMax)
>>> samplingSize = 50
>>> experiment = ot.MonteCarloExperiment(distribution, samplingSize)
>>> inputSample = experiment.generate()
>>> outputSample = model(inputSample)
>>> projectionStrategy = ot.LeastSquaresStrategy()
>>> algo = ot.FunctionalChaosAlgorithm(inputSample, outputSample, \
...     distribution, adaptiveStrategy, projectionStrategy)
>>> algo.run()
>>> functionalChaosResult = algo.getResult()

Secondly, we get the probabilistic properties of the PCE. We can get an estimate of the mean of the output of the physical model.

>>> functionalChaosRandomVector = ot.FunctionalChaosRandomVector(functionalChaosResult)
>>> mean = functionalChaosRandomVector.getMean()
>>> print(mean)
[0.301168]

We can get an estimate of the covariance matrix of the output of the physical model.

>>> covariance = functionalChaosRandomVector.getCovariance()
>>> print(covariance)
[[ 0.0663228 ]]

We can finally generate observations from the PCE random vector.

>>> simulatedOutputSample = functionalChaosRandomVector.getSample(5)
>>> print(simulatedOutputSample)
    [ v0         ]
0 : [ 0.302951   ]
1 : [ 0.0664952  ]
2 : [ 0.0257105  ]
3 : [ 0.00454319 ]
4 : [ 0.149589   ]
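
Equivalently, independent observations can be generated by hand: draw a sample from the distribution of the standardized random vector and push it through the composed metamodel \widetilde{h}. The following is a minimal sketch reusing the functionalChaosResult object built above; it only checks the dimension of the resulting sample, since the values depend on the random seed.

>>> # Sample the standardized input measure and push it through the expansion
>>> composedMetaModel = functionalChaosResult.getComposedMetaModel()
>>> standardDistribution = functionalChaosResult.getOrthogonalBasis().getMeasure()
>>> standardSample = standardDistribution.getSample(5)
>>> manualOutputSample = composedMetaModel(standardSample)
>>> print(manualOutputSample.getDimension())
1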

Methods

asComposedEvent()

If the random vector can be viewed as the composition of several ThresholdEvent objects, this method builds and returns the composition.

getAntecedent()

Accessor to the antecedent RandomVector in case of a composite RandomVector.

getClassName()

Accessor to the object's name.

getCovariance()

Accessor to the covariance of the functional chaos expansion.

getDescription()

Accessor to the description of the RandomVector.

getDimension()

Accessor to the dimension of the RandomVector.

getDistribution()

Accessor to the distribution of the RandomVector.

getDomain()

Accessor to the domain of the Event.

getFrozenRealization(fixedPoint)

Compute realizations of the RandomVector.

getFrozenSample(fixedSample)

Compute realizations of the RandomVector.

getFunction()

Accessor to the Function in case of a composite RandomVector.

getFunctionalChaosResult()

Accessor to the functional chaos result.

getMarginal(*args)

Get the random vector corresponding to the i^{th} marginal component(s).

getMean()

Accessor to the mean of the functional chaos expansion.

getName()

Accessor to the object's name.

getOperator()

Accessor to the comparison operator of the Event.

getParameter()

Accessor to the parameter of the distribution.

getParameterDescription()

Accessor to the parameter description of the distribution.

getProcess()

Get the stochastic process.

getRealization()

Compute one realization of the RandomVector.

getSample(size)

Compute realizations of the RandomVector.

getThreshold()

Accessor to the threshold of the Event.

hasName()

Test if the object is named.

isComposite()

Accessor to know if the RandomVector is a composite one.

isEvent()

Whether the random vector is an event.

setDescription(description)

Accessor to the description of the RandomVector.

setName(name)

Accessor to the object's name.

setParameter(parameters)

Accessor to the parameter of the distribution.

__init__(*args)

asComposedEvent()

If the random vector can be viewed as the composition of several ThresholdEvent objects, this method builds and returns the composition. Otherwise throws.

Returns:
composed : RandomVector

Composed event.

getAntecedent()

Accessor to the antecedent RandomVector in case of a composite RandomVector.

Returns:
antecedent : RandomVector

Antecedent RandomVector \vect{X} in case of a CompositeRandomVector such as: \vect{Y}=f(\vect{X}).

getClassName()

Accessor to the object’s name.

Returns:
class_name : str

The object class name (object.__class__.__name__).

getCovariance()

Accessor to the covariance of the functional chaos expansion.

Let \inputDim \in \Nset be the dimension of the input random vector, let \outputDim \in \Nset be the dimension of the output random vector, and let P + 1 \in \Nset be the size of the basis. We consider the following functional chaos expansion:

\widetilde{\outputRV} = \sum_{k = 0}^P \vect{a}_k \psi_k(\standardRV)

where \widetilde{\outputRV} \in \Rset^{\outputDim} is the approximation of the output random variable \outputRV by the expansion, \left\{\vect{a}_k \in \Rset^{\outputDim}\right\}_{k = 0, ..., P} are the coefficients, \left\{\psi_k: \Rset^{\inputDim} \rightarrow \Rset\right\}_{k = 0, ..., P} are the orthonormal functions in the basis, and \standardRV \in \Rset^{\inputDim} is the standardized random input vector. The previous equation can be equivalently written as follows:

\widetilde{Y}_i = \sum_{k = 0}^P a_{k, i} \psi_k(\standardRV)

for i = 1, ..., \outputDim, where a_{k, i} \in \Rset is the i-th component of the k-th coefficient in the expansion:

\vect{a}_k = \begin{pmatrix}a_{k, 1} \\ \vdots\\ a_{k, \outputDim} \end{pmatrix}.

The covariance matrix of the functional chaos expansion is the matrix \matcov \in \Rset^{\outputDim \times \outputDim}, where each component is:

c_{ij} = \Cov{\widetilde{Y}_i, \widetilde{Y}_j}

for i,j = 1, ..., \outputDim. The covariance can be computed using the coefficients of the expansion:

\Cov{\widetilde{Y}_i, \widetilde{Y}_j} = \sum_{k = 1}^P a_{k, i} a_{k, j}

for i,j = 1, ..., \outputDim. This covariance involves all the coefficients, except the first one. The diagonal of the covariance matrix is the marginal variance:

\Var{\widetilde{Y}_i} = \sum_{k = 1}^P a_{k, i}^2

for i = 1, ..., \outputDim.

Returns:
covariance : CovarianceMatrix, dimension \outputDim \times \outputDim

The covariance of the functional chaos expansion.

Examples

>>> from openturns.usecases import ishigami_function
>>> import openturns as ot
>>> import math
>>> im = ishigami_function.IshigamiModel()
>>> sampleSize = 1000
>>> inputTrain = im.distributionX.getSample(sampleSize)
>>> outputTrain = im.model(inputTrain)
>>> multivariateBasis = ot.OrthogonalProductPolynomialFactory([im.X1, im.X2, im.X3])
>>> selectionAlgorithm = ot.LeastSquaresMetaModelSelectionFactory()
>>> projectionStrategy = ot.LeastSquaresStrategy(selectionAlgorithm)
>>> totalDegree = 10
>>> enumerateFunction = multivariateBasis.getEnumerateFunction()
>>> basisSize = enumerateFunction.getBasisSizeFromTotalDegree(totalDegree)
>>> adaptiveStrategy = ot.FixedStrategy(multivariateBasis, basisSize)
>>> chaosAlgo = ot.FunctionalChaosAlgorithm(
...     inputTrain, outputTrain, im.distributionX, adaptiveStrategy, projectionStrategy
... )
>>> chaosAlgo.run()
>>> chaosResult = chaosAlgo.getResult()
>>> randomVector = ot.FunctionalChaosRandomVector(chaosResult)
>>> covarianceMatrix = randomVector.getCovariance()
>>> print('covarianceMatrix=', covarianceMatrix[0, 0])
covarianceMatrix= 13.8...
>>> outputDimension = outputTrain.getDimension()
>>> stdDev = ot.Point([math.sqrt(covarianceMatrix[i, i]) for i in range(outputDimension)])
>>> print('stdDev=', stdDev[0])
stdDev= 3.72...
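
The covariance formula above can be checked against the coefficients stored in the result. This is a minimal sketch; it assumes that the constant term is the one whose global index, as returned by getIndices(), is 0, and that this term was kept by the selection algorithm.

>>> # Recompute the output variance from the coefficients, skipping the constant term
>>> coefficients = chaosResult.getCoefficients()
>>> indices = chaosResult.getIndices()
>>> manualVariance = sum(
...     coefficients[k, 0] ** 2 for k in range(coefficients.getSize()) if indices[k] != 0
... )
>>> print(abs(manualVariance - covarianceMatrix[0, 0]) < 1e-8)
True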

getDescription()

Accessor to the description of the RandomVector.

Returns:
description : Description

Describes the components of the RandomVector.

getDimension()

Accessor to the dimension of the RandomVector.

Returns:
dimension : positive int

Dimension of the RandomVector.

getDistribution()

Accessor to the distribution of the RandomVector.

Returns:
distribution : Distribution

Distribution of the considered UsualRandomVector.

Examples

>>> import openturns as ot
>>> distribution = ot.Normal([0.0, 0.0], [1.0, 1.0], ot.CorrelationMatrix(2))
>>> randomVector = ot.RandomVector(distribution)
>>> ot.RandomGenerator.SetSeed(0)
>>> print(randomVector.getDistribution())
Normal(mu = [0,0], sigma = [1,1], R = [[ 1 0 ]
 [ 0 1 ]])

getDomain()

Accessor to the domain of the Event.

Returns:
domain : Domain

Describes the domain of an event.

getFrozenRealization(fixedPoint)

Compute realizations of the RandomVector.

In the case of a CompositeRandomVector or an event of some kind, this method returns the value taken by the random vector if the root cause takes the value given as argument.

Parameters:
fixedPoint : Point

Point chosen as the root cause of the random vector.

Returns:
realization : Point

The realization corresponding to the chosen root cause.

Examples

>>> import openturns as ot
>>> distribution = ot.Normal()
>>> randomVector = ot.RandomVector(distribution)
>>> f = ot.SymbolicFunction('x', 'x')
>>> compositeRandomVector = ot.CompositeRandomVector(f, randomVector)
>>> event = ot.ThresholdEvent(compositeRandomVector, ot.Less(), 0.0)
>>> print(event.getFrozenRealization([0.2]))
[0]
>>> print(event.getFrozenRealization([-0.1]))
[1]

getFrozenSample(fixedSample)

Compute realizations of the RandomVector.

In the case of a CompositeRandomVector or an event of some kind, this method returns the different values taken by the random vector when the root cause takes the values given as argument.

Parameters:
fixedSample : Sample

Sample of root causes of the random vector.

Returns:
sample : Sample

Sample of the realizations corresponding to the chosen root causes.

Examples

>>> import openturns as ot
>>> distribution = ot.Normal()
>>> randomVector = ot.RandomVector(distribution)
>>> f = ot.SymbolicFunction('x', 'x')
>>> compositeRandomVector = ot.CompositeRandomVector(f, randomVector)
>>> event = ot.ThresholdEvent(compositeRandomVector, ot.Less(), 0.0)
>>> print(event.getFrozenSample([[0.2], [-0.1]]))
    [ y0 ]
0 : [ 0  ]
1 : [ 1  ]

getFunction()

Accessor to the Function in case of a composite RandomVector.

Returns:
function : Function

Function used to define a CompositeRandomVector as the image through this function of the antecedent \vect{X}: \vect{Y}=f(\vect{X}).

getFunctionalChaosResult()

Accessor to the functional chaos result.

Returns:
functionalChaosResult : FunctionalChaosResult

The result from a functional chaos decomposition.

getMarginal(*args)

Get the random vector corresponding to the i^{th} marginal component(s).

Parameters:
i : int or list of ints, 0 \leq i < dim

Indicates the component(s) concerned. dim is the dimension of the RandomVector.

Returns:
vector : RandomVector

RandomVector restricted to the concerned components.

Notes

Let \vect{Y}=\Tr{(Y_1,\dots,Y_n)} be a random vector and I \subset [1,n] a set of indices. If \vect{Y} is a UsualRandomVector, the subvector is defined by \tilde{\vect{Y}}=\Tr{(Y_i)}_{i \in I}. If \vect{Y} is a CompositeRandomVector defined by \vect{Y}=f(\vect{X}) with f=(f_1,\dots,f_n), where the f_i are scalar functions, the subvector is \tilde{\vect{Y}}=(f_i(\vect{X}))_{i \in I}.

Examples

>>> import openturns as ot
>>> distribution = ot.Normal([0.0, 0.0], [1.0, 1.0], ot.CorrelationMatrix(2))
>>> randomVector = ot.RandomVector(distribution)
>>> ot.RandomGenerator.SetSeed(0)
>>> print(randomVector.getMarginal(1).getRealization())
[0.608202]
>>> print(randomVector.getMarginal(1).getDistribution())
Normal(mu = 0, sigma = 1)

getMean()

Accessor to the mean of the functional chaos expansion.

Let \inputDim \in \Nset be the dimension of the input random vector, let \outputDim \in \Nset be the dimension of the output random vector, and let P + 1 \in \Nset be the size of the basis. We consider the following functional chaos expansion:

\widetilde{\outputRV} = \sum_{k = 0}^P \vect{a}_k \psi_k(\standardRV)

where \widetilde{\outputRV} \in \Rset^{\outputDim} is the approximation of the output random variable \outputRV by the expansion, \left\{\vect{a}_k \in \Rset^{\outputDim}\right\}_{k = 0, ..., P} are the coefficients, \left\{\psi_k: \Rset^{\inputDim} \rightarrow \Rset\right\}_{k = 0, ..., P} are the orthonormal functions in the basis, and \standardRV \in \Rset^{\inputDim} is the standardized random input vector. The previous equation can be equivalently written as follows:

\widetilde{Y}_i = \sum_{k = 0}^P a_{k, i} \psi_k(\standardRV)

for i = 1, ..., \outputDim, where a_{k, i} \in \Rset is the i-th component of the k-th coefficient in the expansion:

\vect{a}_k = \begin{pmatrix}a_{k, 1} \\ \vdots\\ a_{k, \outputDim} \end{pmatrix}.

The mean of the functional chaos expansion is the first coefficient in the expansion:

\Expect{\widetilde{\outputRV}} = \vect{a}_0.

Returns:
mean : Point, dimension \outputDim

The mean of the functional chaos expansion.

Examples

>>> from openturns.usecases import ishigami_function
>>> import openturns as ot
>>> im = ishigami_function.IshigamiModel()
>>> sampleSize = 1000
>>> inputTrain = im.distributionX.getSample(sampleSize)
>>> outputTrain = im.model(inputTrain)
>>> multivariateBasis = ot.OrthogonalProductPolynomialFactory([im.X1, im.X2, im.X3])
>>> selectionAlgorithm = ot.LeastSquaresMetaModelSelectionFactory()
>>> projectionStrategy = ot.LeastSquaresStrategy(selectionAlgorithm)
>>> totalDegree = 10
>>> enumerateFunction = multivariateBasis.getEnumerateFunction()
>>> basisSize = enumerateFunction.getBasisSizeFromTotalDegree(totalDegree)
>>> adaptiveStrategy = ot.FixedStrategy(multivariateBasis, basisSize)
>>> chaosAlgo = ot.FunctionalChaosAlgorithm(
...     inputTrain, outputTrain, im.distributionX, adaptiveStrategy, projectionStrategy
... )
>>> chaosAlgo.run()
>>> chaosResult = chaosAlgo.getResult()
>>> randomVector = ot.FunctionalChaosRandomVector(chaosResult)
>>> mean = randomVector.getMean()
>>> print('mean=', mean[0])
mean= 3.50...
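
Since the mean is the coefficient of the constant basis function, it can also be read directly from the stored coefficients. This is a minimal sketch; it assumes that the constant term has global index 0 in getIndices() and that it was kept by the selection algorithm.

>>> # The mean is the coefficient of the constant basis function
>>> coefficients = chaosResult.getCoefficients()
>>> indices = chaosResult.getIndices()
>>> constantRank = list(indices).index(0)
>>> print(abs(coefficients[constantRank, 0] - mean[0]) < 1e-8)
True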

getName()

Accessor to the object’s name.

Returns:
name : str

The name of the object.

getOperator()

Accessor to the comparison operator of the Event.

Returns:
operator : ComparisonOperator

Comparison operator used to define the RandomVector.

getParameter()

Accessor to the parameter of the distribution.

Returns:
parameter : Point

Parameter values.

getParameterDescription()

Accessor to the parameter description of the distribution.

Returns:
description : Description

Parameter names.

getProcess()

Get the stochastic process.

Returns:
process : Process

Stochastic process used to define the RandomVector.

getRealization()

Compute one realization of the RandomVector.

Returns:
realization : Point

Sequence of values randomly determined from the RandomVector definition. In the case of an event: one realization of the event (considered as a Bernoulli variable), which is a boolean value (1 if the event occurs, 0 otherwise).

Examples

>>> import openturns as ot
>>> distribution = ot.Normal([0.0, 0.0], [1.0, 1.0], ot.CorrelationMatrix(2))
>>> randomVector = ot.RandomVector(distribution)
>>> ot.RandomGenerator.SetSeed(0)
>>> print(randomVector.getRealization())
[0.608202,-1.26617]
>>> print(randomVector.getRealization())
[-0.438266,1.20548]

getSample(size)

Compute realizations of the RandomVector.

Parameters:
n : int, n \geq 0

Number of realizations needed.

Returns:
realizations : Sample

n sequences of values randomly determined from the RandomVector definition. In the case of an event: n realizations of the event (considered as a Bernoulli variable), which are boolean values (1 if the event occurs, 0 otherwise).

Examples

>>> import openturns as ot
>>> distribution = ot.Normal([0.0, 0.0], [1.0, 1.0], ot.CorrelationMatrix(2))
>>> randomVector = ot.RandomVector(distribution)
>>> ot.RandomGenerator.SetSeed(0)
>>> print(randomVector.getSample(3))
    [ X0        X1        ]
0 : [  0.608202 -1.26617  ]
1 : [ -0.438266  1.20548  ]
2 : [ -2.18139   0.350042 ]

getThreshold()

Accessor to the threshold of the Event.

Returns:
threshold : float

Threshold of the RandomVector.

hasName()

Test if the object is named.

Returns:
hasName : bool

True if the name is not empty.

isComposite()

Accessor to know if the RandomVector is a composite one.

Returns:
isComposite : bool

Indicates if the RandomVector is of type Composite or not.

isEvent()

Whether the random vector is an event.

Returns:
isEvent : bool

Whether it takes its values in {0, 1}.

setDescription(description)

Accessor to the description of the RandomVector.

Parameters:
description : str or sequence of str

Describes the components of the RandomVector.

setName(name)

Accessor to the object’s name.

Parameters:
name : str

The name of the object.

setParameter(parameters)

Accessor to the parameter of the distribution.

Parameters:
parameter : sequence of float

Parameter values.

Examples using the class

Create a polynomial chaos for the Ishigami function: a quick start guide to polynomial chaos
