LARS

class LARS(*args)

Least Angle Regression.

Refer to Sparse least squares metamodel.

Methods

build(x, y, psi, indices)

Run the algorithm.

getClassName()

Accessor to the object's name.

getMaximumRelativeConvergence()

Accessor to the stopping criterion on the L1-norm of the coefficients.

getName()

Accessor to the object's name.

hasName()

Test if the object is named.

setMaximumRelativeConvergence(coefficientsPaths)

Accessor to the stopping criterion on the L1-norm of the coefficients.

setName(name)

Accessor to the object's name.

Notes

LARS inherits from BasisSequenceFactory.

If the number P of terms in the PC basis is comparable to, or even significantly larger than, the sample size N, then the following ordinary least squares problem is ill-posed:

\vect{a} = \argmin_{\vect{b} \in \Rset^P} E_{\mu} \left[ \left( g \circ T^{-1}(\vect{U}) - \vect{b}^{\intercal} \vect{\Psi}(\vect{U}) \right)^2 \right]

Sparse least squares approaches may be employed instead. They yield a sparse PC representation, i.e. an approximation that retains only a small number of active basis functions.
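As an illustrative sketch (not part of the reference example below), LARS is typically plugged into the functional chaos machinery through LeastSquaresMetaModelSelectionFactory, which pairs it with a cross-validation criterion to select a sparse sub-basis; the class names used here are standard OpenTURNS classes, but the sample size and basis size are arbitrary choices made for the sketch:

>>> import openturns as ot
>>> from openturns.usecases import ishigami_function
>>> im = ishigami_function.IshigamiModel()
>>> x = im.inputDistribution.getSample(100)
>>> y = im.model(x)
>>> # full (non-sparse) candidate basis, truncated to a fixed number of terms
>>> productBasis = ot.OrthogonalProductPolynomialFactory([ot.LegendreFactory()] * im.dim)
>>> adaptiveStrategy = ot.FixedStrategy(productBasis, 100)
>>> # LARS-based model selection, validated by corrected leave-one-out
>>> selection = ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut())
>>> projectionStrategy = ot.LeastSquaresStrategy(selection)
>>> algo = ot.FunctionalChaosAlgorithm(x, y, im.inputDistribution, adaptiveStrategy, projectionStrategy)
>>> algo.run()
>>> sparseResult = algo.getResult()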

Examples

>>> import openturns as ot
>>> from openturns.usecases import ishigami_function
>>> im = ishigami_function.IshigamiModel()
>>> # Create the orthogonal basis
>>> polynomialCollection = [ot.LegendreFactory()] * im.dim
>>> enumerateFunction = ot.LinearEnumerateFunction(im.dim)
>>> productBasis = ot.OrthogonalProductPolynomialFactory(polynomialCollection, enumerateFunction)
>>> # experimental design
>>> samplingSize = 75
>>> experiment = ot.LowDiscrepancyExperiment(ot.SobolSequence(), im.inputDistribution, samplingSize)
>>> # generate sample
>>> x = experiment.generate()
>>> y = im.model(x)
>>> # isoprobabilistic transformation
>>> xToU = ot.DistributionTransformation(im.inputDistribution, productBasis.getMeasure())
>>> u = xToU(x)
>>> # build basis
>>> degree = 10
>>> basisSize = enumerateFunction.getStrataCumulatedCardinal(degree)
>>> basis = [productBasis.build(i) for i in range(basisSize)]
>>> # run algorithm
>>> factory = ot.BasisSequenceFactory(ot.LARS())
>>> seq = factory.build(u, y, basis, list(range(basisSize)))
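
The build call returns a BasisSequence of nested active sets. As a hedged follow-up (it assumes the BasisSequence accessors getSize and getIndices, which are not documented on this page), one could inspect the indices retained at the final LARS step:

>>> # number of sub-bases produced along the LARS path (accessor names assumed)
>>> nbSteps = seq.getSize()
>>> activeIndices = seq.getIndices(nbSteps - 1)
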
__init__(*args)
build(x, y, psi, indices)

Run the algorithm.

Parameters:
x : 2-d sequence of float

Input sample

y : 2-d sequence of float

Output sample

psi : sequence of Function

Basis

indices : sequence of int

Current indices of the basis

Returns:
measure : BasisSequence

Fitting measure

getClassName()

Accessor to the object’s name.

Returns:
class_name : str

The object class name (object.__class__.__name__).

getMaximumRelativeConvergence()

Accessor to the stopping criterion on the L1-norm of the coefficients.

Returns:
e : float

Stopping criterion.

getName()

Accessor to the object’s name.

Returns:
name : str

The name of the object.

hasName()

Test if the object is named.

Returns:
hasName : bool

True if the name is not empty.

setMaximumRelativeConvergence(coefficientsPaths)

Accessor to the stopping criterion on the L1-norm of the coefficients.

Parameters:
coefficientsPaths : float

Stopping criterion.
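
As a minimal usage sketch (assuming a default-constructed LARS instance), the criterion can be adjusted before the algorithm is run:

>>> import openturns as ot
>>> algo = ot.LARS()
>>> algo.setMaximumRelativeConvergence(1e-3)
>>> criterion = algo.getMaximumRelativeConvergence()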

setName(name)

Accessor to the object’s name.

Parameters:
name : str

The name of the object.

Examples using the class

Trend computation

Polynomial chaos exploitation

Advanced polynomial chaos construction

Conditional expectation of a polynomial chaos expansion

Metamodel of a field function

Compute leave-one-out error of a polynomial chaos expansion