LogNormalMuSigma

class LogNormalMuSigma(*args)

LogNormal distribution parameters.

Parameters:
mu : float

The mean of the LogNormal random variable.

Default value is e^{0.5}.

sigma : float

The standard deviation of the LogNormal random variable, with \sigma > 0.

Default value is \sqrt{e^{2}-e}.

gamma : float, optional

Location parameter.

Default value is 0.0.

See also

LogNormal

Notes

Let X be a random variable that follows a LogNormal distribution such that:

\mathbb{E}[X] = \mu, \qquad \mathrm{Var}[X] = \sigma^2

The native parameters of X are \mu_\ell and \sigma_\ell, which are such that \log(X - \gamma) follows a normal distribution whose mean is \mu_\ell and whose variance is \sigma_\ell^2. Then we have:

\sigma_\ell = \sqrt{\log\left( 1 + \frac{\sigma^2}{(\mu - \gamma)^2} \right)}
\mu_\ell = \log(\mu - \gamma) - \frac{\sigma_\ell^2}{2}

The default values of (\mu, \sigma, \gamma) are defined so that the associated native parameters have the default values: (\mu_\ell, \sigma_\ell, \gamma_\ell) = (0.0, 1.0, 0.0).
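These formulas can be checked directly in plain Python (a minimal sketch using the same values as in the Examples section below; the printed values match the output of evaluate() shown there):

>>> import math
>>> mu, sigma, gamma = 0.63, 3.3, -0.5
>>> sigma_log = math.sqrt(math.log(1.0 + sigma**2 / (mu - gamma)**2))
>>> mu_log = math.log(mu - gamma) - sigma_log**2 / 2.0
>>> print('%.5f %.5f' % (mu_log, sigma_log))
-1.00492 1.50143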

Examples

Create the parameters of the LogNormal distribution:

>>> import openturns as ot
>>> parameters = ot.LogNormalMuSigma(0.63, 3.3, -0.5)

Convert parameters into the native parameters:

>>> print(parameters.evaluate())
[-1.00492,1.50143,-0.5]

The gradient of the transformation of the new parameters into the native parameters:

>>> print(parameters.gradient())
[[  1.67704  -0.527552  0        ]
 [ -0.271228  0.180647  0        ]
 [ -1.67704   0.527552  1        ]]
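The distribution carrying these parameters can be built with getDistribution() (documented below). As a sanity check, its mean and standard deviation recover the mu and sigma given above (a sketch; the expected values follow from the definitions rather than from a captured run):

>>> distribution = parameters.getDistribution()
>>> print(distribution.getMean())  # close to [0.63]
>>> print(distribution.getStandardDeviation())  # close to [3.3]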

Methods

evaluate()

Compute the native parameter values.

getClassName()

Accessor to the object's name.

getDescription()

Get the description of the parameters.

getDistribution()

Build a distribution based on a set of native parameters.

getName()

Accessor to the object's name.

getValues()

Accessor to the parameters' values.

gradient()

Get the gradient.

hasName()

Test if the object is named.

inverse(inP)

Convert to non-native parameters.

setName(name)

Accessor to the object's name.

setValues(values)

Accessor to the parameters' values.

__init__(*args)

evaluate()

Compute the native parameter values.

Returns:
values : Point

The native parameter values.

getClassName()

Accessor to the object's name.

Returns:
class_name : str

The object class name (object.__class__.__name__).

getDescription()

Get the description of the parameters.

Returns:
collection : Description

List of parameter names.

getDistribution()

Build a distribution based on a set of native parameters.

Returns:
distribution : Distribution

Distribution built with the native parameters.

getName()

Accessor to the object's name.

Returns:
name : str

The name of the object.

getValues()

Accessor to the parameters' values.

Returns:
values : Point

List of parameter values.

gradient()

Get the gradient.

Returns:
gradient : Matrix

The gradient of the transformation of the new parameters into the native parameters.

Notes

If we denote the native parameters by (p_1, \dots, p_q) and the new ones by (p'_1, \dots, p'_q), then the gradient matrix is \left( \dfrac{\partial p_j}{\partial p'_i} \right)_{1 \leq i,j \leq q}: row i holds the derivatives of the native parameters with respect to the i-th new parameter, as the matrix in the Examples section illustrates.
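As an illustration, the first row of the gradient matrix shown in the Examples section can be reproduced by a forward finite difference on evaluate(), perturbing the first new parameter \mu (a sketch; the step size 1e-7 is an arbitrary choice):

>>> import openturns as ot
>>> eps = 1e-7
>>> base = ot.LogNormalMuSigma(0.63, 3.3, -0.5).evaluate()
>>> bumped = ot.LogNormalMuSigma(0.63 + eps, 3.3, -0.5).evaluate()
>>> print([(bumped[i] - base[i]) / eps for i in range(3)])  # ~ [1.67704, -0.527552, 0]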

hasName()

Test if the object is named.

Returns:
hasName : bool

True if the name is not empty.

inverse(inP)

Convert to non-native parameters.

Parameters:
inP : sequence of float

The native parameters (\mu_\ell, \sigma_\ell, \gamma).

Returns:
outP : Point

The non-native parameters (\mu, \sigma, \gamma).
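A round trip between the two parametrizations, reusing the parameters object from the Examples section (a sketch: evaluate() maps to the native parameters and inverse() maps back):

>>> native = parameters.evaluate()
>>> print(parameters.inverse(native))  # recovers [0.63, 3.3, -0.5]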

setName(name)

Accessor to the object's name.

Parameters:
name : str

The name of the object.

setValues(values)

Accessor to the parameters' values.

Parameters:
values : sequence of float

List of parameter values.

Examples using the class

Apply a transform or inverse transform on your polynomial chaos

Polynomial chaos is sensitive to the degree

Kriging: configure the optimization solver

Specify a simulation algorithm

Exploitation of simulation algorithm results

Cross Entropy Importance Sampling

Min/max search and sensitivity from design

Min/max search using optimization