IdentityMatrix

class IdentityMatrix(*args)

Identity matrix.

Parameters:
size : int, n > 0, optional

Matrix size. Default is 1.

Methods

checkSymmetry()

Check if the internal representation is really symmetric.

clean(threshold)

Set elements smaller than a threshold to zero.

computeCholesky()

Compute the Cholesky factor.

computeCholeskyInPlace()

Compute the Cholesky factor in place.

computeDeterminant()

Compute the determinant.

computeDeterminantInPlace()

Compute the determinant in place.

computeEV()

Compute the eigenvalues decomposition (EVD).

computeEVInPlace()

Compute the eigenvalues decomposition (EVD) in place.

computeEigenValues()

Compute eigenvalues.

computeEigenValuesInPlace()

Compute eigenvalues in place.

computeGram([transpose])

Compute the associated Gram matrix.

computeHadamardProduct(other)

Compute the Hadamard product matrix.

computeLargestEigenValueModule(*args)

Compute the largest eigenvalue module.

computeLogAbsoluteDeterminant()

Compute the logarithm of the absolute value of the determinant.

computeLogAbsoluteDeterminantInPlace()

Compute the logarithm of the absolute value of the determinant in place.

computeQR([fullQR])

Compute the QR factorization.

computeQRInPlace([fullQR])

Compute the QR factorization in place.

computeRegularizedCholesky()

Compute the regularized Cholesky factor.

computeSVD([fullSVD])

Compute the singular values decomposition (SVD).

computeSVDInPlace([fullSVD])

Compute the singular values decomposition (SVD) in place.

computeSingularValues()

Compute the singular values.

computeSingularValuesInPlace()

Compute the singular values in place.

computeSumElements()

Compute the sum of the matrix elements.

computeTrace()

Compute the trace of the matrix.

frobeniusNorm()

Frobenius norm accessor.

getClassName()

Accessor to the object's name.

getDiagonal([k])

Get the k-th diagonal of the matrix.

getDiagonalAsPoint([k])

Get the k-th diagonal of the matrix.

getDimension()

Accessor to the dimension (the number of rows).

getId()

Accessor to the object's id.

getImplementation()

Accessor to the underlying implementation.

getName()

Accessor to the object's name.

getNbColumns()

Accessor to the number of columns.

getNbRows()

Accessor to the number of rows.

inverse()

Compute the inverse of the matrix.

isDiagonal()

Test whether the matrix is diagonal or not.

isEmpty()

Tell if the matrix is empty.

isPositiveDefinite()

Test whether the matrix is positive definite or not.

reshape(newRowDim, newColDim)

Reshape the matrix.

reshapeInPlace(newRowDim, newColDim)

Reshape the matrix, in place.

setDiagonal(*args)

Set the k-th diagonal of the matrix.

setName(name)

Accessor to the object's name.

solveLinearSystem(b)

Solve a square linear system in which the present matrix is the operator.

solveLinearSystemInPlace(b)

Solve a rectangular linear system in which the present matrix is the operator.

squareElements()

Square the matrix, i.e. square each element of the matrix.

transpose()

Transpose the matrix.

Examples

Create a matrix

>>> import openturns as ot
>>> M = ot.IdentityMatrix(2)
>>> print(M)
[[ 1 0 ]
 [ 0 1 ]]
__init__(*args)
checkSymmetry()

Check if the internal representation is really symmetric.

clean(threshold)

Set elements smaller than a threshold to zero.

Parameters:
threshold : float

Threshold for zeroing elements.

Returns:
cleaned_matrix : Matrix

Input matrix with elements smaller than the threshold set to zero.
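
Examples

A minimal usage sketch (the matrix entries and the threshold below are illustrative only):

>>> import openturns as ot
>>> M = ot.Matrix([[1e-10, 2.0], [3.0, 1e-12]])
>>> cleaned = M.clean(1e-8)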

computeCholesky()

Compute the Cholesky factor.

The Cholesky factor of a covariance (real symmetric positive definite) matrix \mat{C} is the lower triangular matrix \mat{L} such that:

\mat{C} = \mat{L} \Tr{\mat{L}}

Returns:
cholesky_factor : TriangularMatrix

The left (lower) Cholesky factor.

Notes

This uses LAPACK’s DPOTRF.
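
Examples

A small sketch checking the factorization property (the covariance matrix below is illustrative):

>>> import openturns as ot
>>> import numpy as np
>>> C = ot.CovarianceMatrix([[4.0, 1.0], [1.0, 3.0]])
>>> L = C.computeCholesky()
>>> np.testing.assert_array_almost_equal(L * L.transpose(), C)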

computeCholeskyInPlace()

Compute the Cholesky factor in place.

Similar to computeCholesky() but modifies the matrix in place to avoid a copy.

computeDeterminant()

Compute the determinant.

Returns:
determinant : float

The square matrix determinant.

Examples

>>> import openturns as ot
>>> A = ot.SquareMatrix([[1.0, 2.0], [3.0, 4.0]])
>>> A.computeDeterminant()
-2.0
computeDeterminantInPlace()

Compute the determinant in place.

Similar to computeDeterminant() but modifies the matrix in place to avoid copy.

computeEV()

Compute the eigenvalues decomposition (EVD).

The eigenvalues decomposition of a square matrix \mat{M} with size n reads:

\mat{M} = \mat{\Phi} \mat{\Lambda} \Tr{\mat{\Phi}}

where \mat{\Lambda} is an n \times n diagonal matrix and \mat{\Phi} is an n \times n orthogonal matrix.

Returns:
eigenvalues : Point

The vector of eigenvalues with size n that form the diagonal of the n \times n matrix \mat{\Lambda} of the EVD.

Phi : SquareComplexMatrix

The left matrix of the EVD.

Notes

This uses LAPACK’s DSYEV.

Examples

>>> import openturns as ot
>>> import numpy as np
>>> M = ot.SymmetricMatrix([[1.0, 2.0], [2.0, -4.0]])
>>> eigen_values, Phi = M.computeEV()
>>> Lambda = ot.SquareMatrix(M.getDimension())
>>> for i in range(eigen_values.getSize()):
...     Lambda[i, i] = eigen_values[i]
>>> np.testing.assert_array_almost_equal(Phi * Lambda * Phi.transpose(), M)
computeEVInPlace()

Compute the eigenvalues decomposition (EVD) in place.

Similar to computeEV() but the matrix is modified in place to avoid copy.

computeEigenValues()

Compute eigenvalues.

Returns:
eigenvalues : Point

Eigenvalues.

See also

computeEV

Examples

>>> import openturns as ot
>>> M = ot.SymmetricMatrix([[1.0, 2.0], [2.0, -4.0]])
>>> print(M.computeEigenValues())
[-4.70156,1.70156]
computeEigenValuesInPlace()

Compute eigenvalues in place.

Similar to computeEigenValues() but the matrix is modified in place to avoid copy.

computeGram(transpose=True)

Compute the associated Gram matrix.

Parameters:
transpose : bool

Tells whether the matrix is to be transposed or not. Default value is True.

Returns:
MMT : Matrix

The Gram matrix.

Notes

When transpose is True, compute \Tr{M} M. Otherwise, compute M \Tr{M}.

Examples

>>> import openturns as ot
>>> M = ot.Matrix([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
>>> MtM = M.computeGram()
>>> print(MtM)
[[ 35 44 ]
 [ 44 56 ]]
>>> MMt = M.computeGram(False)
>>> print(MMt)
[[  5 11 17 ]
 [ 11 25 39 ]
 [ 17 39 61 ]]
computeHadamardProduct(other)

Compute the Hadamard product matrix.

Parameters:
Mat : Matrix

The right-hand matrix.

Returns:
C : Matrix

The Hadamard product matrix.

Notes

The matrix \mat{C} \in \Rset^{m \times n} resulting from the Hadamard product (also known as the elementwise product) of the matrices \mat{A} \in \Rset^{m \times n} and \mat{B} \in \Rset^{m \times n} is:

c_{i,j} = a_{i,j} b_{i,j}

for any i = 1, \cdots, m and j = 1, \cdots, n.

Examples

>>> import openturns as ot
>>> A = ot.Matrix([[1.0, 2.0], [3.0, 4.0]])
>>> B = ot.Matrix([[1.0, 2.0], [3.0, 4.0]])
>>> C = A.computeHadamardProduct(B)
>>> print(C)
[[  1  4 ]
 [  9 16 ]]
>>> print(B.computeHadamardProduct(A))
[[  1  4 ]
 [  9 16 ]]
computeLargestEigenValueModule(*args)

Compute the largest eigenvalue module.

Parameters:
maximumIterations : int, optional

The maximum number of power iterations to perform to get the approximation. Default is given by the ‘Matrix-LargestEigenValueIterations’ key in the ResourceMap.

epsilon : float, optional

The target relative error. Default is given by the ‘Matrix-LargestEigenValueRelativeError’ key in the ResourceMap.

Returns:
largestEigenvalueModule : float

The largest eigenvalue module.

Examples

>>> import openturns as ot
>>> M = ot.SymmetricMatrix([[1.0, 3.0], [3.0, 4.0]])
>>> M.computeLargestEigenValueModule()
5.8541...
computeLogAbsoluteDeterminant()

Compute the logarithm of the absolute value of the determinant.

Returns:
determinant : float

The logarithm of the absolute value of the square matrix determinant.

sign : float

The sign of the determinant.

Examples

>>> import openturns as ot
>>> A = ot.SquareMatrix([[1.0, 2.0], [3.0, 4.0]])
>>> A.computeLogAbsoluteDeterminant()
[0.693147..., -1.0]
computeLogAbsoluteDeterminantInPlace()

Compute the logarithm of the absolute value of the determinant in place.

Similar to computeLogAbsoluteDeterminant() but modifies the matrix in place to avoid copy.

computeQR(fullQR=False)

Compute the QR factorization.

By default, it is the economic decomposition which is computed. The economic QR factorization of a rectangular matrix \mat{M} with n_r \geq n_c (more rows than columns) is defined as follows:

\mat{M} = \mat{Q} \mat{R}
        = \mat{Q} \begin{bmatrix} \mat{R}_1 \\ \mat{0} \end{bmatrix}
        = \begin{bmatrix} \mat{Q}_1, \mat{Q}_2 \end{bmatrix}
          \begin{bmatrix} \mat{R}_1 \\ \mat{0} \end{bmatrix}
        = \mat{Q}_1 \mat{R}_1

where \mat{R}_1 is an n_c \times n_c upper triangular matrix, \mat{Q}_1 is n_r \times n_c, \mat{Q}_2 is n_r \times (n_r - n_c), and \mat{Q}_1 and \mat{Q}_2 both have orthogonal columns.

Parameters:
full_qr : bool, optional

A flag telling whether Q, R or Q1, R1 are returned. Default is False and returns Q1, R1.

Returns:
Q1 : Matrix

The orthogonal matrix of the economic QR factorization.

R1 : TriangularMatrix

The right (upper) triangular matrix of the economic QR factorization.

Q : Matrix

The orthogonal matrix of the full QR factorization.

R : TriangularMatrix

The right (upper) triangular matrix of the full QR factorization.

Notes

The economic QR factorization is often used for solving overdetermined linear systems (where the operator \mat{M} has n_r \geq n_c) in the least-square sense because it implies solving a (simple) triangular system:

\vect{\hat{x}} = \arg\min\limits_{\vect{x} \in \Rset^{n_r}} \|\mat{M} \vect{x} - \vect{b}\|
               = \mat{R}_1^{-1} (\Tr{\mat{Q}_1} \vect{b})

This uses LAPACK’s DGEQRF and DORGQR.

Examples

>>> import openturns as ot
>>> import numpy as np
>>> M = ot.Matrix([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
>>> Q1, R1 = M.computeQR()
>>> np.testing.assert_array_almost_equal(Q1 * R1, M)
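
The least-squares use described in the Notes can be sketched by reusing M, Q1 and R1 from the example above (the right-hand side b below is an arbitrary illustrative vector):

>>> b = ot.Point([1.0, 2.0, 3.0])
>>> x_ls = R1.solveLinearSystem(Q1.transpose() * b)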
computeQRInPlace(fullQR=False)

Compute the QR factorization in place.

Similar to computeQR() but the matrix is modified in place to avoid a copy.

computeRegularizedCholesky()

Compute the regularized Cholesky factor.

Similar to computeCholesky() but with a regularization loop according to the largest eigenvalue and keys Matrix-StartingScaling and Matrix-MaximalScaling.

Returns:
cholesky_factor : TriangularMatrix

The left (lower) Cholesky factor.
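
Examples

A minimal sketch checking the factorization property on a well-conditioned covariance matrix (the entries are illustrative; no regularization is needed in this case):

>>> import openturns as ot
>>> import numpy as np
>>> C = ot.CovarianceMatrix([[4.0, 1.0], [1.0, 3.0]])
>>> L = C.computeRegularizedCholesky()
>>> np.testing.assert_array_almost_equal(L * L.transpose(), C)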

computeSVD(fullSVD=False)

Compute the singular values decomposition (SVD).

The singular values decomposition of a rectangular matrix \mat{M} with size n_r > n_c reads:

\mat{M} = \mat{U} \mat{\Sigma} \Tr{\mat{V}}

where \mat{U} is an n_r \times n_r orthogonal matrix, \mat{\Sigma} is an n_r \times n_c diagonal matrix and \mat{V} is an n_c \times n_c orthogonal matrix.

Parameters:
fullSVD : bool, optional

Whether the null parts of the orthogonal factors are explicitly stored or not. Default is False and computes a reduced SVD.

Returns:
singular_values : Point

The vector of singular values with size n = \min(n_r, n_c) that form the diagonal of the n_r \times n_c matrix \mat{\Sigma} of the SVD.

U : SquareMatrix

The left orthogonal matrix of the SVD.

VT : SquareMatrix

The transposed right orthogonal matrix of the SVD.

Notes

This uses LAPACK’s DGESDD.

Examples

>>> import openturns as ot
>>> import numpy as np
>>> M = ot.Matrix([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
>>> singular_values, U, VT = M.computeSVD(True)
>>> Sigma = ot.Matrix(M.getNbRows(), M.getNbColumns())
>>> for i in range(singular_values.getSize()):
...     Sigma[i, i] = singular_values[i]
>>> np.testing.assert_array_almost_equal(U * Sigma * VT, M)
computeSVDInPlace(fullSVD=False)

Compute the singular values decomposition (SVD) in place.

Unlike computeSVD, this modifies the matrix in place and avoids a copy.

computeSingularValues()

Compute the singular values.

Parameters:
fullSVD : bool, optional

Whether the null parts of the orthogonal factors are explicitly stored or not. Default is False and computes a reduced SVD.

Returns:
singular_values : Point

The vector of singular values with size n = \min(n_r, n_c) that form the diagonal of the n_r \times n_c matrix \mat{\Sigma} of the SVD decomposition.

See also

computeSVD

Examples

>>> import openturns as ot
>>> M = ot.Matrix([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
>>> print(M.computeSingularValues())
[9.52552,0.514301]
computeSingularValuesInPlace()

Compute the singular values in place.

Similar to computeSingularValues() but the matrix is modified in place to avoid copy.

computeSumElements()

Compute the sum of the matrix elements.

Returns:
sum : float

The sum of the elements.

Notes

Compute the sum of elements of the matrix \mat{A} \in \Rset^{m \times n}:

s = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{i,j}

Examples

>>> import openturns as ot
>>> M = ot.Matrix([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
>>> s = M.computeSumElements()
>>> print(s)
21.0
computeTrace()

Compute the trace of the matrix.

Returns:
trace : float

The trace of the matrix.

Examples

>>> import openturns as ot
>>> M = ot.SquareMatrix([[1.0, 2.0], [3.0, 4.0]])
>>> M.computeTrace()
5.0
frobeniusNorm()

Frobenius norm accessor.

Returns:
norm : float

The Frobenius norm ||\mat{A}||_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n a_{ij}^2}.
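
Examples

A minimal usage sketch (the printed value is \sqrt{30}, truncated as in the other examples):

>>> import openturns as ot
>>> M = ot.Matrix([[1.0, 2.0], [3.0, 4.0]])
>>> M.frobeniusNorm()
5.4772...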

getClassName()

Accessor to the object’s name.

Returns:
class_name : str

The object class name (object.__class__.__name__).

getDiagonal(k=0)

Get the k-th diagonal of the matrix.

Parameters:
k : int

The k-th diagonal to extract. Default value is 0.

Returns:
D : Matrix

The k-th diagonal.

Examples

>>> import openturns as ot
>>> M = ot.Matrix([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
>>> diag = M.getDiagonal()
>>> print(diag)
[[ 1 ]
 [ 5 ]
 [ 9 ]]
>>> print(M.getDiagonal(1))
[[ 2 ]
 [ 6 ]]
getDiagonalAsPoint(k=0)

Get the k-th diagonal of the matrix.

Parameters:
k : int

The k-th diagonal to extract. Default value is 0.

Returns:
pt : Point

The k-th diagonal.

Examples

>>> import openturns as ot
>>> M = ot.Matrix([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
>>> pt = M.getDiagonalAsPoint()
>>> print(pt)
[1,5,9]
getDimension()

Accessor to the dimension (the number of rows).

Returns:
dimension : int

The dimension of the matrix, i.e. its number of rows.
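
Examples

A minimal usage sketch:

>>> import openturns as ot
>>> ot.IdentityMatrix(3).getDimension()
3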
getId()

Accessor to the object’s id.

Returns:
id : int

Internal unique identifier.

getImplementation()

Accessor to the underlying implementation.

Returns:
impl : Implementation

A copy of the underlying implementation object.

getName()

Accessor to the object’s name.

Returns:
name : str

The name of the object.

getNbColumns()

Accessor to the number of columns.

Returns:
n_columns : int

The number of columns.
getNbRows()

Accessor to the number of rows.

Returns:
n_rows : int

The number of rows.
inverse()

Compute the inverse of the matrix.

Returns:
inverseMatrix : SymmetricMatrix

The inverse of the matrix.

Examples

>>> import openturns as ot
>>> M = ot.SymmetricMatrix([[4.0, 2.0, 1.0], [2.0, 5.0, 3.0], [1.0, 3.0, 6.0]])
>>> print(67.0 * M.inverse())
[[  21  -9   1 ]
 [  -9  23 -10 ]
 [   1 -10  16 ]]
isDiagonal()

Test whether the matrix is diagonal or not.

Returns:
test : bool

Answer.
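
Examples

A minimal sketch (an identity matrix is diagonal by construction):

>>> import openturns as ot
>>> ot.IdentityMatrix(2).isDiagonal()
True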

isEmpty()

Tell if the matrix is empty.

Returns:
is_empty : bool

True if the matrix contains no element.

Examples

>>> import openturns as ot
>>> M = ot.Matrix([[]])
>>> M.isEmpty()
True
isPositiveDefinite()

Test whether the matrix is positive definite or not.

A matrix \mat{M} is positive definite if \Tr{\vect{z}} \mat{M} \vect{z} is positive for every compatible non-zero column vector \vect{z}.

Returns:
test : bool

Answer.

Notes

This uses LAPACK’s DPOTRF.
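
Examples

A minimal sketch (the identity matrix is positive definite):

>>> import openturns as ot
>>> ot.IdentityMatrix(2).isPositiveDefinite()
True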

reshape(newRowDim, newColDim)

Reshape the matrix.

Parameters:
newRowDim : int

The row dimension of the reshaped matrix.

newColDim : int

The column dimension of the reshaped matrix.

Returns:
MT : Matrix

The reshaped matrix.

Notes

If the size of the reshaped matrix is smaller than the size of the matrix to be reshaped, only the newRowDim × newColDim first elements are kept (in a column-major storage sense). If the size is greater, the new elements are set to zero.

Examples

>>> import openturns as ot
>>> M = ot.Matrix([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
>>> print(M)
[[ 1 2 ]
 [ 3 4 ]
 [ 5 6 ]]
>>> print(M.reshape(1, 6))
1x6
[[ 1 3 5 2 4 6 ]]
>>> print(M.reshape(2, 2))
[[ 1 5 ]
 [ 3 2 ]]
>>> print(M.reshape(2, 6))
2x6
[[ 1 5 4 0 0 0 ]
 [ 3 2 6 0 0 0 ]]
reshapeInPlace(newRowDim, newColDim)

Reshape the matrix, in place.

Parameters:
newRowDim : int

The row dimension of the reshaped matrix.

newColDim : int

The column dimension of the reshaped matrix.

Notes

If the size of the reshaped matrix is smaller than the size of the matrix to be reshaped, only the newRowDim × newColDim first elements are kept (in a column-major storage sense). If the size is greater, the new elements are set to zero. If the size is unchanged, no copy of data is done.

Examples

>>> import openturns as ot
>>> M = ot.Matrix([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
>>> print(M)
[[ 1 2 ]
 [ 3 4 ]
 [ 5 6 ]]
>>> M.reshapeInPlace(1, 6)
>>> print(M)
1x6
[[ 1 3 5 2 4 6 ]]
>>> M.reshapeInPlace(2, 2)
>>> print(M)
[[ 1 5 ]
 [ 3 2 ]]
>>> M.reshapeInPlace(2, 6)
>>> print(M)
2x6
[[ 1 5 0 0 0 0 ]
 [ 3 2 0 0 0 0 ]]
setDiagonal(*args)

Set the k-th diagonal of the matrix.

Parameters:
diag : Point or Matrix or a float

Value(s) used to fill the diagonal of the matrix.

k : int

The k-th diagonal to fill. Default value is 0.

Examples

>>> import openturns as ot
>>> M = ot.Matrix([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
>>> M.setDiagonal([-1, 23, 9])
>>> print(M)
[[ -1  2  3 ]
 [  4 23  6 ]
 [  7  8  9 ]]
>>> M.setDiagonal(1.0)
>>> print(M)
[[ 1 2 3 ]
 [ 4 1 6 ]
 [ 7 8 1 ]]
>>> M.setDiagonal([2, 6, 9])
>>> print(M)
[[ 2 2 3 ]
 [ 4 6 6 ]
 [ 7 8 9 ]]
setName(name)

Accessor to the object’s name.

Parameters:
name : str

The name of the object.

solveLinearSystem(b)

Solve a square linear system whose the present matrix is the operator.

Parameters:
rhs : sequence of float or Matrix with n_r values or rows, respectively

The right hand side member of the linear system.

Returns:
solution : Point or Matrix

The solution of the square linear system.

Notes

This method handles both matrices and vectors as the right-hand side. Note that it is better to use an explicit matrix type when the matrix has properties that can simplify the resolution (see TriangularMatrix).

This uses LAPACK’s DGESV for matrices and DGELSY for vectors.

Examples

>>> import openturns as ot
>>> import numpy as np
>>> M = ot.SquareMatrix([[1.0, 2.0], [3.0, 4.0]])
>>> b = [1.0] * 2
>>> x = M.solveLinearSystem(b)
>>> np.testing.assert_array_almost_equal(M * x, b)
solveLinearSystemInPlace(b)

Solve a rectangular linear system in which the present matrix is the operator.

Similar to solveLinearSystem() except that the matrix is modified in place during the resolution, which avoids allocating an extra copy when the original matrix is not reused.

squareElements()

Square the matrix, i.e. square each element of the matrix.

Examples

>>> import openturns as ot
>>> M = ot.Matrix([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
>>> M.squareElements()
>>> print(M)
[[  1  4 ]
 [  9 16 ]
 [ 25 36 ]]
transpose()

Transpose the matrix.

Returns:
MT : SquareMatrix

The transposed matrix.

Examples

>>> import openturns as ot
>>> M = ot.SquareMatrix([[1.0, 2.0], [3.0, 4.0]])
>>> print(M)
[[ 1 2 ]
 [ 3 4 ]]
>>> print(M.transpose())
[[ 1 3 ]
 [ 2 4 ]]

Examples using the class

Subset Sampling

Calibration without observed inputs

Iterated Functions System

Control algorithm termination