HMatrix¶
- class HMatrix(*args)¶
Hierarchical matrices.
Hierarchical matrices (or HMatrix) are a compressed representation of dense matrices. In many applications, matrix coefficients represent an interaction between two degrees of freedom; when these interactions are smooth, sub-blocks can be replaced by a local low-rank approximation B ≈ UV^T, where B has dimension (m, n), U has dimension (m, k), and V has dimension (n, k). This is only worthwhile when k is much smaller than m and n.
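For intuition only, the following NumPy sketch (not part of the HMatrix API) shows that a block generated by a smooth kernel between two well-separated point sets has a low numerical rank:

```python
import numpy as np

# Interaction block between two well-separated groups of points, built from a
# smooth kernel exp(-|x - y|): such a block is numerically low-rank.
x = np.linspace(0.0, 1.0, 200)     # "rows" degrees of freedom
y = np.linspace(5.0, 6.0, 150)     # "columns" degrees of freedom
B = np.exp(-np.abs(x[:, None] - y[None, :]))

# Truncate the SVD: keep only the singular values above a relative tolerance.
U, s, Vt = np.linalg.svd(B, full_matrices=False)
k = int(np.sum(s > 1e-10 * s[0]))          # numerical rank
B_k = (U[:, :k] * s[:k]) @ Vt[:k, :]       # rank-k approximation U diag(s) V^T

# k is far smaller than min(200, 150): storing U and V is much cheaper than B.
print(k, np.linalg.norm(B - B_k) / np.linalg.norm(B))
```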
In order to obtain this compressed representation, several different steps must be performed:
Clustering: creation of the rows and columns cluster trees.
The vertices where interactions are computed are reordered to improve locality. A binary space partition algorithm recursively divides the vertex set: the root cell contains all vertices, and each cell is split into two new cells until it contains fewer than a given number of vertices. The split is performed orthogonally to the original axes, by cutting the cell along its longest dimension.
The ‘median’ clustering algorithm divides a cell into two cells containing the same number of degrees of freedom.
The ‘geometric’ clustering algorithm divides a cell into two cells of the same geometric size.
The ‘hybrid’ clustering algorithm is a mix of both: it first performs a ‘median’ bisection; if the volumes of the two new cells are very different, a ‘geometric’ split is performed instead. The ‘median’ step is sketched below.
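A minimal sketch of the ‘median’ bisection step, written in plain NumPy with a hypothetical median_bisection helper; this is not the library's internal implementation:

```python
import numpy as np

def median_bisection(vertices, indices, leaf_size=32):
    """Recursively split a set of vertex indices by cutting the bounding box
    of the cell along its longest dimension, at the median coordinate."""
    if len(indices) <= leaf_size:
        return indices                                  # leaf cell of the cluster tree
    extent = vertices[indices].max(axis=0) - vertices[indices].min(axis=0)
    axis = int(np.argmax(extent))                       # longest dimension of the cell
    order = indices[np.argsort(vertices[indices, axis])]
    half = len(order) // 2                              # 'median': equal numbers of vertices
    return [median_bisection(vertices, order[:half], leaf_size),
            median_bisection(vertices, order[half:], leaf_size)]

vertices = np.random.RandomState(0).rand(1000, 3)
tree = median_bisection(vertices, np.arange(1000))
```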
Admissibility: creation of an empty HMatrix structure.
The first step created a full binary cluster tree for the rows and for the columns degrees of freedom. A hierarchical representation of the matrix is now built by checking whether blocks can be replaced by low-rank approximations. The whole matrix represents the interactions of all rows degrees of freedom with all columns degrees of freedom; it cannot be approximated by a single low-rank block, so it is replaced by 4 blocks obtained by considering interactions between the children of the rows and columns cluster nodes. This operation is performed recursively. At each step, the axis-aligned bounding boxes rows_bbox and cols_bbox are computed: if min(diameter(rows_bbox), diameter(cols_bbox)) <= eta*distance(rows_bbox, cols_bbox), the interaction between these rows and columns degrees of freedom is considered to admit a local low-rank approximation and the recursion stops; otherwise it continues until the bottom of the cluster trees is reached. The whole matrix is thus represented by a 4-tree whose leaves contain either a low-rank approximation or a full block. The eta parameter is called the admissibility factor and can be modified. The criterion is sketched below.
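A sketch of the admissibility criterion on axis-aligned bounding boxes, using hypothetical diameter, distance and is_admissible helpers (the library performs this test internally):

```python
import numpy as np

def diameter(bbox):
    """Diameter of an axis-aligned bounding box given as (lower, upper) corners."""
    lower, upper = bbox
    return np.linalg.norm(upper - lower)

def distance(bbox1, bbox2):
    """Distance between two axis-aligned bounding boxes."""
    (l1, u1), (l2, u2) = bbox1, bbox2
    # Per-axis gap, zero when the boxes overlap along that axis.
    gap = np.maximum(0.0, np.maximum(l1 - u2, l2 - u1))
    return np.linalg.norm(gap)

def is_admissible(rows_bbox, cols_bbox, eta=2.0):
    """True when the corresponding block may receive a low-rank approximation."""
    return min(diameter(rows_bbox), diameter(cols_bbox)) <= eta * distance(rows_bbox, cols_bbox)

# Two well-separated clusters: the interaction block is admissible.
rows_bbox = (np.zeros(3), np.ones(3))
cols_bbox = (np.full(3, 5.0), np.full(3, 6.0))
print(is_admissible(rows_bbox, cols_bbox))   # True
```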
Assembly: computation of the coefficients.
The hierarchical structure of the matrix has been computed during the admissibility step. To compute the coefficients, the assemble method is called with a callable that computes the interaction between two nodes. Full blocks are computed by calling this callable for the whole block. If the compression method is ‘SVD’, the low-rank approximation is obtained by first computing the whole block, then computing its singular value decomposition and truncating the rank so that the error does not exceed assemblyEpsilon; this method is accurate but very costly. If the compression method is a variant of ACA, only a few rows and columns are computed; this is much more efficient, but the error may be larger than expected on some problems. A usage sketch follows.
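A usage sketch of the assembly step. The assembleReal call and its callable signature are documented below; the HMatrixFactory.build call and its arguments are an assumption about how the structure of the first two steps is typically created, and require an OpenTURNS build with HMat support:

```python
import math
import openturns as ot

# Vertices supporting the degrees of freedom (1D points here).
vertices = [[i / 99.0] for i in range(100)]

# Assumption: HMatrixFactory.build(sample, outputDimension, symmetric) creates the
# clustered, admissible (still empty) hierarchical structure.
hmat = ot.HMatrixFactory().build(ot.Sample(vertices), 1, True)

# Callable returning the interaction between degrees of freedom i and j.
def kernel(i, j):
    return math.exp(-abs(vertices[i][0] - vertices[j][0]))

hmat.assembleReal(kernel, 'L')   # 'L': symmetric matrix, assemble the lower part
```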
Matrix computations.
Once an HMatrix has been assembled, the usual linear algebra operations can be performed: the matrix can be factorized in-place in order to solve linear systems, or multiplied by a matrix or a vector. Keep in mind that rows and columns are reordered internally, so intermediate results may differ noticeably from the standard dense representation (for instance when computing a Cholesky or LU decomposition).
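Continuing the assumed example above, a factorization followed by a solve might look like this (a sketch, not a prescribed workflow):

```python
import openturns as ot

# hmat is the assembled matrix from the sketch above.
assembled = hmat.copy()          # factorization overwrites the contents in-place
hmat.factorize('LLt')            # one of 'LDLt', 'LLt', 'LU'
b = ot.Point([1.0] * hmat.getNbRows())
x = hmat.solve(b)                # solve op(A)*x = b using the factorization
```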
Methods
- addIdentity(alpha): Add alpha*Identity to the Matrix.
- assemble(*args): Assemble matrix.
- assembleReal(callable, symmetry): Assemble matrix.
- assembleTensor(callable, outputDimension, symmetry): Assemble matrix by block.
- compressionRatio(): Compression ratio accessor.
- copy(): Copy matrix.
- dump(name): Save matrix to a file.
- factorize(method): Factorize matrix.
- fullrkRatio(): Block ratio accessor.
- gemm(transA, transB, alpha, a, b, beta): Multiply matrix in-place self=alpha*op(A)*op(B)+beta*self.
- gemv(trans, alpha, x, beta, y): Multiply vector in-place y=alpha*op(A)*x+beta*y.
- getClassName(): Accessor to the object's name.
- getDiagonal(): Diagonal values accessor.
- getId(): Accessor to the object's id.
- getImplementation(): Accessor to the underlying implementation.
- getName(): Accessor to the object's name.
- getNbColumns(): Accessor to the number of columns.
- getNbRows(): Accessor to the number of rows.
- norm(): Compute norm value.
- scale(alpha): Scale matrix in-place A=alpha*A.
- setName(name): Accessor to the object's name.
- solve(*args): Solve linear system op(A)*x=b, after A has been factorized.
- solveLower(*args): Solve lower linear system op(L)*x=b, after A has been factorized.
- transpose(): Transpose matrix in-place.
- __init__(*args)¶
- addIdentity(alpha)¶
Add alpha*Identity to the Matrix.
- Parameters
- alpha : float
Coefficient.
- assemble(*args)¶
Assemble matrix.
- Parameters
- f : HMatrixRealAssemblyFunction or HMatrixTensorRealAssemblyFunction
Assembly function.
- symmetry : str
Symmetry flag, either N or L.
- assembleReal(callable, symmetry)¶
Assemble matrix.
- Parameters
- f : assembly function
Callable that takes i, j int parameters and returns a float.
- symmetry : str
Symmetry flag, either N or L.
- assembleTensor(callable, outputDimension, symmetry)¶
Assemble matrix by block.
- Parameters
- f : assembly function
Callable that takes i, j int parameters and returns a Matrix.
- outputDimension : int
Block dimension.
- symmetry : str
Symmetry flag, either N or L.
- compressionRatio()¶
Compression ratio accessor.
- Returns
- ratio : 2-tuple of int
Numbers of elements in the compressed and uncompressed forms.
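For instance, reusing the hmat object from the assembly sketch above, the memory gain can be estimated from this tuple:

```python
# Reusing hmat from the assembly sketch above.
stored, dense = hmat.compressionRatio()
print(stored / dense)            # fraction of coefficients actually stored
```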
- copy()¶
Copy matrix.
As factorization overwrites matrix contents, this method is useful to get a copy of assembled matrix before it is factorized.
- Returns
- matrix : HMatrix
Matrix copy.
- dump(name)¶
Save matrix to a file.
- Parameters
- fileName : str
File name to save to.
- factorize(method)¶
Factorize matrix.
- Parameters
- method : str
Factorization method, one of: LDLt, LLt or LU.
Notes
The factorization embeds an automatic regularization procedure based on an approximation of the largest eigenvalue modulus. This approximation is computed with a power iteration, controlled by the ‘HMatrix-LargestEigenValueRelativeError’ and ‘HMatrix-LargestEigenValueIterations’ keys in the ResourceMap.
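A sketch of tuning these keys before factorization; the key names are the ones quoted above, while the values and setter types are assumptions:

```python
import openturns as ot

# Loosen the power-iteration tolerance and cap its number of iterations.
ot.ResourceMap.SetAsScalar('HMatrix-LargestEigenValueRelativeError', 1e-2)
ot.ResourceMap.SetAsUnsignedInteger('HMatrix-LargestEigenValueIterations', 20)
```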
- fullrkRatio()¶
Block ratio accessor.
- Returns
- ratio : 2-tuple of int
Numbers of elements in full blocks and low rank blocks.
- gemm(transA, transB, alpha, a, b, beta)¶
Multiply matrix in-place self=alpha*op(A)*op(B)+beta*self.
- gemv(trans, alpha, x, beta, y)¶
Multiply vector in-place y=alpha*op(A)*x+beta*y.
- Parameters
- trans : str
Whether to use A or A^t: either N or T.
- alpha : float
Coefficient.
- x : sequence of float
Vector to multiply.
- beta : float
Coefficient.
- y : Point
Vector multiplied in-place.
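As a dense NumPy illustration of the update performed (not the HMatrix call itself):

```python
import numpy as np

# y <- alpha * op(A) @ x + beta * y, with op(A) = A for trans='N', A^T for trans='T'.
A = np.array([[2.0, 0.0], [1.0, 3.0]])
x = np.array([1.0, 1.0])
y = np.array([10.0, 10.0])
alpha, beta = 0.5, 2.0
y = alpha * A.T @ x + beta * y   # what gemv('T', alpha, x, beta, y) computes
print(y)                         # [21.5 21.5]
```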
- getClassName()¶
Accessor to the object’s name.
- Returns
- class_name : str
The object class name (object.__class__.__name__).
- getId()¶
Accessor to the object’s id.
- Returns
- id : int
Internal unique identifier.
- getImplementation()¶
Accessor to the underlying implementation.
- Returns
- impl : Implementation
The implementation class.
- getName()¶
Accessor to the object’s name.
- Returns
- name : str
The name of the object.
- getNbColumns()¶
Accessor to the number of columns.
- Returns
- nbColumns : int
Number of columns.
- getNbRows()¶
Accessor to the number of rows.
- Returns
- nbRows : int
Number of rows.
- norm()¶
Compute norm value.
- Returns
- norm : float
Frobenius norm.
- scale(alpha)¶
Scale matrix in-place A=alpha*A.
- Parameters
- alpha : float
Coefficient.
- setName(name)¶
Accessor to the object’s name.
- Parameters
- name : str
The name of the object.
- solve(*args)¶
Solve linear system op(A)*x=b, after A has been factorized.
- solveLower(*args)¶
Solve lower linear system op(L)*x=b, after A has been factorized.
- transpose()¶
Transpose matrix in-place.