Module svd: SVD-based algorithms for matrices and tensors

Package teneva, module svd: SVD-based algorithms.

This module contains a basic implementation of the TT-SVD algorithm (function svd), as well as the original TT-SVD-incomplete algorithm (function svd_incomplete), which implements an efficient construction of a TT-tensor from specially selected tensor elements. The module also contains functions for constructing the truncated SVD decomposition (function matrix_svd) and the skeleton decomposition (function matrix_skeleton) for matrices.
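For instance, a minimal quick-start sketch for the core TT-SVD function (assuming the standard numpy / teneva imports, which are implied in all examples below):

import numpy as np
import teneva

# Build a small full tensor and compress it with TT-SVD:
Z_full = np.cos(np.arange(2**10)).reshape([2] * 10, order='F')
Y = teneva.svd(Z_full, e=1.E-10)

# Reconstruct and check the relative error:
e = np.linalg.norm(teneva.full(Y) - Z_full) / np.linalg.norm(Z_full)
print(f'Erank : {teneva.erank(Y):-6.2f} | Error : {e:-8.2e}')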




teneva.svd.matrix_skeleton(A, e=1e-10, r=1000000000000.0, hermitian=False, rel=False, give_to='m')[source]

Construct truncated skeleton decomposition A = U V for the given matrix.

Parameters:
  • A (np.ndarray) – matrix of the shape [m, n].

  • e (float) – desired approximation accuracy.

  • r (int) – maximum rank for the SVD decomposition.

  • hermitian (bool) – if True, then “hermitian” SVD will be used.

  • rel (bool) – if True, then normalized (relative) singular values will be used when selecting the truncation threshold.

  • give_to (str) – instruction how to distribute the singular values between the factors: U S**0.5, S**0.5 V ('m'; default), U S, V ('l') or U, S V ('r').

Returns:

factor matrix U of the shape [m, q] and factor matrix V of the shape [q, n], where q is the selected rank (note that q <= r).

Return type:

(np.ndarray, np.ndarray)

Examples:

# Shape of the matrix:
m, n = 100, 30

# Build random matrix, which has rank 3,
# as a sum of rank-1 matrices:
A = np.outer(np.random.randn(m), np.random.randn(n))
A += np.outer(np.random.randn(m), np.random.randn(n))
A += np.outer(np.random.randn(m), np.random.randn(n))
# Compute skeleton decomp.:
U, V = teneva.matrix_skeleton(A, e=1.E-10)

# Approximation error:
e = np.linalg.norm(A - U @ V) / np.linalg.norm(A)

print('Shape of U :', U.shape)
print('Shape of V :', V.shape)
print(f'Error      : {e:-8.2e}')

# >>> ----------------------------------------
# >>> Output:

# Shape of U : (100, 3)
# Shape of V : (3, 30)
# Error      : 7.32e-16
#
# Compute skeleton decomp with small rank:
U, V = teneva.matrix_skeleton(A, r=2)

# Approximation error:
e = np.linalg.norm(A - U @ V) / np.linalg.norm(A)
print('Shape of U :', U.shape)
print('Shape of V :', V.shape)
print(f'Error      : {e:-8.2e}')

# >>> ----------------------------------------
# >>> Output:

# Shape of U : (100, 2)
# Shape of V : (2, 30)
# Error      : 4.60e-01
#
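The give_to option is not exercised above; it only changes how the singular values are distributed between the two factors, so the product U @ V should reconstruct A in all three modes. A minimal sketch, reusing the matrix A from the examples above:

# Check that all three give_to modes yield the same product U @ V:
for mode in ['m', 'l', 'r']:
    U, V = teneva.matrix_skeleton(A, e=1.E-10, give_to=mode)
    e = np.linalg.norm(A - U @ V) / np.linalg.norm(A)
    print(f'give_to={mode} | Shape of U : {U.shape} | Error : {e:-8.2e}')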


teneva.svd.matrix_svd(A, e=1e-10, r=1000000000000.0)[source]

Construct truncated SVD decomposition A = U V for the given matrix.

Parameters:
  • A (np.ndarray) – matrix of the shape [m, n].

  • e (float) – desired approximation accuracy.

  • r (int) – maximum rank for the SVD decomposition.

Returns:

factor matrix U of the shape [m, q] and factor matrix V of the shape [q, n], where q is the selected rank (note that q <= r).

Return type:

(np.ndarray, np.ndarray)

Examples:

# Shape of the matrix:
m, n = 100, 30

# Build random matrix, which has rank 3,
# as a sum of rank-1 matrices:
A = np.outer(np.random.randn(m), np.random.randn(n))
A += np.outer(np.random.randn(m), np.random.randn(n))
A += np.outer(np.random.randn(m), np.random.randn(n))
# Compute SVD-decomp.:
U, V = teneva.matrix_svd(A, e=1.E-10)

# Approximation error:
e = np.linalg.norm(A - U @ V) / np.linalg.norm(A)

print('Shape of U :', U.shape)
print('Shape of V :', V.shape)
print(f'Error      : {e:-8.2e}')

# >>> ----------------------------------------
# >>> Output:

# Shape of U : (100, 16)
# Shape of V : (16, 30)
# Error      : 7.42e-16
#
# Compute SVD-decomp. with rank limit r=3:
U, V = teneva.matrix_svd(A, r=3)

# Approximation error:
e = np.linalg.norm(A - U @ V) / np.linalg.norm(A)

print('Shape of U :', U.shape)
print('Shape of V :', V.shape)
print(f'Error      : {e:-8.2e}')

# >>> ----------------------------------------
# >>> Output:

# Shape of U : (100, 3)
# Shape of V : (3, 30)
# Error      : 7.33e-16
#
# Compute SVD-decomp. with lower accuracy (e=1.E-2):
U, V = teneva.matrix_svd(A, e=1.E-2)

# Approximation error:
e = np.linalg.norm(A - U @ V) / np.linalg.norm(A)

print('Shape of U :', U.shape)
print('Shape of V :', V.shape)
print(f'Error      : {e:-8.2e}')

# >>> ----------------------------------------
# >>> Output:

# Shape of U : (100, 3)
# Shape of V : (3, 30)
# Error      : 7.33e-16
#
# Compute SVD-decomp. with too small rank (r=2):
U, V = teneva.matrix_svd(A, r=2)

# Approximation error:
e = np.linalg.norm(A - U @ V) / np.linalg.norm(A)

print('Shape of U :', U.shape)
print('Shape of V :', V.shape)
print(f'Error      : {e:-8.2e}')

# >>> ----------------------------------------
# >>> Output:

# Shape of U : (100, 2)
# Shape of V : (2, 30)
# Error      : 4.37e-01
#
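Since the matrix A above has exact rank 3, the approximation error should drop sharply once the rank limit reaches 3. A short sketch sweeping over r, reusing A from the examples above:

# Error of the truncated SVD for several rank limits:
for r in [1, 2, 3, 4]:
    U, V = teneva.matrix_svd(A, r=r)
    e = np.linalg.norm(A - U @ V) / np.linalg.norm(A)
    print(f'r={r} | Shape of U : {U.shape} | Error : {e:-8.2e}')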


teneva.svd.svd(Y_full, e=1e-10, r=1000000000000.0)[source]

Construct a TT-tensor from the given full tensor using the TT-SVD algorithm.

Parameters:
  • Y_full (np.ndarray) – tensor (multidimensional array) in the full format.

  • e (float) – desired approximation accuracy.

  • r (int) – maximum rank of the constructed TT-tensor.

Returns:

TT-tensor, which represents an approximation with a given accuracy (e) and a TT-rank constraint (r) for the given full tensor.

Return type:

list

Examples:

d = 20              # Dimension number
t = np.arange(2**d) # Tensor will have 2^d elements

# Construct d-dim full array:
Z_full = np.cos(t).reshape([2] * d, order='F')
# Construct TT-tensor by TT-SVD:
Y = teneva.svd(Z_full)

# Convert it back to numpy to check result:
Y_full = teneva.full(Y)

# Compute error for TT-tensor vs full tensor:
e = np.linalg.norm(Y_full - Z_full)
e /= np.linalg.norm(Z_full)
# Size of the original tensor:
print(f'Size (np) : {Z_full.size:-8d}')

# Size of the TT-tensor:
print(f'Size (tt) : {teneva.size(Y):-8d}')

# Eff. rank of the TT-tensor:
print(f'Erank     : {teneva.erank(Y):-8.2f}')

# Rel. error for the TT-tensor vs full tensor:
print(f'Error     : {e:-8.2e}')

# >>> ----------------------------------------
# >>> Output:

# Size (np) :  1048576
# Size (tt) :      152
# Erank     :     2.00
# Error     : 1.91e-14
#
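The rank constraint r can also be used to force a stronger compression than the accuracy e alone would give. A sketch (with a hypothetical random tensor, chosen only for illustration) that truncates at several maximum TT-ranks and reports the resulting error:

d = 10                              # Dimension number
Z_full = np.random.rand(*([2] * d)) # Random full tensor 2^d

# TT-SVD with different maximum TT-ranks:
for r in [1, 2, 4, 8]:
    Y = teneva.svd(Z_full, r=r)
    e = np.linalg.norm(teneva.full(Y) - Z_full)
    e /= np.linalg.norm(Z_full)
    print(f'r={r} | Erank : {teneva.erank(Y):-6.2f} | Error : {e:-8.2e}')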


teneva.svd.svd_matrix(Y_full, e=1e-10, r=1000000000000.0)[source]

Construct a QTT-matrix from the given full matrix using the TT-SVD algorithm.

Parameters:
  • Y_full (np.ndarray) – matrix of the shape 2^q x 2^q in the full format.

  • e (float) – desired approximation accuracy.

  • r (int) – maximum rank of the constructed TT-tensor.

Returns:

TT-tensor / QTT-matrix, which represents an approximation with a given accuracy (e) and a TT-rank constraint (r) for the given full matrix (it has q dimensions and each mode size equals 4).

Return type:

list

Note

The matrix size should be a power of 2, and only square matrices are supported.

Examples:

q = 10   # Matrix size factor
n = 2**q # Matrix mode size

# Construct some matrix:
Z_full = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        Z_full[i, j] = np.cos(i) * j**2
# Construct QTT-matrix / TT-tensor by TT-SVD:
Y = teneva.svd_matrix(Z_full, e=1.E-6)

# Convert it back to numpy to check result:
Y_full = teneva.full_matrix(Y)

# Compute error for QTT-matrix / TT-tensor vs full matrix:
e = np.linalg.norm(Y_full - Z_full)
e /= np.linalg.norm(Z_full)
print(f'Size (np) : {Z_full.size:-8d}')       # Size of original tensor
print(f'Size (tt) : {teneva.size(Y):-8d}')    # Size of the QTT-matrix
print(f'Erank     : {teneva.erank(Y):-8.2f}') # Eff. rank of the QTT-matrix
print(f'Error     : {e:-8.2e}')               # Rel. error for QTT-matrix vs full tensor

# >>> ----------------------------------------
# >>> Output:

# Size (np) :  1048576
# Size (tt) :     1088
# Erank     :     5.71
# Error     : 3.64e-12
#
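The accuracy parameter e controls the trade-off between the size of the QTT-representation and the approximation error. A sketch sweeping over several accuracy levels for the same matrix Z_full as above:

# QTT-compression of Z_full for several accuracy levels:
for eps in [1.E-2, 1.E-4, 1.E-6, 1.E-8]:
    Y = teneva.svd_matrix(Z_full, e=eps)
    err = np.linalg.norm(teneva.full_matrix(Y) - Z_full)
    err /= np.linalg.norm(Z_full)
    print(f'e={eps:-7.1e} | Size : {teneva.size(Y):-6d} | Error : {err:-8.2e}')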


teneva.svd.svd_incomplete(I, Y, idx, idx_many, e=1e-10, r=1000000000000.0)[source]

Construct TT-tensor from the given specially selected samples.

Parameters:
  • I (np.ndarray) – multi-indices for the tensor in the form of an array of the shape [samples, d].

  • Y (np.ndarray) – values of the tensor for the multi-indices I in the form of an array of the shape [samples].

  • idx (np.ndarray) – starting positions in the generated samples for the corresponding dimensions, in the form of an array of the shape [d+1].

  • idx_many (np.ndarray) – numbers of points for the right unfoldings in the generated samples, in the form of an array of the shape [d].

  • e (float) – desired approximation accuracy.

  • r (int, float) – maximum rank of the constructed TT-tensor.

Returns:

TT-tensor, which represents an approximation with a given accuracy (e) and a TT-rank constraint (r) for the full tensor.

Return type:

list

Note

The samples I and the auxiliary arrays idx and idx_many should be generated by the function sample_tt.

Examples:

d = 20              # Dimension number
n = [2] * d         # Shape of the tensor/grid
t = np.arange(2**d) # Tensor will have 2^d elements

# Construct d-dim full array:
Z_full = np.cos(t).reshape([2] * d, order='F')
m = 4 # The expected TT-rank

# Generate special samples (indices) for the tensor:
I_trn, idx, idx_many = teneva.sample_tt(n, m)
# Compute tensor values at the I multi-indices:
Y_trn = np.array([Z_full[tuple(i)] for i in I_trn])
Y = teneva.svd_incomplete(I_trn, Y_trn,
    idx, idx_many, e=1.E-10, r=3) # Construct TT-tensor
teneva.show(Y)                    # Show the tensor

# >>> ----------------------------------------
# >>> Output:

# TT-tensor    20D : |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2|
# <rank>  =    2.0 :   \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/ \2/
#
# Convert it back to numpy to check result:
Y_full = teneva.full(Y)

# Compute error for TT-tensor vs full tensor:
e = np.linalg.norm(Y_full - Z_full)
e /= np.linalg.norm(Z_full)
print(f'Size (np) : {Z_full.size:-8d}')       # Size of original tensor
print(f'Size (tt) : {teneva.size(Y):-8d}')    # Size of the TT-tensor
print(f'Erank     : {teneva.erank(Y):-8.2f}') # Eff. rank of the TT-tensor
print(f'Error     : {e:-8.2e}')               # Rel. error for TT-tensor vs full tensor

# >>> ----------------------------------------
# >>> Output:

# Size (np) :  1048576
# Size (tt) :      152
# Erank     :     2.00
# Error     : 2.24e-15
#
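To see how few tensor values the incomplete algorithm actually needs, one can compare the number of requested samples with the size of the full tensor (reusing I_trn, idx and idx_many from the example above, which are numpy arrays as documented):

# Number of tensor values requested by the incomplete algorithm
# versus the size of the full tensor:
print(f'Requested samples : {len(I_trn):-8d}')
print(f'Shape of idx      : {idx.shape}')
print(f'Shape of idx_many : {idx_many.shape}')
print(f'Full tensor size  : {Z_full.size:-8d}')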