Module act_one: single TT-tensor operations

Package teneva, module act_one: single TT-tensor operations.

This module contains the basic operations with one TT-tensor (Y), including “copy”, “get”, “sum”, etc.




teneva.act_one.copy(Y)[source]

Return a copy of the given TT-tensor.

Parameters:

Y (int, float, list) – TT-tensor (or it may be int, float, or numpy array for convenience).

Returns:

TT-tensor, which is a copy of the given TT-tensor. If Y is a number, then the result will be the same number. If Y is an np.ndarray, then the result will be the corresponding copy in numpy format. If the function’s argument is None, then it will also return None.

Return type:

list

Examples:

# 10-dim random TT-tensor with TT-rank 2:
Y = teneva.rand([5]*10, 2)

Z = teneva.copy(Y) # The copy of Y

print(Y[2][1, 2, 0])
print(Z[2][1, 2, 0])

# >>> ----------------------------------------
# >>> Output:

# 0.1643445611288208
# 0.1643445611288208
#

Note that changes to the copy will not affect the original tensor:

Z[2][1, 2, 0] = 42.

print(Y[2][1, 2, 0])
print(Z[2][1, 2, 0])

# >>> ----------------------------------------
# >>> Output:

# 0.1643445611288208
# 42.0
#

Note that this function also supports numbers and numpy arrays for convenience:

a = teneva.copy(42.)
b = teneva.copy(np.array([1, 2, 3]))
c = teneva.copy(None)
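Since a TT-tensor in teneva is represented as a plain Python list of 3-dimensional numpy cores, a deep copy amounts to copying each core. The following numpy-only sketch illustrates the idea (the helper name tt_copy is hypothetical; this is not teneva's actual implementation):

```python
import numpy as np

def tt_copy(Y):
    # A TT-tensor is a list of 3D cores; copy each core independently.
    return [G.copy() for G in Y]

# A 2-dim TT-tensor with mode sizes (3, 3) and TT-rank 2:
Y = [np.random.rand(1, 3, 2), np.random.rand(2, 3, 1)]

Z = tt_copy(Y)
Z[0][0, 0, 0] = 42.          # Changing the copy ...
assert Y[0][0, 0, 0] != 42.  # ... leaves the original intact
```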


teneva.act_one.get(Y, i, _to_item=True)[source]

Compute the element (or elements) of the TT-tensor.

Parameters:
  • Y (list) – d-dimensional TT-tensor.

  • i (list, np.ndarray) – the multi-index for the tensor (list or 1D array of the length d) or a batch of multi-indices in the form of a list of lists or array of the shape [samples, d].

Returns:

the element of the TT-tensor. If argument i is a batch of multi-indices, then array of the length samples will be returned (the get_many function is called in this case).

Return type:

float

Examples:

n = [10] * 5              # Shape of the tensor
Y0 = np.random.randn(*n)  # Create 5-dim random numpy tensor
Y1 = teneva.svd(Y0)       # Compute TT-tensor from Y0 by TT-SVD
teneva.show(Y1)           # Print the TT-tensor

i = [1, 2, 3, 4, 5]       # Select some tensor element
y1 = teneva.get(Y1, i)    # Compute the element of the TT-tensor
y0 = Y0[tuple(i)]         # Compute the same element of the original tensor
abs(y1-y0)                # Compare original tensor and reconstructed tensor

# >>> ----------------------------------------
# >>> Output:

# TT-tensor     5D : |10|  |10|   |10|   |10|  |10|
# <rank>  =   63.0 :    \10/  \100/  \100/  \10/
#

This function also supports batch mode (for a batch of multi-indices, it calls the function “get_many”):

# Select some tensor elements:
I = [
    [1, 2, 3, 4, 5],
    [0, 0, 0, 0, 0],
    [5, 4, 3, 2, 1],
]

# Compute the elements of the TT-tensor:
y1 = teneva.get(Y1, I)

# Compute the same elements of the original tensor:
y0 = [Y0[tuple(i)] for i in I]

# Compare original tensor and reconstructed tensor:
e = np.max(np.abs(y1-y0))
print(f'Error   : {e:7.1e}')

# >>> ----------------------------------------
# >>> Output:

# Error   : 1.3e-14
#
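Under the hood, evaluating a single element reduces to a chain of small matrix products over the core slices G_k[:, i_k, :]. A minimal numpy-only sketch of this computation (the helper name tt_get is hypothetical, not teneva's actual code):

```python
import numpy as np

def tt_get(Y, i):
    # Multiply the matrix slices G_k[:, i_k, :] from left to right;
    # the running product Q has shape (1, r_k), and the final result
    # is a 1x1 matrix holding the requested element.
    Q = Y[0][:, i[0], :]
    for G, ik in zip(Y[1:], i[1:]):
        Q = Q @ G[:, ik, :]
    return Q[0, 0]

# Random 3-dim TT-tensor with mode sizes (4, 5, 6) and TT-ranks (1, 2, 3, 1):
Y = [np.random.rand(1, 4, 2), np.random.rand(2, 5, 3), np.random.rand(3, 6, 1)]

# Full tensor for reference (contract all cores at once):
Y_full = np.einsum('aib,bjc,ckd->ijk', *Y)

err = abs(tt_get(Y, (1, 2, 3)) - Y_full[1, 2, 3])
```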


teneva.act_one.get_and_grad(Y, i, check_phi=False)[source]

Compute the element of the TT-tensor and gradients of its TT-cores.

Parameters:
  • Y (list) – d-dimensional TT-tensor.

  • i (list, np.ndarray) – the multi-index for the tensor.

  • check_phi (bool) – service flag, should be False.

Returns:

the element y of the TT-tensor at provided multi-index and the TT-tensor of related gradients for all TT-cores.

Return type:

(float, list)

Examples:

lr = 1.E-4                        # Learning rate
n = [4, 5, 6, 7]                  # Shape of the tensor
Y = teneva.rand(n, r=3, seed=44)  # Random TT-tensor
i = [2, 3, 4, 5]                  # Target multi-index for gradient
y, dY = teneva.get_and_grad(Y, i)

Z = teneva.copy(Y)                # Simulating gradient descent
for k in range(len(n)):
    Z[k] -= lr * dY[k]

z = teneva.get(Z, i)
e = teneva.accuracy(Y, Z)

print(f'Old value at multi-index : {y:-12.5e}')
print(f'New value at multi-index : {z:-12.5e}')
print(f'Difference for tensors   : {e:-12.1e}')

# >>> ----------------------------------------
# >>> Output:

# Old value at multi-index :  2.91493e-01
# New value at multi-index :  2.90991e-01
# Difference for tensors   :      8.1e-05
#

We can also perform several GD steps:

Z = teneva.copy(Y)
for step in range(100):
    for k in range(len(n)):
        Z[k] -= lr * dY[k]

z = teneva.get(Z, i)
e = teneva.accuracy(Y, Z)

print(f'Old value at multi-index : {y:-12.5e}')
print(f'New value at multi-index : {z:-12.5e}')
print(f'Difference for tensors   : {e:-12.1e}')

# >>> ----------------------------------------
# >>> Output:

# Old value at multi-index :  2.91493e-01
# New value at multi-index :  2.41494e-01
# Difference for tensors   :      8.1e-03
#
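Because the element y(i) is linear in each core, its gradient with respect to core G_k is nonzero only in the slice j = i_k, where it equals the outer product of the left and right partial products of core slices. A numpy-only sketch of this idea (helper names are hypothetical, and teneva's exact conventions may differ):

```python
import numpy as np

def tt_get_and_grad(Y, i):
    d = len(Y)
    # Left partial products: L[k] = G_0[i_0] @ ... @ G_{k-1}[i_{k-1}]
    L = [np.ones((1, 1))]
    for k in range(d - 1):
        L.append(L[-1] @ Y[k][:, i[k], :])
    # Right partial products: R[k] = G_{k+1}[i_{k+1}] @ ... @ G_{d-1}[i_{d-1}]
    R = [np.ones((1, 1))]
    for k in range(d - 1, 0, -1):
        R.append(Y[k][:, i[k], :] @ R[-1])
    R = R[::-1]
    y = (L[-1] @ Y[-1][:, i[-1], :])[0, 0]
    dY = []
    for k in range(d):
        dG = np.zeros(Y[k].shape)
        # dy / dG_k[a, i_k, b] = L[k][a] * R[k][b]:
        dG[:, i[k], :] = np.outer(L[k], R[k])
        dY.append(dG)
    return y, dY

Y = [np.random.rand(1, 4, 2), np.random.rand(2, 5, 3), np.random.rand(3, 6, 1)]
y, dY = tt_get_and_grad(Y, [1, 2, 3])
```

Since y is linear in every core, the identity sum(dY[k] * Y[k]) == y holds for each k, which is a handy sanity check for the gradient.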


teneva.act_one.get_many(Y, I, _to_item=True)[source]

Compute the elements of the TT-tensor on many indices (batch).

Parameters:
  • Y (list) – d-dimensional TT-tensor.

  • I (list of list, np.ndarray) – the multi-indices for the tensor in the form of a list of lists or array of the shape [samples, d].

Returns:

the elements of the TT-tensor for multi-indices I (array of the length samples).

Return type:

np.ndarray

Examples:

n = [10] * 5             # Shape of the tensor
Y0 = np.random.randn(*n) # Create 5-dim random numpy tensor
Y1 = teneva.svd(Y0)      # Compute TT-tensor from Y0 by TT-SVD
teneva.show(Y1)          # Print the TT-tensor

# Select some tensor elements:
I = [
    [1, 2, 3, 4, 5],
    [0, 0, 0, 0, 0],
    [5, 4, 3, 2, 1],
]

# Compute the elements of the TT-tensor:
y1 = teneva.get_many(Y1, I)

# Compute the same elements of the original tensor:
y0 = [Y0[tuple(i)] for i in I]

# Compare original tensor and reconstructed tensor:
e = np.max(np.abs(y1-y0))
print(f'Error   : {e:7.1e}')

# >>> ----------------------------------------
# >>> Output:

# TT-tensor     5D : |10|  |10|   |10|   |10|  |10|
# <rank>  =   63.0 :    \10/  \100/  \100/  \10/
# Error   : 1.5e-14
#
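The batch computation can be vectorized over the samples: keep one running row vector per multi-index and contract the cores one by one. A numpy-only sketch (the helper name tt_get_many is hypothetical, not teneva's actual code):

```python
import numpy as np

def tt_get_many(Y, I):
    I = np.asarray(I)
    # One running row vector per sample; Q has shape (samples, r_k).
    Q = Y[0][:, I[:, 0], :][0]
    for k in range(1, len(Y)):
        G = Y[k][:, I[:, k], :]          # shape (r_k, samples, r_{k+1})
        Q = np.einsum('sa,asb->sb', Q, G)
    return Q[:, 0]

Y = [np.random.rand(1, 4, 2), np.random.rand(2, 5, 3), np.random.rand(3, 6, 1)]
I = [[0, 0, 0], [1, 2, 3], [3, 4, 5]]
y = tt_get_many(Y, I)

# Reference: contract to the full tensor and index it directly:
Y_full = np.einsum('aib,bjc,ckd->ijk', *Y)
err = max(abs(y[s] - Y_full[tuple(i)]) for s, i in enumerate(I))
```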


teneva.act_one.getter(Y, compile=True)[source]

Build the fast getter function to compute the element of the TT-tensor.

Parameters:
  • Y (list) – TT-tensor.

  • compile (bool) – flag, if True, then the getter will be called one time with a random multi-index to compile its code.

Returns:

the function that computes the element of the TT-tensor. It has one argument k (list or np.ndarray of the length d) which is the multi-index for the tensor.

Return type:

function

Note

Note that the gain from using this getter instead of the base function “get” appears only in the case of many requests for calculating the tensor value (otherwise, the time spent on compiling the getter may turn out to be significant). Also note that this function requires “numba” package to be installed.

Attention: this function will be removed in the future! Use the “get_many” function instead (it’s faster in most cases).

Examples:

# Note that numba package is required for this function

n = [10] * 5              # Shape of the tensor
Y0 = np.random.randn(*n)  # Create 5-dim random numpy tensor
Y1 = teneva.svd(Y0)       # Compute TT-tensor from Y0 by TT-SVD
get = teneva.getter(Y1)   # Build (compile) function to compute the element of the TT-tensor
k = (1, 2, 3, 4, 5)       # Select some tensor element
y1 = get(k)               # Compute the element of the TT-tensor
y0 = Y0[k]                # Compute the same element of the original tensor
y1 - y0                   # Compare the original and reconstructed values

# >>> ----------------------------------------
# >>> Output:

# -5.218048215738236e-15
#


teneva.act_one.interface(Y, P=None, i=None, norm='linalg', ltr=False)[source]

Generate interface vectors for provided TT-tensor.

Parameters:
  • Y (list) – d-dimensional TT-tensor.

  • P (list, np.ndarray) – optional weights for mode indices from left to right (list of lists of the length d; or just one list if the weights are the same for all modes and all mode sizes are equal).

  • i (list, np.ndarray) – optional multi-index for the tensor.

  • norm (str) – optional norm function to use (it may be ‘linalg’ [‘l’] for the usage of the np.linalg.norm or ‘natural’ [‘n’] for usage of the natural norm, i.e., the related mode size; or it may be None).

  • ltr (bool) – the direction of computation of the interface vectors (“left to right” if True and “right to left” if False).

Returns:

list of d+1 interface vectors. Note that the first and last vectors always have length 1.

Return type:

list

Examples:

n = [4, 5, 6, 7]         # Shape of the tensor
Y = teneva.rand(n, r=3)  # Create 4-dim random TT-tensor
phi_r = teneva.interface(Y)
phi_l = teneva.interface(Y, ltr=True)

print('\nRight:')
for phi in phi_r:
    print(phi)

print('\nLeft:')
for phi in phi_l:
    print(phi)

# >>> ----------------------------------------
# >>> Output:

#
# Right:
# [-1.]
# [0.68813332 0.53462172 0.49056309]
# [ 0.02724276  0.17567491 -0.98407122]
# [ 0.28219429 -0.45639302 -0.84384346]
# [1.]
#
# Left:
# [1.]
# [-0.82889095 -0.55022389 -0.10096271]
# [ 0.55175562 -0.81512821  0.1764419 ]
# [ 0.65082799 -0.37736438  0.65880123]
# [-1.]
#
n = [4, 5, 6, 7]         # Shape of the tensor
Y = teneva.rand(n, r=3)  # Create 4-dim random TT-tensor
i = [2, 3, 4, 5]         # Target multi-index
phi_r = teneva.interface(Y, i=i)
phi_l = teneva.interface(Y, i=i, ltr=True)

print('\nRight:')
for phi in phi_r:
    print(phi)

print('\nLeft:')
for phi in phi_l:
    print(phi)

# >>> ----------------------------------------
# >>> Output:

#
# Right:
# [-1.]
# [0.3736717  0.13703245 0.917383  ]
# [ 0.15999998  0.97874472 -0.1282918 ]
# [-0.73153711  0.26369271 -0.62874447]
# [1.]
#
# Left:
# [1.]
# [ 0.72667917 -0.39223735 -0.56399224]
# [ 0.54988977 -0.5076079   0.66329139]
# [ 0.54448258 -0.61483454 -0.57054116]
# [-1.]
#
n = [4, 5, 6, 7]         # Shape of the tensor
Y = teneva.rand(n, r=3)  # Create 4-dim random TT-tensor
i = [2, 3, 4, 5]         # Target multi-index
P = [                    # Weights for each mode
    [0.1, 0.2, 0.3, 0.4],
    [0.1, 0.2, 0.3, 0.4, 0.5],
    [0.1, 0.2, 0.3, 0.4, 0.5, 0.6],
    [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]]
phi_r = teneva.interface(Y, P, i)
phi_l = teneva.interface(Y, P, i, ltr=True)

print('\nRight:')
for phi in phi_r:
    print(phi)

print('\nLeft:')
for phi in phi_l:
    print(phi)

# >>> ----------------------------------------
# >>> Output:

#
# Right:
# [-1.]
# [ 0.02712957  0.79077339 -0.61150751]
# [0.30447033 0.7558563  0.57963702]
# [0.87461345 0.48475263 0.0081361 ]
# [1.]
#
# Left:
# [1.]
# [ 0.55886258 -0.31423024  0.76741903]
# [-0.96060732 -0.24796613 -0.12548457]
# [-0.81379032  0.03462715 -0.58012609]
# [-1.]
#
n = [7] * 4              # Shape of the tensor
Y = teneva.rand(n, r=3)  # Create 4-dim random TT-tensor
i = [2, 3, 4, 5]         # Target multi-index
p = [                    # Weights (same for all modes)
    0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
phi_r = teneva.interface(Y, p, i)
phi_l = teneva.interface(Y, p, i, ltr=True)

print('\nRight:')
for phi in phi_r:
    print(phi)

print('\nLeft:')
for phi in phi_l:
    print(phi)

# >>> ----------------------------------------
# >>> Output:

#
# Right:
# [1.]
# [-0.32868849  0.94421494  0.02054309]
# [-0.99678302  0.00501843 -0.07999011]
# [ 0.55197584 -0.68846358 -0.47046846]
# [1.]
#
# Left:
# [1.]
# [-0.45780124  0.79627937 -0.39542028]
# [-0.76947865 -0.16291856 -0.61754364]
# [ 0.06665077 -0.90158264  0.4274417 ]
# [1.]
#
n = [7] * 4              # Shape of the tensor
Y = teneva.rand(n, r=3)  # Create 4-dim random TT-tensor
i = [2, 3, 4, 5]         # Target multi-index
p = [                    # Weights (same for all modes)
    0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
phi_r = teneva.interface(Y, p, i, norm=None)
phi_l = teneva.interface(Y, p, i, norm=None, ltr=True)

print('\nRight:')
for phi in phi_r:
    print(phi)

print('\nLeft:')
for phi in phi_l:
    print(phi)

# >>> ----------------------------------------
# >>> Output:

#
# Right:
# [-0.04710111]
# [-0.05334143 -0.11429042 -0.11918024]
# [ 0.05111053 -0.15246171  0.26067213]
# [ 0.58188778  0.419016   -0.11394976]
# [1.]
#
# Left:
# [1.]
# [0.19713772 0.03388943 0.27447725]
# [ 0.05784957  0.03471187 -0.17173144]
# [-0.05027864 -0.05818472 -0.05735637]
# [-0.04710111]
#
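For a fixed multi-index, the right-to-left interface vectors can be built by multiplying the core slices onto a running vector and normalizing at each step. The numpy-only sketch below illustrates this construction with np.linalg.norm normalization (the helper name is hypothetical, and teneva's exact sign and weighting conventions are not reproduced here):

```python
import numpy as np

def interface_rtl(Y, i):
    # phi[d] = [1]; phi[k] = normalize(G_k[:, i_k, :] @ phi[k+1]).
    d = len(Y)
    phi = [None] * (d + 1)
    phi[d] = np.ones(1)
    for k in range(d - 1, -1, -1):
        v = Y[k][:, i[k], :] @ phi[k + 1]
        phi[k] = v / np.linalg.norm(v)
    return phi

Y = [np.random.rand(1, 4, 3), np.random.rand(3, 5, 3), np.random.rand(3, 6, 1)]
phi = interface_rtl(Y, [1, 2, 3])
```

As in the examples above, the list contains d + 1 vectors, the first and last of length 1, and each vector has unit Euclidean norm.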


teneva.act_one.mean(Y, P=None, norm=True)[source]

Compute the mean value of the TT-tensor with the given input probabilities.

Parameters:
  • Y (list) – TT-tensor.

  • P (list) – optional probabilities for each dimension. It is a list of length d (the number of tensor dimensions), where each element is also a list whose length equals the number of tensor elements along the related dimension. Hence, P[m][i] is the probability of the i-th input for the m-th mode (dimension).

  • norm (bool) – service (inner) flag, should be True.

Returns:

the mean value of the TT-tensor.

Return type:

float

Examples:

Y = teneva.rand([5]*10, 2) # 10-dim random TT-tensor with TT-rank 2
m = teneva.mean(Y)         # The mean value
Y_full = teneva.full(Y)    # Compute tensor in the full format to check the result
m_full = np.mean(Y_full)   # The mean value for the numpy array
e = abs(m - m_full)        # Compute error for TT-tensor vs full tensor
print(f'Error     : {e:-8.2e}')

# >>> ----------------------------------------
# >>> Output:

# Error     : 1.91e-21
#

The probabilities of the tensor inputs may also be set:

n = [5]*10                   # Shape of the tensor
Y = teneva.rand(n, 2)        # 10-dim random TT-tensor with TT-rank 2
P = [np.zeros(k) for k in n] # The "probability"
teneva.mean(Y, P)            # The mean value

# >>> ----------------------------------------
# >>> Output:

# 0.0
#
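The mean is cheap to compute in the TT-format: contract each core with its mode's probability vector (uniform weights 1/n_k for the plain mean) and multiply the resulting small matrices. A numpy-only sketch (the helper name tt_mean is hypothetical, not teneva's actual code):

```python
import numpy as np

def tt_mean(Y, P=None):
    Q = np.ones((1, 1))
    for k, G in enumerate(Y):
        # Probability vector for mode k (uniform if P is not given):
        p = np.ones(G.shape[1]) / G.shape[1] if P is None else np.asarray(P[k])
        Q = Q @ np.einsum('i,aib->ab', p, G)
    return Q[0, 0]

Y = [np.random.rand(1, 4, 2), np.random.rand(2, 5, 3), np.random.rand(3, 6, 1)]
err = abs(tt_mean(Y) - np.mean(np.einsum('aib,bjc,ckd->ijk', *Y)))
```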


teneva.act_one.norm(Y, use_stab=False)[source]

Compute Frobenius norm of the given TT-tensor.

Parameters:
  • Y (list) – TT-tensor.

  • use_stab (bool) – if this flag is set, then the function will also return a second value p, which is the power-of-two scaling factor.

Returns:

Frobenius norm of the TT-tensor.

Return type:

float

Examples:

Y = teneva.rand([5]*10, 2) # 10-dim random TT-tensor with TT-rank 2
v = teneva.norm(Y)                # Compute the Frobenius norm
print(v)                          # Print the resulting value

# >>> ----------------------------------------
# >>> Output:

# 283.64341295400476
#
Y_full = teneva.full(Y)           # Compute tensor in the full format to check the result

v_full = np.linalg.norm(Y_full)
print(v_full)                     # Print the resulting value from full tensor

e = abs((v - v_full)/v_full)      # Compute error for TT-tensor vs full tensor
print(f'Error     : {e:-8.2e}')   # Rel. error

# >>> ----------------------------------------
# >>> Output:

# 283.6434129540049
# Error     : 4.01e-16
#
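The Frobenius norm can be computed without ever forming the full tensor: contracting the TT-tensor with itself yields, for each core, a transfer matrix built from Kronecker products of its slices. A numpy-only sketch (the helper name is hypothetical; this approach is inefficient for large ranks, but it shows the idea):

```python
import numpy as np

def tt_norm(Y):
    # ||Y||^2 = prod_k M_k with M_k = sum_i kron(G_k[:, i, :], G_k[:, i, :]).
    Q = np.ones((1, 1))
    for G in Y:
        M = sum(np.kron(G[:, i, :], G[:, i, :]) for i in range(G.shape[1]))
        Q = Q @ M
    return np.sqrt(Q[0, 0])

Y = [np.random.rand(1, 4, 2), np.random.rand(2, 5, 3), np.random.rand(3, 6, 1)]
err = abs(tt_norm(Y) - np.linalg.norm(np.einsum('aib,bjc,ckd->ijk', *Y)))
```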


teneva.act_one.qtt_to_tt(Y, q)[source]

Transform the QTT-tensor into a TT-tensor.

Parameters:
  • Y (list) – QTT-tensor. It is a d*q-dimensional tensor with mode size 2.

  • q (int) – quantization factor, i.e., the mode size of the TT-tensor will be n = 2^q.

Returns:

TT-tensor. It is a d-dimensional tensor with mode size 2^q.

Return type:

list

Examples:

d = 4                         # Dimension of the tensor
q = 5                         # Quantization value (n=2^q)
r = [                         # TT-ranks of the QTT-tensor
    1,
    3, 4, 5, 6, 7,
    5, 4, 3, 6, 7,
    5, 4, 3, 6, 7,
    5, 4, 3, 6, 1,
]

# Random QTT-tensor:
Y = teneva.rand([2]*(d*q), r)

# Related TT-tensor:
Z = teneva.qtt_to_tt(Y, q)

teneva.show(Y)                # Show QTT-tensor
print()
teneva.show(Z)                # Show TT-tensor

# >>> ----------------------------------------
# >>> Output:

# TT-tensor    20D : |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2| |2|
# <rank>  =    5.0 :   \3/ \4/ \5/ \6/ \7/ \5/ \4/ \3/ \6/ \7/ \5/ \4/ \3/ \6/ \7/ \5/ \4/ \3/ \6/
#
# TT-tensor     4D : |32| |32| |32| |32|
# <rank>  =    7.0 :    \7/  \7/  \7/
#

We can check that values of the QTT-tensor and TT-tensor are the same:

# Multi-index for QTT-tensor:
i = [
    0, 1, 1, 0, 0,
    0, 0, 1, 1, 0,
    0, 1, 1, 1, 1,
    0, 1, 1, 1, 0,
]

# Related multi-index for TT-tensor:
j = teneva.ind_qtt_to_tt(i, q)

print(f'QTT value : {teneva.get(Y, i):-14.6f}')
print(f' TT value : {teneva.get(Z, j):-14.6f}')

# >>> ----------------------------------------
# >>> Output:

# QTT value :       4.067825
#  TT value :       4.067825
#

We can also transform the TT-tensor back into a QTT-tensor:

U = teneva.tt_to_qtt(Z)

teneva.accuracy(Y, U)

# >>> ----------------------------------------
# >>> Output:

# 1.3084361360868113e-08
#
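The transformation itself is a sequence of core contractions: every group of q mode-2 cores is merged into a single core with mode size 2^q. The numpy-only sketch below merges one such group, assuming the first QTT index is the fastest-varying one (this ordering convention is an assumption, not taken from teneva's source):

```python
import numpy as np

def merge_cores(cores):
    # Merge cores with mode size 2 into one core with combined mode size,
    # with the first index varying fastest: m = i1 + 2*i2 + 4*i3 + ...
    C = cores[0]
    for G in cores[1:]:
        r0, n1, _ = C.shape
        _, n2, r2 = G.shape
        C = np.einsum('aib,bjc->ajic', C, G).reshape(r0, n1 * n2, r2)
    return C

cores = [np.random.rand(1, 2, 2), np.random.rand(2, 2, 3), np.random.rand(3, 2, 1)]
C = merge_cores(cores)   # single core with mode size 2^3 = 8

# Check one element against the chain product of the original slices:
i1, i2, i3 = 1, 0, 1
lhs = C[:, i1 + 2*i2 + 4*i3, :][0, 0]
rhs = (cores[0][:, i1, :] @ cores[1][:, i2, :] @ cores[2][:, i3, :])[0, 0]
```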


teneva.act_one.sum(Y)[source]

Compute sum of all tensor elements.

Parameters:

Y (list) – TT-tensor.

Returns:

the sum of all tensor elements.

Return type:

float

Examples:

Y = teneva.rand([10, 12, 8, 9, 30], 2) # 5-dim random TT-tensor with TT-rank 2
teneva.sum(Y)                          # Sum of the TT-tensor elements

# >>> ----------------------------------------
# >>> Output:

# -10.421669993532463
#
Z = teneva.full(Y) # Compute tensor in the full format to check the result
np.sum(Z)

# >>> ----------------------------------------
# >>> Output:

# -10.421669993532458
#
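Summation also factorizes over the cores: sum each core over its mode index and multiply the resulting small matrices. A numpy-only sketch (the helper name tt_sum is hypothetical, not teneva's actual code):

```python
import numpy as np

def tt_sum(Y):
    # Sum each core over the mode axis, then take the chain product;
    # the final 1x1 matrix holds the sum of all tensor elements.
    Q = np.ones((1, 1))
    for G in Y:
        Q = Q @ G.sum(axis=1)
    return Q[0, 0]

Y = [np.random.rand(1, 4, 2), np.random.rand(2, 5, 3), np.random.rand(3, 6, 1)]
err = abs(tt_sum(Y) - np.sum(np.einsum('aib,bjc,ckd->ijk', *Y)))
```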


teneva.act_one.tt_to_qtt(Y, e=1e-12, r=100)[source]

Transform the TT-tensor into a QTT-tensor.

Parameters:
  • Y (list) – TT-tensor. It is a d-dimensional tensor with mode size n, which is a power of two, i.e., n = 2^q.

  • e (float) – desired approximation accuracy.

  • r (int) – maximum rank for the SVD decomposition.

Returns:

QTT-tensor. It is d * q-dimensional tensor with mode size 2.

Return type:

list

Examples:

d = 4                         # Dimension of the tensor
n = [32] * d                  # Shape of the tensor
r = [1, 4, 3, 6, 1]           # TT-ranks of the tensor
Y = teneva.rand(n, r)         # Random TT-tensor
Z = teneva.tt_to_qtt(Y)       # Related QTT-tensor

teneva.show(Y)                # Show TT-tensor
print()
teneva.show(Z)                # Show QTT-tensor

# >>> ----------------------------------------
# >>> Output:

# TT-tensor     4D : |32| |32| |32| |32|
# <rank>  =    4.0 :    \4/  \3/  \6/
#
# TT-tensor    20D : |2| |2| |2| |2| |2| |2| |2|  |2|  |2| |2| |2| |2|  |2|  |2|  |2| |2|  |2| |2| |2| |2|
# <rank>  =    9.2 :   \2/ \4/ \8/ \8/ \4/ \8/ \16/ \12/ \6/ \3/ \6/ \12/ \24/ \12/ \6/ \12/ \8/ \4/ \2/
#

We can check that values of the TT-tensor and QTT-tensor are the same:

# Multi-index for TT-tensor:
i = [5, 10, 20, 30]

# Related multi-index for QTT-tensor:
j = teneva.ind_tt_to_qtt(i, n[0])

print(f' TT value : {teneva.get(Y, i):-14.6f}')
print(f'QTT value : {teneva.get(Z, j):-14.6f}')

# >>> ----------------------------------------
# >>> Output:

#  TT value :      -0.144598
# QTT value :      -0.144598
#

We can also transform the QTT-tensor back into a TT-tensor:

q = int(np.log2(n[0]))
U = teneva.qtt_to_tt(Z, q)

teneva.accuracy(Y, U)

# >>> ----------------------------------------
# >>> Output:

# 1.9914054150840573e-08
#

We can also perform the transformation with a limited maximum TT-rank:

Z = teneva.tt_to_qtt(Y, r=20)
teneva.show(Z)

U = teneva.qtt_to_tt(Z, q)
teneva.accuracy(Y, U)

# >>> ----------------------------------------
# >>> Output:

# TT-tensor    20D : |2| |2| |2| |2| |2| |2| |2|  |2|  |2| |2| |2| |2|  |2|  |2|  |2| |2|  |2| |2| |2| |2|
# <rank>  =    8.9 :   \2/ \4/ \8/ \8/ \4/ \8/ \16/ \12/ \6/ \3/ \6/ \12/ \20/ \12/ \6/ \12/ \8/ \4/ \2/
#
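The transformation splits every core: the mode index of size 2^q is reshaped into q binary indices and the core is factored by truncated SVDs. The numpy-only sketch below performs a single split of a mode-4 core into two mode-2 cores; the binary index convention (first index fastest) is an assumption, not taken from teneva's source:

```python
import numpy as np

# One core with ranks (3, 5) and mode size 4 = 2^2:
G = np.random.rand(3, 4, 5)
r1, n, r2 = G.shape

# Split the mode index i = i1 + 2*i2 and group (rank, i1) x (i2, rank):
A = G.reshape(r1, 2, 2, r2)                      # axes: (a, i2, i1, c)
M = A.transpose(0, 2, 1, 3).reshape(r1 * 2, 2 * r2)

# Factor by SVD (the truncation threshold plays the role of the accuracy e):
U, s, V = np.linalg.svd(M, full_matrices=False)
r = int((s > 1e-12).sum())
G1 = U[:, :r].reshape(r1, 2, r)                  # first new core
G2 = (np.diag(s[:r]) @ V[:r]).reshape(r, 2, r2)  # second new core

# Reconstruct and compare with the original core:
R = np.einsum('aib,bjc->ajic', G1, G2).reshape(r1, 4, r2)
err = np.max(np.abs(R - G))
```

Repeating this split q - 1 times per core (and doing so for every core of the TT-tensor) yields the full QTT representation, with the rank parameter r bounding each SVD truncation.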