Tensor trains are breaking the curse of dimensionality
Speaker: Prof. Eugene E. Tyrtyshnikov
Institute of Numerical Mathematics
Russian Academy of Sciences
Date & Time: 27 Aug 2009 (Thursday) 10:30 - 11:30


It can be argued that many, and possibly most, modern applications (quantum chemistry, financial models, stochastic PDEs, data mining, etc.) give rise to multi-index arrays (tensors) with tens, hundreds or even thousands of indices. The number of indices (axes, ways, dimensions) is called the dimensionality. In the physical world it is just 3, but there are important problems where it is much larger. The curse of dimensionality refers to the fact that the amount of data (all entries of a d-way array) grows exponentially in the number of ways. Such objects cannot be treated directly and require low-parametric representations. However, the ubiquitous formats (e.g. the canonical and Tucker formats) have serious drawbacks and rather impede large-scale computations in really many dimensions. We discuss new representation and approximation tools based on dimensionality reduction schemes leading to tensor trains (the TT decomposition) [1,2,3]. Behind many advantages of tensor trains is the fact that a tensor is seen through nicely structured factorizations of related unfolding matrices. As a result, we obtain efficient algorithms for basic tensor operations with complexity linear in the number of indices. These algorithms build upon the TT decomposition and intrinsically possess the robustness of the SVD. Numerical results are given to demonstrate the efficiency of the algorithms.
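The idea of computing a tensor train through structured factorizations of unfolding matrices can be sketched in a few lines of NumPy. The sketch below is an illustration, not the speaker's implementation: it applies a sequence of truncated SVDs to successive unfoldings (the function names `tt_svd`, `tt_to_full` and the absolute singular-value cutoff `eps` are assumptions of this sketch), yielding one three-way core per index so that the storage grows linearly in the number of indices.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Sketch of a TT decomposition via successive SVDs of unfoldings.

    Singular values below the absolute cutoff `eps` are truncated,
    which is where the SVD's robustness enters the construction.
    """
    shape = tensor.shape
    d = len(shape)
    cores = []
    r_prev = 1
    # First unfolding: rows indexed by the first index, columns by the rest.
    C = tensor.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps)))          # truncated TT rank at this cut
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        # Fold the remainder so the next index joins the row dimension.
        C = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full d-way array (for checking)."""
    full = cores[0]                               # shape (1, n0, r1)
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape(full.shape[1:-1])         # drop the boundary ranks of 1
```

As a quick sanity check, the 4-way tensor with entries i + j + k + l has TT ranks equal to 2 at every cut, so `tt_svd` should recover it exactly with tiny cores even though the full array has 4·5·6·7 = 840 entries.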