PITTS (Parallel Iterative Tensor-Train Solvers) is a small header-only C++20 library for numerical algorithms with low-rank tensor approximations in tensor-train form (TT format, see [3] and [4]). Algorithms are parallelized for multi-core CPU clusters using OpenMP and MPI.
It currently provides a fast TT-SVD implementation (an algorithm to compress a dense tensor into the TT format, see [1]) as well as methods for solving linear systems (symmetric and non-symmetric, see [2]) in tensor-train format: TT-GMRES [5], TT-MALS [6] and TT-AMEn [7].
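To illustrate the TT format itself (a conceptual sketch in plain Python, not PITTS's actual data layout or API): a d-dimensional tensor is stored as d third-order "cores", and a single entry is recovered as a product of matrix slices, one per dimension.

```python
def tt_entry(cores, index):
    """Evaluate T[i1,...,id] as the matrix product G0[:, i1, :] @ ... @ Gd-1[:, id, :].

    Each core is a nested list core[alpha][i][beta] of shape (r_prev, n_k, r_next),
    with boundary ranks r_0 = r_d = 1.
    """
    vec = [1.0]  # start with the 1x1 "row vector" for r_0 = 1
    for core, i in zip(cores, index):
        mat = [row[i] for row in core]  # slice for mode index i, shape (r_prev, r_next)
        vec = [sum(vec[a] * mat[a][b] for a in range(len(vec)))
               for b in range(len(mat[0]))]
    return vec[0]  # r_d = 1, so the result is a scalar

# rank-1 example: T[i,j,k] = x[i] * y[j] * z[k], so all TT ranks are 1
x, y, z = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
cores = [
    [[[v] for v in x]],  # shape (1, 2, 1)
    [[[v] for v in y]],
    [[[v] for v in z]],
]
print(tt_entry(cores, (1, 0, 1)))  # 2 * 3 * 6 = 36.0
```

Real TT algorithms (like those in PITTS) never reconstruct entries one by one; they work directly on the cores.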
Bindings to other languages / libraries:
- Python: complete functionality using pybind11 with numpy-compatible interface
- ITensor (C++): conversion to/from MPS and MPO
- Julia (JlCxx): basic data types and TSQR algorithm
You can get a copy of the repository from https://github.com/melven/pitts:

```
git clone https://github.com/melven/pitts.git
```

Building PITTS requires:

- CMake >= 3.18 (tested with 3.23, 3.26 and 3.28)
- GCC >= 11.1 (or a C++20 compliant compiler, tested with GCC 12 and 13 and with LLVM 16)
- OpenMP (usually included in the compiler)
- MPI (tested with OpenMPI 4.1 and MPICH 4.1)
- LAPACK (tested with Intel MKL 2022.1 and 2023.2)
- Eigen >= 3.3.9 (3.3.8 has a C++20 bug; tested with 3.4.0 and with the faster master branch from 2023-10)
- cereal (tested with 1.3)
- Python >= 3.6 (tested with 3.9 and 3.10)
- pybind11 (optional, tested with 2.9 and 2.10)
- JlCxx (optional, tested with 0.11)
- likwid (optional, tested with 5.2)
The version numbers listed as "tested with" above are the ones used in the CI builds.
Simply configure with CMake and compile; on a Linux system this is usually done by:

```
cd pitts
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
```

Internally, PITTS uses googletest (with patches for MPI parallelization) for testing the C++ code, and Python's unittest to check the behavior of the Python interface. To run the tests:
```
make check
```

The tests run with different numbers of OpenMP threads and MPI processes. They launch multiple processes with mpiexec, or with the SLURM command srun when a SLURM queueing system is detected.
Currently, PITTS can be used from C++ and Python.

PITTS is intended as a header-only library, so its data types and algorithms can be included directly:

```cpp
#include "pitts_tensortrain.hpp"
#include "pitts_tensortrain_random.hpp"

void someFunction()
{
  // create a random tensor in TT format with dimensions 2^5 and TT ranks [2,4,4,2]
  PITTS::TensorTrain<double> tt(2, 5);
  tt.setTTranks({2, 4, 4, 2});

  // most algorithms in PITTS are defined as free functions with overloads for different data types;
  // this calls PITTS::randomize(PITTS::TensorTrain<double>&)
  randomize(tt);
}
```

As PITTS heavily uses templates and C++20 features, using it from C++ code requires a C++20 compliant compiler (with C++20 enabled, of course).
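To see why the dimensions and TT ranks in the example above matter, here is a back-of-the-envelope storage count (a plain-Python sketch, not part of the PITTS API): a TT tensor stores one core of size r_{k-1} * n_k * r_k per dimension, while a dense tensor stores n^d entries.

```python
def tt_entries(dims, ranks):
    """Number of stored values in a TT tensor: sum over cores of r_{k-1} * n_k * r_k."""
    r = [1] + list(ranks) + [1]  # boundary ranks are 1
    return sum(r[k] * dims[k] * r[k + 1] for k in range(len(dims)))

# the tensor from the example above: dimensions 2^5, TT ranks [2,4,4,2]
print(tt_entries([2] * 5, [2, 4, 4, 2]))  # 72 values (vs. 2**5 = 32 dense)

# for such a tiny tensor the TT form is actually larger; the compression
# pays off in higher dimensions, e.g. 2^50 with all TT ranks equal to 10:
print(tt_entries([2] * 50, [10] * 49))    # 9640 values vs. 2**50 ~ 1.1e15 dense
```

This linear-in-d storage (for bounded ranks) is what makes the TT format attractive for high-dimensional problems.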
For quick tests, add the build directory pitts/build/src to the PYTHONPATH environment variable. Alternatively, install PITTS into a custom directory by setting CMAKE_INSTALL_PREFIX and running make install.
To use PITTS in your Python code, simply import pitts_py:

```python
import pitts_py

# create a random tensor in TT format with dimensions 2^5 and TT ranks [2,4,4,2]
tt = pitts_py.TensorTrain_double(dimensions=[2, 2, 2, 2, 2])
tt.setTTranks([2, 4, 4, 2])
pitts_py.randomize(tt)
```

References:

[1] Roehrig-Zoellner, M., Thies, J. and Basermann, A.: "Performance of the Low-Rank TT-SVD for Large Dense Tensors on Modern MultiCore CPUs", SIAM Journal on Scientific Computing, 2022, https://doi.org/10.1137/21M1395545
[2] Roehrig-Zoellner, M., Becklas, M., Thies, J. and Basermann A.: "Performance of linear solvers in tensor-train format on current multicore architectures", The International Journal of High Performance Computing Applications (IJHPCA), 2025, https://doi.org/10.1177/10943420251317994
[3] Oseledets, I. V.: "Tensor-Train Decomposition", SIAM Journal on Scientific Computing, 2011, https://doi.org/10.1137/090752286
[4] Grasedyck, L., Kressner, D. and Tobler, C.: "A literature survey of low-rank tensor approximation techniques", GAMM-Mitteilungen, 2013, https://doi.org/10.1002/gamm.201310004
[5] Dolgov, S. V.: "TT-GMRES: solution to a linear system in the structured tensor format", Russian Journal of Numerical Analysis and Mathematical Modelling, 2013, https://doi.org/10.1515/rnam-2013-0009
[6] Holtz, S., Rohwedder, T. and Schneider, R.: "The Alternating Linear Scheme for Tensor Optimization in the Tensor Train Format", SIAM Journal on Scientific Computing, 2012, https://doi.org/10.1137/100818893
[7] Dolgov, S. V. and Savostyanov, D. V.: "Alternating Minimal Energy Methods for Linear Systems in Higher Dimensions", SIAM Journal on Scientific Computing, 2014, https://doi.org/10.1137/140953289
Please feel free to send questions or suggestions to Melven.Roehrig-Zoellner@DLR.de.