Weighted sparsity and sparse tensor networks for least squares approximation
The SMAI Journal of computational mathematics, Volume 11 (2025), pp. 289-333.

The approximation of high-dimensional functions arises in many scientific fields and is feasible only if advantageous structural properties, such as sparsity in a given basis, can be exploited. A relevant tool for analysing sparse approximations is Stechkin’s lemma.
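
For reference, the classical lemma in one common formulation (notation assumed here rather than taken from the paper): if $x \in \ell^p$ for some $0 < p < q \le \infty$, then the best $n$-term approximation error in $\ell^q$, attained by retaining the $n$ largest entries of $x$, satisfies

$$\sigma_n(x)_q := \min_{\#\operatorname{supp}(z) \le n} \lVert x - z \rVert_q \;\le\; n^{-(1/p - 1/q)}\, \lVert x \rVert_p.$$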

In its standard form, however, this lemma cannot explain the convergence rates observed for a wide range of relevant function classes. This work presents a new weighted version of Stechkin’s lemma that improves the best $n$-term rates for weighted $\ell^p$-spaces and associated function classes such as Sobolev or Besov spaces. For the class of holomorphic functions, which arise as solutions of common high-dimensional parameter-dependent PDEs, we recover exponential rates that are not directly obtainable with Stechkin’s lemma.
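
The paper’s exact conventions may differ, but the weighted-sparsity literature (e.g. Rauhut and Ward) commonly defines, for weights $\omega_\nu \ge 1$ and $0 < p \le 2$,

$$\lVert x \rVert_{\omega,p} := \Big( \sum_\nu \omega_\nu^{2-p} \lvert x_\nu \rvert^p \Big)^{1/p}, \qquad \lVert x \rVert_{\omega,0} := \sum_{\nu \in \operatorname{supp}(x)} \omega_\nu^2,$$

so that a small weighted $\ell^p$-norm forces the significant coefficients to concentrate on index sets of small weighted cardinality.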

Since weighted $\ell^p$-summability induces weighted sparsity, compressed sensing algorithms can be used to approximate the associated functions. To break the curse of dimensionality from which these algorithms suffer, we exploit the fact that sparse approximations can be encoded efficiently using tensor networks with sparse component tensors. We also demonstrate that weighted $\ell^p$-summability induces low ranks, which motivates a second tensor train format with low ranks and a single weighted sparse core. We present new alternating algorithms for best $n$-term approximation in both formats.
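
To illustrate the second format, the following minimal NumPy sketch stores a coefficient tensor as a tensor train and sparsifies a single core by hard thresholding. All helper names are hypothetical, and the thresholding step (which ignores weights here) merely stands in for the paper’s alternating best $n$-term algorithms.

import numpy as np

def tt_random(dims, rank, seed=0):
    # Random TT cores G[k] of shape (r_{k-1}, n_k, r_k), boundary ranks 1.
    rng = np.random.default_rng(seed)
    ranks = [1] + [rank] * (len(dims) - 1) + [1]
    return [rng.standard_normal((ranks[k], d, ranks[k + 1]))
            for k, d in enumerate(dims)]

def hard_threshold(core, n):
    # Best n-term approximation of a single core: keep its n largest entries.
    flat = core.ravel().copy()
    keep = np.argsort(np.abs(flat))[-n:]
    out = np.zeros_like(flat)
    out[keep] = flat[keep]
    return out.reshape(core.shape)

def tt_eval(cores, idx):
    # Contract the train left to right at one multi-index.
    v = np.ones((1, 1))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]
    return v[0, 0]

cores = tt_random([4, 4, 4], rank=3)
cores[1] = hard_threshold(cores[1], n=10)  # low ranks, one sparse core
print(tt_eval(cores, (0, 1, 2)))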

To analyse the sample complexity for the new model classes, we derive a novel result that allows the restricted isometry property to be transferred from one set to another, sufficiently close set. Although they lead up to the analysis of our final model class, our contributions on the weighted Stechkin lemma and on the restricted isometry property are of independent interest and can be read independently.
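
For orientation, a standard set-restricted formulation of the property (the paper’s precise norms and normalisations may differ): a sampling operator $A$ satisfies the restricted isometry property on a set $M$ with constant $\delta \in (0,1)$ if

$$(1-\delta)\,\lVert v \rVert^2 \;\le\; \lVert A v \rVert^2 \;\le\; (1+\delta)\,\lVert v \rVert^2 \qquad \text{for all } v \in M.$$

Roughly speaking, the transfer result then guarantees that if every element of a second set is sufficiently close to $M$ relative to its norm, then $A$ also satisfies the restricted isometry property on that set, with a moderately enlarged constant.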

DOI: 10.5802/smai-jcm.126
Classification: 15A69, 41A30, 62J02, 65Y20, 68Q25
Keywords: least squares, sample efficiency, sparse tensor networks, alternating least squares

Philipp Trunschke 1; Martin Eigel 2; Anthony Nouy 1

1 Centrale Nantes, Nantes Université, Laboratoire de Mathématiques Jean Leray, CNRS UMR 6629, France
2 Weierstrass Institute, Mohrenstrasse 39, 10117 Berlin, Germany
@article{SMAI-JCM_2025__11__289_0,
     author = {Philipp Trunschke and Martin Eigel and Anthony Nouy},
     title = {Weighted sparsity and sparse tensor networks for least squares approximation},
     journal = {The SMAI Journal of computational mathematics},
     pages = {289--333},
     publisher = {Soci\'et\'e de Math\'ematiques Appliqu\'ees et Industrielles},
     volume = {11},
     year = {2025},
     doi = {10.5802/smai-jcm.126},
     language = {en},
     url = {https://smai-jcm.centre-mersenne.org/articles/10.5802/smai-jcm.126/}
}
