Error Guarantees for Least Squares Approximation with Noisy Samples in Domain Adaptation

Given $n$ samples of a function $f\colon D\to\mathbb C$ in random points drawn with respect to a measure $\varrho_S$, we develop a theoretical analysis of the $L_2(D, \varrho_T)$-approximation error. For a particular choice of $\varrho_S$ depending on $\varrho_T$, it is known that the weighted least squares method from finite-dimensional function spaces $V_m$, $\dim(V_m) = m<\infty$, achieves the same error as the best approximation in $V_m$ up to a multiplicative constant when given exact samples with logarithmic oversampling. If the source measure $\varrho_S$ and the target measure $\varrho_T$ differ, we are in the domain adaptation setting, a subfield of transfer learning. We model the resulting deterioration of the error in our bounds. Further, for noisy samples, our bounds describe the bias-variance trade-off depending on the dimension $m$ of the approximation space $V_m$. All results hold with high probability. For demonstration, we consider functions defined on the $d$-dimensional cube given in uniform random samples. We analyze polynomials, the half-period cosine, and a bounded orthonormal basis of the non-periodic Sobolev space $H_{\mathrm{mix}}^2$. Overcoming numerical issues of this $H_{\mathrm{mix}}^2$ basis gives a novel, stable approximation method with quadratic error decay. Numerical experiments indicate the applicability of our results.


Introduction
In this paper we study the reconstruction of complex-valued functions on a d-dimensional domain D ⊂ R^d from possibly noisy function values which are sampled in random points x_1, ..., x_n ∈ D. We consider error bounds for the weighted least squares method for individual functions, which is common in, e.g., partial differential equations [9] or uncertainty quantification [22]. In this setting, the samples are drawn after the function is fixed, in contrast to worst-case or minimax bounds, which hold for a class of functions and usually do not include noise in the samples. For individual function approximation the majority of L_2-error bounds are stated in expectation, cf. [5, Thm. 1.1] for penalized least squares, [10, Thm. 3] for plain least squares, or [21, Thm. 4.1] and [24, Thm. 6.1] for weighted least squares. Bounds which hold with high probability are known for polynomial approximation, cf. [31, Thm. 3], for wavelet approximation, cf. [29, Thms. 3.20 & 3.21], or, in a more general setting including noise, in [11, Thm. 4.3], with the coarser L_∞-norm instead of the natural L_2-norm in the estimate. Further, in [11, Thm. 4.1] an error bound with the natural L_2-norm is presented in expectation, with the same behaviour as we will present with high probability. The contribution and novelty of this paper is twofold:
• We use concentration inequalities to show error bounds in the L_2- and L_∞-norm which hold with high probability, including the noisy case. The behaviour of our bound is similar to [11, Thm. 4.1], which is stated in expectation. Approximating from an m-dimensional function space, we achieve the best error up to a multiplicative constant using logarithmic oversampling. Note that there exists a distribution such that linear oversampling achieves the optimal error, but it is not constructive, cf. [14]. Including noise, our bounds reflect the typical bias-variance trade-off which one wants to balance to prevent over- or underfitting. The results enable performance guarantees for model selection strategies like the balancing principle [37, 30] or cross-validation [7, 6].
• For an application we have a look at approximation on the d-dimensional unit cube [0, 1]^d when samples are distributed uniformly according to the Lebesgue measure. A result with focus on polynomial approximation in the one-dimensional case is [31, Thm. 3], which is improved by the general result [11, Thm. 2.1]. There, the approximation error is estimated by the L_∞-error of the projection with high probability and by the more natural L_2-error of the projection in expectation. We obtain a bound in terms of the L_2-error of the projection which also holds with high probability. A drawback of polynomials is the need for quadratic oversampling, which we show for the Legendre polynomials but which holds in general, cf. [31]. To circumvent this, we use the eigenfunctions of the embedding Id : H^s → L_2 from the Sobolev space H^s for s = 1, 2, which allow for logarithmic oversampling. The H^1 basis, also known as the half-period cosine, was introduced in [25]; it has become standard in many applications and is researched thoroughly, cf. [23, 53, 1, 2, 13, 46, 12, 27]. However, even for functions in Sobolev spaces H^s of higher smoothness, its convergence is limited to linear order in theory (the rate 3/2 can be observed in practice). This can be improved by using the H^2 basis, which was examined theoretically in [3, Section 3] and shown to have quadratic convergence. So far it has not been used, as it is prone to numerical errors and unusable for higher-degree approximation. Here, we propose an approximation and prove its accuracy, which leads to a numerically stable way of approximating non-periodic data given in uniform samples with quadratic convergence.
For a more detailed formulation we need some notation. Given an m-dimensional function space V_m ⊂ L_2, we define the best possible approximation (projection) to f : D → C in V_m and its error

e(f, V_m, L_q)_{L_p} := ∥f − P^q_{V_m} f∥_{L_p},   P^q_{V_m} f := argmin_{g ∈ V_m} ∥f − g∥_{L_q},

for p, q ∈ {2, ∞}. Note that, since V_m is finite-dimensional, the minimum is actually attained. Following [5, 10, 31, 11, 24, 29], we use the weighted least squares method S_m, defined in (3.1), as the underlying approximation method. Because of its linearity, the approximation error ∥f − S_m y∥_{L_2} splits as follows:

∥f − S_m y∥_{L_2} ≤ ∥f − P^2_{V_m} f∥_{L_2} + ∥P^2_{V_m} f − S_m f∥_{L_2} + ∥S_m ε∥_{L_2},

where ε denotes the noise in the samples y; the three terms are the truncation error, the discretization error, and the noise error, respectively.
For a fixed number of points n, we have a look at the behaviour with respect to m, the dimension of the approximation space V_m. The truncation error is the best possible benchmark and usually has polynomial decay m^{−s} for some rate s ≥ 1 depending on f and the choice of V_m. We show that the discretization error obeys the same decay as the truncation error. Thus, given logarithmic oversampling, we obtain the best possible error up to a multiplicative constant in the noiseless case, cf. Theorem 3.2.
Including noise, we show that we get an additional summand growing linearly in m, cf. Theorem 1.1. This resembles the well-known bias-variance trade-off modeling the over- and undersmoothing effects which one wants to balance, cf. [20, 37]. This linear behaviour in m is confirmed by [30, Thm. 4.9] (using the regularization g_λ(σ) = 1/(λ + σ) with λ = 0). An example of this behaviour for D = [0, 1] is depicted in Figure 1.1.

Theorem 1.1 (L_2-error bound with noise). For S_m the weighted least squares method defined in (3.1) with ω_i = β(x_i), we have with joint probability exceeding 1 − 3 exp(−t):

The first line of the bound corresponds to the truncation error and discretization error, decaying in m. Note that the L_∞-term with the prefactor n^{−1/2} behaves like the L_2-term whenever β is bounded from below, cf. Theorem 3.2. The second line is the error due to noise, increasing in m, cf. Figure 1.1. The estimation of the noise error uses a Hanson-Wright concentration inequality, which is available under different sets of assumptions. Thus, we can replace the noise model by general Bernstein conditions, cf. Lemma 2.3, or by sub-Gaussian noise, cf. [43]. This theorem extends to the L_∞ case:

Theorem 1.2 (L_∞-error bound with noise). Let the assumptions of Theorem 1.1 hold. Then, for S_m the weighted least squares method defined in (3.1) with ω_i = β(x_i), we have with probability exceeding 1 − 3 exp(−t):

The bound is similar to [29, Thm. 3.21] in the wavelet setting, but we use the best approximation with respect to the more natural L_∞ instead of L_2. In addition to the error of the best approximation we now have the additional factor N(V_m) due to using the norm estimate ∥g∥_{L_∞} ≤ N(V_m)∥g∥_{L_2} for functions g ∈ V_m. The same factor appears when approximating the worst-case error, where it is known to be necessary in various examples, e.g. [41, Sec. 7] or [47, Thm. 1.1].
The sampling measure ϱ_S(E) := ∫_E 1/β dϱ_T, induced by the function β, may differ from the error measure ϱ_T; this is known as a change of measure and has applications in domain adaptation, cf. [36]. We assume β to be known exactly, but it may be approximated as well, cf. [18]. Note that β affects the maximal size of V_m in the assumption and the amplification of the noise in the bound. There are two extremal cases:
(i) Choosing β(x) = m/N(V_m, x), as was done in [22, 34, 11, 24], we obtain the assumption which allows for the largest possible choice of m. But this spoils ∥β∥_∞ in the error bound when the Christoffel function attains small values.
(ii) For domains D with bounded measure, we may choose β(x) = ϱ_T(D), as was done in [10, 11, 29]. As all weights are ω_i = ϱ_T(D), S_m becomes the plain least squares method. In this case, ∥β∥_∞ is minimal and the noise is amplified the least. But this choice spoils the assumption on the choice of m when the Christoffel function N(V_m, x) attains large values. This effect is controllable, for instance, when working with a bounded orthonormal system (BOS), i.e., ∥η_k∥_∞ ≤ B for some B > 0 and all k. Then N(V_m) ≤ mB² and the assumption on the size of V_m can be replaced accordingly.
An interesting example where these effects occur is the approximation of functions on the unit interval D = [0, 1] from samples given in uniformly random points.
• When using algebraic polynomials and the Lebesgue error measure dϱ_T = dx, we have to choose β ≡ 1 to obtain uniformly random points. Orthogonalizing the algebraic polynomials with respect to the Lebesgue measure, we obtain the Legendre polynomials η_k = P_k/∥P_k∥_{L_2((0,1),dx)} for our approximation space V_m. Since ∥P_k∥²_{L_2((0,1),dx)} = 1/(2k + 1) and |P_k(0)| = 1, we have

N(V_m) ≥ N(V_m, 0) = Σ_{k=0}^{m−1} (2k + 1) = m².      (1.1)

Thus, this case falls into category (i) from above and limits our choice of m to m ≤ √n, i.e., quadratic oversampling as in [31]; see also the sketch after this list.
• When using algebraic polynomials and the Chebyshev error measure dϱ_T = (1 − (2x − 1)²)^{−1/2} dx, we have to choose β(x) = (π/4)(1 − (2x − 1)²)^{−1/2} to obtain uniformly random points. Orthogonalizing the algebraic polynomials with respect to the Chebyshev measure, we obtain the Chebyshev polynomials η_k(x) = T_k(x) = cos(k arccos(2x − 1)) for our approximation space V_m. These are a BOS, but the function β spoils both the assumption on m and the error bound, since β diverges at the border (this effect can be circumvented using a padding technique at the border, as we discuss in Section 4).
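The quadratic growth of the Christoffel function for the Legendre polynomials can be checked numerically. The following is a minimal sketch, assuming the normalization η_k(x) = √(2k+1) P_k(2x − 1) of the shifted Legendre polynomials on [0, 1]; with this normalization the maximum of N(V_m, ·) is attained at the endpoints and equals m².

```python
import numpy as np
from numpy.polynomial import legendre

def christoffel_legendre(m, x):
    """N(V_m, x) = sum_{k < m} |eta_k(x)|^2 for the orthonormalized
    (shifted) Legendre polynomials eta_k(x) = sqrt(2k+1) * P_k(2x - 1)."""
    t = 2.0 * x - 1.0                       # map [0, 1] -> [-1, 1]
    N = np.zeros_like(x, dtype=float)
    for k in range(m):
        c = np.zeros(k + 1)
        c[k] = 1.0                          # coefficients selecting P_k
        eta_k = np.sqrt(2 * k + 1) * legendre.legval(t, c)
        N += eta_k ** 2
    return N

x = np.linspace(0.0, 1.0, 2001)             # grid containing the endpoints
for m in (4, 16, 64):
    print(m, christoffel_legendre(m, x).max(), m ** 2)
```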
As an alternative, we consider the eigenfunctions of the embedding Id : H^2 → L_2 mentioned above. We show that they form a BOS and, by (ii) above, this basis is then suitable for approximation in uniformly random points on D = [0, 1] using plain least squares and only logarithmic oversampling. The H^2 basis is prone to numerical errors. To overcome this, we propose a numerically stable approximation and prove its accuracy.
As for the structure of this paper, we start with some tools from probability theory in Section 2. In Section 3 we show error bounds for the weighted least squares method. The construction of the H^1 and H^2 bases mentioned above is found in Section 4, along with a comparison to the Legendre and Chebyshev polynomials. To indicate the applicability of our error bounds and the proposed basis, we conduct numerical experiments in one and five dimensions.

Tools from probability theory
For the later analysis we need concentration inequalities, starting with Bernstein's inequality, which can be found in the standard literature, cf. [45, Theorem 6.12] or [17, Corollary 7.31].
Theorem 2.1 (Bernstein). Let ξ_1, ..., ξ_n be independent real-valued mean-zero random variables satisfying E(ξ_i²) ≤ σ² and ∥ξ_i∥_∞ ≤ B for i = 1, ..., n and real numbers σ² and B. Then, for all t > 0,

P( |Σ_{i=1}^n ξ_i| ≥ t ) ≤ 2 exp( − (t²/2) / (nσ² + Bt/3) ).

Bernstein's inequality gives a concentration bound for the sum of independent random variables. We need similar bounds for quadratic forms in random vectors, which are known as Hanson-Wright inequalities. To formulate them, we introduce the spectral norm and the Frobenius norm of a matrix A ∈ C^{m×n},

∥A∥ := σ_max(A) = √(λ_max(A*A))   and   ∥A∥_F := ( Σ_{i,j} |a_{ij}|² )^{1/2},

where λ_max and σ_max denote the largest eigenvalue and singular value, respectively. The following result is such an inequality with a Bernstein condition on the random variables and was shown in [4, Theorem 3].

The following is a special case of the above Hanson-Wright inequality for Hermitian positive semidefinite matrices and random variables with known variance E(|ξ_i|²) and a uniform bound ∥ξ_i∥_∞.
It remains to estimate the expected value. Since ξ_1, ..., ξ_n are independent and have bounded variance, we obtain the claim. The following tool is a concentration bound on the extremal eigenvalues of sums of independent random matrices, which was shown in [51, Theorem 1.1].
Lemma 2.4 (Matrix Chernoff). For a finite sequence A_1, ..., A_n ∈ C^{m×m} of independent, Hermitian, positive semi-definite random matrices satisfying λ_max(A_i) ≤ R almost surely, the following two estimates hold.
Proof. The first estimates are provided by [51, Theorem 1.1]. Based on the Taylor expansion, which holds true for t ∈ [−1, 1], we further derive the inequalities for the range t ∈ [0, 1].

Error bounds for least squares
In this section we develop L_2- and L_∞-error bounds for the least squares method. To this end we introduce some notation and the method itself. Let η_0, ..., η_{m−1} : D → C be an orthonormal basis of V_m in L_2(D, ϱ_T) and let

N(V_m, x) := Σ_{k=0}^{m−1} |η_k(x)|²   and   N(V_m) := sup_{x ∈ D} N(V_m, x)

be the Christoffel function and its supremum. For our approximation method S_m we use the weighted least squares approximation depending on η_0, ..., η_{m−1} and x_1, ..., x_n:

S_m y := Σ_{k=0}^{m−1} ĝ_k η_k   with   ĝ := argmin_{â ∈ C^m} ∥Lâ − y∥²_W,   L := (η_k(x_i))_{i,k},   W := diag(ω_1, ..., ω_n),      (3.1)

where ∥Lâ − y∥²_W = (Lâ − y)* W (Lâ − y). If all weights are equal we speak of plain least squares approximation.
The coefficients ĝ of the approximation S_m y are found by solving the normal equations

L* W L ĝ = L* W y.

Doing this by means of an iterative solver, the stability and the iteration count for a desired precision are determined by the spectral properties of the above matrix, cf. [19, Theorem 3.1.1]. However, these are fully determined by the spectral properties of W^{1/2}L, since for a singular value decomposition W^{1/2}L = UΣV* we obtain

L* W L = V Σ² V*.
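The following sketch makes the method (3.1) and the role of W^{1/2}L concrete. It is a minimal illustration in which the basis eta, the weight function beta, and real-valued samples are placeholders supplied by the user, and the iterative solver LSMR stands in for any solver of the weighted least squares problem.

```python
import numpy as np
from scipy.sparse.linalg import lsmr

def weighted_least_squares(eta, beta, x, y, m, maxiter=20):
    """Weighted least squares approximation in the spirit of (3.1).

    eta(k, x): evaluates the k-th basis function at the sample points x,
    beta(x):   weight function, giving omega_i = beta(x_i),
    x, y:      sample points and (possibly noisy) real-valued samples.
    Returns the coefficient vector g_hat of S_m y.
    """
    L = np.column_stack([eta(k, x) for k in range(m)])   # L_{ik} = eta_k(x_i)
    w_sqrt = np.sqrt(beta(x))                            # diagonal of W^{1/2}
    # solve min_a ||W^{1/2}(L a - y)||_2; the required number of iterations
    # is governed by the spectral properties of W^{1/2} L
    return lsmr(w_sqrt[:, None] * L, w_sqrt * y, maxiter=maxiter)[0]

# plain least squares special case (all weights equal), hypothetical basis eta:
# g_hat = weighted_least_squares(eta, lambda x: np.ones_like(x), x, y, m)
```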
The proof ideas go back to [10, Thm. 1] and [11, Thm. 2.1], but for the sake of readability we state it here as well.
Proof. The result is a direct consequence of Tropp's result in Lemma 2.4. For a randomly chosen point x_i we define the random rank-one matrix A_i, and by the orthogonality of the η_k we obtain E(Σ_{i=1}^n A_i) = Id_{m×m} and, therefore, µ_max = µ_min = 1. Further, we have the almost sure bound λ_max(A_i) ≤ R required by Lemma 2.4. Lemma 2.4 with t = 1/2 then gives the lower bound, whose failure probability is smaller than exp(−t) by the assumption on m. The bound for the largest eigenvalue works analogously.
We now formulate a bound on the L_2-error of the weighted least squares method for exact function values. This result is heavily based on [29, Theorem 3.20], which extends to a more general setting.

Theorem 3.2 (L_2-error bound without noise). Let f : D → C, let x_1, ..., x_n, n ∈ N, be points drawn according to a probability measure dϱ_S = 1/β dϱ_T, and let y = (f(x_1), ..., f(x_n))^T be exact function values. Let further t ≥ 0 and let V_m be an m-dimensional function space with an orthonormal basis η_0, ..., η_{m−1} satisfying the assumption of Lemma 3.1. Then, for S_m the weighted least squares method defined in (3.1) with ω_i = β(x_i), we have with probability exceeding 1 − 2 exp(−t):

Proof. For abbreviation, we use e_2 = e(f, V_m, L_2)_{L_2} and e_∞ = e(f, V_m, L_2)_{L_∞}. Further, we define the event A given in (3.3), which has probability P(A) ≥ 1 − exp(−t) by Lemma 3.1 and the assumption on V_m. We split the approximation error. Due to the invariance of S_m on functions in V_m, we pull it in front and use the compatibility of the operator norm. By (3.2) and the event (3.3), we obtain a bound for this term. It remains to estimate the latter summand. We define a random variable which is mean-zero since we sample with respect to the distribution ϱ_S. Further, its variance and supremum can be bounded. Thus, the conditions for applying Bernstein's inequality (Theorem 2.1) are fulfilled.
By a union bound, the overall success probability exceeds one minus the sum of the failure probabilities of the events given by (3.3) and (3.4). Theorem 3.2 states that the least squares approximation from a finite-dimensional function space V_m, given the oversampling condition, has the same error as the L_2-projection up to a multiplicative constant with high probability. This improves on [11, Theorem 2.1], where the same bound was shown in expectation or bounded by the L_∞-error with high probability.
Next, we prove the central theorem, which includes the noisy case.
Proof. [Proof of Theorem 1.1] We split the approximation error ∥f − S_m y∥_{L_2} and bound the first two summands as in the proof of Theorem 3.2 with the events given by (3.3) and (3.4). The noise term is bounded on a further event (3.5), which holds with probability 1 − exp(−t). Since L*WL ∈ C^{m×m}, the matrix (L*WL)^{−1}L*W^{1/2} has rank at most m and, thus, we obtain a factor of at most 2m/n, where the last inequality follows from (3.2) and the event (3.3). Therefore, the bound on the noise term follows. By a union bound, the overall success probability exceeds one minus the sum of the failure probabilities of the events given by (3.3), (3.4), and (3.5).
Next, we prove Theorem 1.2, bounding the approximation error of least squares in the L_∞-norm.
Proof. [Proof of Theorem 1.2] For abbreviation, we use e_2 = e(f, V_m, L_∞)_{L_2} and e_∞ = e(f, V_m, L_∞)_{L_∞}. For any g = Σ_{k=0}^{m−1} ⟨g, η_k⟩_{L_2} η_k ∈ V_m, the Hölder inequality gives an estimate of the L_∞-norm in terms of the L_2-norm. Using this, we reduce the L_∞-case to the L_2-case, which we already covered. We split the approximation error; the last inequality follows from t ≤ n. Using the same bound as in Theorem 1.1 for ∥S_m ε∥_{L_2}, we obtain the assertion.

Application on the unit cube
In this section we are interested in approximating functions on the d-dimensional unit cube D = [0, 1]^d when sample points are drawn uniformly, i.e., with respect to the Lebesgue measure dx. The deterministic equivalent of uniform sampling is equispaced points. When using these for polynomial interpolation, Runge already knew in 1901 that higher-degree polynomials lead to oscillatory behaviour towards the border, which spoils the approximation error. Even though we do not interpolate, we will observe similar behaviour using Legendre and Chebyshev polynomials. We propose alternative bases which are stable for large m = dim(V_m) as well.
Throughout this section we have L_2 = L_2((0, 1)^d, dx) unless stated otherwise. We consider function spaces in order to know about the decay of the coefficients. Note that for individual functions the coefficients may decay faster than in the worst-case setting. Literature for the worst-case setting can be found in [26] for random points with logarithmic oversampling, in [33, 28] for subsampled points with linear oversampling and a logarithmic gap in the error bound (this was made constructive in [8]), and in [16] for subsampled points with linear oversampling and sharp bounds.

Sobolev spaces on the unit interval
Let d = 1 and let D = [0, 1] be the unit interval equipped with the Lebesgue measure dx. In order to get hold of appropriate finite-dimensional function spaces V_m for approximation, we have a look at the Sobolev spaces H^s = H^s(0, 1) with integer smoothness s ≥ 0. The inner product of these Hilbert spaces is denoted by ⟨f, g⟩_{H^s}. We consider the embedding Id : H^s → L_2 and the operator W = Id* ∘ Id : H^s → H^s, where (σ_k)_{k=0}^∞ is the non-increasing rearrangement of the singular values of W and (e_k)_{k=0}^∞ ⊂ H^s the corresponding system of eigenfunctions forming an orthonormal basis in H^s. Thus, the eigenfunctions corresponding to the largest singular values are a canonical candidate for the approximation space V_m. To put this into concrete terms, in the next two theorems we give the singular values and eigenfunctions for H^1 and H^2.

To our knowledge, the H^1 basis above was originally introduced in [25]; it was already considered in [53, Lemma 4.1] with the same proof technique and in [23] as a modified Fourier expansion. It is further used in [46] as a basis for multivariate approximation in the context of samples along tent-transformed rank-1 lattices, and in [1, 2, 13, 12, 27]. The following H^2 basis was already posed in [3, Section 3], where higher-order Sobolev spaces are treated as well. The proof of the following theorem is found in Appendix A.
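Since the statement of Theorem 4.1 is not reproduced above, the following minimal sketch works with the commonly cited form of the half-period cosine basis, η_0 ≡ 1 and η_k(x) = √2 cos(πkx) for k ≥ 1 (an assumption here), and verifies its L_2(0, 1)-orthonormality numerically.

```python
import numpy as np

def half_period_cosine(k, x):
    """Half-period cosine basis on [0, 1] in its commonly stated form:
    eta_0 = 1 and eta_k(x) = sqrt(2) * cos(pi * k * x) for k >= 1."""
    x = np.asarray(x, dtype=float)
    return np.ones_like(x) if k == 0 else np.sqrt(2.0) * np.cos(np.pi * k * x)

# numerical check of L2(0, 1)-orthonormality with a midpoint rule
N = 100_000
x = (np.arange(N) + 0.5) / N
G = np.array([[np.mean(half_period_cosine(j, x) * half_period_cosine(k, x))
               for k in range(6)] for j in range(6)])
print(np.max(np.abs(G - np.eye(6))))        # close to zero
```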
The singular values σ_k for H^2 decay quadratically, in contrast to the linear decay for H^1. Thus, when approximating a twice differentiable function, m = dim(V_m) can be chosen smaller when using the H^2 basis while achieving the same truncation error e(f, V_m, L_2)_{L_2}. Furthermore, as noise enters with the factor m/n, cf. Theorem 1.1, this also helps to prevent overfitting and leads to a smaller approximation error.
However, as cosh and sinh both grow exponentially, the representation of the H^2 basis in Theorem 4.2 is prone to cancellations and, therefore, numerically unstable. In the next theorem we give an approximation which is numerically stable.
Theorem 4.3. For 0 < t_2 < t_3 < ... fulfilling cosh(t_k) cos(t_k) = 1, let η_k be as in Theorem 4.2. Further, for k ≥ 2, let t̃_k = π(2k − 1)/2 and let η̃_k denote the corresponding approximation of η_k. In particular, the approximation η̃_k is exact up to machine precision ε = 10^{−16} for k ≥ 27.
For the proof see Appendix A. For the numerical experiments we use the exact representation from Theorem 4.2 for m < 10 and the approximation from Theorem 4.3 for m ≥ 10.
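The parameters t_k of Theorems 4.2 and 4.3 can be computed by bracketing the roots of cosh(t)cos(t) = 1 around t̃_k = π(2k − 1)/2. The following is a minimal sketch; the equivalent equation cos(t) = 1/cosh(t) is used to avoid overflow, and the bracket width 0.5 is an assumption that works for the moderate values of k shown here.

```python
import numpy as np
from scipy.optimize import brentq

def t_exact(k):
    """k-th positive solution of cosh(t) * cos(t) = 1 (k >= 2), bracketed
    around the asymptotic value t_tilde = pi * (2k - 1) / 2."""
    t_tilde = np.pi * (2 * k - 1) / 2
    f = lambda t: np.cos(t) - 1.0 / np.cosh(t)   # same roots, better scaled
    return brentq(f, t_tilde - 0.5, t_tilde + 0.5)

for k in range(2, 8):
    t_tilde = np.pi * (2 * k - 1) / 2
    # the gap to t_tilde shrinks roughly like exp(-t_tilde), which is why
    # t_tilde itself is accurate to machine precision for large k
    print(k, t_exact(k), abs(t_exact(k) - t_tilde))
```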

Polynomial approximation on the unit interval
Next, we examine how the H^1 and H^2 bases compare to polynomial approximation when points are distributed uniformly, i.e., V_m = Π_m = span{1, x, ..., x^{m−1}} and dϱ_S(x) = 1/β(x) dϱ_T(x) = dx. Polynomial approximation results often assume f ∈ X^s, where BV([0, 1]) denotes the functions of bounded variation on [0, 1]. This assumption is stronger than assuming f ∈ H^s, as the following remark shows.
Remark 4.4 (X^s ↪ H^{s+1/2−ε}). For a rigorous investigation of the relation of X^s and H^s, we need the Besov space B^s_{p,q} for p = 1, q = ∞, and integer smoothness s. For any ε > 0 we then obtain a chain of embeddings, where the third embedding follows from the Sobolev inequality, cf. [49, (2.7.1/1)], and the Sobolev space for non-integer smoothness s consists of functions f such that ⟨f, η_k⟩ ≤ Ck^{−s} for some constant C < ∞.
Assuming f ∈ X^s, we have a look at approximating with Legendre and Chebyshev polynomials:
• The canonical choice for the target measure is dϱ_T = dx and β ≡ 1, such that dϱ_S(x) = 1/β(x) dϱ_T(x) = dx. Orthogonalizing the monomials 1, x, ..., x^{m−1} with respect to dϱ_T(x) = dx, we obtain the Legendre polynomials P_k.
For the error of the projection, assuming f ∈ X^s, the following was shown in [52, Theorem 3.5], where V is the total variation of f^{(s)}. With this stronger assumption f ∈ X^s, half an order is gained by polynomial approximation in contrast to the H^s bases. This is expected, as we require half an order of smoothness more in L_2, cf. Remark 4.4. (In the later numerical experiments, we observe the gain of half an order for H^1 and H^2 as well.) A drawback comes with the Christoffel function, cf. (1.1): it restricts the choice of m to quadratic oversampling, which is usual for polynomial approximation in uniform points, cf. [31].
• When using the Chebyshev measure dϱ_T = (1 − (2x − 1)²)^{−1/2} dx, we choose β as in the introduction to obtain uniform random samples as well. Orthogonalizing the monomials 1, x, ..., x^{m−1} with respect to the Chebyshev measure, we obtain the Chebyshev polynomials T_k, which are a BOS.
As for the error, assuming f ∈ X^s, we use [48, Theorem 7.1] or [38, Theorem 6.16] to obtain the same bound as for the Legendre polynomials (4.2), but with respect to the L_2((0, 1), (1 − (2x − 1)²)^{−1/2} dx)-norm. As ∥β∥_∞ diverges at the border, this spoils the choice of the polynomial degree m and our bound. Note that when we exclude some area around the border for sampling, β does not diverge and the resulting error can be controlled. This is called padding and was done in [39, Section 4]. Thus, with polynomial approximation we assume half an order of smoothness more, cf. Remark 4.4, which we also see in the approximation rate O(m^{−s−1/2}).

Remark 4.5. Note that when using the Chebyshev polynomials and samples with respect to the Chebyshev measure, we have β ≡ 1. Since the Chebyshev polynomials are a BOS, this does not spoil our bounds.
Furthermore, using the Legendre polynomials (dϱ_T = dx) and samples with respect to the Chebyshev measure (β(x) = π(1 − (2x − 1)²)^{1/2}) works as well. To see this we use [42, Lemma 5.1]: the resulting weighted basis functions are bounded and do not spoil the choice of the polynomial degree m nor the error bound.
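For the numerical experiments, samples from the two measures on [0, 1] can be generated by inverse transform sampling. The following is a minimal sketch; the map x = (1 − cos(πu))/2 for the Chebyshev (arcsine) measure is a standard fact, and the weight β is taken verbatim from the remark above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# samples with respect to the Lebesgue measure on [0, 1]
x_uniform = rng.random(n)

# samples with respect to the Chebyshev (arcsine) measure on [0, 1]:
# inverse transform x = (1 - cos(pi * u)) / 2 with u uniform on [0, 1]
u = rng.random(n)
x_cheb = (1.0 - np.cos(np.pi * u)) / 2.0

# weights omega_i = beta(x_i) for the Legendre/Chebyshev-sampling combination,
# with beta as stated in Remark 4.5
beta = np.pi * np.sqrt(1.0 - (2.0 * x_cheb - 1.0) ** 2)
print(x_cheb.min(), x_cheb.max(), beta.max())
```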

Numerics on the unit interval
To support our findings, we give a numerical example. As a test function we use B^cut_2 as defined in (4.3), which was already considered in [40, 35]. The function B^cut_2 is shown in Figure 1.1 and is a cutout of the B-spline of order two. It and its first derivative are absolutely continuous and the second derivative is of bounded variation. Therefore f ∈ X^3 and the polynomial approximation bounds from above are applicable. According to Remark 4.4 we further have f ∈ H^{5/2−ε} for any ε > 0, i.e., there exists C ≥ 0 such that ⟨f, η_k⟩_{L_2} ≤ Ck^{−5/2+ε} holds for k ≥ 1. In particular, f ∈ H^2 and (4.1) is applicable for approximating with the H^1 and H^2 bases.
We sample f in 10 000 uniformly random points and add noise.
(iii) We compute the L_2-error by using Parseval's equality, where the coefficients f̂_k = ⟨f, η_k⟩_{L_2} are computed analytically.
(iv) We compute the split approximation error, where we compute both quantities separately, again using Parseval's equality (see the sketch after this list).
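A minimal sketch of the Parseval-based error computation in steps (iii) and (iv), assuming the analytic coefficients f_hat = (⟨f, η_k⟩_{L_2})_{k<K} are available up to a large cut-off K and that the least squares coefficients for the exact values and for the pure noise vector have been computed separately (all variable names are placeholders).

```python
import numpy as np

def l2_error_parseval(f_hat, g_hat):
    """||f - S_m y||_{L2}^2 via Parseval: compare the computed coefficients
    g_hat (length m) with the analytic coefficients f_hat (length K >= m)
    and add the tail of the analytic coefficients."""
    m = len(g_hat)
    return np.sum(np.abs(f_hat[:m] - g_hat) ** 2) + np.sum(np.abs(f_hat[m:]) ** 2)

# split error: by linearity S_m y = S_m f + S_m eps, so both parts can be
# evaluated separately from the coefficients of S_m f and of S_m eps
# err_exact = l2_error_parseval(f_hat, g_hat_exact)    # ||f - S_m f||^2
# err_noise = np.sum(np.abs(g_hat_noise) ** 2)         # ||S_m eps||^2
```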
The results are depicted in Figure 4.1.
• The smallest singular values for the Chebyshev polynomials and the Legendre polynomials decay rapidly for larger m. This coincides with the violation of the assumption in Lemma 3.1 already for small m: ∥β(·)N(V_m, ·)∥_∞ is unbounded in the Chebyshev case and quadratic in m in the Legendre case, cf. (1.1). In this experiment, for m = 1 000 the condition number σ_max(W^{1/2}L)/σ_min(W^{1/2}L) exceeded 10^{29} for the algebraic polynomials and was below 14 for the H^s bases.
• The error for exact function values ∥f − S_m f∥²_{L_2} decays with rate 3/2 for H^1 and 5/2 for the other bases. This conforms with the theory for the polynomial bases. For the H^1 and H^2 bases the theory predicted only the decay rates 1 and 2, respectively, cf. Theorems 4.1 and 4.2 and (4.1).
• For the noise error ∥S_m ε∥²_{L_2} we observe linear growth in m = dim(V_m), as predicted in Theorem 1.1. Furthermore, this error is larger by a factor of around 40 in the Chebyshev case compared to the others. The maximal weight ∥W∥_∞ in this case is around 40 as well. The error due to noise in our bound has the factor ∥β∥_∞, which can be replaced by ∥W∥_∞ to sharpen the bound and explain this effect.

This numerical experiment and the earlier theoretical discussion show that the H^1 and H^2 bases are suitable for approximating functions on the unit interval given in uniform random samples. They are numerically stable, in contrast to polynomial approximation with Chebyshev or Legendre polynomials. In particular, the least squares matrix is well-conditioned and we can limit the number of iterations when using an iterative solver.

The ideas from Subsection 4.1 can be extended to higher dimensions using the concept of dominating mixed smoothness. We focus on the case of H^1 and H^2, but the same can be done for polynomials as well, cf. [44, Section 8.5.1]. Let D = [0, 1]^d be the d-dimensional unit cube equipped with the Lebesgue measure dx. The Sobolev space with dominating mixed smoothness of integer degree s ≥ 0 is given by H^s_mix, with the inner product of these Hilbert spaces defined accordingly. With σ_k and η_k the singular values and eigenfunctions for H^s, the singular values and eigenfunctions of W = Id* ∘ Id : H^s_mix → H^s_mix extend by taking products over the coordinates. To obtain the eigenfunctions corresponding to the largest singular values, we now work with multi-indices k. The indices corresponding to the largest singular values lie on a so-called hyperbolic cross. In Figure 4.2 we show equally sized index sets for H^1_mix and H^2_mix. Note that R is smaller for H^2_mix, as its singular values decay faster, cf. Theorems 4.1 and 4.2.
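As an illustration of the hyperbolic cross construction, the following minimal sketch collects all multi-indices k whose product of one-dimensional singular values stays above a threshold R; this selection rule and the placeholder values for the singular values are assumptions chosen to mimic the description above.

```python
import numpy as np
from itertools import product

def hyperbolic_cross(sigma, d, R):
    """All multi-indices k in {0, ..., len(sigma)-1}^d with
    sigma[k_1] * ... * sigma[k_d] >= R (sigma non-increasing)."""
    index_set = []
    # brute force over a bounding box; fine for moderate d and len(sigma)
    for k in product(range(len(sigma)), repeat=d):
        if np.prod([sigma[kj] for kj in k]) >= R:
            index_set.append(k)
    return index_set

# example with an H^2-like quadratic decay of the singular values
sigma = np.array([1.0] + [1.0 / (np.pi * k) ** 2 for k in range(1, 50)])
print(len(hyperbolic_cross(sigma, d=2, R=1e-4)))
```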

Numerics on the unit cube
For a numerical experiment we do the same as in the one-dimensional case but only consider the H^2_mix basis and a test function built from B^cut_2, which was defined in (4.3). We increase the dimension to d = 5 and the number of samples to 1 000 000, and we use Gaussian noise with variance σ² ∈ {0.00, 0.01M, 0.03M}, where M is a constant depending on the test function. With t = 6, we satisfy the assumptions of Theorem 1.1 for m ≤ 12 250 and obtain a probability exceeding 0.99 for the error bound in Theorem 1.1. For m = dim(V_m) up to 10 000 we do the following:
(i) We use plain least squares with 20 iterations to obtain the approximation S_m y = Σ_{k=0}^{m−1} ĝ_k η_k defined in (3.1).
(ii) We compute the L_2-error by using Parseval's equality, analogously to the one-dimensional case.
(iii) We compute our bound: applying Theorem 1.1 with (3.6), t = 6, and n = 1 000 000, we obtain the bound with probability exceeding 0.99, with all remaining quantities known in our experiment.
The results are depicted in Figure 4.3.
• The bounds capture the error behaviour well. But it seems that there is room for improvement in the constants, especially in the experiments with noise. Here, improving the constants in the Hanson-Wright inequality in Lemma 2.3 could be a starting point.
• Furthermore, this experiment shows that the H^2 basis is also well suited for high-dimensional approximation.

Figure 1.1. One-dimensional approximation on the unit interval for three different choices of m, and the schematic behaviour of the L_2-approximation error ∥f − S_m y∥²_{L_2} (solid line), split into the error for exact function values ∥f − S_m f∥²_{L_2} and the noise error ∥S_m ε∥²_{L_2} (dashed lines), with respect to m.

Figure 4.1. One-dimensional experiment for different choices of V_m. Top row: minimal and maximal singular value of (1/√n) W^{1/2}L. Bottom row: the L_2-approximation error ∥f − S_m y∥²_{L_2} (solid line), split into the error for exact function values ∥f − S_m f∥²_{L_2} and the noise error ∥S_m ε∥²_{L_2} (dashed lines), with respect to m.

Figure 4.3. Five-dimensional experiment for H^2_mix. The solid lines represent the L_2-error ∥f − S_m y∥²_{L_2} and the dashed lines the bound from Theorems 3.2 and 1.1.