Tuesday, January 29, 2019

Hermite-Birkhoff Interpolation of Scattered Data

$\setCounter{0}$

Suppose $P_1 = \{p_i, p_i \in R^n\}_{i=1}^{M_1}$ and $P_2 = \{p'_i, p'_i \in R^n\}_{i=1}^{M_2}$ are sets of data locations lying in a domain $\Omega \subset R^n$.
Let the functional $f_i$ denote the value of a function $\varphi$ at a node $p_i \in P_1$, and let the functional $f'_i$ denote the value of that function's directional derivative (a slope of the function) at a node $p'_i \in P_2$.
Assume the values of the functionals $f_i$ and $f'_i$ at the nodes belonging to $P_1$ and $P_2$ are known:
\begin{eqnarray}
  (f_i , \varphi ) &=& \varphi (p_i) = u_i \ , \quad  p_i \in P_1 \ , 1 \le i \le M_1,
\\
  (f'_i , \varphi ) &=& \frac{ \partial{\varphi} }{ \partial{e_i} } (p'_i) =  u'_i \ , \, p'_i \in P_2 \ , 1 \le i \le M_2,
\end{eqnarray}
where $\frac{ \partial{\varphi} }{ \partial{e_i} } (p'_i) = \sum _{k=1}^{n}  \frac{ \partial{\varphi} }{ \partial{x_k} } (p'_i)  \cos \theta _{ik}$, and $\cos \theta _{ik} , \ 1 \le k \le n$, are the direction cosines of the unit vector $e_i$.

Consider the problem of reconstructing the function $\varphi$. Let the function to be approximated, $\varphi$, belong to the Hilbert space $H^s_\varepsilon (R^n)$ where $s > r + n/2, \, r = 1, 2, \dots$. In that case the space $H^s_\varepsilon$ is continuously embedded in the Hölder space $C_b^r(R^n)$, so $\varphi (x) \in C^r(R^n)$, and the values of the function $\varphi (x)$ and of its partial derivatives at fixed points are linear continuous functionals in $H^s_\varepsilon$.
We assume that $P_1$ does not contain coincident nodes and that no node in $P_2$ carries directional derivatives with linearly dependent directions; a node from $P_1$ may coincide with a node from $P_2$. Under these restrictions the functionals $f_i, f'_i$ are linearly independent.
In accordance with the Riesz representation theorem [1], the linear continuous functionals $f_i \, , f'_i$ can be represented in the form of inner products of some elements $h_i \, , h'_i \in H^s_\varepsilon$ with $\varphi$, for any $\varphi \in H^s_\varepsilon$:
\begin{eqnarray}
 (f_i, \varphi ) = {\langle h_i,  \varphi \rangle}_{H^{s_r}_\varepsilon} \, ,
\quad
 (f'_i, \varphi) = {\langle h'_i,  \varphi \rangle}_{H^{s_r}_\varepsilon} \, , \quad \forall \varphi \in H^s_\varepsilon \  .
\end{eqnarray}
The notation ${\langle \cdot , \cdot \rangle}_{H^{s_r}_\varepsilon}$ in (3) denotes the inner product in the space ${H^{s_r}_\varepsilon}$ (see the previous post for details). It is easy to see that under the above assumptions the elements $h_i,  h'_i$ are linearly independent.
Let $V_{s_r}(\eta , x, \varepsilon), \, s_r  = r + n/2 + 1/2, \,  r=1, 2, \dots$, be the reproducing kernel of the Hilbert space $H^{s_r}_\varepsilon (R^n)$ (see the previous post). The kernel $V_{s_r}(\eta , x, \varepsilon)
\in C^r(R^n)$ as a function of $\eta \in R^n$. In accordance with the reproducing property of the kernel, for any $\varphi \in H^{s_r}_\varepsilon (R^n)$ and $x \in R^n$ the identity
\begin{eqnarray}
  \varphi (x) = {\langle V_{s_r}(\cdot , x, \varepsilon), \varphi \rangle}_{H^{s_r}_\varepsilon}
\end{eqnarray}
holds. This means that $V_{s_r}(\cdot , x)$ is the Riesz representer of the evaluation functional $\delta_ x$, defined as the value of a function $\varphi \in H^{s_r}_\varepsilon$ at the given point $x \in R^n$. Since $V_{s_r}$ is a continuously differentiable radial basis function, we can differentiate the identity (4) and obtain the identity
\begin{eqnarray}
  \frac{ \partial {\varphi (x)} }{\partial{x_i}} = {\left \langle  \frac{ \partial {V_{s_r}(\cdot , x, \varepsilon)} }{\partial{x_i}}  , \varphi  \right \rangle}_{H^{s_r}_\varepsilon}
\end{eqnarray}
which holds for any $\varphi \in H^{s_r}_\varepsilon$ and $x \in R^n$. This means that the function $\frac{\partial {V_{s_r}(\cdot , x)} }{\partial{x_i}}$ represents the point-wise functional defined as the value of the function $\frac{ \partial {\varphi (\cdot)} }{\partial{x_i}}$ at the point $x$.
Therefore
\begin{eqnarray}
  h_i (\eta, \varepsilon) &=&  V_{s_r}(\eta , p_i, \varepsilon) \, , \quad  p_i \in P_1 \ , 1 \le i \le M_1, \  \,
\\
   h'_i (\eta, \varepsilon) &=& \frac{ \partial {V_{s_r}(\eta , p'_i, \varepsilon)} }{\partial{e_i}} \, , \  p'_i \in P_2 \ , 1 \le i \le M_2, \ \eta \in R^n \, .
\end{eqnarray}
and the system of constraints (1), (2) can be presented as a system of linear equations in the Hilbert space $H^{s_r}_\varepsilon$:
\begin{eqnarray}
 {\langle h_i,  \varphi \rangle}_{H^{s_r}_\varepsilon} &=& u_i  \, , \quad 1 \le i \le M_1, \  \,
\\
 {\langle h'_i,  \varphi \rangle}_{H^{s_r}_\varepsilon} &=& u'_i \, , \  \  \,  1 \le i \le M_2 \, , \quad h_i \, , h'_i \, ,  \varphi \in H^{s_r}_\varepsilon (R^n).
\end{eqnarray}
This system of constraints defines a convex and closed set in the Hilbert space, namely an intersection of a finite number of hyperplanes (this set is not empty because of the linear independence of the elements $h_i, h'_i$).

The problem of reconstructing a function $\varphi$ satisfying the system of constraints (8) and (9) is underdetermined. We reformulate it as the problem of finding the solution of the system (8), (9) that has minimal norm:
\begin{eqnarray}
\sigma_1 = {\rm arg\,min}\{  \| \varphi \|^2_{H^{s_r}_\varepsilon} : (8), (9), \forall \varphi \in {H^{s_r}_\varepsilon} (R^n) \} \, ,
\end{eqnarray}
Such a solution exists and is unique [1, 6]. The element $\sigma_1$ is an interpolating normal spline.
In accordance with the generalized Lagrange method ([5], [6]), the solution of problem (10) can be written as follows:
\begin{eqnarray}
\sigma_1 =  \sum _{j=1}^{M_1} \mu_j  h_j + \sum _{j=1}^{M_2} \mu'_j  h'_j \ ,
\end{eqnarray}
here the coefficients $\mu_i,  \mu'_i$ are defined by the system
\begin{eqnarray}
 \sum _{j=1}^{M_1} g_{ij} \mu_j + \sum _{j=1}^{M_2} g'_{ij} \mu'_j &=& u_i  \, , \quad 1 \le i \le M_1 \, ,
\\
 \sum _{j=1}^{M_1} g'_{ji} \mu_j + \sum _{j=1}^{M_2} g''_{ij} \mu'_j &=& u'_i  \, , \quad 1 \le i \le M_2 \, ,
\end{eqnarray}
where $g_{ij}, g'_{ij},  g''_{ij}$ are the elements of the symmetric positive definite Gram matrix of the set of linearly independent elements $h_i, h'_i \in H^{s_r}_\varepsilon (R^n)$:
\begin{eqnarray}
 g_{ij} = {\langle h_i,  h_j \rangle}_{H^{s_r}_\varepsilon} = (f_i, h_j) \, , \quad 1 \le i \le M_1 \, , \ 1 \le j \le M_1 \, ,
\\
g'_{ij} = {\langle h_i,  h'_j \rangle}_{H^{s_r}_\varepsilon} = (f_i, h'_j) \, , \quad 1 \le i \le M_1 \, , \ 1 \le j \le M_2 \, ,
\\
g''_{ij} = {\langle h'_i,  h'_j \rangle}_{H^{s_r}_\varepsilon} = (f'_i, h'_j) \, , \quad 1 \le i \le M_2 \, ,  \ 1 \le j \le M_2 \, .
\end{eqnarray} Taking into account (3), (6), (7) we obtain:
\begin{eqnarray}
 g_{ij} &=& V_{s_r}(p_i , p_j, \varepsilon)  \, , \quad p_i, p_j \in P_1 , \  1 \le i \le M_1 , \ 1 \le j \le M_1 ,
\\ \nonumber
\\
g'_{ij} &=&  \frac{ \partial {V_{s_r}(p_i, p'_j, \varepsilon)} }{\partial{e_j}}  =  \sum_{k=1}^n  \frac{ \partial {V_{s_r}(p_i, p'_j, \varepsilon)} }{\partial{x_k}} \cos \theta _{jk}  \, , \\ \nonumber && p_i \in P_1 \, , \ 1 \le i \le M_1 , \  p'_j \in P_2 , \ 1 \le j \le M_2 \, ,
\\
g''_{ij} &=&  \frac{ \partial^2 {V_{s_r}(p'_i, p'_j, \varepsilon)} } {\partial{e_i} \partial{e_j}} =  \sum_{l=1}^n \sum_{k=1}^n \frac{ \partial^2 {V_{s_r}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_l} \partial{x_k}} \cos \theta _{il} \cos \theta _{jk} \, ,\\ \nonumber  &&  p'_i, p'_j \in P_2 ,  \ 1 \le i \le M_2 \, , \  1 \le j \le M_2 \, .
\end{eqnarray}
Thus problem (10) is reduced to solving the system of linear equations (12), (13) with the Gram matrix defined by (17)–(19), and constructing the normal spline (11).
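As a sketch, once the blocks of the Gram matrix have been evaluated, the system (12), (13) can be solved with a standard dense solver that exploits its symmetric positive definiteness. The function name and the layout below are illustrative, not part of the original text:

```python
import numpy as np

def solve_normal_spline_system(G, G1, G2, u, u1):
    """Solve the system (12)-(13) for the coefficients mu, mu'.

    G  : (M1, M1) block g_ij   = <h_i, h_j>
    G1 : (M1, M2) block g'_ij  = <h_i, h'_j>
    G2 : (M2, M2) block g''_ij = <h'_i, h'_j>
    u, u1 : right-hand sides u_i and u'_i
    """
    # The full Gram matrix is symmetric positive definite,
    # so a Cholesky factorization is applicable.
    A = np.block([[G, G1], [G1.T, G2]])
    L = np.linalg.cholesky(A)
    mu = np.linalg.solve(L.T, np.linalg.solve(L, np.concatenate([u, u1])))
    return mu[:len(u)], mu[len(u):]
```

The two returned vectors are the coefficients $\mu_j$ and $\mu'_j$ of the spline (11).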

Assume the function to be approximated, $\varphi$, belongs to the Hilbert space $H^{s_1}_\varepsilon (R^n)$,  $s_1 = 1 + n/2 + 1/2$ (see the previous post). It is a continuously differentiable function, $\varphi \in C^1 (R^n)$. Let us write down the expressions for $h_j, h'_j, g_{ij}, g'_{ij}, g''_{ij}$ in the space $H^{s_1}_\varepsilon (R^n)$:
\begin{eqnarray}
  h_j (\eta, \varepsilon) &=&  \exp (-\varepsilon |\eta - p_j |) (1 + \varepsilon |\eta - p_j |) \ ,
\\ \nonumber
&&  p_j \in P_1 \ , \ 1 \le j \le M_1 \, ,
\\
  h'_j (\eta, \varepsilon) &=& \varepsilon^2 \exp (-\varepsilon | \eta - p'_j | ) \sum _{k=1}^n (\eta_k - p'_{jk})\cos \theta_{jk} \ ,
\\ \nonumber
 &&   p'_j \in P_2 \ , \ 1 \le j \le M_2 \, , \quad  \eta \in R^n \,  ,
\end{eqnarray}
then
\begin{eqnarray}
  g_{ij} &=& \exp (-\varepsilon | p_i - p_j | )(1 + \varepsilon | p_i - p_j | ) \ ,  \  p_i, p_j \in P_1 \ , 1 \le i \le M_1 \, ,
\\
  g'_{ij} &=& \varepsilon^2 \exp (-\varepsilon | p_i - p'_j | ) \sum _{k=1}^n (p_{ik} - p'_{jk})\cos \theta_{jk}  \ ,
\\ \nonumber
&&   p_i \in P_1 \, , \  p'_j \in P_2 \ , 1 \le i \le M_1 \, , \ 1 \le j \le M_2 \, ,
\\
g''_{ij} &=&  \sum_{l=1}^n \sum_{k=1}^n \frac{ \partial^2 {V_{s_1}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_l} \partial{x_k}} \cos \theta _{il} \cos \theta _{jk} \, ,
\\ \nonumber
&&  p'_i, p'_j \in P_2 \, ,  \ 1 \le i \le M_2 \, , \  1 \le j \le M_2 \, , \ i \ne j \, ,
\\ \nonumber
&& \text{where}
\\ \nonumber
&& \frac{ \partial^2 {V_{s_1}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_k} \partial{x_k}} = \varepsilon^2 \exp (-\varepsilon | p'_i - p'_j | ) \left( 1 - \varepsilon \frac {(p'_{ik} - p'_{jk})^2}{| p'_i - p'_j |} \right) \, ,
\\ \nonumber
&& \frac{ \partial^2 {V_{s_1}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_l} \partial{x_k}} = -\varepsilon^3 \exp (-\varepsilon | p'_i - p'_j | )  \frac {(p'_{ik} - p'_{jk})(p'_{il} - p'_{jl})}{| p'_i - p'_j |} \, , \ \  l \ne k \, ,
\\
 g''_{ii} &=& \varepsilon^2 \sum _{l=1}^n\  (\cos \theta_{il})^2  = \varepsilon^2 \, ,  \quad 1 \le i \le M_2 \, .
\end{eqnarray}
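These formulas are enough to assemble and solve the interpolation problem numerically. The following sketch (the function name and test data are illustrative, not from the original text) builds the Gram matrix (22)–(25) for the space $H^{s_1}_\varepsilon(R^n)$, solves the system (12), (13), and evaluates the spline (11):

```python
import numpy as np

def hermite_birkhoff_spline_s1(nodes, values, d_nodes, d_dirs, d_values, eps):
    """Interpolating normal spline in H^{s_1}_eps, s_1 = 1 + n/2 + 1/2.

    nodes (M1, n), values (M1,)      -- value constraints (1)
    d_nodes (M2, n), d_dirs (M2, n)  -- derivative nodes and unit directions
    d_values (M2,)                   -- directional derivative values (2)
    Returns a callable evaluating the spline (11).
    """
    nodes, d_nodes, d_dirs = map(np.asarray, (nodes, d_nodes, d_dirs))
    M1, M2 = len(nodes), len(d_nodes)
    n = nodes.shape[1]

    def h(eta, pj):                       # representer (20)
        r = np.linalg.norm(eta - pj)
        return np.exp(-eps * r) * (1.0 + eps * r)

    def hp(eta, pj, ej):                  # representer (21)
        r = np.linalg.norm(eta - pj)
        return eps**2 * np.exp(-eps * r) * np.dot(eta - pj, ej)

    G = np.empty((M1 + M2, M1 + M2))
    for i in range(M1):                   # blocks (22) and (23)
        for j in range(M1):
            G[i, j] = h(nodes[i], nodes[j])
        for j in range(M2):
            G[i, M1 + j] = hp(nodes[i], d_nodes[j], d_dirs[j])
            G[M1 + j, i] = G[i, M1 + j]   # the Gram matrix is symmetric
    for i in range(M2):                   # block (24), diagonal (25)
        for j in range(M2):
            if i == j:
                G[M1 + i, M1 + j] = eps**2
            else:
                d = d_nodes[i] - d_nodes[j]
                r = np.linalg.norm(d)
                # Hessian of the kernel: eps^2 e^{-eps r} (I - (eps/r) d d^T)
                H = eps**2 * np.exp(-eps * r) * (np.eye(n) - (eps / r) * np.outer(d, d))
                G[M1 + i, M1 + j] = d_dirs[i] @ H @ d_dirs[j]

    mu = np.linalg.solve(G, np.concatenate([values, d_values]))

    def sigma(x):                         # normal spline (11)
        x = np.asarray(x, dtype=float)
        return (sum(mu[j] * h(x, nodes[j]) for j in range(M1)) +
                sum(mu[M1 + j] * hp(x, d_nodes[j], d_dirs[j]) for j in range(M2)))
    return sigma
```

For instance, with three value nodes and one slope constraint in $R^2$ the resulting spline reproduces the prescribed values at the nodes exactly, up to round-off.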

Consider a case when the function to be approximated $\varphi$ may be treated as an element of Hilbert space $H^{s_2}_\varepsilon (R^n)$,  $s_2 = 2 + n/2 + 1/2$. In this case it is a twice continuously differentiable function, $\varphi \in C^2 (R^n)$. Let's write down the expressions for $h_j, h'_j$ in space  $H^{s_2}_\varepsilon (R^n)$:
\begin{eqnarray}
  h_j (\eta, \varepsilon) &=&  \exp (-\varepsilon |\eta - p_j |) (3 + 3 \varepsilon |\eta - p_j | +  \varepsilon^2  |\eta - p_j |^2) \ ,
\\ \nonumber
&& p_j \in P_1 \ , 1 \le j \le M_1 \, ,
\\
  h'_j (\eta, \varepsilon) &=&  \varepsilon^2 \exp (-\varepsilon |\eta - p'_j | ) (1 + \varepsilon |\eta - p'_j |) \sum _{k=1}^n (\eta_k - p'_{jk})\cos \theta_{jk}  \ ,
\\ \nonumber
 &&  p'_j \in P_2 \, , \ \ 1 \le j \le M_2 \, , \ \eta \in R^n \, ,
\end{eqnarray}
and corresponding Gram matrix elements $g_{ij}, g'_{ij}, g''_{ij}$ are as follows
\begin{eqnarray}
  g_{ij} &=& \exp (-\varepsilon |p_i - p_j |) (3 + 3 \varepsilon |p_i - p_j | +  \varepsilon^2  |p_i - p_j |^2) \ ,  \\ \nonumber &&  p_i, p_j \in P_1 \ , 1 \le i \le M_1 \, ,
\\
  g'_{ij} &=& \varepsilon^2 \exp (-\varepsilon |p_i - p'_j | ) (1 + \varepsilon |p_i - p'_j |) \sum _{k=1}^n (p_{ik} - p'_{jk})\cos \theta_{jk}  \ ,
\\ \nonumber
&&   p_i \in P_1 \, , \  p'_j \in P_2 \ , 1 \le i \le M_1 \, , 1 \le j \le M_2 \, ,
\\
g''_{ij} &=&  \sum_{l=1}^n \sum_{k=1}^n \frac{ \partial^2 {V_{s_2}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_l} \partial{x_k}} \cos \theta _{il} \cos \theta _{jk} \, ,
\\ \nonumber
&&  p'_i, p'_j \in P_2 \, ,  \ 1 \le i \le M_2 \, , \  1 \le j \le M_2 \, , \ i \ne j \, ,
\\ \nonumber
&& \text{where}
\\ \nonumber
&& \frac{ \partial^2 {V_{s_2}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_k} \partial{x_k}} = \varepsilon^2 \exp (-\varepsilon | p'_i - p'_j | ) (1 + \varepsilon | p'_i - p'_j | - \varepsilon^2 (p'_{ik} - p'_{jk})^2) \, ,
\\ \nonumber
&& \frac{ \partial^2 {V_{s_2}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_l} \partial{x_k}} = -\varepsilon^4 \exp (-\varepsilon | p'_i - p'_j | )  (p'_{ik} - p'_{jk})(p'_{il} - p'_{jl}) \, , \ \  l \ne k \, ,
\\
 g''_{ii} &=& \varepsilon^2 \sum _{l=1}^n\  (\cos \theta_{il})^2  = \varepsilon^2 \, ,  \quad 1 \le i \le M_2 \, .
\end{eqnarray}
If the original task does not contain constraints involving derivatives of the function $\varphi$ ($P_2$ is an empty set, so there are no constraints (2) in (1)–(2)), we may construct an interpolating normal spline assuming $\varphi$ is an element of the Hilbert space $H^{s_0}_\varepsilon (R^n)$,  $s_0 =  n/2 + 1/2$. In this case it is a continuous function, $\varphi \in C (R^n)$, and the expressions for $h_i, g_{ij}$ are given by:
\begin{eqnarray}
  h_i (\eta, \varepsilon) &=&  \exp (-\varepsilon |\eta - p_i |) \ , \ \eta \in R^n, \  p_i \in P_1 \ , 1 \le i \le M_1 \, ,
\\
 g_{ij} &=& \exp (-\varepsilon | p_j - p_i | ) \ ,
\\ \nonumber
&& \  p_i, p_j \in P_1 \ , 1 \le i \le M_1 \, ,  \  1 \le j \le M_1 \, .
\end{eqnarray}
When the value of the parameter $\varepsilon$ is small, this spline is similar to a multivariate generalization of the one-dimensional linear spline.
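In this simplest case the construction reduces to ordinary radial basis interpolation with the kernel $\exp(-\varepsilon |\cdot|)$. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def normal_spline_s0(nodes, values, eps):
    """Interpolating normal spline in H^{s_0}_eps (value constraints only)."""
    nodes = np.asarray(nodes, dtype=float)
    # Gram matrix (33): g_ij = exp(-eps |p_i - p_j|)
    dist = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    mu = np.linalg.solve(np.exp(-eps * dist), np.asarray(values, dtype=float))

    def sigma(x):
        # spline (11) with the representers h_i from (32)
        return mu @ np.exp(-eps * np.linalg.norm(nodes - np.asarray(x), axis=-1))
    return sigma
```

The spline reproduces the given values at the nodes and decays away from them at the rate set by $\varepsilon$.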

We now consider the choice of values for the parameters $s$ and $\varepsilon$. The parameter $s$ defines the smoothness of the spline: if $s >  n/2$ then the spline belongs to $C^{[s - n/2 ]}$, where $[\cdot]$ denotes the integer part of a number. Moreover, discontinuities of higher-order derivatives are observed only at the nodes $p_i, p'_i$; at all other points of $R^n$ the spline is infinitely differentiable. With increasing $s$ the condition number of the Gram matrix grows and the computational properties of the approximation problem deteriorate. Thereby it makes sense to choose a reasonably small value for the parameter $s$.
A normal spline is similar to Duchon's $D^m$-spline [3] when the value of the parameter $\varepsilon$ is small. The approximating properties of the spline improve with smaller values of $\varepsilon$; however, with a decreasing value of $\varepsilon$ the condition number of the Gram matrix increases. Therefore, when choosing the value of the parameter $\varepsilon$, a compromise is needed. In practice, one should choose a value of $\varepsilon$ such that the condition number of the Gram matrix remains small enough. Numerical procedures for estimating the matrix condition number are well known (e.g. [4]).
In addition, as a rule, it is useful to preprocess the source data by scaling the domain $\Omega \subset R^n$ in which the interpolation nodes $p_i, p'_i$ are located into a unit hypercube.
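Both recommendations are straightforward to apply. A sketch (function names are illustrative) that scales the nodes into the unit hypercube and checks the Gram matrix condition number for a trial value of $\varepsilon$:

```python
import numpy as np

def scale_to_unit_cube(points):
    """Map scattered nodes into the unit hypercube [0, 1]^n."""
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(axis=0), points.max(axis=0)
    return (points - lo) / np.where(hi > lo, hi - lo, 1.0)

def gram_condition_s1(points, eps):
    """Condition number of the Gram matrix (22) for the space H^{s_1}_eps."""
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.linalg.cond(np.exp(-eps * d) * (1.0 + eps * d))
```

A larger $\varepsilon$ yields a better-conditioned matrix at the expense of approximation quality, so $\varepsilon$ can be decreased until the estimated condition number approaches an acceptable threshold.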

Often the exact values of $u_i, u'_i$ in (1), (2) are not known. Assuming that uniform error bounds $\delta_i, \delta'_i$ are given, we can formulate the problem of approximating the function $\varphi$ by constructing a uniform smoothing normal spline $\sigma_2$:
\begin{eqnarray}
&&  \sigma_2 = {\rm arg\,min}\{  \| \varphi \|^2_{H^{s_r}_\varepsilon} : (35), (36), \forall \varphi \in {H^{s_r}_\varepsilon} (R^n) \} \, ,
\\
&& | \varphi (p_i) - u_i  | \le \delta _i \, ,  \quad  p_i \in P_1 \ , 1 \le i \le M_1,                                              \\
&& \left|  \frac{ \partial{\varphi} }{ \partial{e_i} } (p'_i) -  u'_i  \right| \le \delta' _i \, , \  p'_i \in P_2 \ , 1 \le i \le M_2 \, .
\end{eqnarray}
In the case when a mean squared error $\delta^2$ is known, an approximating mse-smoothing normal spline $\sigma_3$ can be constructed as the solution of the following variational problem:
\begin{eqnarray}
&&  \sigma_3 = {\rm arg\,min}\{  \| \varphi \|^2_{H^{s_r}_\varepsilon} : (38)\, , \forall \varphi \in {H^{s_r}_\varepsilon} (R^n) \} \, ,
\\
&& \sum _{i=1}^{M_1} ( \varphi (p_i) - u_i )^2 +   \sum _{i=1}^{M_2}\left( \frac{ \partial{\varphi} }{ \partial{e_i} } (p'_i) - u'_i \right)^2 \le \delta ^2 \, .
\end{eqnarray} Constraints (35)–(36) and (38) define convex closed sets in the Hilbert space $H^{s_r}_\varepsilon (R^n)$, $s_r  = r + n/2 + 1/2, \,  r=1, 2, \dots$. Under the assumptions made earlier these sets are not empty, thus problems (34) and (37) are problems of finding the projection of the zero element of the Hilbert space onto a non-empty closed convex set. A solution of such a problem exists and is unique.

Algorithms for solving problems (34) and (37) will be described in the next posts.

The problem of constructing multivariate normal splines was treated earlier in [2], [7], [8] and presented at [9] and [10].

References

[1] A. Balakrishnan, Applied Functional Analysis, New York, Springer-Verlag, 1976.
[2] V. Gorbunov, I. Kohanovsky, K. Makedonsky, Normal splines in reconstruction of multi-dimensional dependencies. Papers of WSEAS International Conference on Applied Mathematics, Numerical Analysis Symposium, Corfu, 2004. (http://www.wseas.us/e-library/conferences/corfu2004/papers/488-312.pdf)
[3] J. Duchon, Splines minimizing rotation-invariant semi-norms in Sobolev spaces, Lect. Notes in Math., Vol. 571, Springer, Berlin, 1977
[4] W. Hager, Condition Estimates, SIAM Journal on Scientific and Statistical Computing, Vol. 5, No. 2, 1984.
[5] A. Ioffe, V. Tikhomirov, Theory of extremal problems, North-Holland, Amsterdam, 1979.
[6] P.-J. Laurent, Approximation et optimization, Paris, 1972.
[7] I. Kohanovsky, Normal Splines in Computing Tomography, Avtometriya, 1995, N 2. (https://www.iae.nsk.su/images/stories/5_Autometria/5_Archives/1995/2/84-89.pdf)
[8] I. Kohanovsky, Data approximation using multidimensional normal splines. Unpublished manuscript. 1996.
[9] I. Kohanovsky, Multidimensional Normal Splines and Problem of Physical Field Approximation, International Conference on Fourier Analysis and its Applications, Kuwait, 1998.
[10] I. Kohanovsky, Normal splines in fractional order Sobolev spaces and some of its applications, The Third Siberian Congress on Applied and Industrial Mathematics (INPRIM-98), Novosibirsk, 1998.

Friday, January 4, 2019

Reproducing Kernel of Bessel Potential space

$\setCounter{0}$
The standard definition of the Bessel potential space $H^s$ can be found in ([1], [2], [6], [11]). Here the normal splines will be constructed in the Bessel potential space $H^s_\varepsilon$ defined as:
\begin{eqnarray}
   H^s_\varepsilon (R^n) = \left\{ \varphi | \varphi \in S' ,
  ( \varepsilon ^2 + | \xi |^2 )^{s/2}{\cal F} [\varphi ] \in L_2 (R^n) \right\} ,
  \varepsilon > 0 ,  s > \frac{n}{2} .
\end{eqnarray} where $S'  (R^n)$ is the space of L. Schwartz tempered distributions, the parameter $s$ may be treated as a fractional differentiation order, and ${\cal F} [\varphi ]$ is the Fourier transform of $\varphi$. The parameter $\varepsilon$ introduced here may be treated as a "scaling parameter". It allows one to control the approximation properties of the normal spline, which usually improve with smaller values of $\varepsilon$; it may also be used to reduce the ill-conditioning of the related computational problem (in the traditional theory $\varepsilon = 1$).
Theoretical properties of the spaces $H^s_\varepsilon$ are identical for all $\varepsilon > 0$: they are Hilbert spaces with the inner product
\begin{eqnarray}
 \langle \varphi , \psi \rangle _{H^s_\varepsilon} =
 \int ( \varepsilon ^2  + | \xi |^2 )^s
      {\cal F} [\varphi ] \overline{ {\cal F} [\psi ] } \, d \xi
\end{eqnarray} and  norm
\begin{eqnarray}
 \| \varphi \|_ {H^s_\varepsilon} = \left( \langle \varphi , \varphi \rangle _{H^s_\varepsilon} \right)^{1/2} =
    \| (  \varepsilon ^2 + | \xi |^2 )^{s/2} {\cal F} [\varphi ] \|_{L_2} \ .
\end{eqnarray} It is easy to see that all the $\| \varphi \|_{H^s_\varepsilon}$ norms are equivalent. This means that the space $H^s_\varepsilon (R^n)$ coincides with $H^s (R^n) =  H^s_1 (R^n)$ up to norm equivalence.

Let's describe the Hölder spaces $C_b^t(R^n), t > 0$ ([9], [2]).
Definition 1. We denote the space
\begin{eqnarray}
   S(R^n) = \left\{ f | f \in C^\infty (R^n) ,  \sup_{x \in R^n} | x^\alpha D^\beta f(x) |  < \infty , \forall \alpha, \beta \in \mathbb{N}^n  \right\}
\end{eqnarray} as Schwartz space (or space of complex-valued rapidly decreasing infinitely differentiable functions defined on $R^n$) ([6], [7]).

Below is the definition of the Hölder space $C^t_b(R^n)$ given in [9]:
Definition 2: If $0 < t = [t] + \{t\}$, where $[t]$ is a non-negative integer and $0 < \{t\} < 1$, then $C^t_b(R^n)$ denotes the space
\begin{eqnarray}
   C^t_b (R^n) &=& \left\{ f | f \in C^{[t]}_b (R^n) ,  \| f  \|_{C^t_b}  < \infty \right\} , \\
  \| f  \|_{C^t_b} &=& \| f  \|_{C_b^{[t]}} \, + \,
\sum _{|\alpha | = [t]} \sup _{x \ne y} \frac {| D^\alpha  f(x)  - D^\alpha  f(y) | } { | x - y |^{\{t\}}} \\
  \| f  \|_{C_b^{[t]}} &=& \sup _{x \in R^n} | D^\alpha f(x) |, \forall \alpha : | \alpha | \le [t].
\end{eqnarray} Here the space $C^{[t]}_b (R^n)$ consists of all functions having bounded continuous derivatives up to order $[t]$. It is easy to see that $C_b^t(R^n)$ is a Banach space [9].

The connection between the Bessel potential spaces $H^s(R^n)$ and the spaces $C_b^t(R^n)$ is expressed by the following theorem ([9], [2]):
Embedding Theorem: If $s = n/2+t$, where $t > 0$ is non-integer, then the space $H^s(R^n)$ is continuously embedded in $C_b^t(R^n)$.

In particular, it follows from this theorem that if $f \in H^{n/2 + 1/2}_\varepsilon (R^n)$ then, corrected if necessary on a set of Lebesgue measure zero, it is uniformly continuous and bounded. Further, if $f \in H^{n/2 + 1/2 + r}_\varepsilon (R^n)$, where $r$ is a non-negative integer, then it can be treated as $f \in C^r (R^n)$, where $C^r (R^n)$ is the class of functions with $r$ continuous derivatives.

It can be shown ([3], [11], [8], [4], [5]) that the function
\begin{eqnarray}
 && V_s ( \eta , x, \varepsilon ) = c_V (n,s,\varepsilon) (\varepsilon |\eta - x | )^{s - \frac{n}{2}}
          K_{s - \frac{n}{2}} (\varepsilon |\eta - x | )     \ ,
\\
 && c_V (n,s,\varepsilon) = \frac{\varepsilon ^{n-2s}} { 2^{s-1} (2 \pi )^{n/2} \Gamma (s) }, \ \eta \in R^n, \  x \in R^n, \ \varepsilon > 0 ,  s > \frac{n}{2}
\end{eqnarray} is a reproducing kernel of the space $H^s_\varepsilon (R^n)$. Here $K_{\gamma}$ is the modified Bessel function of the second kind [10]. The exact value of $c_V (n,s,\varepsilon)$ is not important here and will be set to $\sqrt{\frac{2}{\pi}}$ for ease of further calculations. This reproducing kernel is sometimes called the Matérn kernel [12].

The Bessel function $K_{\gamma}$ becomes especially simple when $\gamma$ is a half-integer,
\begin{eqnarray}
 \gamma =  r  + \frac{1}{2} \ , (r = 0, 1, \dots ).
\end{eqnarray} In this case it is expressed via elementary functions (see [10]):
\begin{eqnarray}
K_{r+1/2}(t) &=&
   \sqrt{\frac{\pi} {2t}} t^{r+1} \left (
                   - \frac{1}{t} \frac{d}{dt} \right )^{r+1} \exp (-t) \ ,
\\
K_{r+1/2}(t) &=&
   \sqrt{\frac{\pi} {2t}} \exp (-t) \sum_{k=0}^r
                   \frac{(r+k)!}{k! (r-k)! (2t)^k} \ ,
   \  (r = 0, 1, \dots ) \  .
\end{eqnarray}
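The closed form is easy to verify numerically against the integral representation $K_\gamma(t) = \int_0^\infty e^{-t \cosh u} \cosh(\gamma u)\, du$; the helper names below are illustrative:

```python
import numpy as np
from math import factorial, pi, sqrt, exp

def K_half_integer(r, t):
    """K_{r+1/2}(t) via the finite sum over elementary functions."""
    s = sum(factorial(r + k) / (factorial(k) * factorial(r - k) * (2.0 * t) ** k)
            for k in range(r + 1))
    return sqrt(pi / (2.0 * t)) * exp(-t) * s

def K_numeric(gamma, t, n=200001, umax=20.0):
    """K_gamma(t) via its integral representation, trapezoidal rule."""
    u = np.linspace(0.0, umax, n)
    f = np.exp(-t * np.cosh(u)) * np.cosh(gamma * u)
    du = u[1] - u[0]
    return du * (f.sum() - 0.5 * (f[0] + f[-1]))
```

For example, $K_{1/2}(t) = \sqrt{\pi/(2t)}\, e^{-t}$ is recovered from the sum with $r = 0$.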
Let $s_r =  r + \frac{n}{2} + \frac{1}{2}, \  r = 0, 1, \dots$; then $H^{s_r}_\varepsilon(R^n)$ is continuously embedded in $C_b^r(R^n)$ and its reproducing kernel, up to a constant factor, can be presented as follows
\begin{eqnarray}
 V_{r + \frac{n}{2} + \frac{1}{2}}(\eta , x, \varepsilon) &=& \exp (-\varepsilon |\eta - x |) \
     \sum_{k=0}^{r} \frac{(r+k)!}{2^k k! (r-k)!} (\varepsilon |\eta - x |)^{r-k} \ ,
\\
\nonumber
 &&     (r = 0, 1, \dots ) \  .
\end{eqnarray} In particular (up to a constant factor):
\begin{eqnarray}
&&  V_{\frac{n}{2} + \frac{1}{2}}(\eta , x, \varepsilon) = \exp (-\varepsilon |\eta - x |) \ ,
\\
&&   V_{1 + \frac{n}{2} + \frac{1}{2}}(\eta , x, \varepsilon) = \exp (-\varepsilon |\eta - x |)
                             (1 + \varepsilon |\eta - x |) \ ,
\\
&&   V_{2 + \frac{n}{2} + \frac{1}{2}}(\eta , x, \varepsilon) = \exp (-\varepsilon |\eta - x |)
             (3 + 3\varepsilon |\eta - x | + \varepsilon ^2 |\eta - x | ^2 ) \ .
\end{eqnarray}
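The general expression and the three particular kernels are consistent, which can be confirmed with a small check (the helper name is illustrative):

```python
import numpy as np
from math import factorial

def V_kernel(r, eta, x, eps):
    """Reproducing kernel V_{r + n/2 + 1/2} up to a constant factor."""
    t = eps * np.linalg.norm(np.asarray(eta, dtype=float) - np.asarray(x, dtype=float))
    # finite sum from the half-integer Bessel function representation
    return np.exp(-t) * sum(
        factorial(r + k) / (2 ** k * factorial(k) * factorial(r - k)) * t ** (r - k)
        for k in range(r + 1))
```

Evaluating `V_kernel` at $r = 0, 1, 2$ reproduces $e^{-t}$, $e^{-t}(1 + t)$ and $e^{-t}(3 + 3t + t^2)$ with $t = \varepsilon |\eta - x|$.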

References

[1] D.R. Adams, L.I. Hedberg, Function spaces and potential theory. Berlin, Springer, 1996.
[2] M.S. Agranovich, Sobolev Spaces, Their Generalizations and Elliptic Problems in
Smooth and Lipschitz Domains, Springer, Switzerland, 2015.
[3] N. Aronszajn, K.T. Smith, Theory of Bessel potentials I, Ann. Inst. Fourier, 11, 1961.
[4] A. Imamov, M. Jurabaev, Splines in Sobolev spaces. Deposited report. UzNIINTI, 24.07.89, No 880.
[5] I. Kohanovsky, Data approximation using multidimensional normal splines. Unpublished manuscript. 1996.
[6] S. M. Nikol'skiĭ, Approximation of functions of several variables and imbedding theorems, Grundl. Math. Wissensch., 205, Springer-Verlag, New York, 1975.
[7] M. Reed and B. Simon, Methods of Modern Mathematical Physics, I: Functional Analysis, Academic Press, 1972.
[8] R. Schaback, Kernel-based Meshless Methods, Lecture Notes, Goettingen, 2011.
[9] H. Triebel, Interpolation. Function Spaces. Differential Operators. North-Holland, Amsterdam, 1978.
[10] G.N. Watson, A Treatise on the Theory of Bessel Functions ( 2nd.ed.), Cambridge University Press, 1966.
[11] H. Wendland, Scattered Data Approximation. Cambridge University Press, 2005.
[12] G.E. Fasshauer, Green’s Functions: Taking Another Look at Kernel Approximation, Radial Basis Functions, and Splines. In: Neamtu M., Schumaker L. (eds) Approximation Theory XIII: San Antonio 2010. Springer Proceedings in Mathematics, vol 13. Springer, New York, 2012.