Suppose P_1 = \{ p_i, \, p_i \in R^n \}_{i=1}^{M_1} and P_2 = \{ p'_i, \, p'_i \in R^n \}_{i=1}^{M_2} are sets of data locations lying in a domain \Omega \subset R^n.
Let the functional f_i be the value of a function \varphi at a node p_i \in P_1, and the functional f'_i be the value of the directional derivative (the slope) of that function at a node p'_i \in P_2. Assume the values of the functionals f_i and f'_i at the nodes belonging to P_1 and P_2 are known:
\begin{eqnarray}
(f_i, \varphi) = \varphi (p_i) = u_i \, , \quad p_i \in P_1 \, , \ 1 \le i \le M_1 \, , \\
(f'_i, \varphi) = \frac{\partial \varphi}{\partial e_i} (p'_i) = u'_i \, , \quad p'_i \in P_2 \, , \ 1 \le i \le M_2 \, ,
\end{eqnarray}
where \frac{\partial \varphi}{\partial e_i} (p'_i) = \sum_{k=1}^{n} \frac{\partial \varphi}{\partial x_k} (p'_i) \cos \theta_{ik}, and \cos \theta_{ik}, \ 1 \le k \le n, are the direction cosines of the unit vector e_i.
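For instance, in two dimensions a slope value u'_i is just the gradient of \varphi dotted with the direction cosines; a tiny sketch with a made-up function and direction:

```python
import numpy as np

# Directional derivative of a sample function phi(x, y) = x**2 * y at a point p,
# computed as the gradient dotted with the direction cosines of a unit vector e
# (both the function and the data here are hypothetical).
p = np.array([1.0, 2.0])
grad = np.array([2.0 * p[0] * p[1], p[0] ** 2])  # analytic gradient of phi at p
e = np.array([1.0, 1.0]) / np.sqrt(2.0)          # cos(theta_1), cos(theta_2)
slope = grad @ e                                 # value of d(phi)/d(e) at p
```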
Consider the problem of reconstructing the function \varphi. Let the function to be approximated \varphi belong to the Hilbert space H^s_\varepsilon (R^n), where s > r + n/2, \ r = 1, 2, \dots. In that case the space H^s_\varepsilon is continuously embedded in the Hölder space C^r_b (R^n), so \varphi \in C^r (R^n), and the values of the function \varphi and of its partial derivatives at fixed points are linear continuous functionals on H^s_\varepsilon.
We assume that P_1 does not contain coincident nodes and that no node of P_2 carries directional derivatives with linearly dependent directions; a node from P_1 may coincide with a node from P_2. Under these restrictions the functionals f_i, f'_i are linearly independent.
In accordance with the Riesz representation theorem [1], the linear continuous functionals f_i, f'_i can be represented as inner products of some elements h_i, h'_i \in H^s_\varepsilon with \varphi, for any \varphi \in H^s_\varepsilon:
\begin{eqnarray}
(f_i, \varphi) = {\langle h_i, \varphi \rangle}_{H^{s_r}_\varepsilon} \, , \quad (f'_i, \varphi) = {\langle h'_i, \varphi \rangle}_{H^{s_r}_\varepsilon} \, , \quad \forall \varphi \in H^s_\varepsilon \, .
\end{eqnarray}
The notation {\langle \cdot , \cdot \rangle}_{H^{s_r}_\varepsilon} in (3) denotes the inner product in the space H^{s_r}_\varepsilon (see the previous post for details). It is easy to see that under the above assumptions the elements h_i, h'_i are linearly independent.
Let V_{s_r}(\eta, x, \varepsilon), \ s_r = r + n/2 + 1/2, \ r = 1, 2, \dots, be the reproducing kernel of the Hilbert space H^{s_r}_\varepsilon (R^n) (see the previous post). The kernel V_{s_r}(\eta, x, \varepsilon) belongs to C^r (R^n) as a function of \eta \in R^n. In accordance with the reproducing property of the kernel, for any \varphi \in H^{s_r}_\varepsilon (R^n) and x \in R^n the identity
\begin{eqnarray}
\varphi (x) = {\langle V_{s_r}(\cdot , x, \varepsilon), \varphi \rangle}_{H^{s_r}_\varepsilon}
\end{eqnarray}
holds. This means that V_{s_r}(\cdot , x, \varepsilon) is the Riesz representer of the evaluation functional \delta_x defined as the value of a function \varphi \in H^{s_r}_\varepsilon at a given point x \in R^n. Since V_{s_r} is a continuously differentiable radial basis function, we can differentiate the identity (4) and obtain the identity
\begin{eqnarray}
\frac{\partial \varphi (x)}{\partial x_i} = {\left\langle \frac{\partial V_{s_r}(\cdot , x, \varepsilon)}{\partial x_i} , \varphi \right\rangle}_{H^{s_r}_\varepsilon} \, ,
\end{eqnarray}
which holds for any \varphi \in H^{s_r}_\varepsilon and x \in R^n. It means that the function \frac{\partial V_{s_r}(\cdot , x, \varepsilon)}{\partial x_i} represents the point-wise functional defined as the value of \frac{\partial \varphi (\cdot)}{\partial x_i} at the point x.
Thereby
\begin{eqnarray}
h_i (\eta, \varepsilon) &=& V_{s_r}(\eta, p_i, \varepsilon) \, , \quad p_i \in P_1 \, , \ 1 \le i \le M_1 \, , \\
h'_i (\eta, \varepsilon) &=& \frac{\partial V_{s_r}(\eta, p'_i, \varepsilon)}{\partial e_i} \, , \quad p'_i \in P_2 \, , \ 1 \le i \le M_2 \, , \quad \eta \in R^n \, ,
\end{eqnarray}
and the system of constraints (1), (2) can be presented as a system of linear equations in the Hilbert space H^{s_r}_\varepsilon:
\begin{eqnarray}
{\langle h_i, \varphi \rangle}_{H^{s_r}_\varepsilon} = u_i \, , \quad 1 \le i \le M_1 \, , \\
{\langle h'_i, \varphi \rangle}_{H^{s_r}_\varepsilon} = u'_i \, , \quad 1 \le i \le M_2 \, , \qquad h_i, h'_i, \varphi \in H^{s_r}_\varepsilon (R^n) \, .
\end{eqnarray}
This system of constraints defines a convex and closed set in the Hilbert space, namely an intersection of a finite number of hyperplanes (this set is not empty because of the linear independence of the elements h_i, h'_i).
The problem of reconstructing a function \varphi satisfying the system of constraints (8) and (9) is underdetermined. We reformulate it as the problem of finding a solution of the system (8), (9) that has minimal norm:
\begin{eqnarray}
\sigma_1 = {\rm arg\,min}\{ \| \varphi \|^2_{H^{s_r}_\varepsilon} : (8), (9), \ \forall \varphi \in {H^{s_r}_\varepsilon} (R^n) \} \, ;
\end{eqnarray}
such a solution exists and is unique [1, 6]. The element \sigma_1 is an interpolating normal spline.
In accordance with the generalized Lagrange method ([5], [6]), the solution of problem (10) can be written as follows:
\begin{eqnarray} \sigma_1 = \sum _{j=1}^{M_1} \mu_j h_j + \sum _{j=1}^{M_2} \mu'_j h'_j \ , \end{eqnarray}
where the coefficients \mu_j, \mu'_j are defined by the system
\begin{eqnarray} \sum _{j=1}^{M_1} g_{ij} \mu_j + \sum _{j=1}^{M_2} g'_{ij} \mu'_j &=& u_i \, , \quad 1 \le i \le M_1 \, , \\ \sum _{j=1}^{M_1} g'_{ji} \mu_j + \sum _{j=1}^{M_2} g''_{ij} \mu'_j &=& u'_i \, , \quad 1 \le i \le M_2 \, , \end{eqnarray}
where g_{ij}, g'_{ij}, g''_{ij} are the elements of the symmetric positive definite Gram matrix of the set of linearly independent elements h_i, h'_i \in H^{s_r}_\varepsilon (R^n):
\begin{eqnarray} g_{ij} = {\langle h_i, h_j \rangle}_{H^{s_r}_\varepsilon} = (f_i, h_j) \, , \quad 1 \le i \le M_1 \, , \ 1 \le j \le M_1 \, , \\ g'_{ij} = {\langle h_i, h'_j \rangle}_{H^{s_r}_\varepsilon} = (f_i, h'_j) \, , \quad 1 \le i \le M_1 \, , \ 1 \le j \le M_2 \, , \\ g''_{ij} = {\langle h'_i, h'_j \rangle}_{H^{s_r}_\varepsilon} = (f'_i, h'_j) \, , \quad 1 \le i \le M_2 \, , \ 1 \le j \le M_2 \, . \end{eqnarray} Taking into account (3), (6), (7), we obtain:
\begin{eqnarray} g_{ij} &=& V_{s_r}(p_i , p_j, \varepsilon) \, , \quad p_i, p_j \in P_1 , \ 1 \le i \le M_1 , \ 1 \le j \le M_1 , \\ \nonumber \\ g'_{ij} &=& \frac{ \partial {V_{s_r}(p_i, p'_j, \varepsilon)} }{\partial{e_j}} = \sum_{k=1}^n \frac{ \partial {V_{s_r}(p_i, p'_j, \varepsilon)} }{\partial{x_k}} \cos \theta _{jk} \, , \\ \nonumber && p_i \in P_1 \, , \ 1 \le i \le M_1 , \ p'_j \in P_2 , \ 1 \le j \le M_2 \, , \\ g''_{ij} &=& \frac{ \partial^2 {V_{s_r}(p'_i, p'_j, \varepsilon)} } {\partial{e_i} \partial{e_j}} = \sum_{l=1}^n \sum_{k=1}^n \frac{ \partial^2 {V_{s_r}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_l} \partial{x_k}} \cos \theta _{il} \cos \theta _{jk} \, ,\\ \nonumber && p'_i, p'_j \in P_2 , \ 1 \le i \le M_2 \, , \ 1 \le j \le M_2 \, . \end{eqnarray}
Thus the problem (10) is reduced to solving the system of linear equations (12), (13) with the Gram matrix defined by (17) — (19) and constructing the normal spline (11).
Assume the function to be approximated \varphi belongs to the Hilbert space H^{s_1}_\varepsilon (R^n), \ s_1 = 1 + n/2 + 1/2 (see the previous post). In this case it is a continuously differentiable function, \varphi \in C^1 (R^n). Let's write down the expressions for h_j, h'_j, g_{ij}, g'_{ij}, g''_{ij} in the space H^{s_1}_\varepsilon (R^n):
\begin{eqnarray} h_j (\eta, \varepsilon) &=& \exp (-\varepsilon |\eta - p_j |) (1 + \varepsilon |\eta - p_j |) \ , \\ \nonumber && p_j \in P_1 \ , \ 1 \le j \le M_1 \, , \\ h'_j (\eta, \varepsilon) &=& \varepsilon^2 \exp (-\varepsilon | \eta - p'_j | ) \sum _{k=1}^n (\eta_k - p'_{jk})\cos \theta_{jk} \ , \\ \nonumber && p'_j \in P_2 \ , \ 1 \le j \le M_2 \, , \quad \eta \in R^n \, , \end{eqnarray}
then
\begin{eqnarray} g_{ij} &=& \exp (-\varepsilon | p_i - p_j | )(1 + \varepsilon | p_i - p_j | ) \ , \ p_i, p_j \in P_1 \ , 1 \le i \le M_1 \, , \\ g'_{ij} &=& \varepsilon^2 \exp (-\varepsilon | p_i - p'_j | ) \sum _{k=1}^n (p_{ik} - p'_{jk})\cos \theta_{jk} \ , \\ \nonumber && p_i \in P_1 \, , \ p'_j \in P_2 \ , 1 \le i \le M_1 \, , \ 1 \le j \le M_2 \, , \\ g''_{ij} &=& \sum_{l=1}^n \sum_{k=1}^n \frac{ \partial^2 {V_{s_1}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_l} \partial{x_k}} \cos \theta _{il} \cos \theta _{jk} \, ,\\ \nonumber && p'_i, p'_j \in P_2 , \ 1 \le i \le M_2 \, , \ 1 \le j \le M_2 \, , i \ne j \, , \\ \nonumber && \text{where} \\ \nonumber && \frac{ \partial^2 {V_{s_1}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_k} \partial{x_k}} = \varepsilon^2 \exp (-\varepsilon | p'_i - p'_j | ) \left( 1 - \varepsilon \frac {(p'_{ik} - p'_{jk})^2}{| p'_i - p'_j |} \right) \, , \\ \nonumber && \frac{ \partial^2 {V_{s_1}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_l} \partial{x_k}} = -\varepsilon^3 \exp (-\varepsilon | p'_i - p'_j | ) \frac {(p'_{ik} - p'_{jk})(p'_{il} - p'_{jl})}{| p'_i - p'_j |} \, , \ \ l \ne k \, , \\ g''_{ii} &=& \varepsilon^2 \sum _{l=1}^n\ (\cos \theta_{il})^2 = \varepsilon^2 \, , \quad 1 \le i \le M_2 \, . \end{eqnarray}
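To make the construction concrete, here is a small numerical sketch of the interpolating normal spline in H^{s_1}_\varepsilon for n = 2, built from the kernel formulas above; the nodes, values, slope direction, and \varepsilon are made up for illustration.

```python
import numpy as np

def v1(eta, p, eps):
    # Reproducing kernel of H^{s_1}_eps: exp(-eps*r) * (1 + eps*r), r = |eta - p|
    r = np.linalg.norm(eta - p)
    return np.exp(-eps * r) * (1.0 + eps * r)

def dv1(eta, p, eps, e):
    # h'-type representer: derivative of v1 in its second argument along unit vector e
    r = np.linalg.norm(eta - p)
    return eps**2 * np.exp(-eps * r) * np.dot(eta - p, e)

def d2v1(pi, pj, eps, ei, ej):
    # g''-type element: e_i^T [mixed second derivative of V_{s_1}] e_j
    d = pi - pj
    r = np.linalg.norm(d)
    if r == 0.0:
        return eps**2 * np.dot(ei, ej)  # coincident-node limit, g''_ii = eps^2
    H = eps**2 * np.exp(-eps * r) * (np.eye(d.size) - eps * np.outer(d, d) / r)
    return ei @ H @ ej

# Hypothetical data: two value nodes (P1, u) and one slope node (P2, e, up)
eps = 1.0
P1 = np.array([[0.0, 0.0], [1.0, 0.0]]); u = np.array([1.0, 2.0])
P2 = np.array([[0.5, 0.5]]); e = np.array([[1.0, 0.0]]); up = np.array([0.5])
M1, M2 = len(P1), len(P2)

# Assemble the symmetric positive definite Gram matrix of system (12), (13)
G = np.empty((M1 + M2, M1 + M2))
for i in range(M1):
    for j in range(M1):
        G[i, j] = v1(P1[i], P1[j], eps)
    for j in range(M2):
        G[i, M1 + j] = G[M1 + j, i] = dv1(P1[i], P2[j], eps, e[j])
for i in range(M2):
    for j in range(M2):
        G[M1 + i, M1 + j] = d2v1(P2[i], P2[j], eps, e[i], e[j])

mu = np.linalg.solve(G, np.concatenate([u, up]))  # coefficients mu_j, mu'_j

def sigma(x):
    # Normal spline (11): weighted sum of the representers h_j, h'_j
    h = [v1(x, P1[j], eps) for j in range(M1)]
    h += [dv1(x, P2[j], eps, e[j]) for j in range(M2)]
    return mu @ np.array(h)
```

By construction the spline reproduces the data: \sigma(p_i) = u_i at the value nodes, and its slope along e_1 at the slope node equals u'_1.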
Consider the case when the function to be approximated \varphi may be treated as an element of the Hilbert space H^{s_2}_\varepsilon (R^n), \ s_2 = 2 + n/2 + 1/2. In this case it is a twice continuously differentiable function, \varphi \in C^2 (R^n). Let's write down the expressions for h_j, h'_j in the space H^{s_2}_\varepsilon (R^n):
\begin{eqnarray} h_j (\eta, \varepsilon) &=& \exp (-\varepsilon |\eta - p_j |) (3 + 3 \varepsilon |\eta - p_j | + \varepsilon^2 |\eta - p_j |^2) \ , \\ \nonumber && p_j \in P_1 \ , 1 \le j \le M_1 \, , \\ h'_j (\eta, \varepsilon) &=& \varepsilon^2 \exp (-\varepsilon |\eta - p'_j | ) (1 + \varepsilon |\eta - p'_j |) \sum _{k=1}^n (\eta_k - p'_{jk})\cos \theta_{jk} \ , \\ \nonumber && p'_j \in P_2 \, , \ \ 1 \le j \le M_2 \, , \ \eta \in R^n \, , \end{eqnarray}
and corresponding Gram matrix elements g_{ij}, g'_{ij}, g''_{ij} are as follows
\begin{eqnarray} g_{ij} &=& \exp (-\varepsilon |p_i - p_j |) (3 + 3 \varepsilon |p_i - p_j | + \varepsilon^2 |p_i - p_j |^2) \ , \\ \nonumber && p_i, p_j \in P_1 \ , 1 \le i \le M_1 \, , \ 1 \le j \le M_1 \, , \\ g'_{ij} &=& \varepsilon^2 \exp (-\varepsilon |p_i - p'_j | ) (1 + \varepsilon |p_i - p'_j |) \sum _{k=1}^n (p_{ik} - p'_{jk})\cos \theta_{jk} \ , \\ \nonumber && p_i \in P_1 \, , \ p'_j \in P_2 \ , 1 \le i \le M_1 \, , 1 \le j \le M_2 \, , \\ g''_{ij} &=& \sum_{l=1}^n \sum_{k=1}^n \frac{ \partial^2 {V_{s_2}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_l} \partial{x_k}} \cos \theta _{il} \cos \theta _{jk} \, ,\\ \nonumber && p'_i, p'_j \in P_2 , \ 1 \le i \le M_2 \, , \ 1 \le j \le M_2 \, , i \ne j \, , \\ \nonumber && \text{where} \\ \nonumber && \frac{ \partial^2 {V_{s_2}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_k} \partial{x_k}} = \varepsilon^2 \exp (-\varepsilon | p'_i - p'_j | ) (1 + \varepsilon | p'_i - p'_j | - \varepsilon^2 (p'_{ik} - p'_{jk})^2) \, , \\ \nonumber && \frac{ \partial^2 {V_{s_2}(p'_i, p'_j, \varepsilon)} } {\partial{\eta_l} \partial{x_k}} = -\varepsilon^4 \exp (-\varepsilon | p'_i - p'_j | ) (p'_{ik} - p'_{jk})(p'_{il} - p'_{jl}) \, , \ \ l \ne k \, , \\ g''_{ii} &=& \varepsilon^2 \sum _{l=1}^n\ (\cos \theta_{il})^2 = \varepsilon^2 \, , \quad 1 \le i \le M_2 \, . \end{eqnarray}
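As a quick sanity check on the s_2 formulas, the closed-form derivative representer h'_j can be compared against a central finite difference of the kernel in its second argument; the points, direction, and \varepsilon below are arbitrary.

```python
import numpy as np

def v2(eta, p, eps):
    # Reproducing kernel of H^{s_2}_eps: exp(-eps*r) * (3 + 3*eps*r + (eps*r)**2)
    r = np.linalg.norm(eta - p)
    return np.exp(-eps * r) * (3.0 + 3.0 * eps * r + (eps * r) ** 2)

def dv2(eta, p, eps, e):
    # h'-type representer: eps^2 * exp(-eps*r) * (1 + eps*r) * <eta - p, e>
    r = np.linalg.norm(eta - p)
    return eps**2 * np.exp(-eps * r) * (1.0 + eps * r) * np.dot(eta - p, e)

# Compare the closed form against a central difference in the second argument
eps = 1.5
eta = np.array([0.3, 0.7]); p = np.array([1.0, 0.2])
e = np.array([1.0, 1.0]) / np.sqrt(2.0)
t = 1e-6
fd = (v2(eta, p + t * e, eps) - v2(eta, p - t * e, eps)) / (2 * t)
```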
If the original problem does not contain constraints involving derivatives of \varphi (P_2 is an empty set, so there are no constraints (2) in (1)—(2)), we may construct an interpolating normal spline assuming \varphi is an element of the Hilbert space H^{s_0}_\varepsilon (R^n), \ s_0 = n/2 + 1/2. In this case it is a continuous function, \varphi \in C (R^n), and the expressions for h_i, g_{ij} are:
\begin{eqnarray} h_i (\eta, \varepsilon) &=& \exp (-\varepsilon |\eta - p_i |) \, , \quad \eta \in R^n \, , \ p_i \in P_1 \, , \ 1 \le i \le M_1 \, , \\ g_{ij} &=& \exp (-\varepsilon | p_j - p_i | ) \, , \\ \nonumber && p_i, p_j \in P_1 \, , \ 1 \le i \le M_1 \, , \ 1 \le j \le M_1 \, . \end{eqnarray}
When the value of the parameter \varepsilon is small, this spline is similar to a multivariate generalization of the one-dimensional linear spline.
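For the derivative-free case the whole construction collapses to a plain kernel interpolation; a minimal sketch with made-up 1-D data (the helper name is ours):

```python
import numpy as np

def interpolate_s0(nodes, values, eps):
    # Gram matrix of the H^{s_0}_eps representers h_i(eta) = exp(-eps*|eta - p_i|)
    D = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    mu = np.linalg.solve(np.exp(-eps * D), values)
    def sigma(x):
        # Normal spline: weighted sum of the exponential representers
        return mu @ np.exp(-eps * np.linalg.norm(nodes - x, axis=-1))
    return sigma

# Hypothetical value nodes on [0, 1]
nodes = np.array([[0.0], [0.4], [1.0]])
values = np.array([0.0, 1.0, 0.5])
sigma = interpolate_s0(nodes, values, eps=1.0)
```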
We now consider the choice of values for the parameters s and \varepsilon. The parameter s defines the smoothness of the spline: if s > n/2, the spline belongs to C^{[s - n/2]}, where [\cdot] denotes the integer part of a number. Moreover, discontinuities of higher-order derivatives occur only at the nodes p_i, p'_i; at all other points of R^n the spline is infinitely differentiable. With increasing s the condition number of the Gram matrix grows, and the computational properties of the approximation problem worsen. Thereby it makes sense to choose a reasonably small value of the parameter s.
A normal spline is similar to Duchon's D^m-spline [3] when the value of the parameter \varepsilon is small. The approximating properties of the spline improve with smaller values of \varepsilon; however, as \varepsilon decreases, the condition number of the Gram matrix increases. Therefore a compromise is needed when choosing the value of \varepsilon: in practice one should pick a value for which the condition number of the Gram matrix remains reasonably small. Numerical procedures for estimating a matrix condition number are well known (e.g. [4]).
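The trade-off can be observed directly by computing the condition number of the s_1 Gram matrix for a few values of \varepsilon (the nodes are chosen arbitrarily; np.linalg.cond returns the exact 2-norm condition number rather than an estimate of the kind described in [4]):

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 10)[:, None]            # 10 value nodes on [0, 1]
D = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
conds = {}
for eps in (10.0, 1.0, 0.1):
    G = np.exp(-eps * D) * (1.0 + eps * D)            # s_1 Gram matrix of the value constraints
    conds[eps] = np.linalg.cond(G)
    print(f"eps = {eps:5.1f}  cond(G) = {conds[eps]:.3e}")
```

The printed values grow rapidly as \varepsilon shrinks, which is exactly the compromise discussed above.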
As a rule it is also useful to preprocess the source data by scaling the domain \Omega \subset R^n, in which the interpolation nodes p_i, p'_i are located, into a unit hypercube.
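A possible preprocessing helper for this scaling (the function name and example nodes are ours):

```python
import numpy as np

def scale_to_unit_cube(nodes):
    # Affine map sending the bounding box of the nodes onto the unit hypercube
    lo = nodes.min(axis=0)
    span = nodes.max(axis=0) - lo
    span = np.where(span > 0.0, span, 1.0)  # guard against degenerate coordinates
    return (nodes - lo) / span, lo, span     # scaled nodes plus the map parameters

# Example with arbitrary 2-D nodes
pts = np.array([[10.0, -2.0], [30.0, 6.0], [20.0, 0.0]])
scaled, lo, span = scale_to_unit_cube(pts)
```

Keeping lo and span allows mapping query points into the same cube before evaluating the spline.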
Often the exact values of u_i, u'_i in (1), (2) are not known. Assuming that uniform error bounds \delta_i, \delta'_i are given, we can formulate the problem of approximating \varphi by constructing a uniform smoothing normal spline \sigma_2:
\begin{eqnarray} && \sigma_2 = {\rm arg\,min}\{ \| \varphi \|^2_{H^{s_r}_\varepsilon} : (35), (36), \forall \varphi \in {H^{s_r}_\varepsilon} (R^n) \} \, , \\ && | \varphi (p_i) - u_i | \le \delta _i \, , \quad p_i \in P_1 \ , 1 \le i \le M_1, \\ && \left| \frac{ \partial{\varphi} }{ \partial{e_i} } (p'_i) - u'_i \right| \le \delta' _i \, , \ p'_i \in P_2 \ , 1 \le i \le M_2 \, . \end{eqnarray}
In the case when a mean squared error \delta^2 is known, an approximating mse-smoothing normal spline \sigma_3 can be constructed as the solution of the following variational problem:
\begin{eqnarray}
&& \sigma_3 = {\rm arg\,min}\{ \| \varphi \|^2_{H^{s_r}_\varepsilon} : (38)\, , \forall \varphi \in {H^{s_r}_\varepsilon} (R^n) \} \, ,
\\
&& \sum _{i=1}^{M_1} ( \varphi (p_i) - u_i )^2 + \sum _{i=1}^{M_2} \left( \frac{ \partial{\varphi} }{ \partial{e_i} } (p'_i) - u'_i \right)^2 \le \delta ^2 \, .
\end{eqnarray} Constraints (35)—(36) and (38) define convex closed sets in the Hilbert space H^{s_r}_\varepsilon (R^n), \ s_r = r + n/2 + 1/2, \, r=1, 2, \dots. Under the assumptions made earlier these sets are not empty; thus problems (34) and (37) are problems of finding the projection of the zero element of the Hilbert space onto a non-empty closed convex set. A solution of such a problem exists and is unique.
Algorithms for solving problems (34) and (37) will be described in the next posts.
Earlier, the problem of constructing multivariate normal splines was treated in [2], [7], [8] and presented at [9] and [10].
[1] A. Balakrishnan, Applied Functional Analysis, New York, Springer-Verlag, 1976.
[2] V. Gorbunov, I. Kohanovsky, K. Makedonsky, Normal splines in reconstruction of multi-dimensional dependencies. Papers of WSEAS International Conference on Applied Mathematics, Numerical Analysis Symposium, Corfu, 2004. (http://www.wseas.us/e-library/conferences/corfu2004/papers/488-312.pdf)
[3] J. Duchon, Splines minimizing rotation-invariant semi-norms in Sobolev spaces, Lect. Notes in Math., Vol. 571, Springer, Berlin, 1977
[4] W. Hager, Condition Estimates, SIAM Journal on Scientific and Statistical Computing, Vol. 5, N. 2, 1984.
[5] A. Ioffe, V. Tikhomirov, Theory of extremal problems, North-Holland, Amsterdam, 1979.
[6] P.-J. Laurent, Approximation et optimisation, Paris, 1972.
[7] I. Kohanovsky, Normal Splines in Computing Tomography, Avtometriya, 1995, N 2. (https://www.iae.nsk.su/images/stories/5_Autometria/5_Archives/1995/2/84-89.pdf)
[8] I. Kohanovsky, Data approximation using multidimensional normal splines. Unpublished manuscript. 1996.
[9] I. Kohanovsky, Multidimensional Normal Splines and Problem of Physical Field Approximation, International Conference on Fourier Analysis and its Applications, Kuwait, 1998.
[10] I. Kohanovsky, Normal splines in fractional order Sobolev spaces and some of its applications, The Third Siberian Congress on Applied and Industrial Mathematics (INPRIM-98), Novosibirsk, 1998.