
Tuesday, January 29, 2019

Hermite-Birkhoff Interpolation of Scattered Data


Suppose $P_1 = \{ p_i : p_i \in R^n \}_{i=1}^{M_1}$ and $P_2 = \{ p'_i : p'_i \in R^n \}_{i=1}^{M_2}$ are sets of data locations lying in a domain $\Omega \subset R^n$.
Let the functional $f_i$ be the value of a function $\varphi$ at a node $p_i \in P_1$, and let the functional $f'_i$ be the value of that function's directional derivative (a slope of the function) at a node $p'_i \in P_2$.
Assume the values of the functionals $f_i$ and $f'_i$ at the nodes belonging to $P_1$ and $P_2$ are known:
$$ (f_i, \varphi) = \varphi(p_i) = u_i \,, \quad p_i \in P_1 \,, \ 1 \le i \le M_1 \,, \qquad (1) $$
$$ (f'_i, \varphi) = \frac{\partial \varphi}{\partial e_i}(p'_i) = u'_i \,, \quad p'_i \in P_2 \,, \ 1 \le i \le M_2 \,, \qquad (2) $$
where $\frac{\partial \varphi}{\partial e_i}(p'_i) = \sum_{k=1}^n \frac{\partial \varphi}{\partial x_k}(p'_i) \cos \theta_{ik}$, and $\cos \theta_{ik}, \ 1 \le k \le n$, are the direction cosines of a unit vector $e_i$.
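In code, constraint (2) is just the dot product of the gradient with the unit direction vector. A minimal NumPy check, using a hypothetical test function (all names below are illustration only, not part of the construction):

```python
import numpy as np

# Hypothetical test function phi(x) = x1^2 + 3*x2 and its gradient.
def phi(x):
    return x[0]**2 + 3.0 * x[1]

def grad_phi(x):
    return np.array([2.0 * x[0], 3.0])

p = np.array([1.0, 2.0])
e = np.array([3.0, 4.0])
e = e / np.linalg.norm(e)      # unit vector; its components are the direction cosines

# Directional derivative as the sum of partials weighted by direction cosines.
slope = np.dot(grad_phi(p), e)

# Compare with a central finite difference along e.
t = 1e-6
fd = (phi(p + t * e) - phi(p - t * e)) / (2.0 * t)
print(slope, fd)               # both are close to 3.6
```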

Consider the problem of reconstructing the function $\varphi$. Let the function to be approximated $\varphi$ belong to the Hilbert space $H^s_\varepsilon(R^n)$, where $s > r + n/2$, $r = 1, 2, \dots$. In that case the space $H^s_\varepsilon$ is continuously embedded in the Hölder space $C^r_b(R^n)$, so $\varphi(x) \in C^r(R^n)$, and the values of the function $\varphi(x)$ and of its partial derivatives at fixed points are linear continuous functionals on $H^s_\varepsilon$.
We assume that $P_1$ does not contain coincident nodes and that no node of $P_2$ carries directional derivatives with linearly dependent directions; a node from $P_1$ may coincide with a node from $P_2$. Under these restrictions the functionals $f_i, f'_i$ are linearly independent.
In accordance with the Riesz representation theorem [1], the linear continuous functionals $f_i, f'_i$ can be represented in the form of inner products of some elements $h_i, h'_i \in H^{s_r}_\varepsilon$ with $\varphi \in H^{s_r}_\varepsilon$:
$$ (f_i, \varphi) = {\langle h_i, \varphi \rangle}_{H^{s_r}_\varepsilon} \,, \quad (f'_i, \varphi) = {\langle h'_i, \varphi \rangle}_{H^{s_r}_\varepsilon} \,, \quad \forall \varphi \in H^{s_r}_\varepsilon \,. \qquad (3) $$
The notation ${\langle \cdot , \cdot \rangle}_{H^{s_r}_\varepsilon}$ in (3) denotes the inner product in the space $H^{s_r}_\varepsilon$ (see the previous post for details). It is easy to see that under the above assumptions the elements $h_i, h'_i$ are linearly independent.
Let $V_{s_r}(\eta, x, \varepsilon)$, $s_r = r + n/2 + 1/2$, $r = 1, 2, \dots$, be the reproducing kernel of the Hilbert space $H^{s_r}_\varepsilon(R^n)$ (see the previous post). The kernel $V_{s_r}(\eta, x, \varepsilon) \in C^r(R^n)$ as a function of $\eta \in R^n$. In accordance with the reproducing property of the kernel, for any $\varphi \in H^{s_r}_\varepsilon(R^n)$ and any $x \in R^n$ the identity
$$ \varphi(x) = {\langle V_{s_r}(\cdot, x, \varepsilon), \varphi \rangle}_{H^{s_r}_\varepsilon} \qquad (4) $$
holds. This means that $V_{s_r}(\cdot, x)$ is the Riesz representer of the evaluation functional $\delta_x$ defined as the value of a function $\varphi \in H^{s_r}_\varepsilon$ at a given point $x \in R^n$. As $V_{s_r}$ is a continuously differentiable radial basis function, we can differentiate the identity (4) and obtain the identity
$$ \frac{\partial \varphi(x)}{\partial x_i} = {\left\langle \frac{\partial V_{s_r}(\cdot, x, \varepsilon)}{\partial x_i}, \varphi \right\rangle}_{H^{s_r}_\varepsilon} \,, \qquad (5) $$
which holds for any $\varphi \in H^{s_r}_\varepsilon$ and $x \in R^n$. It means that the function $\frac{\partial V_{s_r}(\cdot, x)}{\partial x_i}$ represents the point-wise functional defined as the value of $\frac{\partial \varphi(\cdot)}{\partial x_i}$ at the point $x$.
Thereby
$$ h_i(\eta, \varepsilon) = V_{s_r}(\eta, p_i, \varepsilon) \,, \quad p_i \in P_1 \,, \ 1 \le i \le M_1 \,, \qquad (6) $$
$$ h'_i(\eta, \varepsilon) = \frac{\partial V_{s_r}(\eta, p'_i, \varepsilon)}{\partial e_i} \,, \quad p'_i \in P_2 \,, \ 1 \le i \le M_2 \,, \ \eta \in R^n \,, \qquad (7) $$
and the system of constraints (1), (2) can be presented as a system of linear equations in the Hilbert space $H^{s_r}_\varepsilon$:
$$ {\langle h_i, \varphi \rangle}_{H^{s_r}_\varepsilon} = u_i \,, \quad 1 \le i \le M_1 \,, \qquad (8) $$
$$ {\langle h'_i, \varphi \rangle}_{H^{s_r}_\varepsilon} = u'_i \,, \quad 1 \le i \le M_2 \,, \quad h_i, h'_i, \varphi \in H^{s_r}_\varepsilon(R^n) \,. \qquad (9) $$
This system of constraints defines a convex and closed set in the Hilbert space, namely an intersection of a finite number of hyperplanes (this set is not empty because of the linear independence of the elements $h_i, h'_i$).

The problem of reconstructing a function $\varphi$ satisfying the system of constraints (8) and (9) is an underdetermined problem. We reformulate it as the problem of finding a solution of the system (8), (9) that has minimal norm:
$$ \sigma_1 = {\rm arg\,min}\{ \| \varphi \|^2_{H^{s_r}_\varepsilon} : (8), (9), \ \varphi \in H^{s_r}_\varepsilon(R^n) \} \,. \qquad (10) $$
Such a solution exists and is unique [1, 6]. The element $\sigma_1$ is an interpolating normal spline.
In accordance with the generalized Lagrange method ([5], [6]), the solution of the problem (10) can be written as follows:
$$ \sigma_1 = \sum_{j=1}^{M_1} \mu_j h_j + \sum_{j=1}^{M_2} \mu'_j h'_j \,, \qquad (11) $$
where the coefficients $\mu_i, \mu'_i$ are defined by the system
$$ \sum_{j=1}^{M_1} g_{ij} \mu_j + \sum_{j=1}^{M_2} g'_{ij} \mu'_j = u_i \,, \quad 1 \le i \le M_1 \,, \qquad (12) $$
$$ \sum_{j=1}^{M_1} g'_{ji} \mu_j + \sum_{j=1}^{M_2} g''_{ij} \mu'_j = u'_i \,, \quad 1 \le i \le M_2 \,, \qquad (13) $$
where $g_{ij}, g'_{ij}, g''_{ij}$ are the coefficients of the symmetric positive definite Gram matrix of the set of linearly independent elements $h_i, h'_i \in H^{s_r}_\varepsilon(R^n)$:
$$ g_{ij} = {\langle h_i, h_j \rangle}_{H^{s_r}_\varepsilon} = (f_i, h_j) \,, \quad 1 \le i \le M_1 \,, \ 1 \le j \le M_1 \,, \qquad (14) $$
$$ g'_{ij} = {\langle h_i, h'_j \rangle}_{H^{s_r}_\varepsilon} = (f'_j, h_i) \,, \quad 1 \le i \le M_1 \,, \ 1 \le j \le M_2 \,, \qquad (15) $$
$$ g''_{ij} = {\langle h'_i, h'_j \rangle}_{H^{s_r}_\varepsilon} = (f'_j, h'_i) \,, \quad 1 \le i \le M_2 \,, \ 1 \le j \le M_2 \,. \qquad (16) $$
Taking into account (3), (6), (7) we obtain:
$$ g_{ij} = V_{s_r}(p_i, p_j, \varepsilon) \,, \quad p_i, p_j \in P_1 \,, \ 1 \le i \le M_1 \,, \ 1 \le j \le M_1 \,, \qquad (17) $$
$$ g'_{ij} = \frac{\partial V_{s_r}(p_i, p'_j, \varepsilon)}{\partial e_j} = \sum_{k=1}^n \frac{\partial V_{s_r}(p_i, p'_j, \varepsilon)}{\partial x_k} \cos \theta_{jk} \,, \quad p_i \in P_1 \,, \ p'_j \in P_2 \,, \ 1 \le i \le M_1 \,, \ 1 \le j \le M_2 \,, \qquad (18) $$
$$ g''_{ij} = \sum_{l=1}^n \sum_{k=1}^n \frac{\partial^2 V_{s_r}(p'_i, p'_j, \varepsilon)}{\partial \eta_l \, \partial x_k} \cos \theta_{il} \cos \theta_{jk} \,, \quad p'_i, p'_j \in P_2 \,, \ 1 \le i \le M_2 \,, \ 1 \le j \le M_2 \,. \qquad (19) $$
Thus the problem (10) is reduced to solving the system of linear equations (12), (13), whose Gram matrix is defined by (17)-(19), and constructing the normal spline (11).

Assume the function to be approximated $\varphi$ belongs to the Hilbert space $H^{s_1}_\varepsilon(R^n)$, $s_1 = 1 + n/2 + 1/2$ (see the previous post). It is a continuously differentiable function, $\varphi \in C^1(R^n)$. Let's write down the expressions for $h_j, h'_j, g_{ij}, g'_{ij}, g''_{ij}$ in the space $H^{s_1}_\varepsilon(R^n)$:
$$ h_j(\eta, \varepsilon) = \exp(-\varepsilon |\eta - p_j|)(1 + \varepsilon |\eta - p_j|) \,, \quad p_j \in P_1 \,, \ 1 \le j \le M_1 \,, \qquad (20) $$
$$ h'_j(\eta, \varepsilon) = \varepsilon^2 \exp(-\varepsilon |\eta - p'_j|) \sum_{k=1}^n (\eta_k - p'_{jk}) \cos \theta_{jk} \,, \quad p'_j \in P_2 \,, \ 1 \le j \le M_2 \,, \ \eta \in R^n \,, \qquad (21) $$
then
$$ g_{ij} = \exp(-\varepsilon |p_i - p_j|)(1 + \varepsilon |p_i - p_j|) \,, \quad p_i, p_j \in P_1 \,, \ 1 \le i \le M_1 \,, \ 1 \le j \le M_1 \,, \qquad (22) $$
$$ g'_{ij} = \varepsilon^2 \exp(-\varepsilon |p_i - p'_j|) \sum_{k=1}^n (p_{ik} - p'_{jk}) \cos \theta_{jk} \,, \quad p_i \in P_1 \,, \ p'_j \in P_2 \,, \ 1 \le i \le M_1 \,, \ 1 \le j \le M_2 \,, \qquad (23) $$
$$ g''_{ij} = \sum_{l=1}^n \sum_{k=1}^n \frac{\partial^2 V_{s_1}(p'_i, p'_j, \varepsilon)}{\partial \eta_l \, \partial x_k} \cos \theta_{il} \cos \theta_{jk} \,, \quad p'_i, p'_j \in P_2 \,, \ 1 \le i \le M_2 \,, \ 1 \le j \le M_2 \,, \ i \ne j \,, \qquad (24) $$
where
$$ \frac{\partial^2 V_{s_1}(p'_i, p'_j, \varepsilon)}{\partial \eta_k \, \partial x_k} = \varepsilon^2 \exp(-\varepsilon |p'_i - p'_j|) \left( 1 - \frac{\varepsilon \, (p'_{ik} - p'_{jk})^2}{|p'_i - p'_j|} \right) , $$
$$ \frac{\partial^2 V_{s_1}(p'_i, p'_j, \varepsilon)}{\partial \eta_l \, \partial x_k} = -\varepsilon^3 \exp(-\varepsilon |p'_i - p'_j|) \, \frac{(p'_{ik} - p'_{jk})(p'_{il} - p'_{jl})}{|p'_i - p'_j|} \,, \quad l \ne k \,, $$
and
$$ g''_{ii} = \varepsilon^2 \sum_{l=1}^n (\cos \theta_{il})^2 = \varepsilon^2 \,, \quad 1 \le i \le M_2 \,. \qquad (25) $$
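Assuming the kernel formulas above, the whole construction, assembling the Gram matrix of system (12), (13) and evaluating the spline (11), fits in a short NumPy sketch; the nodes, values, and directions below are made-up illustration data:

```python
import numpy as np

# Interpolating normal spline with the H^{s_1} kernel
# V(eta, x) = exp(-eps*r)*(1 + eps*r), r = |eta - x|.
eps = 1.0
P1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # value nodes (made up)
u  = np.array([0.0, 1.0, 2.0])                        # phi(p_i)
P2 = np.array([[1.0, 1.0]])                           # derivative nodes (made up)
E  = np.array([[1.0, 0.0]])                           # unit directions e_i
du = np.array([0.5])                                  # d phi / d e_i at p'_i

def V(eta, x):
    r = np.linalg.norm(eta - x)
    return np.exp(-eps * r) * (1.0 + eps * r)

def dV_dx(eta, x):
    # gradient of V with respect to the second argument: eps^2*exp(-eps*r)*(eta - x)
    r = np.linalg.norm(eta - x)
    return eps**2 * np.exp(-eps * r) * (eta - x)

def d2V(eta, x):
    # mixed second derivatives d^2 V / (d eta_l d x_k) as an n-by-n matrix
    d = eta - x
    r = np.linalg.norm(d)
    if r == 0.0:
        return eps**2 * np.eye(len(d))
    return eps**2 * np.exp(-eps * r) * (np.eye(len(d)) - eps * np.outer(d, d) / r)

M1, M2 = len(P1), len(P2)
G = np.zeros((M1 + M2, M1 + M2))
for i in range(M1):
    for j in range(M1):
        G[i, j] = V(P1[i], P1[j])                      # g_ij
    for j in range(M2):
        G[i, M1 + j] = dV_dx(P1[i], P2[j]) @ E[j]      # g'_ij
for i in range(M2):
    for j in range(M2):
        G[M1 + i, M1 + j] = E[i] @ d2V(P2[i], P2[j]) @ E[j]   # g''_ij
G[M1:, :M1] = G[:M1, M1:].T                            # symmetry

mu = np.linalg.solve(G, np.concatenate([u, du]))       # system (12)-(13)

def spline(x):
    # sigma_1(x) = sum mu_j h_j(x) + sum mu'_j h'_j(x), formula (11)
    s = sum(mu[j] * V(x, P1[j]) for j in range(M1))
    s += sum(mu[M1 + j] * dV_dx(x, P2[j]) @ E[j] for j in range(M2))
    return s

print([spline(p) for p in P1])   # reproduces u at the value nodes
```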

Now consider the case when the function to be approximated $\varphi$ may be treated as an element of the Hilbert space $H^{s_2}_\varepsilon(R^n)$, $s_2 = 2 + n/2 + 1/2$. In this case it is a twice continuously differentiable function, $\varphi \in C^2(R^n)$. Let's write down the expressions for $h_j, h'_j$ in the space $H^{s_2}_\varepsilon(R^n)$:
$$ h_j(\eta, \varepsilon) = \exp(-\varepsilon |\eta - p_j|)(3 + 3\varepsilon |\eta - p_j| + \varepsilon^2 |\eta - p_j|^2) \,, \quad p_j \in P_1 \,, \ 1 \le j \le M_1 \,, \qquad (26) $$
$$ h'_j(\eta, \varepsilon) = \varepsilon^2 \exp(-\varepsilon |\eta - p'_j|)(1 + \varepsilon |\eta - p'_j|) \sum_{k=1}^n (\eta_k - p'_{jk}) \cos \theta_{jk} \,, \quad p'_j \in P_2 \,, \ 1 \le j \le M_2 \,, \ \eta \in R^n \,, \qquad (27) $$
and the corresponding Gram matrix elements $g_{ij}, g'_{ij}, g''_{ij}$ are as follows:
$$ g_{ij} = \exp(-\varepsilon |p_i - p_j|)(3 + 3\varepsilon |p_i - p_j| + \varepsilon^2 |p_i - p_j|^2) \,, \quad p_i, p_j \in P_1 \,, \ 1 \le i \le M_1 \,, \ 1 \le j \le M_1 \,, \qquad (28) $$
$$ g'_{ij} = \varepsilon^2 \exp(-\varepsilon |p_i - p'_j|)(1 + \varepsilon |p_i - p'_j|) \sum_{k=1}^n (p_{ik} - p'_{jk}) \cos \theta_{jk} \,, \quad p_i \in P_1 \,, \ p'_j \in P_2 \,, \ 1 \le i \le M_1 \,, \ 1 \le j \le M_2 \,, \qquad (29) $$
$$ g''_{ij} = \sum_{l=1}^n \sum_{k=1}^n \frac{\partial^2 V_{s_2}(p'_i, p'_j, \varepsilon)}{\partial \eta_l \, \partial x_k} \cos \theta_{il} \cos \theta_{jk} \,, \quad p'_i, p'_j \in P_2 \,, \ 1 \le i \le M_2 \,, \ 1 \le j \le M_2 \,, \ i \ne j \,, \qquad (30) $$
where
$$ \frac{\partial^2 V_{s_2}(p'_i, p'_j, \varepsilon)}{\partial \eta_k \, \partial x_k} = \varepsilon^2 \exp(-\varepsilon |p'_i - p'_j|) \left( 1 + \varepsilon |p'_i - p'_j| - \varepsilon^2 (p'_{ik} - p'_{jk})^2 \right) , $$
$$ \frac{\partial^2 V_{s_2}(p'_i, p'_j, \varepsilon)}{\partial \eta_l \, \partial x_k} = -\varepsilon^4 \exp(-\varepsilon |p'_i - p'_j|)(p'_{ik} - p'_{jk})(p'_{il} - p'_{jl}) \,, \quad l \ne k \,, $$
and
$$ g''_{ii} = \varepsilon^2 \sum_{l=1}^n (\cos \theta_{il})^2 = \varepsilon^2 \,, \quad 1 \le i \le M_2 \,. \qquad (31) $$
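Closed-form second derivatives like the ones above are easy to get wrong, so a finite-difference check is worthwhile. A NumPy sketch comparing the formulas for $V_{s_2}$ against a cross difference (the evaluation points are arbitrary test data):

```python
import numpy as np

# Finite-difference check of the mixed derivatives of
# V(eta, x) = exp(-eps*r)*(3 + 3*eps*r + eps^2*r^2), r = |eta - x|.
eps = 0.7
eta = np.array([0.3, -0.2, 0.5])
x   = np.array([-0.1, 0.4, 0.2])

def V(eta, x):
    r = np.linalg.norm(eta - x)
    return np.exp(-eps * r) * (3.0 + 3.0 * eps * r + (eps * r)**2)

def d2V_closed(eta, x):
    # matrix of d^2 V / (d eta_l d x_k) from the closed-form expressions
    d = eta - x
    r = np.linalg.norm(d)
    return eps**2 * np.exp(-eps * r) * ((1.0 + eps * r) * np.eye(len(d))
                                        - eps**2 * np.outer(d, d))

h = 1e-5
num = np.zeros((3, 3))
for l in range(3):
    for k in range(3):
        el, ek = np.eye(3)[l], np.eye(3)[k]
        num[l, k] = (V(eta + h*el, x + h*ek) - V(eta + h*el, x - h*ek)
                     - V(eta - h*el, x + h*ek) + V(eta - h*el, x - h*ek)) / (4*h*h)

print(np.max(np.abs(num - d2V_closed(eta, x))))   # small (finite-difference error)
```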
If the original task does not contain constraints involving the derivatives of the function $\varphi$ ($P_2$ is an empty set, so there are no constraints (2) in (1)-(2)), we may construct an interpolating normal spline assuming $\varphi$ is an element of the Hilbert space $H^{s_0}_\varepsilon(R^n)$, $s_0 = n/2 + 1/2$. In this case it is a continuous function, $\varphi \in C(R^n)$, and the expressions for $h_i, g_{ij}$ are defined by:
$$ h_i(\eta, \varepsilon) = \exp(-\varepsilon |\eta - p_i|) \,, \quad \eta \in R^n \,, \ p_i \in P_1 \,, \ 1 \le i \le M_1 \,, \qquad (32) $$
$$ g_{ij} = \exp(-\varepsilon |p_j - p_i|) \,, \quad p_i, p_j \in P_1 \,, \ 1 \le i \le M_1 \,, \ 1 \le j \le M_1 \,. \qquad (33) $$
When the value of the parameter $\varepsilon$ is small, this spline is similar to a multivariate generalization of the one-dimensional linear spline.
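A one-dimensional NumPy illustration of this case, with made-up nodes: for a small $\varepsilon$ the interpolant at a midpoint between two nodes comes out close to the piecewise-linear value.

```python
import numpy as np

# Interpolation with the H^{s_0} kernel exp(-eps*|x - p_i|), no derivative data.
eps = 0.1    # small eps: the spline behaves like a piecewise-linear interpolant
P = np.array([0.0, 1.0, 2.5, 4.0])   # made-up nodes
u = np.array([1.0, 3.0, 2.0, 2.5])   # made-up values

G = np.exp(-eps * np.abs(P[:, None] - P[None, :]))   # Gram matrix g_ij
mu = np.linalg.solve(G, u)

def spline(x):
    return np.sum(mu * np.exp(-eps * np.abs(x - P)))

# midpoint between the first two nodes: close to the linear-interpolation value 2.0
print(spline(0.5))
```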

We now consider the choice of values for the parameters $s$ and $\varepsilon$. The parameter $s$ defines the smoothness of the spline: if $s > n/2$ then the spline belongs to $C^{[s - n/2]}$, where $[\cdot]$ denotes the integer part of a number. Moreover, discontinuities of derivatives of greater order are observed only at the nodes $p_i, p'_i$; at all other points of $R^n$ the spline is infinitely differentiable. With increasing $s$ the condition number of the Gram matrix increases, and the computational properties of the problem of approximating the function $\varphi$ worsen. Thereby it makes sense to choose a reasonably small value for the parameter $s$.
A normal spline is similar to Duchon's $D^m$-spline [3] when the value of the parameter $\varepsilon$ is small. The approximating properties of the spline improve with smaller values of $\varepsilon$; however, with a decreasing value of $\varepsilon$ the condition number of the Gram matrix increases. Therefore, when choosing the value of the parameter $\varepsilon$, a compromise is needed. In practice, it is necessary to choose a value of $\varepsilon$ such that the condition number of the Gram matrix is small enough. Numerical procedures for estimating the condition number of a matrix are well known (e.g. [4]).
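This trade-off is easy to observe numerically. A small experiment with random nodes and the $H^{s_1}_\varepsilon$ kernel from above, showing how the condition number of the Gram matrix grows as $\varepsilon$ decreases:

```python
import numpy as np

# Condition number of the Gram matrix for V(eta, x) = exp(-eps*r)*(1 + eps*r)
# at random nodes in the unit square (made-up data).
rng = np.random.default_rng(0)
P = rng.random((20, 2))
r = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)   # pairwise distances

for eps in [10.0, 1.0, 0.1, 0.01]:
    G = np.exp(-eps * r) * (1.0 + eps * r)
    print(eps, np.linalg.cond(G))   # condition number grows as eps shrinks
```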
As well, as a rule, it is useful to preprocess the source data of the problem by scaling the domain $\Omega \subset R^n$, in which the interpolation nodes $p_i, p'_i$ are located, into a unit hypercube.

Often the accurate values of $u_i, u'_i$ in (1), (2) are not known. Assuming that uniform error bounds $\delta_i, \delta'_i$ are given, we can formulate the problem of approximating the function $\varphi$ by constructing a uniform smoothing normal spline $\sigma_2$:
$$ \sigma_2 = {\rm arg\,min}\{ \| \varphi \|^2_{H^{s_r}_\varepsilon} : (35), (36), \ \varphi \in H^{s_r}_\varepsilon(R^n) \} \,, \qquad (34) $$
$$ | \varphi(p_i) - u_i | \le \delta_i \,, \quad p_i \in P_1 \,, \ 1 \le i \le M_1 \,, \qquad (35) $$
$$ \left| \frac{\partial \varphi}{\partial e_i}(p'_i) - u'_i \right| \le \delta'_i \,, \quad p'_i \in P_2 \,, \ 1 \le i \le M_2 \,. \qquad (36) $$
In the case when a mean squared error $\delta^2$ is known, an approximating mse-smoothing normal spline $\sigma_3$ can be constructed as a solution of the following variational problem:
$$ \sigma_3 = {\rm arg\,min}\{ \| \varphi \|^2_{H^{s_r}_\varepsilon} : (38), \ \varphi \in H^{s_r}_\varepsilon(R^n) \} \,, \qquad (37) $$
$$ \sum_{p_i \in P_1} (\varphi(p_i) - u_i)^2 + \sum_{p'_i \in P_2} \left( \frac{\partial \varphi}{\partial e_i}(p'_i) - u'_i \right)^2 \le \delta^2 \,. \qquad (38) $$
Constraints (35)-(36) and (38) define convex closed sets in the Hilbert space $H^{s_r}_\varepsilon(R^n)$, $s_r = r + n/2 + 1/2$, $r = 1, 2, \dots$. Under the assumptions made earlier these sets are not empty, thus problems (34) and (37) are problems of finding a projection of the zero element of the Hilbert space onto a non-empty closed convex set. A solution of such a problem exists and is unique.
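As a rough sketch only (the exact algorithms are the subject of the next posts): for problem (37)-(38) the solution keeps the representer form (11), and introducing a Lagrange multiplier $\lambda \ge 0$ reduces the problem to the regularized system $(G + \lambda I)\mu = u$, with $\lambda$ chosen so that the residual meets the bound. This reduction is a standard reproducing-kernel identity, stated here as an assumption, not as the method of the later posts. A NumPy sketch for value-only data ($P_2$ empty, made-up numbers):

```python
import numpy as np

# mse-smoothing sketch: coefficients solve (G + lam*I) mu = u, with lam >= 0
# picked by bisection so the data residual reaches the bound delta.
eps = 1.0
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # made-up nodes
u = np.array([0.0, 1.0, 2.0, 0.5])                               # made-up values
delta = 0.3

r = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
G = np.exp(-eps * r) * (1.0 + eps * r)          # H^{s_1} kernel Gram matrix

def residual(lam):
    mu = np.linalg.solve(G + lam * np.eye(len(u)), u)
    return np.linalg.norm(G @ mu - u)

# residual(0) = 0 (interpolation) and residual grows with lam, so bisect on lam.
lo, hi = 0.0, 1e6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if residual(mid) < delta:
        lo = mid
    else:
        hi = mid
print(residual(lo))   # close to (and not exceeding) delta
```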

Algorithms for solving problems (34) and (37) will be described in the next posts.

Earlier, the problem of constructing multivariate normal splines was treated in [2], [7], [8] and presented at [9] and [10].

References

[1] A. Balakrishnan, Applied Functional Analysis, Springer-Verlag, New York, 1976.
[2] V. Gorbunov, I. Kohanovsky, K. Makedonsky, Normal splines in reconstruction of multi-dimensional dependencies. Papers of WSEAS International Conference on Applied Mathematics, Numerical Analysis Symposium, Corfu, 2004. (http://www.wseas.us/e-library/conferences/corfu2004/papers/488-312.pdf)
[3] J. Duchon, Splines minimizing rotation-invariant semi-norms in Sobolev spaces, Lecture Notes in Mathematics, Vol. 571, Springer, Berlin, 1977.
[4] W. Hager, Condition Estimates, SIAM Journal on Scientific and Statistical Computing, Vol. 5, No. 2, 1984.
[5] A. Ioffe, V. Tikhomirov, Theory of Extremal Problems, North-Holland, Amsterdam, 1979.
[6] P.-J. Laurent, Approximation et optimisation, Paris, 1972.
[7] I. Kohanovsky, Normal Splines in Computing Tomography, Avtometriya, 1995, N 2. (https://www.iae.nsk.su/images/stories/5_Autometria/5_Archives/1995/2/84-89.pdf)
[8] I. Kohanovsky, Data approximation using multidimensional normal splines. Unpublished manuscript, 1996.
[9] I. Kohanovsky, Multidimensional Normal Splines and Problem of Physical Field Approximation, International Conference on Fourier Analysis and its Applications, Kuwait, 1998.
[10] I. Kohanovsky, Normal splines in fractional order Sobolev spaces and some of its applications, The Third Siberian Congress on Applied and Industrial Mathematics (INPRIM-98), Novosibirsk, 1998.
