Monday, May 25, 2015

The Riesz representation of functionals and a reproducing kernel Hilbert space

$\setCounter{0}$
The basic idea of the normal spline method was described in the previous post. Now we discuss a way of constructing the Riesz representers of the continuous linear functionals $f_i$ assuming the Hilbert space $H$ is a reproducing kernel Hilbert space. Let's recall the Riesz representation theorem and the reproducing kernel Hilbert space definition.

Riesz representation theorem ([1]): If $f$ is a continuous linear functional on a Hilbert space $H$, then there exists a unique $h \in H$ such that for every $\varphi \in H$ we have
\begin{eqnarray*}
     (f, \varphi) = {\langle \varphi , h \rangle}_H
\end{eqnarray*}
Reproducing Kernel Hilbert space ([2]): A Hilbert space $H(R^n)$ is called a reproducing kernel Hilbert space (RKHS) if there is a reproducing kernel function $K(\eta, x)$ of $\eta$ and $x$ in $R^n$ such that:
1) For any $x \in R^n$ the function $K(\eta, x)$ belongs to $H(R^n)$ as a function of $\eta$.
2) The reproducing property. For any $x \in R^n$ and any $\varphi \in H(R^n)$, the following equality is valid:
$\qquad \qquad  \qquad  \varphi (x) = {\langle \varphi(\eta) , K(\eta , x) \rangle}_H$

The inner product here is taken with respect to functions of $\eta$. It is known that if a reproducing kernel exists, it is unique, and it is symmetric with respect to $\eta$ and $x$ ([2]): $K(\eta , x) = K(x, \eta)$.
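As a simple illustration of the reproducing property, consider the classical one-dimensional example: the Sobolev space $H^1(R)$ with inner product $\langle f, g \rangle = \int_R (f g + f' g')\,dt$ is an RKHS whose kernel is $K(\eta, x) = \frac{1}{2} e^{-|\eta - x|}$, the Green's function of $1 - d^2/dt^2$. The short Python sketch below checks the property numerically; the test function and the evaluation point are arbitrary choices made only for this illustration.

```python
import numpy as np
from scipy.integrate import quad

# Reproducing kernel of H^1(R) with inner product <f,g> = int (f g + f' g') dt:
# K(eta, x) = 0.5 * exp(-|eta - x|)  (Green's function of 1 - d^2/dt^2).
K  = lambda eta, x: 0.5 * np.exp(-abs(eta - x))
dK = lambda eta, x: -0.5 * np.sign(eta - x) * np.exp(-abs(eta - x))  # d/d(eta)

phi  = lambda t: np.exp(-t ** 2)             # arbitrary test function in H^1(R)
dphi = lambda t: -2.0 * t * np.exp(-t ** 2)

x = 0.3                                      # arbitrary evaluation point
integrand = lambda t: phi(t) * K(t, x) + dphi(t) * dK(t, x)
left,  _ = quad(integrand, -np.inf, x)       # split at x, where K is not smooth
right, _ = quad(integrand, x, np.inf)

print(left + right)   # <phi, K(., x)>_H
print(phi(x))         # phi(x); the two numbers coincide (reproducing property)
```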

Let $K(\eta, x)$ be the reproducing kernel of the Hilbert space $H(R^n)$; then we can find the Riesz representers (images) $h_i$ of the functionals $f_i$. Namely:
\begin{eqnarray}
      h_i (x) = (f_i, K (\cdot, x)) \, .
\end{eqnarray}
Indeed, by the reproducing property:
\begin{eqnarray*}
      h_i (x) = {\langle h_i , K(\cdot , x) \rangle}_H
\end{eqnarray*}
but since $h_i$ is the Riesz representer of $f_i$ (see (5) in the previous post):
\begin{eqnarray*}
      {\langle h_i , K(\cdot , x) \rangle}_H = (f_i , K(\cdot , x) ) \,
\end{eqnarray*}
(here $K(\eta, x) \in H$ as a function of $\eta$). This proves formula (1). In turn, the Gram matrix entries needed for the normal spline construction are obtained by applying the functionals to the kernel twice:
\begin{eqnarray}
  g_{ij} = {\langle h_i , h_j \rangle}_H = (f_i , h_j) =  \bigl(f_i , ( f_j , K ) \bigr) \ .
\end{eqnarray}
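To show how formulas (1) and (2) are applied, the sketch below computes the representers and the Gram matrix for two simple functionals: a point evaluation and an average over a finite set of points. A Gaussian kernel is used here purely as a placeholder (the kernel actually used in this blog, that of the Bessel potential space, is the subject of the next post), and the points and the kernel parameter are arbitrary.

```python
import numpy as np

def k(eta, x, eps=5.0):
    # placeholder Gaussian kernel, standing in for an actual reproducing kernel
    return np.exp(-eps * np.sum((np.asarray(eta) - np.asarray(x)) ** 2))

# two sample functionals: point evaluation f1(phi) = phi(p)
# and averaging over a finite point set f2(phi) = mean of phi(q_k)
p = np.array([0.3, 0.7])
q = np.array([[0.1, 0.2], [0.5, 0.5], [0.9, 0.4]])

# formula (1): h_i(x) = (f_i, K(., x)) -- apply f_i to the kernel in its first argument
h1 = lambda x: k(p, x)
h2 = lambda x: np.mean([k(qk, x) for qk in q])

# formula (2): g_ij = (f_i, (f_j, K)) -- apply f_j in one argument, then f_i in the other
g11 = k(p, p)
g12 = np.mean([k(p, qk) for qk in q])
g22 = np.mean([np.mean([k(ql, qk) for qk in q]) for ql in q])
G = np.array([[g11, g12], [g12, g22]])

print(h1(np.array([0.5, 0.5])), h2(np.array([0.5, 0.5])))
print(G)
```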
In the next post a reproducing kernel of the Bessel Potential space will be presented.

References

[1] A. Balakrishnan. Applied Functional Analysis. // New York: Springer-Verlag, 1976.
[2] N. Aronszajn, Theory of reproducing kernels, Transactions of the AMS. – 1950 – Vol. 68.

Saturday, January 3, 2015

A normal spline method for multidimensional dependency reconstruction

$\setCounter{0}$

The problem of reconstructing a multivariate dependency from an incomplete data set arises in many areas of research. Often only a finite set of experimental measurements $\{ \tilde u_i\}_{i=1}^m$ is available, and we may treat the unknown function $\varphi (x)$ as an element of a Hilbert space $H(R^n)$. Consider the following data model
\begin{eqnarray}
&&   (f_i ,\varphi) =  u_i  \, , \qquad   \tilde u_i = u_i + \epsilon_i \, , \qquad        1 \le i \le m,
\end{eqnarray}
where $\{f_i\}$ are continuous linear functionals, $\{u_i\}$ are the "exact" measurement values, and $\{\epsilon_i\}$ are random errors uniformly distributed in $\left[ -\delta , \delta \right]$. Hence we have a system of constraints
\begin{eqnarray}
&& \left| (f_i ,\varphi) - \tilde u_i \right| \le \delta \, , \qquad 1 \le i \le m \, .
\label{eq:sys1}
\end{eqnarray}
The problem of reconstructing the function $\varphi$ is under-determined. We will approximate $\varphi$ by an element of minimal norm from the set of solutions of system (2). Let's introduce a penalty functional $J$
\begin{eqnarray}
(J, \varphi) = { \| \varphi\| }_H ^2 \ ,
\end{eqnarray}
and find a function $\varphi$ in the Hilbert space $H$ that minimizes $J$ subject to (2) (here $\| \cdot \|_H$ denotes the norm in $H$). The solution of this problem is a generalized spline in the sense of Atteia-Laurent [1]. We call it a normal spline, following the work of V. Gorbunov [2], in which the normal spline-collocation method for linear ordinary differential and integral equations was developed.

The normal spline method consists in minimizing a Hilbert space norm over the set of solutions of a collocation system. In contrast to classical collocation methods, the basis system here is not given a priori; instead, it is constructed in accordance with the chosen Hilbert space norm. The basis functions are the canonical images (Riesz representers) of the continuous linear functionals, i.e. the elements that represent these functionals as inner products in the Hilbert space. Such a representation can be found explicitly if the Hilbert space $H$ is a reproducing kernel Hilbert space and the corresponding reproducing kernel is known. For the one-dimensional problems treated in [2] it was sufficient to take $H$ to be a classical Sobolev space of integer order in order to construct a useful reproducing kernel. The multivariate generalization of the normal spline method described in this blog was developed in [6] – [10] using the Bessel potential spaces [3].

Essentially the normal spline method is based on classical results of functional analysis: the Sobolev embedding theorem [4] and the F. Riesz representation theorem [5].

Here and in what follows we assume that the functionals $\{f_i\}$ are linearly independent; therefore the set of solutions of system (2) is non-empty, convex and closed. It is known that every non-empty closed convex set in a Hilbert space contains a unique element of minimal norm [1, 5]; here this element is the uniform smoothing normal spline:
\begin{eqnarray}
      \sigma = {\mathop{\rm arg\,min}\nolimits} \lbrace {\| \varphi \| }^2_H : (2) \rbrace \ .
\end{eqnarray}
In general a normal spline can be treated as the projection of an element of a Hilbert space onto a closed convex set in that space (in problem (4), the projection of the zero element onto the solution set of (2)). Thus problem (4) always has a unique solution.
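A finite-dimensional illustration of this projection property: in $R^2$, the solution set of a single constraint $|\langle a, x \rangle - u| \le \delta$ is a closed convex slab, and its element of minimal norm is the projection of the origin onto the slab, available in closed form. The numbers in the Python sketch below are arbitrary and serve only as an example.

```python
import numpy as np

# The solution set of |<a, x> - u| <= delta in R^2 is a closed convex slab.
# Its minimal-norm element is the projection of the origin onto the slab.
a = np.array([3.0, 4.0])   # arbitrary functional
u, delta = 10.0, 1.0       # arbitrary measurement and tolerance

def minimal_norm_element(a, u, delta):
    if abs(u) <= delta:                          # the origin already satisfies the constraint
        return np.zeros_like(a)
    bound = u - delta if u > 0 else u + delta    # nearest face of the slab
    return bound * a / (a @ a)                   # projection of 0 onto <a, x> = bound

x_star = minimal_norm_element(a, u, delta)
print(x_star, a @ x_star)   # x_star = [1.08, 1.44], <a, x_star> = 9 = u - delta
```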

In accordance with the Riesz representation theorem [5], every continuous linear functional $f_i$ on a Hilbert space $H$ can be represented as the inner product with some element $h_i \in H$:
\begin{eqnarray}
      (f_i ,\varphi) = {\langle h_i , \varphi \rangle}_H \, , \qquad  \forall \varphi \in H \  ,
\end{eqnarray}
where ${\langle \cdot , \cdot \rangle}_H$ is the inner product in $H$. Then (2) can be written in the form
\begin{eqnarray}
    | {\langle h_i , \varphi \rangle}_H - \tilde u_i | \le \delta \, ,     \qquad 1 \le i \le m \  .
\end{eqnarray}
and problem (4) is reduced to:
\begin{eqnarray}
      \sigma = {\mathop{\rm arg\,min}\nolimits} \lbrace {\langle \varphi , \varphi \rangle}_H : (6) \rbrace \ .
\end{eqnarray}
In accordance with the extremum conditions for problem (7), its solution can be represented in the form [1]:
\begin{eqnarray}\label{1_spl}
    \sigma =  \sum _{j=1} ^m (\mu _j - \mu _{j+m}) h_j \ , \quad \text{where} \quad \mu _j  \le 0 , \quad \mu _{j+m} \le 0 , \quad 1\le j \le m .
\end{eqnarray}
This allows us to reduce the initial problem (7) of constructing a uniform smoothing normal spline to a finite-dimensional quadratic programming problem:
\begin{eqnarray}
   &&   \sigma = {\mathop{\rm arg\,min}} \Big\lbrace {\sum _{i=1} ^m \sum _{j=1} ^m   (\mu _i - \mu _{i+m}) (\mu _j - \mu _{j+m}) g_{ij}} : (10), (11) \Big\rbrace \ ,
\\  &&  \Big| \sum _{j=1} ^m g_{ij} (\mu _j - \mu _{j+m}) - \tilde u_i \Big| \, \le \, \delta \, ,    \qquad 1 \le i \le m \  ,
\\  && \mu _j  \le 0 , \quad \mu _{j+m} \le 0 , \quad 1\le j \le m .
\end{eqnarray}
Here $g_{ij}={\langle h_i , h_j \rangle}_H$ are the entries of the symmetric Gram matrix of the elements $\{h_i\}$, $h_i \in H$. Notice that the elements $\{h_i\}$ (the images of the functionals $\{f_i\}$) are linearly independent; therefore the Gram matrix is positive definite.
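For illustration, the sketch below sets up problem (9)-(11) in the simplest case of point-evaluation functionals $f_i(\varphi) = \varphi(x_i)$, written in the coefficients $c_j = \mu_j - \mu_{j+m}$, so that $\sigma = \sum_j c_j h_j$ and the constraints read $|\sum_j g_{ij} c_j - \tilde u_i| \le \delta$. A Gaussian kernel and a general-purpose solver (SciPy's SLSQP) are used as stand-ins for the Bessel potential kernel and the specialized algorithm discussed in the following posts; the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(x, y, eps=5.0):
    # placeholder Gaussian kernel standing in for an actual reproducing kernel
    return np.exp(-eps * np.sum((x - y) ** 2))

# synthetic noisy point evaluations of an unknown function on random nodes in R^2
rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 1.0, size=(10, 2))
u_exact = np.sin(2 * np.pi * nodes[:, 0]) * nodes[:, 1]
delta = 0.05
u_tilde = u_exact + rng.uniform(-delta, delta, size=u_exact.size)

m = nodes.shape[0]
# Gram matrix g_ij = <h_i, h_j>; for point evaluations h_i = K(x_i, .)
G = np.array([[gaussian_kernel(nodes[i], nodes[j]) for j in range(m)]
              for i in range(m)])

# minimize c^T G c  subject to  |G c - u_tilde| <= delta componentwise
objective = lambda c: c @ G @ c
grad = lambda c: 2.0 * G @ c
constraints = [
    {"type": "ineq", "fun": lambda c: delta - (G @ c - u_tilde)},  # upper bounds
    {"type": "ineq", "fun": lambda c: delta + (G @ c - u_tilde)},  # lower bounds
]
res = minimize(objective, x0=np.zeros(m), jac=grad,
               constraints=constraints, method="SLSQP")
c = res.x

def sigma(x):
    # evaluate the smoothing normal spline sigma(x) = sum_j c_j K(x_j, x)
    return sum(c[j] * gaussian_kernel(nodes[j], x) for j in range(m))

print("max |G c - u| - delta:", np.max(np.abs(G @ c - u_tilde)) - delta)
print("sigma at (0.5, 0.5):", sigma(np.array([0.5, 0.5])))
```

The same structure carries over to other continuous linear functionals: only the way the Gram matrix and the right-hand side are assembled changes.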

It was shown in [2] that it is not necessary to formulate problem (9) in its explicit form. A special version of an algorithm for solving this simple quadratic programming problem in a Hilbert space setting will be described in the next post.

Results related to the multidimensional normal spline method and its applications were published in [6, 10] and presented at the conferences [8, 9, 11, 12].

References

[1] P.-J. Laurent. Approximation et optimisation // Paris, 1972.
[2] Gorbunov V.K. A Method of  Normal Spline Collocation // Computational Mathematics and Mathematical Physics. 1989. V.29. N2. PP.212-224.
[3] Aronszajn N., Smith K.T. Theory of Bessel potentials I. // Ann. Inst. Fourier, 11 (1961), 385-475.
[4] S. L. Sobolev. Some applications of functional analysis in mathematical physics [3rd ed]. // AMS, 2008.
[5] A. Balakrishnan. Applied Functional Analysis. // New York: Springer-Verlag, 1976.
[6] I. Kohanovsky. Normal Splines in Computing Tomography. Avtometriya. – 1995 – N 2. (https://www.iae.nsk.su/images/stories/5_Autometria/5_Archives/1995/2/84-89.pdf)
[7] I. Kohanovsky. Data approximation using multidimensional normal splines. Unpublished manuscript. 1996.
[8] I. Kohanovsky, Multidimensional Normal Splines and Problem of Physical Field Approximation, International Conference on Fourier Analysis and its Applications, Kuwait, 1998.
[9] I. Kohanovsky, Normal splines in fractional order Sobolev spaces and some of its applications, The Third Siberian Congress on Applied and Industrial Mathematics (INPRIM-98), Novosibirsk, 1998.
[10] V. Gorbunov, I. Kohanovsky, K. Makedonsky. Normal splines in reconstruction of multi-dimensional dependencies. Papers of WSEAS International Conference on Applied Mathematics, Numerical Analysis Symposium, Corfu, 2004. (http://www.wseas.us/e-library/conferences/corfu2004/papers/488-312.pdf)
[11] I. Kohanovsky, Inequality-Constrained Multivariate Normal Splines with Some Applications in Finance. 27th GAMM-Seminar Leipzig on Approximation of Multiparametric functions, Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany, 2011.
[12] V. Gorbunov, I. Kohanovsky, Heterogeneous Parallel Method for the Construction of Multi-dimensional Smoothing Splines. ESCO 2014 4th European Seminar on Computing. University of West Bohemia, Plzen, Czech Republic, 2014.