### Advanced Analysis, Notes 17: Hilbert function spaces (Pick’s interpolation theorem)

In this final lecture we will give a proof of Pick’s interpolation theorem that is based on operator theory.

Theorem 1 (Pick’s interpolation theorem): Let $z_1, \ldots, z_n \in D$, and $w_1, \ldots, w_n \in \mathbb{C}$ be given. There exists a function $f \in H^\infty(D)$ satisfying $\|f\|_\infty \leq 1$ and

$f(z_i) = w_i , \quad i=1, \ldots, n,$

if and only if the following matrix inequality holds:

$\big(\frac{1-w_i \overline{w_j}}{1 - z_i \overline{z_j}} \big)_{i,j=1}^n \geq 0 .$

Note that the matrix element $\frac{1-w_i\overline{w_j}}{1-z_i\overline{z_j}}$ appearing in the theorem is equal to $(1-w_i \overline{w_j})k(z_i,z_j)$, where $k(z,w) = \frac{1}{1-z \overline{w}}$ is the reproducing kernel for the Hardy space $H^2$ (this kernel is called the Szegő kernel). Given $z_1, \ldots, z_n, w_1, \ldots, w_n$, the matrix

$\big((1-w_i \overline{w_j})k(z_i,z_j)\big)_{i,j=1}^n$

is called the Pick matrix, and it plays a central role in interpolation problems on various spaces.
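Before proceeding, here is a quick numerical sanity check of the Pick condition (a NumPy sketch; the sample points and target values are our own choices, made for illustration):

```python
import numpy as np

z = np.array([0.0, 0.3, 0.5 + 0.2j])          # nodes in the open unit disc
w = z**2                                      # values of f(z) = z^2, which maps D into D
# Pick matrix P_ij = (1 - w_i conj(w_j)) / (1 - z_i conj(z_j)); it is Hermitian.
P = (1 - np.outer(w, np.conj(w))) / (1 - np.outer(z, np.conj(z)))
print(np.linalg.eigvalsh(P).min() >= -1e-12)  # True: the Pick matrix is PSD

# Targets violating the Schwarz lemma (|f(0.3)| = 0.9 > 0.3) must fail the test:
w_bad = np.array([0.0, 0.9, 0.9])
P_bad = (1 - np.outer(w_bad, np.conj(w_bad))) / (1 - np.outer(z, np.conj(z)))
print(np.linalg.eigvalsh(P_bad).min() < 0)    # True: a negative eigenvalue appears
```

The second example shows the theorem "detecting" the Schwarz lemma: $f(0)=0$ and $|f(0.3)|=0.9$ is impossible for $f$ in the unit ball, and indeed the corresponding Pick matrix has a negative eigenvalue.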

I learned this material from Agler and McCarthy’s monograph [AM], so the following is my adaptation of that source.

(There is also a very interesting article by John McCarthy on Pick’s theorem.)

#### 1. A necessary condition for interpolation

Lemma 1: Let $T \in B(H)$. Then $\|T\|\leq 1$ if and only if $I - T^*T \geq 0$.

Proof: Let $h \in H$. Then

$\|Th\|^2 \leq \|h\|^2 \Leftrightarrow \langle (I - T^*T)h,h \rangle = \langle h,h \rangle - \langle Th, Th \rangle \geq 0 ,$

and $\|T\| \leq 1$ precisely when the left-hand inequality holds for all $h \in H$.
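Lemma 1 is easy to confirm numerically. In the sketch below (NumPy; the helper name `gram_psd` is ours), the norm test and the Gram test agree for a contraction and fail together for a non-contraction:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
T_good = A / np.linalg.norm(A, 2)     # spectral norm exactly 1: a contraction
T_bad = 2 * T_good                    # spectral norm 2: not a contraction

def gram_psd(T):
    # Tests I - T*T >= 0 via the smallest eigenvalue of the Hermitian matrix.
    return np.linalg.eigvalsh(np.eye(len(T)) - T.T @ T).min() >= -1e-12

print(gram_psd(T_good), gram_psd(T_bad))   # True False
```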

Proposition 2: Let $H$ be an RKHS on a set $X$ with kernel $k$. A function $f : X \to \mathbb{C}$ is a multiplier of norm at most $1$ if and only if for every $n \in \mathbb{N}$ and every $n$ points $x_1, \ldots, x_n \in X$ the associated Pick matrix is positive semi-definite, meaning:

$\big((1-f(x_i)\overline{f(x_j)}) k(x_i, x_j) \big) \geq 0.$

Proof: Let us define the operator $T: span\{k_x : x \in X\} \rightarrow H$ by extending linearly the rule

$T k_x = \overline{f(x)} k_x .$

Suppose first that $f$ is a multiplier. Then by Proposition 3 in the previous post $T$ extends to $H$ and is equal to $T = M_f^*$. Let $x_1, \ldots, x_n \in X$ and $c_1, \ldots, c_n \in \mathbb{C}$. Then

$\langle (I-T^*T) \sum_j c_j k_{x_j}, \sum_i c_i k_{x_i} \rangle = \sum_{i,j} c_j\overline{c_i} (1-f(x_i)\overline{f(x_j)}) \langle k_{x_j}, k_{x_i} \rangle = \sum_{i,j} c_j \overline{c_i}(1- f(x_i)\overline{f(x_j)}) k(x_i,x_j).$

Since the span of the $k_x$’s is dense in $H$, the lemma implies that $\|M_f\| = \|M_f^*\|\leq 1$ if and only if every Pick matrix of the above form is positive semi-definite. The other direction is similar.

Exercise A: Complete the above proof.

Corollary 3:  Let $z_1, \ldots, z_n \in D$, and $w_1, \ldots, w_n \in \mathbb{C}$ be given. A necessary condition for there to exist a function $f \in H^\infty(D)$ satisfying $\|f\|_\infty \leq 1$ and

$f(z_i) = w_i , \quad i=1, \ldots, n,$

is that the following matrix inequality holds:

$\big(\frac{1-w_i \overline{w_j}}{1 - z_i \overline{z_j}} \big)_{i,j=1}^n \geq 0 .$

Proof: This follows from Proposition 2 and from $Mult(H^2(D)) = H^\infty(D)$.

Exercise B: Find the correct version of the above three results when the condition of having norm less than or equal to $1$ is replaced by the condition of being bounded.

Remark: Note that it is important to put the bar on the right element. If $f$ is a norm one $H^\infty$ function then there must be $z_1, \ldots, z_n$ and $w_i = f(z_i)$ such that the matrix

$\big(\frac{1-w_j \overline{w_i}}{1 - z_i \overline{z_j}} \big)_{i,j=1}^n$

is not positive semi-definite (why?).

Proposition 2 gives one way of proving Pick’s theorem: one shows that values of $f$ can be chosen at all points of the disc so that the Pick matrix (on $D \times D$) is positive semi-definite. This approach is perhaps the purest RKHS approach. We will use a different approach, which will actually give a formula for a solution. Both approaches are treated in [AM].

#### 2. The realization formula

A key to the approach we will see for the interpolation problem is the following characterization of functions in $(H^\infty)_1$ — the closed unit ball of $H^\infty(D)$.

Theorem 4: Let $f : D \rightarrow \mathbb{C}$. Then $f \in (H^\infty)_1$ if and only if there is a Hilbert space $K$ and an isometry $V: \mathbb{C} \oplus K \rightarrow \mathbb{C} \oplus K$ with block structure

$V =\left(\begin{smallmatrix} a&X \\ Y&Z \end{smallmatrix}\right),$

such that

(*) $f(w) = a + wX(I-wZ)^{-1}Y$

for all $w \in D$.

Proof: For sufficiency, suppose that $V$ is an isometry as in the theorem. Then $Z$ is also a contraction, thus $I-wZ$ is invertible for all $w \in D$ and $(I-wZ)^{-1}$ is an analytic operator-valued function. Therefore $f$ defined by (*) is analytic in $D$. To see that $f \in (H^\infty)_1$ we make a long computation:

$1 - \overline{f(w)} f(w) = 1 - (a + wX(I-wZ)^{-1} Y)^* (a+wX(I-wZ)^{-1} Y) = \ldots$

$\ldots = (1-|w|^2)\, Y^*\big((I-wZ)^{-1}\big)^*(I-wZ)^{-1}Y \geq 0 ,$

where to get from the first line to the second one we used the fact that $V$ is an isometry:

$V^*V = \left(\begin{smallmatrix} 1&0 \\ 0& I \end{smallmatrix}\right)$

which gives $\overline{a} a + Y^* Y = 1$, $\overline{a} X + Y^* Z = 0$ and $X^* X + Z^* Z = I$.

We leave the converse as an exercise for two reasons. First, we only need the sufficiency part of the above theorem for our proof of Pick’s theorem. Second, our proof of Pick’s theorem contains the same idea that is used to prove the necessity part.

Exercise C: Complete the proof of the above theorem (suggestion: first give it a try on your own, and if you get stuck you can look at the proof of Pick’s theorem below).
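The sufficiency direction can also be observed numerically: take any unitary $V$ on $\mathbb{C} \oplus \mathbb{C}^k$ (a unitary is in particular an isometry), read off the blocks, and evaluate (*). In the NumPy sketch below, the dimension $k$ and the random seed are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4                                # dimension of the auxiliary space K
# A random unitary on C^(1+k) via QR of a random complex matrix.
Q, _ = np.linalg.qr(rng.standard_normal((1 + k, 1 + k))
                    + 1j * rng.standard_normal((1 + k, 1 + k)))
a, X, Y, Z = Q[0, 0], Q[0:1, 1:], Q[1:, 0:1], Q[1:, 1:]

def f(w):
    # The realization formula (*): f(w) = a + w X (I - wZ)^{-1} Y.
    return (a + w * (X @ np.linalg.solve(np.eye(k) - w * Z, Y)))[0, 0]

# f should lie in the closed unit ball of H^infty(D):
samples = 0.99 * np.exp(2j * np.pi * np.linspace(0, 1, 200))
print(max(abs(f(w)) for w in samples) <= 1 + 1e-10)   # True
```

Note that $f(0) = a$, so the $(1,1)$ entry of the isometry is the value of the realized function at the origin.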

#### 3. Proof of Pick’s theorem

We now complete the proof of Pick’s theorem. Corollary 3 takes care of one direction: positivity of the Pick matrix is a necessary condition for a norm one multiplier interpolant to exist. It remains to show that this condition is sufficient. For that, we first require a lemma which is of interest in its own right.

Lemma 5: Let $k : X \times X \rightarrow \mathbb{C}$ be a positive semi-definite kernel. Then there exists a Hilbert space $K$ and a function $F: X \rightarrow K$ such that $k(x,y) = \langle F(x), F(y) \rangle$ for all $x,y \in X$. If $X$ is a set with $n$ points, then the dimension of $K$ can be chosen to be the rank of $k$ when considered as a positive semi-definite $n \times n$ matrix.

Proof: Let $K$ be the RKHS associated with $k$ as in Theorem 1 in the previous post (in that theorem the space was denoted $H$). Define $F: X \rightarrow K$ by $F(x) = k_x$. Then we have

$\langle F(x), F(y) \rangle = k(y,x) .$

This is not exactly what the lemma asserts; to obtain the assertion of the lemma apply what we just did to the kernel $\tilde{k}(x,y) = k(y,x)$, which is also positive semi-definite.

To prove the lemma in the finite case, assume that $X = \{1,2,\ldots, n\}$ and consider the positive semi-definite matrix $A = \big(k(i,j) \big)_{i,j=1}^n$. Let the rank of this matrix be $m \leq n$. Then by the spectral theorem $A = \sum_{l=1}^m v_l v_l^*$, where $v_l = \sqrt{\lambda_l}\, u_l$ and $u_1, \ldots, u_m \in \mathbb{C}^n$ are orthonormal eigenvectors corresponding to the non-zero eigenvalues $\lambda_1, \ldots, \lambda_m$. Denote $v_l = (v_l^1, v_l^2, \ldots, v_l^n)$ for $l=1, \ldots, m$. Now define $F: X \rightarrow \mathbb{C}^m$ by $F(i) = (v_1^i, v_2^i, \ldots, v_m^i)$ for $i=1, \ldots, n$. Then we have that

$k(i,j) = (A)_{i,j} = \sum_{l=1}^m (v_l v_l^*)_{i,j} = \sum_{l=1}^m v_l^i \overline{v_l^j} = \langle F(i), F(j) \rangle ,$

as was to be proved.
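The finite-dimensional case of the lemma is a few lines of linear algebra. A NumPy sketch (the test matrix is a random PSD matrix of rank 2, our choice):

```python
import numpy as np

# Factor a PSD kernel matrix A as k(i,j) = <F(i), F(j)> (finite case of Lemma 5).
rng = np.random.default_rng(1)
G = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
A = G @ G.conj().T                     # a 4x4 PSD matrix of rank 2

lam, U = np.linalg.eigh(A)
keep = lam > 1e-12                     # discard the (numerically) zero eigenvalues
F = U[:, keep] * np.sqrt(lam[keep])    # row i of F is the vector F(i) in C^m
m = int(keep.sum())
print(m)                               # 2, the rank of A
# <F(i), F(j)> = sum_l F[i,l] * conj(F[j,l]) recovers A:
print(np.allclose(F @ F.conj().T, A))  # True
```

Scaling the orthonormal eigenvectors by $\sqrt{\lambda_l}$ is exactly the step that turns the spectral decomposition into the factorization $A = \sum_l v_l v_l^*$.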

The following theorem completes the proof of Pick’s theorem. In fact, we obtain a little more information than is claimed in Theorem 1.

Theorem 6: Let $z_1, \ldots, z_n \in D$ and $w_1, \ldots, w_n \in \mathbb{C}$ be given. If the Pick matrix

$\left ( \frac{1-w_i \overline{w_j}}{1-z_i \overline{z_j}} \right)_{i,j=1}^n$

is positive semi-definite and has rank $m$, then there exists a rational function $f$ of degree at most $m$ such that $\|f\|_\infty \leq 1$ and $f(z_i) = w_i$, for all $i=1, \ldots, n$.

Remark: By the degree of a rational function we mean the larger of the degrees of the numerator and the denominator when the function is written in reduced form. A consequence of the boundedness of $f$ is that the poles of $f$ lie away from the closed unit disc.

Proof: By Lemma 5 we know that there are $F_1, \ldots, F_n \in \mathbb{C}^m$ such that $\frac{1 - w_i \overline{w}_j}{1 - z_i \overline{z}_j} = \langle F_i, F_j \rangle$ for all $i,j$. We can rewrite this identity in the following form:

$1 + \langle z_i F_i, z_j F_j \rangle = w_i \overline{w_j} + \langle F_i, F_j \rangle .$

This means that if we define a linear map $V$ from $span\{(1, z_i F_i) : i=1, \ldots, n\} \subseteq \mathbb{C} \oplus \mathbb{C}^m$ into $span \{(w_i, F_i) : i = 1, \ldots, n\} \subseteq \mathbb{C} \oplus \mathbb{C}^m$ by sending $(1, z_i F_i)$ to $(w_i, F_i)$ and extending linearly, then this $V$ is an isometry. Since isometric subspaces have equal dimension, their orthogonal complements in $\mathbb{C} \oplus \mathbb{C}^m$ also have equal dimension, so we can extend $V$ to an isometry $V : \mathbb{C} \oplus \mathbb{C}^m \rightarrow \mathbb{C} \oplus \mathbb{C}^m$.

Recall that the realization formula gives us a way to write down a function $f \in (H^\infty)_1$ given an isometric matrix in block form. So let us write

$V = \left(\begin{smallmatrix} a&X \\ Y&Z \end{smallmatrix} \right) ,$

where the decomposition is according to $\mathbb{C} \oplus \mathbb{C}^m$, and define $f(w) = a + wX(I-wZ)^{-1}Y$. By Theorem 4, $f \in (H^\infty)_1$, and it is rational because the entries of the inverse of a matrix are rational functions of its entries. The degree of $f$ is evidently not greater than $m$. It remains to show that $f$ interpolates the data.

Fix $i \in \{1, \ldots, n\}$. From the definition of $V$ we have

$\left(\begin{smallmatrix} a&X \\ Y&Z \end{smallmatrix} \right) \left(\begin{smallmatrix} 1\\z_i F_i \end{smallmatrix} \right) = \left(\begin{smallmatrix} w_i\\F_i \end{smallmatrix} \right).$

The first row gives $a + z_i X F_i = w_i$. The second row gives $Y + z_i Z F_i = F_i$, so solving for $F_i$ we obtain $F_i = (I - z_i Z)^{-1}Y$ (the inverse exists because $\|Z\| \leq 1$ and $|z_i| < 1$). Plugging this into the first row gives

$w_i = a + z_i X F_i = a+ z_i X (I-z_iZ)^{-1}Y = f(z_i)$

thus $f$ interpolates the data, and the proof is complete.

Exercise D: Go over the proof and make sure you understand how it gives a formula for the solution.
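Since every step of the proof is constructive, it can be turned into a short program. The NumPy sketch below (the name `pick_interpolant` is ours, and the unitary extension of the lurking isometry is carried out by matching orthonormal bases of the orthogonal complements) follows the proof step by step:

```python
import numpy as np

def pick_interpolant(z, w, tol=1e-10):
    """Solve the Pick problem by the lurking isometry argument (a sketch of Theorem 6)."""
    z, w = np.asarray(z, complex), np.asarray(w, complex)
    n = len(z)
    # Pick matrix P_ij = (1 - w_i conj(w_j)) / (1 - z_i conj(z_j)).
    P = (1 - np.outer(w, w.conj())) / (1 - np.outer(z, z.conj()))
    lam, U = np.linalg.eigh(P)
    if lam.min() < -tol:
        raise ValueError("Pick matrix is not positive semi-definite")
    keep = lam > tol
    m = int(keep.sum())                      # m = rank of the Pick matrix
    # Lemma 5: rows F_i of F satisfy P_ij = <F_i, F_j>.
    F = U[:, keep] * np.sqrt(lam[keep])
    # Columns of A are (1, z_i F_i); columns of B are (w_i, F_i) in C^(1+m).
    A = np.vstack([np.ones(n), (z[:, None] * F).T])
    B = np.vstack([w, F.T])
    # A and B have equal Gram matrices, so col(A) -> col(B) is isometric.
    V0 = B @ np.linalg.pinv(A)               # the lurking isometry; 0 on col(A)-perp
    Ua, sa, _ = np.linalg.svd(A)
    Ub, _, _ = np.linalg.svd(B)
    r = int(np.sum(sa > tol))                # rank of A (= rank of B)
    V = V0 + Ub[:, r:] @ Ua[:, r:].conj().T  # extend to a unitary on C^(1+m)
    a, X, Y, Z = V[0, 0], V[0:1, 1:], V[1:, 0:1], V[1:, 1:]
    def f(pt):                               # the realization formula (*)
        return (a + pt * (X @ np.linalg.solve(np.eye(m) - pt * Z, Y)))[0, 0]
    return f

f = pick_interpolant([0.0, 0.5], [0.0, 0.25])
print(abs(f(0.5) - 0.25) < 1e-8)             # True: f interpolates the data
```

The returned function is the rational interpolant of the theorem; by construction it also satisfies $|f| \leq 1$ on the disc.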

Remark: The argument that we used to prove Pick’s theorem has a cute name: the lurking isometry argument. It generalizes to broader contexts, and can be used to prove the necessity part of Theorem 4.

#### 4. Brief on the commutant lifting approach

There is another approach for the interpolation problem that is “even more” operator theoretic. It is now known as the commutant lifting approach. The interested student can find a more detailed treatment in [AM].

We need the following definition. Let $T$ be a contraction on a Hilbert space $H$. Suppose that $K$ is a Hilbert space which contains $H$ as a subspace and that $V$ is an isometry on $K$. Denote by $P_H$ the orthogonal projection of $K$ onto $H$. If

$T^n = P_H V^n \big|_H$

for all $n\in \mathbb{N}$ then we say that $V$ is an isometric dilation of $T$, and that $T$ is a compression of $V$.

An isometric dilation $(V,K)$ of $(T,H)$ is said to be minimal if the only subspace of $K$ which contains $H$ and is invariant for $V$ is $K$ itself.

Remark: Note that if $(V, K)$ is a minimal isometric dilation for $T$ then $K = \overline{span}\{V^n h : n \in \mathbb{N}, h \in H\}$. Also, if $m\leq n$ then

$\langle V^n h, V^m g \rangle = \langle V^{n-m} h, g \rangle = \langle T^{n-m} h, g \rangle ,$

which depends only on $m,n,h,g$ (and not on the particular dilation).

Theorem 7 (Sz.-Nagy’s isometric dilation): Every contraction $T$ on a Hilbert space $H$ has a minimal isometric dilation. The minimal isometric dilation is unique in the following sense: if $V_i$ are both minimal isometric dilations of $T$ on Hilbert spaces $K_i$, $i=1, 2$, then there exists a unitary $U : K_1 \rightarrow K_2$ such that

$Uh = h$

for all $h \in H$ and such that $V_2 = U V_1 U^*$.

Proof: Define $d_T = \sqrt{I-T^*T}$, and let $D = \overline{d_T(H)}$. Define $K = H \oplus D \oplus D \oplus D \oplus \ldots$ and

$V(h, f_1, f_2, f_3, \ldots) = (Th, d_T h, f_1, f_2, \ldots) .$

We can check that $V$ is indeed a minimal isometric dilation for $T$ (exercise). As for uniqueness, if $(V_i, K_i)$ are two minimal isometric dilations, we define $U : K_1 \rightarrow K_2$ by

$U \sum_n V_1^n h_n = \sum_n V_2^n h_n.$

By the remark before the theorem we have that

$\|\sum_n V_2^n h_n \|^2 = \sum_{m,n} \langle V_2^m h_m , V_2^n h_n \rangle =$

$= \sum_{m,n} \langle V_1^m h_m , V_1^n h_n \rangle = \| \sum_n V_1^n h_n \|^2.$

So $U$ extends to a unitary between $K_1$ and $K_2$. Again, we may check that $U V_1 = V_2 U$ and that $Uh = h$ for all $h \in H$.
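The explicit construction in the proof is easy to model numerically by truncating $K$ to finitely many copies of $D$ (for simplicity we take $D = \mathbb{C}^3$ rather than $\overline{d_T(H)}$, which only costs minimality; all parameters below are our own choices). A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
H, N = 3, 6                               # dim H = 3; keep N copies of D = C^3
M = rng.standard_normal((H, H))
T = M / (2 * np.linalg.norm(M, 2))        # a contraction with ||T|| = 1/2
s, U = np.linalg.eigh(np.eye(H) - T.T @ T)
dT = U @ np.diag(np.sqrt(np.clip(s, 0, None))) @ U.T    # d_T = sqrt(I - T*T)

# V(h, f1, ..., f_{N-1}) = (Th, d_T h, f1, ..., f_{N-2}) as a block matrix:
V = np.zeros((H * (N + 1), H * (N + 1)))
V[0:H, 0:H] = T
V[H:2*H, 0:H] = dT
for k in range(2, N + 1):
    V[k*H:(k+1)*H, (k-1)*H:k*H] = np.eye(H)

# The dilation property T^n = P_H V^n |_H is the top-left block of V^n:
V4 = np.linalg.matrix_power(V, 4)
print(np.allclose(V4[0:H, 0:H], np.linalg.matrix_power(T, 4)))   # True
# V is isometric on vectors vanishing in the last copy of D (away from the truncation):
x = np.zeros(H * (N + 1)); x[:2*H] = rng.standard_normal(2*H)
print(np.isclose(np.linalg.norm(V @ x), np.linalg.norm(x)))      # True
```

The isometry check is exactly the identity $\|Th\|^2 + \|d_T h\|^2 = \|h\|^2$, which holds because $d_T^2 = I - T^*T$.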

Example: Let $S$ be the multiplication operator $M_z$ on $H^2$. Then $M_z$ is an isometry. If $z_1, \ldots, z_n \in D$ and $G = span\{k_{z_i} : i = 1, \ldots, n\}$, then $G$ is an invariant subspace for $S^*$ (why?), therefore $T = P_G S\big|_G$ is a compression of $S$, and $S$ is an isometric dilation of $T$. In fact, $S$ is the minimal isometric dilation of $T$ (exercise).

The key operator theoretic result we shall need is the following:

Theorem 8 (Foias–Sz.-Nagy commutant lifting theorem): Let $V \in B(K)$ be an isometric dilation of $T \in B(H)$. Suppose that $X \in B(H)$ satisfies $TX = XT$. Then there exists $Z \in B(K)$ such that $Z^* \big|_H = X^*$ which satisfies $VZ = ZV$ and $\|Z\| = \|X\|$.

I will not supply a proof of this theorem here. We will also require the following fact:

Proposition 9: Let $S$ be as in the example above and let $Z \in B(H^2)$ satisfy $SZ = ZS$. Then there exists $f \in H^\infty$ such that $Z = M_f$.

Proof: Let $f = Z 1 \in H^2$. For every polynomial $p$, $Z p = Z M_p 1 = Z p(M_z) 1 = p(M_z) Z 1 = pf = M_f p$. From the density of polynomials in $H^2$ and the boundedness of $Z$ it follows that $f$ is a multiplier and that $Z = M_f$.

Alternative proof of Pick’s theorem: Suppose that the Pick matrix is positive semi-definite. Define $X$ on $G = span \{k_{z_1}, \ldots, k_{z_n}\}$ to be the adjoint of the map $k_{z_i} \mapsto \overline{w_i} k_{z_i}$. Positivity of the Pick matrix implies that $X$ is a contraction. Now $X$ commutes with $T = P_G S \big|_G$ (their adjoints are both diagonal with respect to the basis $\{k_{z_1}, \ldots, k_{z_n}\}$). So by the commutant lifting theorem and the above proposition there exists some $f \in H^\infty$ such that $M_f^* \big|_{G} = X^*$ and $\|f\|_\infty = \|M_f\| = \|X\| \leq 1$. Since $M_f^* k_{z_i} = \overline{f(z_i)} k_{z_i}$ while $X^* k_{z_i} = \overline{w_i} k_{z_i}$, this $f$ solves the interpolation problem.