### Topics in Operator Theory, Lecture 4: Pick interpolation via commutant lifting

#### by Orr Shalit

Finally we reached the point where we can apply the general theory that we developed in the last two weeks to obtain an interesting application to function theory, namely, the Pick interpolation theorem.

#### 1. Statement of the theorem and background

To ease my life when typing this stuff up, I will denote by $D$ the open unit disc in $\mathbb{C}$. Our goal is to prove the following theorem.

**Theorem 1 (Pick’s interpolation theorem):** *Let $z_1, \ldots, z_n \in D$ and $w_1, \ldots, w_n \in \mathbb{C}$ be given. There exists a function $f \in H^\infty(D)$ satisfying $\|f\|_\infty \leq 1$ and $f(z_i) = w_i$ for $i = 1, \ldots, n$,*

*if and only if the following matrix inequality holds:*

(*) $\quad \left( \dfrac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}} \right)_{i,j=1}^n \geq 0.$
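Condition (*) is easy to test numerically. Here is a small sanity-check sketch of mine (using numpy; the sample points and targets are arbitrary choices, not from the lecture): one data set is obviously interpolable, since it consists of values of $f(z) = z^2$, a function of supremum norm at most one, while the other violates $|w_i| \leq 1$.

```python
import numpy as np

def pick_matrix(z, w):
    """The Pick matrix ((1 - w_i conj(w_j)) / (1 - z_i conj(z_j)))_{i,j}."""
    z, w = np.asarray(z), np.asarray(w)
    return (1 - np.outer(w, w.conj())) / (1 - np.outer(z, z.conj()))

z = np.array([0.1 + 0.2j, -0.4j, 0.5])
w_good = z**2                        # w_i = f(z_i) for f(z) = z^2, which has sup norm <= 1
w_bad = np.array([2.0, 0.0, 0.0])    # violates |w_1| <= 1, so no contractive interpolant

eig_good = np.linalg.eigvalsh(pick_matrix(z, w_good))
eig_bad = np.linalg.eigvalsh(pick_matrix(z, w_bad))
print(eig_good.min() >= -1e-12)      # positive semi-definite: interpolation is possible
print(eig_bad.min() < 0)             # not positive semi-definite: no such f exists
```

For the bad data the matrix even has a negative diagonal entry, so it cannot be positive semi-definite.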

To make this problem into a problem in operator theory, we will need to introduce the Hardy space $H^2$ and a little bit of the theory of reproducing kernel Hilbert spaces. Fortunately, I wrote some notes on this topic a few years ago.

See this linked post for an introduction to the theory of reproducing kernel Hilbert spaces. You need to read that (if you don’t know this stuff) in order to continue reading.

In today’s post, I will prove Pick’s theorem using the commutant lifting theorem. In a previous post, I proved it using the “lurking isometry” argument, another Hilbert space approach, which has the advantage that it also gives rise to the “realization formula” for the interpolating function (in that old post I also said a few words about the commutant lifting approach). The commutant lifting approach has the advantage that it is very beautiful, elegant, and generalizes to a plethora of situations.

In the linked post that I asked you to read, we met the Hilbert function space

$H^2 = \Big\{ f(z) = \sum_{m=0}^\infty a_m z^m : \sum_{m=0}^\infty |a_m|^2 < \infty \Big\},$

and we saw that its multiplier algebra is $H^\infty(D)$ – the algebra of all bounded analytic functions on the unit disc with the supremum norm. We saw that $H^2$ is spanned by the collection of its kernel functions $\{k_w\}_{w \in D}$, where

$k_w(z) = \dfrac{1}{1 - \overline{w} z}$

is the *Szegő kernel*. Note that the Szegő kernel appears in the statement of Pick’s theorem above. In fact, the “Pick matrix” appearing in (*) can be seen to be the matrix $\big( (1 - w_i \overline{w_j}) \langle k_{z_j}, k_{z_i} \rangle \big)_{i,j=1}^n$.
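One can see this factorization concretely by approximating the inner products $\langle k_{z_j}, k_{z_i} \rangle$ from the Taylor coefficients of the Szegő kernels. A numerical sketch of mine (the sample points, targets, and truncation order $N$ are arbitrary assumptions):

```python
import numpy as np

z = np.array([0.3, 0.1 - 0.5j, -0.2 + 0.4j])
w = np.array([0.25, -0.1j, 0.5])
N = 400  # truncation order (arbitrary; the neglected tail is geometrically small)

# Taylor coefficients of k_{z_i}(t) = sum_m conj(z_i)^m t^m; rows = points, columns = powers.
coeffs = np.conj(z)[:, None] ** np.arange(N)[None, :]

# Gram matrix from coefficients: G[i, j] ~ <k_{z_j}, k_{z_i}> = 1/(1 - z_i conj(z_j)).
G = coeffs.conj() @ coeffs.T
exact = 1 / (1 - np.outer(z, z.conj()))
print(np.allclose(G, exact))

# The Pick matrix of (*) is the entrywise product (1 - w_i conj(w_j)) * <k_{z_j}, k_{z_i}>.
pick = (1 - np.outer(w, w.conj())) * exact
print(np.allclose(pick, pick.conj().T))  # Hermitian, as a candidate PSD matrix should be
```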

#### 2. A small detour to explore the dilation of certain contractions

We haven’t yet computed a non-trivial example of an isometric dilation of an operator. Let us do this now.

The most important operator on $H^2$ is the shift $S : H^2 \to H^2$ given by $(Sf)(z) = z f(z)$ (or $S : \sum a_m z^m \mapsto \sum a_m z^{m+1}$). It is easy to see that $S$ is unitarily equivalent to the unilateral shift of multiplicity one.
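In the coefficient picture, $S$ is the right shift of Taylor coefficients, which makes the unitary equivalence with the unilateral shift transparent. A small illustration of mine with a truncated matrix (so the identities are only meaningful below the truncation degree):

```python
import numpy as np

N = 8
S = np.eye(N, k=-1)   # truncated matrix of S: pushes Taylor coefficients one slot right

c = np.array([1.0, 2.0, -1.0, 0.5, 0.0, 0.0, 0.0, 0.0])     # a polynomial of degree 3
print(np.isclose(np.linalg.norm(S @ c), np.linalg.norm(c)))  # S is isometric (below truncation)
print(not np.allclose(S @ S.T, np.eye(N)))                   # but S S* != I: not unitary
```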

An important fact about the Hilbert function space $H^2$ is (Proposition 3 in the linked post) that

(**) $\quad M_f^* k_w = \overline{f(w)} k_w$

for every multiplier $f \in H^\infty(D)$ and every $w \in D$. In particular, if we define $M = \operatorname{span}\{k_{z_1}, \ldots, k_{z_n}\}$, then $M$ is invariant under the adjoint of every multiplier, and in particular it is co-invariant under $S$. If we write $T = P_M S \big|_M$ for the compression (so $T^* = S^* \big|_M$), then $T^*$ is a diagonalizable operator, diagonal with respect to the **non**-orthonormal basis $\{k_{z_1}, \ldots, k_{z_n}\}$, with corresponding eigenvalues $\overline{z_1}, \ldots, \overline{z_n}$.
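The fact (**), namely $M_f^* k_w = \overline{f(w)} k_w$, can be checked numerically for a polynomial multiplier, for which $M_f$ is a lower-triangular Toeplitz matrix in the monomial basis. A sketch of mine (the polynomial $f$, the point $w$, and the truncation order are arbitrary; truncation introduces an error of size roughly $|w|^N$):

```python
import numpy as np

N = 300                                 # truncation order; errors below are ~ |w|**N
w = 0.4 - 0.3j
k_w = np.conj(w) ** np.arange(N)        # Taylor coefficients of k_w(z) = 1/(1 - conj(w) z)

f = np.array([0.5, 0.2, -0.3])          # f(z) = 0.5 + 0.2 z - 0.3 z^2, a sample multiplier
# Matrix of M_f in the monomial basis: lower-triangular Toeplitz in the coefficients of f.
Mf = sum(c * np.eye(N, k=-j) for j, c in enumerate(f))
f_at_w = np.polyval(f[::-1], w)         # f(w); polyval wants highest degree first

# (**): M_f* k_w = conj(f(w)) k_w, up to the truncation error.
print(np.allclose(Mf.conj().T @ k_w, np.conj(f_at_w) * k_w))
```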

**Proposition 2:** *With the notation as above, let $T = P_M S \big|_M$. Then $T$ is a contraction, and its minimal isometric dilation is equal to $S$.*

**Remark: **You might be thinking: this is obnoxious! We took an isometry, compressed it, and now we compute its minimal dilation – which is what we started with!! Patience, my friend.

**Proof:** As we remarked above, it is clear that $S$ is an isometric co-extension of $T$. Let us show that

(#) $\quad H^2 = \bigvee_{m=0}^\infty S^m M$.

Note that $1 = (1 - \overline{z_1} z) k_{z_1}(z)$, and so

$1 = k_{z_1} - \overline{z_1} S k_{z_1} \in \bigvee_{m=0}^\infty S^m M$.

Since the right hand side is invariant under $S$, we find that it contains every polynomial. Since the polynomials are dense in $H^2$, we have (#), and the proof is complete.
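The identity $1 = k_{z_1} - \overline{z_1} S k_{z_1}$ driving the proof is exact at the level of Taylor coefficients; it telescopes with no truncation error. A quick check of mine (the point $z_1$ and the truncation order are arbitrary):

```python
import numpy as np

N = 50
z1 = 0.3 + 0.4j
k = np.conj(z1) ** np.arange(N)          # Taylor coefficients of k_{z_1}
Sk = np.concatenate([[0.0], k[:-1]])     # S k_{z_1}: coefficients shifted one slot right

one = np.zeros(N)
one[0] = 1.0                             # the constant function 1
# k_{z_1} - conj(z_1) * S k_{z_1} telescopes exactly to 1.
print(np.allclose(k - np.conj(z1) * Sk, one))
```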

#### 3. Proof of Theorem 1

Our proof strategy (for Theorem 1) will be to consider the Hilbert space $M = \operatorname{span}\{k_{z_1}, \ldots, k_{z_n}\}$ of the previous section. On this space we can define the operators $T^*$ and $W^*$ determined by

$T^* k_{z_i} = \overline{z_i} k_{z_i} \quad \text{and} \quad W^* k_{z_i} = \overline{w_i} k_{z_i}, \qquad i = 1, \ldots, n.$

Since the kernel functions are linearly independent, the above equations well-define linear operators on $M$, and hence they determine operators $T, W \in B(M)$.

**Proof of Theorem 1:** We start with the easy direction (which holds in every RKHS). Assume that there exists a multiplier $f$ such that $\|f\|_\infty \leq 1$ and $f(z_i) = w_i$ for $i = 1, \ldots, n$; we need to prove that condition (*) holds. In the situation considered, $W^*$ is the restriction of $M_f^*$ to $M$. Indeed, by the very definition of $W^*$, and the “important fact” (**) recalled in the previous section, $M_f^* k_{z_i} = \overline{f(z_i)} k_{z_i} = \overline{w_i} k_{z_i} = W^* k_{z_i}$. Thus, $\|W^*\| \leq \|M_f^*\| = \|f\|_\infty \leq 1$, and so $\|W^* h\|^2 \leq \|h\|^2$ for all $h \in M$. Writing $h = \sum_{i=1}^n c_i k_{z_i}$ for some $c_1, \ldots, c_n \in \mathbb{C}$, and expanding, we obtain

$\sum_{i,j=1}^n \dfrac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}} c_i \overline{c_j} \geq 0$,

and since this has to hold for all choices of $c_1, \ldots, c_n$, this is just the condition (*).
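The expansion behind this step, $\|h\|^2 - \|W^* h\|^2 = \sum_{i,j} \frac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}} c_i \overline{c_j}$, can be verified numerically. A sketch of mine (the points, the achievable targets $w_i = z_i^2$, and the random coefficient vector are all arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.array([0.2, -0.3 + 0.1j, 0.5j])
w = z**2                                  # achievable targets: w_i = f(z_i) for f(z) = z^2
c = rng.standard_normal(3) + 1j * rng.standard_normal(3)

G = 1 / (1 - np.outer(z, z.conj()))       # G[i, j] = <k_{z_j}, k_{z_i}>
P = (1 - np.outer(w, w.conj())) * G       # the Pick matrix of (*)

# h = sum_i c_i k_{z_i}  and  W* h = sum_i c_i conj(w_i) k_{z_i}
d = c * np.conj(w)
lhs = np.vdot(c, P @ c)                        # the quadratic form c* P c
rhs = np.vdot(c, G @ c) - np.vdot(d, G @ d)    # ||h||^2 - ||W* h||^2
print(np.allclose(lhs, rhs))
print(rhs.real >= -1e-12)                      # nonnegative, since the data is achievable
```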

Now, for the converse, assume that the condition (*) holds, that is, the Pick matrix is positive semi-definite. We need to show the existence of a multiplier $f \in H^\infty(D)$ such that $\|f\|_\infty \leq 1$ and $f(z_i) = w_i$ for $i = 1, \ldots, n$.

Now let $T, W \in B(M)$ be as above. Since $T^*$ and $W^*$ are both diagonal with respect to the same basis, they commute, and therefore so do $T$ and $W$. We know that $T$ is a contraction, and that its minimal isometric dilation is $S$ (Proposition 2). Now, the assumption (*) is precisely the assertion that $W^*$ (whence $W$) is a contraction. Indeed, this follows from the computation that we carried out two paragraphs above, in the first part of the proof.

We know, by the commutant lifting theorem (Theorem 3 in the previous post), that there exists $V \in B(H^2)$ such that $SV = VS$, $\|V\| = \|W\| \leq 1$, and $V^* \big|_M = W^*$.

In the next section we will prove that $\{S\}' = \{M_f : f \in H^\infty(D)\}$, in the sense that if $V \in \{S\}'$, then there exists $f \in H^\infty(D)$ such that $V = M_f$. Believing this for the moment, we have $V = M_f$, and $\|f\|_\infty = \|M_f\| = \|V\| \leq 1$. But then $f$ is the required interpolant: since $M_f^* k_{z_i} = \overline{f(z_i)} k_{z_i}$, while $M_f^* k_{z_i} = V^* k_{z_i} = W^* k_{z_i} = \overline{w_i} k_{z_i}$, we find that $f(z_i) = w_i$ for all $i = 1, \ldots, n$.

This completes the proof of Pick’s theorem, modulo the computation of the commutant of $S$.

**Question:** Note that the second part used features of the space $H^2$, and it does not hold in an arbitrary RKHS. What property of the RKHS was used to make the argument work?

#### 4. Completing the proof by computing the commutant of $S$

**Theorem 3:** $\{S\}' = \{M_f : f \in H^\infty(D)\}$.

**Proof:** Clearly $\{M_f : f \in H^\infty(D)\} \subseteq \{S\}'$ (the first algebra is commutative, and $S = M_z$ belongs to it). So let $V \in \{S\}'$. We need to show that there exists a $f \in H^\infty(D)$ such that $V = M_f$. Since $1 \in H^2$, we define $f = V1 \in H^2$. Then $f$ is an analytic function on $D$. If we show that $Vg = fg$ for every $g \in H^2$, then it will follow that $V = M_f$ and that $f$ is a multiplier, and the proof will be complete.

For every $g = \sum_{m=0}^\infty a_m z^m \in H^2$, its Taylor series converges in norm, thus

$Vg = V \Big( \sum_{m=0}^\infty a_m z^m \Big) = \sum_{m=0}^\infty a_m V S^m 1 = \sum_{m=0}^\infty a_m S^m V 1 = \sum_{m=0}^\infty a_m S^m f$,

and the series converges in norm. If we apply the point evaluation at $w \in D$ (a bounded functional on $H^2$), we get the convergent numerical series $(Vg)(w) = \sum_{m=0}^\infty a_m w^m f(w) = g(w) f(w)$. It follows that $f$ is a multiplier, $V = M_f$, and $\|f\|_\infty = \|M_f\| = \|V\| < \infty$.
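A finite-dimensional shadow of Theorem 3 may help the intuition: a matrix that is a polynomial in the truncated (nilpotent) shift is lower-triangular Toeplitz, commutes with the shift, and its symbol is recovered as $f = V1$. This sketch of mine verifies that direction only, for arbitrary sample coefficients:

```python
import numpy as np

N = 6
Sn = np.eye(N, k=-1)     # the truncated shift: a nilpotent Jordan block in the monomial basis

f = np.array([1.0, -0.5, 0.25, 0.0, 2.0, 1.5])   # Taylor coefficients of a sample symbol f
# V = f(Sn) is lower-triangular Toeplitz: the truncated multiplication operator M_f.
V = sum(c * np.linalg.matrix_power(Sn, j) for j, c in enumerate(f))

e0 = np.zeros(N)
e0[0] = 1.0                              # the constant function 1
print(np.allclose(V @ Sn, Sn @ V))       # V commutes with the shift
print(np.allclose(V @ e0, f))            # f = V1: the symbol is recovered from V
```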