To explain, I will need some notation.

Let $k$ be a field. We write $k[z_1, \ldots, z_d]$ for the algebra of all polynomials in $d$ (commuting) variables over the field $k$.

Fix $d$. For every $n$, let $\mathcal{C}M_n^d$ denote the set of all commuting $d$-tuples of $n \times n$ matrices over $k$. We let $\mathcal{C}M^d = \bigsqcup_{n=1}^\infty \mathcal{C}M_n^d$. Now we are looking at all commuting $d$-tuples of matrices of all sizes.

Points in $\mathcal{C}M^d$ can be plugged into any polynomial $p \in k[z_1, \ldots, z_d]$. In fact, points $X = (X_1, \ldots, X_d) \in \mathcal{C}M^d$ can be naturally identified with the space of finite dimensional representations of $k[z_1, \ldots, z_d]$, by

$p \mapsto p(X_1, \ldots, X_d)$.

(We shall use the word “representation” to mean a linear algebraic homomorphism of an algebra into $M_n(k)$ for some $n$).

Now, given an ideal $I \trianglelefteq k[z_1, \ldots, z_d]$, we can consider its zero set in $\mathcal{C}M^d$:

$Z_{\mathcal{C}M^d}(I) = \{X \in \mathcal{C}M^d : p(X) = 0 \text{ for all } p \in I\}$.

(We will omit the subscript for brevity.) In the other direction, given a subset $S \subseteq \mathcal{C}M^d$, we can define the ideal of functions that vanish on it:

$I(S) = \{p \in k[z_1, \ldots, z_d] : p(X) = 0 \text{ for all } X \in S\}$.

Clearly, $I \subseteq I(Z(I))$ for every ideal $I$. The interesting statement is the converse.

The following theorem appears as Corollary 11.7 from the paper “Algebras of bounded noncommutative analytic functions on subvarieties of the noncommutative unit ball” by Guy Salomon, Eli Shamovich and myself (though, as I explained already in several previous posts, this result has already been known to algebraists for quite some time).

**Theorem (free commutative Nullstellensatz): ***For every ideal $I \trianglelefteq k[z_1, \ldots, z_d]$,*

*$I(Z(I)) = I$.*

**Proof:** Originally, we intended to prove it only over the complex numbers; I didn’t guess that it could hold over the reals, for example. But an examination of the proof shows that it works when the field $\mathbb{C}$ is replaced by an arbitrary field $k$. For our proof, see this blog post. The only problematic point might be the lemma (there is only one lemma there), where one will need something called “Zariski’s lemma”, which says that a field extension of $k$ which is finitely generated **as an algebra** is actually finite dimensional over the base field, that is, it is a finite extension of $k$. This fact is needed in the proof of the lemma, in order to see that the quotient of a commutative unital Noetherian algebra over $k$ by a maximal ideal is a finite dimensional field extension of $k$. Happily, Zariski’s lemma works for every field $k$. Thus the original proof holds and we are done!

Since the statement of the theorem is interesting also for $k = \mathbb{R}$ and $k = \mathbb{C}$, I will concentrate now on explaining why this case is true. In two words, the proof for the case over the complex numbers can be summarized as: Jordan form (see the end of this older post for why the theorem follows from the Jordan form). Now I will finish by explaining why the real theorem follows from the complex theorem.

So, assume that the theorem is proved over the complex numbers. One can see that the theorem holds over the reals quite easily, by using the fact that complex numbers can be modelled as $2 \times 2$ matrices over the reals, so $n \times n$ complex matrices can be thought of as $2n \times 2n$ matrices over the reals with $2 \times 2$ blocks.
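To make the block picture concrete, here is a quick numerical sanity check (in Python with NumPy; the function name `realify` is my own, not from the paper) that the passage from complex matrices to real matrices with $2 \times 2$ blocks respects sums and products, so polynomial identities survive the translation:

```python
import numpy as np

def realify(M):
    """Replace each complex entry a+bi of M by the 2x2 real block [[a, -b], [b, a]]."""
    n, m = M.shape
    R = np.zeros((2 * n, 2 * m))
    R[0::2, 0::2] = M.real
    R[0::2, 1::2] = -M.imag
    R[1::2, 0::2] = M.imag
    R[1::2, 1::2] = M.real
    return R

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# realify is a (real-algebra) homomorphism: it respects sums and products,
# so a polynomial with real coefficients vanishes at A iff it vanishes at realify(A).
assert np.allclose(realify(A + B), realify(A) + realify(B))
assert np.allclose(realify(A @ B), realify(A) @ realify(B))
```

In particular, a real matrix zero of a real polynomial can be manufactured from any complex matrix zero, which is the mechanism behind the reduction below.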

For example, in the one variable case, if $p$ and $q$ are polynomials with real coefficients, and if $q$ vanishes at every real matrix zero of $p$, then $q$ vanishes at every complex matrix zero of $p$, so by the complex version of the theorem we have $q = rp$, where $r$ is a polynomial with perhaps complex coefficients. But then you can see that $r$ actually has to be a polynomial with real coefficients. (This proves $q \in (p)$.)

I still can’t quite believe the theorem holds over any field, so if any reader finds a bug in our proof I will be happy to hear.

[Several years ago I went to a conference in China and came back with the insight that in international conferences I should give a computer presentation and not a blackboard talk, because then people who cannot understand my accent can at least read the slides. It’s been almost six years since then and indeed I gave only beamer-talks since. My English has not improved over this period, I think, but I have several reasons for allowing myself to give an old fashioned lecture – the main ones are the nature of the workshop, the nature of the audience and the kind of things I have to say].

In the workshop Guy Salomon, Eli Shamovich and I will give a series of talks on our two papers (one and two). These two papers have a lot of small auxiliary results, which in a usual conference talk we don’t get the chance to speak about. This workshop is a wonderful opportunity for us to highlight some of these results and the ideas behind them, which we feel might be somewhat buried in our papers and have gone unnoticed.

I want to begin by discussing something very general that we mathematicians do. I want to say something about how we give birth to problems and how they then start having a life of their own.

Suppose we have an infinite countable discrete group $G$. Then there is a very natural and well known construction of the **group von Neumann algebra** $L(G)$, which is an operator algebra on $\ell^2(G)$ (for the construction see, e.g., the second section of this post). The operator algebra $L(G)$ contains information on the group: for example, the group $G$ has the ICC property if and only if $L(G)$ is a factor; further, $G$ is amenable if and only if $L(G)$ is hyperfinite. However, these fancy results are somewhat sophisticated, and the mathematical child is prone to ask a more basic question: *when is $L(G)$ isomorphic to $L(H)$?*

Even though this is the first and most naive question that a mathematical child might ask, it is very hard to solve, even for mathematical grown-ups. Notoriously, it is an open question whether $L(\mathbb{F}_2)$ and $L(\mathbb{F}_3)$ are isomorphic, where $\mathbb{F}_n$ denotes the free group on $n$ generators (even though the question is open for such a basic pair of groups, there do exist known pairs of non-isomorphic groups $G$ and $H$ which give rise to isomorphic von Neumann algebras).

A mathematician might obsess over such a question for an entire lifetime. To solve this question one might introduce all kinds of strategies, involving disparate fields of mathematics. To carry out the strategies one will need to introduce new tools, and then one will have to develop and refine these tools. Soon enough, the development and refinement of the tools raise new questions, and answering these questions gives rise to the need for more new tools. Some very interesting results might be proved along the way, some very surprising applications to completely different problems are sometimes discovered. The original question was asked by the mathematical child out of curiosity and wonder more than anything else, but is never forgotten, even though it is sometimes regarded as a naive fancy and not as something that one can openly confess to be aiming at. The various technical issues that arise in analyzing the fine structure of the constructions defining the question in many cases reveal themselves as beautiful problems in their own right, filling the mathematician with new wonder and ambition.

The process described above is what I do for a living. I do not work on the free group factor problem nor on the problems in free probability that it gave birth to, nor on the problems that free probability gave birth to, etc. That example was given only for the sake of illustration.

As a mathematical child (and I still am) I asked a different question. But it is in the same spirit. If you want to get the gist of what I do – this is it. But to really tell you about my work, I need to say more.

Sometimes I feel the need to defend the choice of my own questions. The BIG PEOPLE asked different questions – maybe those are the important questions?

We observe nature and wish to understand it. We see animals. They are beautiful, they are interesting, they excite us. We wish to understand these animals. What does it mean to understand?

One aspect of understanding is classification. We know that animal A is different from animal B, because one is standing here and the other is sitting there. So they are not the same animal. This classification is somewhat *too fine* to be interesting. A coarser classification might be more interesting (but not too coarse, we are not satisfied by noting that they are both “animals”).

To be able to classify in a meaningful way, we need to study the properties of the animals. For example, suppose that we have never seen African animals before, but we walk into a savanna where there are two kinds: elephants and giraffes. If we have never seen and never heard of elephants and giraffes, we do not know that these are elephants and giraffes, but maybe we are clever enough to notice that there are two distinct *kinds of animals *in sight. Now, different people will find different properties interesting, and this will affect their classification scheme. Some will tell the elephants and the giraffes apart by noting that one kind is grey and the other is yellow with brown spots. Some will note that one kind has a long nose (or whatever that is!) and the other has a long neck! Someone else might notice that one has four knees while the other has two knees and two elbows. They are all classifying but they are using different kinds of properties.

Anyone who is studying the world in a creative way will have to ask their own questions, and will have to define to themselves what constitutes an answer.

Let $d \in \mathbb{N}$, and let $B_d$ be the open unit ball in $\mathbb{C}^d$. We let $H^2_d$ be the Hilbert space of all analytic functions $f : B_d \to \mathbb{C}$ such that the coefficients of the power series for $f$, given by $f = \sum_\alpha c_\alpha z^\alpha$, satisfy $\sum_\alpha |c_\alpha|^2 \frac{\alpha!}{|\alpha|!} < \infty$. This space is known as the **Drury-Arveson space** (see, e.g., this post or this survey). Let $\mathcal{M}_d$ be the multiplier algebra of $H^2_d$, that is:

$\mathcal{M}_d = \{f : B_d \to \mathbb{C} : fh \in H^2_d \text{ for all } h \in H^2_d\}$.

We can identify every multiplier $f \in \mathcal{M}_d$ with the multiplication operator $M_f : h \mapsto fh$ that it gives rise to. In this way, $\mathcal{M}_d$ becomes an operator algebra.

Now, given an analytic variety $V \subseteq B_d$, we can consider $\mathcal{M}_d\big|_V$, by which we mean all functions $g : V \to \mathbb{C}$ for which there exists $f \in \mathcal{M}_d$ such that $f\big|_V = g$. It can be shown that $\mathcal{M}_d\big|_V$ is also a multiplier algebra, acting on the space of functions $H^2_d\big|_V$. It can also be shown that the algebra of multiplication operators is completely isometrically isomorphic to $\mathcal{M}_d / \mathcal{I}_V$, where

$\mathcal{I}_V = \{f \in \mathcal{M}_d : f(\lambda) = 0 \text{ for all } \lambda \in V\}$.

We ask the mathematical child’s question: *do these algebras know the variety from which they came? *

With Davidson and Ramsey we studied this question, and found the following answer:

**Theorem:** *$\mathcal{M}_d\big|_V$ and $\mathcal{M}_d\big|_W$ are completely isometrically isomorphic if and only if $W$ is the image of $V$ under a conformal automorphism of the ball.*

However, it is more fun to ask questions in the wrong category. We note: these algebras have a coarser structure that depends on the variety and is simply there. For example, there is the raw algebraic structure. Does the algebraic structure hold the variety in its memory? Following my work with Davidson and Ramsey, one can come up with the following statement.

**“Theorem”:** *$\mathcal{M}_d\big|_V$ and $\mathcal{M}_d\big|_W$ are isomorphic (as algebras) if and only if $V$ and $W$ are biholomorphically equivalent via a multiplier map.*

There are two problems with the above theorem.

The first one is that it only treats quotients of the form where is a “radical” ideal.

The second problem is that it is false! The forward implication is proved only under additional assumptions, and we don’t really know if it is true in general (though I suspect that it is). As for the backward implication, there are counter-examples showing that there are pairs of varieties $V$ and $W$ which are multiplier biholomorphic but for which the algebras are non-isomorphic.

So it’s not really a theorem.

As I learned from George Elliott, when your candidate for a classification fails, there are two things you can do: 1) you can try to work in a restricted class of algebras or varieties, in hope that the invariant is a complete invariant in that setting; 2) you can refine the invariant.

We have papers dealing with restricted classes of varieties (see this paper with Kerr and McCarthy, where we show that the “theorem” is true for reasonable one dimensional varieties; this paper with Davidson and Hartz studies the extent of the failure of the “theorem” when we consider discs embedded in a finite dimensional ball). More recently I have been concentrating on studying the isomorphism problem through a refined invariant. The refined invariants are noncommutative (nc) varieties and their derivatives. These were described in depth in the talks by Eli Shamovich and by Guy Salomon.

To illustrate how noncommutative varieties arise even for commutative problems, I will describe a Nullstellensatz that we obtained apropos our work on the isomorphism problem for algebras of bounded nc analytic functions. (BTW, I blogged about this in the past, and what follows has some overlap with the previous post).

Let $\mathbb{C}[z] = \mathbb{C}[z_1, \ldots, z_d]$ be the algebra of polynomials in $d$ commuting variables.

Given an ideal $I \trianglelefteq \mathbb{C}[z]$, let us define its zero locus as follows:

$Z(I) = \{z \in \mathbb{C}^d : p(z) = 0 \text{ for all } p \in I\}$.

We also introduce the following notation: given $S \subseteq \mathbb{C}^d$, we write

$I(S) = \{p \in \mathbb{C}[z] : p(z) = 0 \text{ for all } z \in S\}$.

The question that every mathematical child will ask immediately is: *(to what extent) can we recover $I$ from $Z(I)$?*

Hilbert answered this question thusly:

**Theorem (Hilbert’s Nullstellensatz): **For every ideal $I \trianglelefteq \mathbb{C}[z]$,

$I(Z(I)) = \sqrt{I}$.

Recall that the **radical** of $I$ is the ideal

$\sqrt{I} = \{p \in \mathbb{C}[z] : \text{there exists some } m \in \mathbb{N} \text{ such that } p^m \in I\}$.
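As a tiny illustration of radical membership (a Python/SymPy sketch; the ideal $I = (x^2, y^2)$ is my own choice of example), one can test membership by polynomial division against the generators: $x + y$ is not in $I$, but its cube is, so $x + y$ lies in $\sqrt{I}$:

```python
import sympy as sp

x, y = sp.symbols('x y')
I_gens = [x**2, y**2]  # generators of the ideal I = (x^2, y^2)

# x + y is not in I: dividing by the generators leaves a nonzero remainder.
# (For a monomial ideal like this one, the generators form a Groebner basis,
# so a zero remainder is equivalent to ideal membership.)
_, rem = sp.reduced(x + y, I_gens, x, y)
assert rem != 0

# But (x + y)**3 = x**3 + 3*x**2*y + 3*x*y**2 + y**3: every term is divisible
# by x**2 or y**2, so the cube lies in I and hence x + y is in the radical of I.
_, rem = sp.reduced(sp.expand((x + y)**3), I_gens, x, y)
assert rem == 0
```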

To describe the Nullstellensatz that appears in my paper with Guy and Eli, I will need to introduce some notation (after we proved it, we found that it can be dug out of a paper of Eisenbud and Hochster – but it does not seem to be well known, at least not in our transparent formulation).

Let $M_n^d$ denote the set of **all** $d$-tuples of $n \times n$ matrices. We let $M^d = \bigsqcup_{n=1}^\infty M_n^d$ be the disjoint union of all $d$-tuples of $n \times n$ matrices, where $n$ runs from $1$ to $\infty$. That is, we are looking at all $d$-tuples of matrices of all sizes. Elements of $M^d$ can be plugged into polynomials in $d$ noncommuting variables.

Similarly, we let $\mathcal{C}M_n^d$ denote the set of all commuting $d$-tuples of $n \times n$ matrices. We let $\mathcal{C}M^d = \bigsqcup_{n=1}^\infty \mathcal{C}M_n^d$. Now we are looking at all commuting $d$-tuples of matrices of all sizes. This can be considered as the “noncommutative variety” cut out in $M^d$ by the equations (in noncommuting variables)

$x_i x_j - x_j x_i = 0, \quad i, j = 1, \ldots, d$.

Points in $\mathcal{C}M^d$ can be plugged into any polynomial $p \in \mathbb{C}[z]$.

In fact, points $X \in \mathcal{C}M^d$ can be naturally identified with the space of finite dimensional representations of $\mathbb{C}[z]$, by

$p \mapsto p(X)$.

(We shall use the word “representation” to mean a homomorphism of an algebra or ring into $M_n = M_n(\mathbb{C})$ for some $n$).
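Here is a small sketch of this identification (Python with NumPy and SymPy; the particular matrices and polynomials are my own choices for illustration): evaluation at a fixed commuting pair of matrices is linear and multiplicative on polynomials, i.e. it is a representation.

```python
import numpy as np
import sympy as sp

x, y = sp.symbols('x y')

# A commuting pair of 3x3 matrices (simultaneously diagonal, so they commute).
X = np.diag([1.0, 2.0, 3.0])
Y = np.diag([4.0, 5.0, 6.0])
assert np.allclose(X @ Y, Y @ X)

def evaluate(p, X, Y):
    """Evaluate a polynomial in two commuting variables at the pair (X, Y)."""
    n = X.shape[0]
    result = np.zeros((n, n))
    for (i, j), coeff in sp.Poly(p, x, y).terms():
        result += float(coeff) * np.linalg.matrix_power(X, i) @ np.linalg.matrix_power(Y, j)
    return result

p = x**2 + y
q = x * y - 1

# Evaluation at the commuting pair (X, Y) is multiplicative -- a representation:
assert np.allclose(evaluate(p * q, X, Y), evaluate(p, X, Y) @ evaluate(q, X, Y))
```

Multiplicativity here really uses that $X$ and $Y$ commute; for a noncommuting pair the map $p \mapsto p(X, Y)$ is not even well defined on commuting polynomials.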

Now, given an ideal $I \trianglelefteq \mathbb{C}[z]$, we can consider its zero set in $\mathcal{C}M^d$:

$Z_{\mathcal{C}M^d}(I) = \{X \in \mathcal{C}M^d : p(X) = 0 \text{ for all } p \in I\}$.

(We will omit the subscript $\mathcal{C}M^d$ for brevity.) In the other direction, given a subset $S \subseteq \mathcal{C}M^d$, we can define the ideal of functions that vanish on it:

$I(S) = \{p \in \mathbb{C}[z] : p(X) = 0 \text{ for all } X \in S\}$.

Tautologically, for every ideal $I$,

$I \subseteq I(Z(I))$,

because every polynomial in $I$ annihilates every tuple on which every polynomial in $I$ vanishes, right? The beautiful (and maybe surprising) fact is the converse.

The following formulation is taken from Corollary 11.7 from the paper “Algebras of bounded noncommutative analytic functions on subvarieties of the noncommutative unit ball” by Guy Salomon, Eli Shamovich and myself.

**Theorem (free commutative Nullstellensatz): ***For every ideal $I \trianglelefteq \mathbb{C}[z]$,*

*$I(Z(I)) = I$.*

For a proof, see the paper, or if you want a version for dummies (== analysts) see this blog post.

The first thing to note about this theorem is that it is not trivial. One might say: “yeah, sure, everything here is finite dimensional, so one can guess that everything is determined by the finite dimensional representations”, but please note that this theorem fails if we replace $\mathbb{C}[z]$ by the (finitely generated) algebra of polynomials in $d$ **non**-commuting variables. It also fails in the algebra $H^\infty(\mathbb{D})$ of bounded analytic functions on the disc, even if one restricts attention to weak-* closed ideals (a setting in which $H^\infty(\mathbb{D})$ becomes a (commutative) principal ideal domain).
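The simplest way to see what matrix points buy us is the ideal $I = (x^2)$ in one variable (a minimal Python/NumPy sketch): over scalar points, $x$ and $x^2$ have the same zeros, so scalar points can only recover the radical; a nonzero nilpotent matrix, however, separates $I$ from $\sqrt{I}$.

```python
import numpy as np

# Scalar points cannot tell I = (x^2) from its radical (x): the only scalar
# zero of x^2 is 0, and x vanishes there too.  Matrix points can: a nonzero
# nilpotent is a "matrix zero" of x^2 on which x does NOT vanish.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

assert np.allclose(N @ N, 0)   # N is a zero of x^2, so N lies in Z((x^2))
assert not np.allclose(N, 0)   # ... but the polynomial x does not vanish at N
```

Hence $x \notin I(Z((x^2)))$ once matrix points are allowed, consistent with $I(Z(I)) = I$ with no radical needed.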

Consider the following theorem.

**Theorem:** *Let $\mathcal{A}$ be an algebra, let $I \trianglelefteq \mathcal{A}$, and let $a \in \mathcal{A}$ be such that $\pi(a) = 0$ for every representation $\pi$ of $\mathcal{A}$ that annihilates $I$. Then $a \in I$.*

In a sense, this is the correct theorem, which holds for *every* algebra, not just for the polynomial algebra $\mathbb{C}[z]$. Isn’t it a better theorem?

NO! Because this theorem is **too good to be wrong!** It is stated in such a way that it is true automatically. Indeed, just consider the quotient representation $\mathcal{A} \to \mathcal{A}/I$. By assumption, $a$ goes to zero under this representation, and hence $a \in I$.

The commutative free Nullstellensatz is interesting precisely because it does not hold for all algebras; it is a special theorem, true for the polynomial algebra $\mathbb{C}[z]$.

Besides being non-trivial, the free commutative Nullstellensatz is pleasing and beautiful. Let us now see what we can do with it.

**Theorem: ***$\mathbb{C}[z]/I \cong \mathbb{C}[z]\big|_{Z(I)}$, the algebra of restrictions of polynomials to $Z(I)$.*

**Proof: **The restriction map $\mathbb{C}[z] \to \mathbb{C}[z]\big|_{Z(I)}$ is surjective, and by the commutative free Nullstellensatz its kernel is $I$.

From here it is easy to show:

**Theorem:** *$\mathbb{C}[z]/I \cong \mathbb{C}[z]/J$ iff there exists a polynomial isomorphism of varieties $Z(I) \to Z(J)$.*

**Proof:** The existence of such a map implies the existence of an isomorphism between $\mathbb{C}[z]/I$ and $\mathbb{C}[z]/J$. Conversely, an isomorphism gives rise to a map between the spaces of finite dimensional representations. But it is easy to see that the finite dimensional representations of $\mathbb{C}[z]/I$ are in bijection with $Z(I)$ via

$X \mapsto \big(p + I \mapsto p(X)\big)$.

That completes the proof.

We now return to operator algebras. The ideas in the last section help us find the correct invariant for quotients of $\mathcal{M}_d$ by non-radical ideals. The completely general problem is still not solved – we need to assume that the ideals are homogeneous.

For a homogeneous ideal , we write . We write for the nc set of all commuting tuples which are also strict row contractions. Then we have the following result, which identifies our quotient algebras as algebras of nc functions on an nc variety.

**Theorem: **.

Using this, we showed:

**Theorem: *** if and only if there exists an appropriate nc analytic “isomorphism” . *

In my talk, the emphasis is on the fact that an isomorphism of the algebras gives rise to a map between the varieties. To make this work, we needed to show that the induced map is regular. The case of radical homogeneous ideals was previously solved in works with Davidson and Ramsey, and in a work of Hartz. The case of non-radical homogeneous ideals is solved by an argument that makes use of the proof of the radical case, essentially reducing things to the radical case using the following Nullstellensatz:

**Theorem: ***Let be a homogeneous ideal. There exists an such that for every , if , then . *

Recall that in Bernstein’s proof of the Weierstrass polynomial approximation theorem, one associates with every continuous function $f \in C[0,1]$ a *Bernstein polynomial*

$B_n(f)(x) = \sum_{k=0}^{n} f\left(\tfrac{k}{n}\right) \binom{n}{k} x^k (1-x)^{n-k}$.

The operators $B_n$ are clearly linear, positive and unital. It can be shown that $B_n(x) = x$ and $B_n(x^2) = x^2 + \tfrac{x(1-x)}{n}$. Therefore

(*) $B_n(g) \to g$ uniformly for every $g \in \{1, x, x^2\}$.

To prove Weierstrass’ approximation theorem, one needs to show that $B_n(f) \to f$ uniformly for all $f \in C[0,1]$. One can give a non-probabilistic proof of this fact, using just that $(B_n)$ is a sequence of positive and unital maps satisfying (*) (see Chapter 10.3 in Davidson and Donsig’s book “Real Analysis and Applications“).
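A minimal numerical sketch of the Bernstein operators (pure Python; the sampling grid and the test function $|t - 1/2|$ are my own choices, not from the text):

```python
import math

def bernstein(f, n, x):
    """Value at x of the n-th Bernstein polynomial of f on [0, 1]."""
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

grid = [i / 200 for i in range(201)]

def sup_error(f, n):
    """Sup-norm distance between f and its n-th Bernstein polynomial on a grid."""
    return max(abs(bernstein(f, n, x) - f(x)) for x in grid)

# B_n reproduces 1 and x exactly, and B_n(x^2) = x^2 + x(1-x)/n, so the
# error on x^2 is at most 1/(4n) -- exactly the three Korovkin test functions.
assert sup_error(lambda t: 1.0, 50) < 1e-12
assert sup_error(lambda t: t, 50) < 1e-12
assert sup_error(lambda t: t * t, 50) <= 1 / (4 * 50) + 1e-12

# Uniform convergence for a general continuous function:
f = lambda t: abs(t - 0.5)
assert sup_error(f, 200) < sup_error(f, 10)
```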

In fact, Korovkin proved that given a sequence $(\Phi_n)$ of positive, linear and unital operators on $C[0,1]$ such that $\Phi_n(g) \to g$ uniformly for $g \in \{1, x, x^2\}$, then $\Phi_n(f) \to f$ uniformly for all $f \in C[0,1]$. This implies that the generating set $\{1, x, x^2\}$ has a very “rigid” hold on the (closed) unital algebra that it generates, $C[0,1]$.

The above discussion should serve as background and motivation for the following definition.

**Definition 1:** Let $G$ be a generating subset of a C*-algebra $B$. We say that $G$ is **hyperrigid in $B$** if for every faithful nondegenerate representation $B \subseteq B(H)$ and every sequence $\phi_n : B(H) \to B(H)$ of UCP maps,

$\lim_n \|\phi_n(g) - g\| = 0$ for all $g \in G$ $\implies$ $\lim_n \|\phi_n(b) - b\| = 0$ for all $b \in B$.

In the above definition I forced myself to overcome my pedantic self and use Arveson’s notation $B \subseteq B(H)$ for different faithful (and nondegenerate) representations of $B$. What we really mean is that $B$ is a certain fixed C*-algebra (either abstract or represented faithfully on *some* space, that’s not the point here) and that for every faithful and nondegenerate representation $\pi : B \to B(H)$ and sequence of UCP maps $\phi_n : B(H) \to B(H)$, the condition

(*) $\lim_n \|\phi_n(\pi(g)) - \pi(g)\| = 0$ for all $g \in G$

implies the consequence

(**) $\lim_n \|\phi_n(\pi(b)) - \pi(b)\| = 0$ for all $b \in B$.

**Conventions:** Unless emphasized otherwise, our C*-algebras will be unital and all the representations will be nondegenerate (hence unital). In his paper Arveson discussed the nonunital setting to some extent, and time permitting I will touch upon this briefly in class (for an additional discussion of hyperrigidity in the context of nonunital algebras see this paper by Guy Salomon). We also note that a generating set is hyperrigid if and only if the operator system that it generates is. Although it is more precise to say that a set of generators (or the operator algebra that it generates) is “*hyperrigid in B*”, we will sometimes just say that it is “*hyperrigid*”.

In Arveson’s paper it is always assumed that the generating set is finite or countably infinite, and that all Hilbert spaces are separable. I think the reason is that at the time, the existence of boundary representations was known only for separable operator systems. I will not make any countability assumptions here, I think they are not needed now that we know that boundary representations always exist (I will be grateful if an alert reader finds a gap in these notes). On the other hand, we may always stay within the realm of separable Hilbert spaces if our C*-algebras are separable – this is left as an exercise.

The definition of hyperrigidity (Definition 1) has an approximation theoretic flavor. The following theorem connects hyperrigidity to the unique extension property, which we studied in previous lectures.

**Theorem 2:** *Let $S$ be an operator system generating a C*-algebra $B$. The following conditions are equivalent.*

1. *$S$ is hyperrigid in $B$.*
2. *For every nondegenerate representation $\pi : B \to B(H)$ and every sequence of UCP maps $\phi_n : B \to B(H)$, if $\phi_n(s) \to \pi(s)$ for every $s \in S$, then $\phi_n(b) \to \pi(b)$ for all $b \in B$.*
3. *For every nondegenerate representation $\pi : B \to B(H)$, the UCP map $\pi\big|_S$ has the unique extension property.*
4. *For every unital C*-algebra $C$, every unital *-homomorphism $\pi : B \to C$ and every UCP map $\phi : B \to C$:*

$\phi(s) = \pi(s)$ for all $s \in S$ $\implies$ $\phi(b) = \pi(b)$ for all $b \in B$.

**Proof:** 1 $\Rightarrow$ 2: Condition 2 is very similar in appearance to the definition of hyperrigidity given by “(*) implies (**)” in the previous section. Indeed, it follows readily by assuming that $B$ is represented faithfully as $B \subseteq B(K)$, and then considering the faithful representation $\pi \oplus \mathrm{id} : B \to B(H \oplus K)$.

If $\phi_n(s) \to \pi(s)$ for all $s \in S$, then $\phi_n(s) \oplus s \to \pi(s) \oplus s$ for all $s \in S$. By the definition of hyperrigidity (summoned only after invoking Arveson’s extension theorem, so that we will be discussing UCP maps on $B(H \oplus K)$), we find that

$\phi_n(b) \oplus b \to \pi(b) \oplus b$ for all $b \in B$,

which implies that $\phi_n(b) \to \pi(b)$. Note how we really needed the assumption of hyperrigidity to address **every** faithful nondegenerate representation of $B$.

2 $\Rightarrow$ 3: This follows by taking $\phi_n = \phi$ for all $n$.

3 $\Rightarrow$ 4: Let $\pi$ and $\phi$ be as in Condition 4. Represent $C$ faithfully (and non-degenerately) on a Hilbert space as $C \subseteq B(H)$, and extend $\phi$ to a UCP map $\tilde\phi : B \to B(H)$. Then $\pi$ can be considered as a representation of $B$ on $H$. If $\phi$ does not agree with $\pi$ on $B$, then $\tilde\phi$ is an extension of $\pi\big|_S$ which is different from $\pi$, in contradiction to Condition 3.

4 $\Rightarrow$ 1: Suppose that $B \subseteq B(H)$ faithfully and non-degenerately, and let $\phi_n : B(H) \to B(H)$ be a sequence of UCP maps. Assume, as in the definition of hyperrigidity, that

$\phi_n(s) \to s$ for all $s \in S$.

Assuming that Condition 4 holds, we wish to prove that

$\phi_n(b) \to b$ for all $b \in B$.

Construct the C*-algebra $\ell^\infty(B(H))$ of bounded sequences with values in $B(H)$ with the obvious structure, and consider the UCP map $\Phi : B \to \ell^\infty(B(H))$ given by

$\Phi(b) = (\phi_n(b))_{n=1}^\infty$.

If $\mathcal{K}$ is the ideal of all sequences tending to zero, then we may pass to the quotient. Write $C = \ell^\infty(B(H))/\mathcal{K}$ and let $q : \ell^\infty(B(H)) \to C$ be the quotient map, and so $\Phi$ induces a UCP map $\phi = q \circ \Phi : B \to C$. By defining a *-homomorphism $\pi : B \to C$ by

$\pi(b) = q\big((b, b, b, \ldots)\big),$

and noting that $\phi(s) = \pi(s)$ for all $s \in S$ (because $\phi_n(s) \to s$), we are precisely in the situation of Condition 4. It follows that $\phi = \pi$, and this is the same as $\phi_n(b) \to b$ for all $b \in B$.

That concludes the proof.

The following is a simple corollary that is worth recording:

**Corollary 3:** *Let be a hyperrigid operator system in . Then is the C*-envelope of .*

**Proof:** Since for every representation , the restriction has the UEP, the Shilov ideal – which we know is the intersection of all the kernels of boundary representations – is trivial.

**Corollary 4:** Let be a hyperrigid operator system in , let be an ideal, and let be the quotient map. Then is hyperrigid in .

**Proof:** Condition 2 in the theorem is preserved under taking quotients.

It is important to note that the converse to Corollary 3 does not hold: an operator system with trivial Shilov ideal in the C*-algebra it generates need not be hyperrigid. Moreover, in the context of Corollary 4, we note that having trivial Shilov ideal is not preserved by quotients. Here is an example (due to Davidson; personal communication) that illustrates both of these statements.

**Example 1: **Let be an orthonormal basis for a Hilbert space , and let be a sequence of complex numbers such that and is a dense sequence in . Define and let be the unital (norm closed) operator algebra generated by . (We are suddenly discussing an operator algebra and not an operator system, but we can always pass from to and back).

One can check that is an irreducible operator algebra containing the compacts. The Calkin map is not isometric on , so by the boundary theorem, the Shilov ideal is trivial and .

On the other hand, is a normal operator with spectrum . It follows that and is the disc algebra. Thus, after passing to the quotient, the Shilov boundary ideal is not trivial.

This example shows that – unlike hyperrigidity – trivial Shilov ideal is a property that does not pass to quotients. It also shows that a trivial Shilov boundary does not imply hyperrigidity (why?).

By Theorem 2, if is hyperrigid, then, in particular, every irreducible representation is a boundary representation. Arveson conjectured that the converse also holds true.

**Arveson’s Hyperrigidity Conjecture:** * is hyperrigid in if and only if every irreducible representation of is a boundary representation for .*

**Example 2:** Suppose that $A$ is a selfadjoint operator with at least three points in the spectrum, and let $C^*(A)$ be the (unital, as always) C*-algebra generated by $A$. We will show that:

- $\{1, A, A^2\}$ is hyperrigid in $C^*(A)$.
- $\{1, A\}$ is not hyperrigid in $C^*(A)$.

(The assumption on the spectrum is no biggie: if the spectrum has two or fewer points, then the operator system spanned by $\{1, A\}$ is the C*-algebra generated by $A$, and of course it is hyperrigid.)

Note that if we let $A$ be the operator of multiplication by the identity function $x \mapsto x$, and if we identify $C[0,1]$ as a C*-subalgebra of $B(L^2[0,1])$, then we obtain a **strengthened version** of Korovkin’s theorem that we mentioned in the first section. Indeed, in Korovkin’s theorem, the conclusion $\Phi_n(f) \to f$ for all $f$ follows from this convergence on only three elements, but only for sequences of (completely) positive maps $\Phi_n : C[0,1] \to C[0,1]$. Hyperrigidity shows that the conclusion follows from the same assumption, but now for any sequence of UCP maps on $B(L^2[0,1])$.

Let us first show that $\{1, A\}$ is not hyperrigid. Suppose that $a = \min \sigma(A)$, $b = \max \sigma(A)$, and $t$ is a point in the spectrum that lies strictly between $a$ and $b$. Then point evaluation at $t$ is a representation of $C(\sigma(A))$. However, its restriction to the operator system spanned by $\{1, A\}$ does not have the unique extension property, since it is a convex combination of the restrictions of the evaluations at $a$ and at $b$.
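Numerically (a Python sketch with hypothetical spectrum points $a = 0$, $t = 0.3$, $b = 1$; these values are mine, chosen for illustration): on the span of $\{1, x\}$, evaluation at $t$ is indistinguishable from a mixture of the endpoint evaluations, while $x^2$ tells them apart, which is exactly why adding $A^2$ to the generating set restores rigidity.

```python
import numpy as np

# Hypothetical spectrum points a < t < b, with t strictly between a and b.
a, t, b = 0.0, 0.3, 1.0
lam = (b - t) / (b - a)          # chosen so that t = lam * a + (1 - lam) * b

# On span{1, x}, evaluation at t agrees with the convex combination
# lam * (evaluation at a) + (1 - lam) * (evaluation at b):
for f in (lambda s: 1.0, lambda s: s):
    assert np.isclose(f(t), lam * f(a) + (1 - lam) * f(b))

# So evaluation at t has two distinct positive extensions to C([a, b]):
# itself, and the mixture above.  They disagree already on x^2:
sq = lambda s: s * s
assert not np.isclose(sq(t), lam * sq(a) + (1 - lam) * sq(b))
```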

Now let $S = \operatorname{span}\{1, A, A^2\}$. To show that $S$ is hyperrigid, we need to show that for every nondegenerate representation $\pi$, the UCP map $\pi\big|_S$ has the UEP.

To this end, suppose that is a UCP map that extends . This just means that and . We have to show that is multiplicative – this will imply that .

Let be a Stinespring representation of , where is a *-representation and is a space containing . We write , and compute:

.

This means that , or put differently, that is invariant under . It follows that is a reducing subspace for , and so is multiplicative, as required.

It is worth noting that under the assumption above on , Arveson proved that if is hyperrigid (where is continuous on the spectrum of ) then must be either strictly convex or strictly concave. He also proved that the converse holds, assuming that the Hyperrigidity Conjecture is true.

**Example 3:** Let be isometries generating a unital C*-algebra . The generating set

.

is hyperrigid in .

In particular, if are Cuntz isometries (i.e., isometries with pairwise orthogonal ranges, such that ), then it is well known that they generate the Cuntz algebra . Then we have that the operator system generated by is hyperrigid in .

Let’s prove the claimed hyperrigidity in the special case of Cuntz isometries; the general case is based on the same idea but slightly more tedious. Write for the operator system generated by . Let be a (nondegenerate, as always) representation. Let be a UCP map such that , and let be a representation that is a dilation of . Define for . Then for every ,

.

is a *-representation, so

.

Comparing the (1,1)-entry, we find . Since , we must have that , so that . On the other hand

.

As before, this implies that , which implies that for all .

We conclude that all reduce , and (since these operators generate ) it follows that is a reducing subspace for . As a consequence, is its own minimal Stinespring dilation, so it is multiplicative. Hence , as required.

Arveson established his hyperrigidity conjecture for the special case of *countable spectrum*. Recall that the **spectrum** of a C*-algebra is the set of all unitary equivalence classes of irreducible representations (if you are the kind of person who – like me – always worries about such things, let me tell you that it is OK to use the word “set” here. Hint: we are speaking about

**Theorem 5:** *Let be an operator system such that the generated C*-algebra has countable spectrum. If every is a boundary representation, then is hyperrigid in .*

**Proof:** Assume that . To prove that is hyperrigid, we need to prove that for **every** representation , the UCP map has the UEP. But every representation of is the direct sum of irreducible representations (here we are using the fact that is countable, and some non-trivial facts from the representation theory of (type I) C*-algebras). Since the direct sum of UCP maps with the UEP has the UEP (yet another fact that requires proof, but not as deep as the previous fact we used), it follows that has the UEP.

In this section I wish to discuss the connection between the notion of hyperrigidity and another conjecture of Arveson – the essential normality conjecture.

The problem of essential normality can take place in many spaces, but I like to view it in the Drury-Arveson space $H^2_d$. See this old post (mostly Section 1) and this old post (mostly Sections 2 and 3) where I already discussed this space. In this old post (Section 1) I discuss the essential normality problem.

In class I plan to discuss the paper “*Essential normality, essential norms, and hyperrigidity*” by Kennedy and myself (here is a link to an arxiv version; here is a link to the corrigendum). I wrote about this problem and about this paper a few times before (for example, when announcing the preprint), so, with the above pointers and links in place, we end these notes!

Thanks for listening! You all get a grade 100 in the course!

**Theorem (the implementation theorem)**: *Let be an operator system in , for . If the Shilov ideal of both is trivial, then every unital and completely isometric map of onto is implemented by a *-isomorphism. *

**Proof:** If the Shilov boundary ideals are both trivial, then are the C*-envelopes of , respectively. By the universal property of the C*-envelope, there exist surjective *-homomorphisms and such that

and

for all . We find that is a *-homomorphism of onto itself which fixes . Hence it is the identity, and must be a *-isomorphism.

**Example:** Suppose that are irreducible. If

for all , then and are unitarily equivalent. In other words, the unitary equivalence class of an irreducible operator is completely determined by the “structure” of the unital operator space generated by the operator. This follows from the implementation theorem because is simple.

In fact, the same argument works for a pair of irreducible compact operators, since the C*-algebra generated by an irreducible compact operator is the algebra of compact operators, which is simple.

The assumption that the operator is irreducible is obviously required, since , and (for example) all generate completely isometric operator systems.
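The point that irreducibility cannot be dropped can be checked numerically (a Python/NumPy sketch; the random matrix and the particular polynomial are arbitrary choices of mine): for any polynomial $p$, the operator norm of $p(A \oplus A)$ equals that of $p(A)$, so $A$ and $A \oplus A$ generate completely isometric operator systems, even though they are certainly not unitarily equivalent.

```python
import numpy as np

def block_diag(A, B):
    """Direct sum A (+) B as a block diagonal matrix."""
    n, m = A.shape[0], B.shape[0]
    C = np.zeros((n + m, n + m), dtype=complex)
    C[:n, :n] = A
    C[n:, n:] = B
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
AA = block_diag(A, A)   # A direct-sum A

def p(T):
    """An arbitrary polynomial in T, for illustration."""
    I = np.eye(T.shape[0], dtype=complex)
    return 2 * T @ T - 3 * T + I

# p(A (+) A) = p(A) (+) p(A), so the operator norm (largest singular value)
# is the same for both: the map A -> A (+) A is isometric on polynomials.
assert np.isclose(np.linalg.norm(p(AA), 2), np.linalg.norm(p(A), 2))
```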

Recall that a set of operators on a Hilbert space is said to be **irreducible** if there is no nontrivial subspace that jointly reduces all the operators in the set. If the set is selfadjoint, then it is irreducible if and only if there is no joint proper invariant subspace. An operator space, system or algebra is said to be irreducible if it is irreducible as a set.

If denotes the algebra of compact operators on , then the quotient map is called the Calkin map. If a C*-algebra contains , then we abuse notation and use the same symbol to denote the quotient , and we also call the Calkin map.

The C*-algebra of compacts is evidently irreducible, and whenever an operator system is such that the C*-algebra , then is also irreducible.

Let be an operator system. It is of interest to determine when has trivial Shilov boundary ideal in . Here is one easy criterion: if the identity map is a boundary representation, then the Shilov boundary ideal (being the intersection of the kernels of all boundary representations) must be trivial. The converse is also true when contains the compacts (I leave this as an exercise).

The following is Arveson’s boundary theorem, which gives a usable criterion for when the identity is a boundary representation. We first present it as it appears in the second “Subalgebras” paper.

**Theorem (the boundary theorem):** *Let be an operator system in such that . The identity map is a boundary representation for in if and only if the restriction of the Calkin map to is **not** completely isometric.*

The following relatively-simple-but-ingenious proof that we will present is due to Davidson. Arveson’s original proof was much more involved, and used stuff from the theory of von Neumann algebras (it is always beautiful to see proofs that make use of tools somewhat unrelated to the problem at hand, but it is always best – especially when teaching – to be able to explain things using the simplest and most relevant tools). A recent preprint by Hasegawa and Ueda presents another proof. Below we will give another very simple proof, that became available only rather recently, after it was proved that there always exist many boundary representations.

Before the proof, note that the assumption that implies that is irreducible. If is assumed irreducible, then if the quotient map is not isometric on , it means that . Indeed, if is not isometric on , then it cannot be injective, so contains a compact operator. By irreducibility, contains all compact operators.

**Davidson’s proof:** Suppose first that is completely isometric on . Then we can define a completely isometric and unital map

,

which, by Arveson’s extension theorem, extends to a UCP map . Now, is a UCP map that fixes but annihilates , so it is not the identity representation. Hence, the identity has more than one UCP extension to .

For the converse, we assume that is not **isometric**, and prove that this implies that every UCP extension of to is the identity. The assumption that is not **isometric**, is stronger than the assumption that is merely not **completely isometric**, and we leave it to the reader to explain why there is no loss of generality in this assumption.

So suppose that is not isometric, and let be a UCP extension of to . We need to show that is the identity map.

Let be the minimal Stinespring dilation of . We will make use of basic facts regarding representations of C*-algebras containing an ideal. The representation breaks up into a direct sum on as

(here the subscript “a” stands for “analytic” and the subscript “s” stands for “singular”), such that annihilates the compacts and is a direct sum of the identity representation.

To obtain the split, one defines and proceeds from there.

The explanation why is a direct sum of the identity representation has two main parts:

- Every non-degenerate representation of the compacts breaks up into a direct sum of representations unitarily equivalent to the identity representation. We have explained this in class for representations of on finite dimensional spaces; here the explanation is similar (making use of matrix units, or rank one operators), and one needs to throw in a Zorn’s Lemma argument.
- By the above, where every summand is equivalent to the identity. Now one observes that every extends uniquely to by the formula

.

For more details, consult either Davidson’s “C*-Algebras by Example” or Arveson’s “An Invitation to C*-Algebras”.

Now let such that . If , then , and we find that . By considering the spectral projection of , which must be compact and hence finite dimensional, we obtain a nonzero finite dimensional subspace

.

Now we carry out a little computation. Fix , and write . Then

.

Since , we must have equality throughout. Using the fact that (because and for every ), we obtain, first that , and second, that .

We have found a nonzero finite dimensional space such that . Interestingly, at this point we can forget about the fact that is not isometric; we discard , and the proof proceeds in an ingenious way.

Let be a minimal nonzero subspace with the property that . We define

for all .

Note that is a closed subspace that contains the identity.

**Claim:** If and , then .

Assuming the claim for the moment (and in order to immediately justify this definition), let’s see how it implies that must be the identity. Clearly, if the claim holds, then . If , then for all ,

.

But don’t forget that is irreducible, so elements of the form , where and must span the whole space. It follows that , and this concludes the proof, modulo the claim.

To prove the claim, we fix and . Define

.

is a nonzero subspace of (for the same reasons that defined above is a subspace). For every , we have

.

Since we have equality throughout, it must be that , and by minimality . The inequalities also give

so must be in the range of (the range of the projection ), and, using that , we find that

,

and this shows that , as claimed. The proof of the boundary theorem is now complete.

**Modern simple proof:** (Thanks to Adam Dor-On, Satish Pandey and Marina Prokhorova for pointing my attention to this).

We first recall that every representation of has the form

,

so that is a sum of identity representations and annihilates the compacts. By Davidson and Kennedy’s theorem, for every ,

is a boundary representation.

Suppose that is not a boundary representation. Then the above maximum is attained at a singular representation. It follows that is completely isometric, since every singular representation factors through the quotient .

Conversely, suppose that is a boundary representation. Then the Shilov boundary ideal, which is the intersection of all boundary ideals, is trivial. It follows that the compacts do not constitute a boundary ideal, and so the Calkin map is not completely isometric.

**Example:** Let be the shift on symmetric Fock space, and let be the unital operator algebra that the shift generates. Note that is a commutative operator algebra, while is not commutative and contains the compacts (showing this for requires some work). Using the boundary theorem, we can see that

- The case : The Calkin map is isometric. It follows that the Shilov ideal of is equal to the compacts, so the C*-envelope is .
- The case : The Calkin map is not completely isometric, as one can see by considering the row , which has norm at least

.

On the other hand, one can show that is essentially normal, so

.

This shows that is not completely isometric. It follows that the identity **is** a boundary representation, and , and in particular it is non-commutative.

Let be a vector space (over the complex numbers, to be concrete). We can form the space of matrices with entries in . If , then embeds in in a natural way (actually, there are several). It follows that usually, it suffices to consider only the spaces of square matrices, i.e., those of the form .

If is an operator space, then we have already discussed how one obtains a natural norm and order structure on . Although usually we are interested in the case where is an operator space, this will play a small role in this lecture.

The action of the scalars on extends naturally to left and right actions of the matrices on , by the usual formulas of matrix multiplication.

A special case of interest is when is a finite dimensional space. Fixing a basis, we have an identification . This gives rise to a natural identification

,

that is, just the space of all -tuples of matrices. The left action of on then becomes

,

(as one easily checks) and similarly for the right action. Given , we have an action by conjugation given by

.

If , then we obtain as usual a map by operating entry-wise. If we view , then this map is just . The complementary situation also arises: if is a linear map, we can consider , which we will usually just write as . For example, if is given by multiplying from the left by a matrix , then for is

.

Conjugation by a matrix is also an operation of this form.

Finally, one last operation: given and , we can always form the direct sum

.
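In the concrete case , the operations just described can be written out explicitly. The following Python/numpy sketch (illustrative only; the helper names are mine) models a point of the n-th level as a d-tuple of n-by-n matrices, and implements the direct sum and the conjugation by a scalar matrix:

```python
import numpy as np

# a point of the n-th level is modeled as a d-tuple of n x n matrices

def direct_sum(X, Y):
    """Entrywise direct sum (X_1 ⊕ Y_1, ..., X_d ⊕ Y_d)."""
    return [np.block([[x, np.zeros((x.shape[0], y.shape[1]))],
                      [np.zeros((y.shape[0], x.shape[1])), y]])
            for x, y in zip(X, Y)]

def conjugate(V, X):
    """Entrywise conjugation (V* X_1 V, ..., V* X_d V) by a scalar matrix V."""
    return [V.conj().T @ x @ V for x in X]

X = [np.array([[0., 1.], [0., 0.]]), np.eye(2)]   # a 2-tuple of 2x2 matrices
Y = [np.eye(3), np.diag([1., 2., 3.])]            # a 2-tuple of 3x3 matrices
Z = direct_sum(X, Y)                               # a 2-tuple of 5x5 matrices

# conjugating Z by the isometry that embeds C^2 as the first two
# coordinates of C^5 compresses Z back to the original tuple X
V = np.zeros((5, 2)); V[0, 0] = V[1, 1] = 1.0
W = conjugate(V, Z)
```

Note that conjugating a direct sum by a suitable isometry recovers a summand, which is exactly the interplay that the matrix convexity axioms in the next definition exploit.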

**Definition 1:** A *matrix convex set* (over a vector space ) is a collection of sets that is closed under direct sums and under conjugation with isometric scalar matrices.

Being closed under direct sums just means that whenever . Being closed under conjugation with isometric scalar matrices means that whenever satisfies and , then .

The following exercise contains two additional equivalent definitions of matrix convex sets.

**Exercise A:** . Prove that the following are equivalent:

- is matrix convex.
- is closed under direct sums, and for every UCP map , and every , we have that .
- For every scalar matrices , , which satisfy , and for every , , we have that . (This can be summarized by saying that is closed under *matrix convex combinations*.)

We can equip every with the operator (row) norm . A matrix convex set is said to be **closed** if is closed for every , and it is said to be **bounded** if the norms of its elements are uniformly bounded over all levels.

**Example:** Let . The *matrix range* of is the set , where

.

It is not hard to show that the matrix range of is a closed and bounded matrix convex set. In fact, every closed and bounded matrix convex set over arises this way.
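For a single selfadjoint matrix, the first level of the matrix range is simply the closed interval between the extreme eigenvalues, and one can see it emerge by sampling vector states. A small numerical illustration (Python/numpy; a sketch of mine, only probing the first level with rank-one states):

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_first_level(A, samples=2000):
    """Approximate the first level of the matrix range of a single
    selfadjoint matrix A by sampling vector states v -> <Av, v>;
    the exact answer is the interval [lambda_min(A), lambda_max(A)]."""
    n = A.shape[0]
    pts = []
    for _ in range(samples):
        v = rng.normal(size=n) + 1j * rng.normal(size=n)
        v /= np.linalg.norm(v)
        pts.append((v.conj() @ A @ v).real)
    return min(pts), max(pts)

A = np.diag([-1.0, 0.0, 2.0])
lo, hi = sampled_first_level(A)   # sampled endpoints approach -1 and 2
```

The higher levels are obtained similarly from UCP maps into rather than states, which is where the genuinely noncommutative information lives.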

**Example:** Simple examples of matrix convex sets (over ) are the **matrix intervals**, those matrix convex sets of the form

,

for some , where is the set of all selfadjoint such that .
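Membership of a selfadjoint matrix in a level of a matrix interval reduces to an eigenvalue check. A quick sketch (Python/numpy; the function name is mine):

```python
import numpy as np

def in_matrix_interval(X, a, b, tol=1e-10):
    """Is the selfadjoint matrix X in the n-th level of the matrix
    interval determined by a <= b, i.e. does a*I <= X <= b*I hold?"""
    X = np.asarray(X)
    if not np.allclose(X, X.conj().T):
        return False          # only selfadjoint elements belong
    eigs = np.linalg.eigvalsh(X)
    return bool(eigs.min() >= a - tol and eigs.max() <= b + tol)

inside = in_matrix_interval(np.diag([0.0, 0.5, 1.0]), 0.0, 1.0)   # True
outside = in_matrix_interval(np.diag([0.0, 2.0]), 0.0, 1.0)       # False
```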

**Example:** If is an operator space, then the “matrix ball” given by , where , forms a matrix convex set. Similarly, if is an operator system, then the “matrix cone” , where , is matrix convex.

**Example:** As a final example, suppose that is an operator system. Then the sets

where , form a matrix convex set (over the vector space ) which is often referred to as the *matrix state space*. Similarly, if is another operator space or system, using the identification , we can discuss matrix convex sets or .

If is a *-vector space (meaning just that it has an involution defined on it), then is also a *-vector space, with involution given by . This allows us to define selfadjoint elements, as well as the real and imaginary parts of an element . To be explicit, we define

and .

Suppose now that is a locally convex topological vector space, and that is its dual. (In typical situations, with the usual pairing). A matrix convex set over is said to be *weakly closed* if it is closed with respect to the weak topology (generated by on ).

Continuous linear maps are in one-to-one correspondence with matrices of continuous linear functionals in . Effros and Winkler proved the following Hahn-Banach type theorem (see Section 5 in their paper):

**Theorem 2:** *Let be as above, and let be a weakly closed matrix convex set over such that . If , then there exists a continuous linear map , such that:*

- , and
- for all and all .

Just as Arveson’s extension theorem is a Hahn-Banach-type extension theorem, this theorem should be considered as a Hahn-Banach-type separation theorem. It is completely analogous to the fact that if is a weakly closed and convex set in which contains , and if , then there exists a continuous functional that separates from in the sense that and for all . The assumption that corresponds to the normalization appearing in the inequality ; this is obviously no big deal because we can always translate the set, and this translates the value of the functional. The assumption is no loss in the noncommutative case as well, since matrix convex sets always contain scalar points.

**Example:** If , then we identify with in the natural way: acts on by . A linear map can be identified with a tuple , where

.

The ampliation then takes the form

.

In this setting it is convenient to discuss a bipolar theorem.

Here I talked about minimal and maximal matrix convex sets, and about polar duality. (See Sections 2 and 3 in this paper for more details.)

We talked about examples: , and .

**Definition 3:** A matrix convex combination is said to be **proper** if all s are surjective.

It is clear why this definition is important: if has a proper range, then will vanish for certain , and will contribute nothing to the matrix convex combination.

**Definition 4:** A point in a matrix convex set is said to be a **matrix extreme point** if whenever is a proper matrix convex combination, then are all invertible (so that for all ) and are all unitarily equivalent to . We denote the set of all matrix extreme points of by .

It is easy to see that every matrix extreme point in is an extreme point (in the usual sense) of in the linear space .

**Example:** .

**Example:** For every matrix convex set , .

**Definition 5:** The *closed matrix convex hull* of a set is the smallest closed matrix convex set containing .

The closed matrix convex hull is also just the closure of the set of all matrix convex combinations of elements from .

The following theorem of Webster and Winkler is a noncommutative version of the Krein-Milman theorem (here is a link to their paper).

**Theorem 6 (Webster-Winkler Krein-Milman theorem):** *Let be a bounded and closed matrix convex set. Then (in particular, the set of matrix extreme points is not empty).*

**Proof:** First, is not empty, because . Let us write . Then, clearly, .

We wish to prove the converse inclusion. For this, we assume without loss of generality that .

Recall how the proof of the Krein-Milman theorem roughly goes: if there is a point then we can separate it from with a linear functional, say and for all . Then the set of points for which is itself a weakly compact convex set, and so by the non-emptiness part of KM, it has an extreme point, which must be an extreme point of – a contradiction.

We can’t use this proof since the matrix extreme points might come from different levels (and the set of matrix extreme points in a certain level might indeed be empty – no contradiction). Thus we will separate a point from all levels of using a “matrix functional” as in Theorem 2, and then we will try to reduce the situation somehow to the classical Krein-Milman theorem to derive a contradiction. The details are as follows.

Assume that . By Theorem 2, there exists a continuous linear map , such that:

- , and
- for all and all .

We define a linear functional as we did in Notes 5 (but forgetting the normalization), by

or, equivalently,

.

A little bit of algebra shows that for every and every pair of matrices , we have and

,

where in the inner product and are considered as elements of , simply by taking the rows and putting them one after the other in one long column (this is like deciding to identify ).

Now, define the set

.

(Here we write for the Frobenius norm ).

First, we note that is convex (in ). Indeed, if and , then we can write

,

where and satisfy all the requirements. This shows that is convex.

Next, we note that one can use in the above definition only s that are surjective, and in particular we can use only values of :

.

We leave the proof of this as an exercise. From this fact it follows that is closed, too.

We shall need the following technical claim:

**Claim:** If is an extreme point, then there exists a matrix extreme point for some such that (where is as in the definition of ).

**Proof of claim:** As noted above, we can choose , and which is surjective, so that

.

We shall prove that is a matrix extreme point in . Suppose that is given as a proper matrix convex combination:

,

with and for all .

Set . Note that

.

Also, , because is surjective (as a product of such), so non-zero. We therefore find as a nontrivial convex combination (in the usual sense) of elements in :

.

Now, since was assumed to be an extreme point in , we find that

for all . Conjugating with the right inverse of , we have that

(*)

for all . We would like to deduce from (*) that is unitarily equivalent to for every . To achieve this, I will use some trickery I came up with, which I myself find somewhat dubious (see Webster and Winkler’s paper (and also the remark below) for a different solution).

First, I will assume that is an operator system. It is true that every weakly closed and bounded matrix convex set is “matrix affine homeomorphic” to a matrix convex set in an operator system, but since we need this theorem only for operator systems in the first place, let’s just grab this assumption without question.

Next, I will assume that is finite dimensional, say . I can do this, because there are only finitely many elements in that appear in (*) for , so we may as well replace by the operator subsystem that these elements generate.

Finally, I will use without proof the (easy) fact mentioned above in the first section, that every closed and bounded matrix convex set in is the matrix range of some tuple . With this in mind, we have that there is a fixed such that

.

We go one step further, and throw in the identity in the tuple even if it wasn’t there (say ), noting that this doesn’t change the geometry of the problem. Dubious, isn’t it?

Now we have for , and for . The equation (*) now implies that

as maps on the operator system generated by , for every , where . Since and are all UCP maps, we can plug in the identity (say ) and find that , so these s are isometries. But are all surjective, hence they are unitaries, and **the proof of the claim** is complete.

So now we know that given any extreme point then there exists a matrix extreme point for some such that . Therefore

,

where we used the fact that and the assumption on .

Since this is true for every extreme point of , we find by the usual Krein-Milman theorem

.

Now let be from the beginning of the proof, such that . So let be such that

.

This contradicts what we just found out. So no such exists, and the proof is complete.

**Discussion:** Recall ordered vector spaces, convex sets , and the affine functions on convex sets . The NC generalization is operator systems, matrix convex sets, and the matrix state space.

We are almost ready to prove the following theorem, which completes our work from the previous notes about the existence of boundary representations.

**Theorem 7:** *Let be an operator system. Then there exist sufficiently many pure UCP maps, in the sense that for every and every , *

is pure .

Before the proof, let us recall that is **pure** if whenever is CP and is CP, then for some .

**Proof:** It is easy to see that

and is finite dimensional .

Indeed, simply consider compressions. In other words (or should I say, using a different notation)

.

Now, it is easy to see that matrix convex combinations are norm decreasing, hence thanks to Theorem 6

.

The proof will therefore be complete once we know that the pure UCP maps in are the same as the matrix extreme points. This is in fact true (it is due to Farenick; here is a link to the relevant paper). Thus, Theorem 9 below – which says that matrix extreme points in are pure – completes this proof.
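The compressions invoked in the proof above are the basic examples of UCP maps, and one can sanity-check unitality and complete positivity numerically via the Choi matrix (Choi's theorem: a map on matrices is CP iff its Choi matrix is positive semidefinite). A small sketch in Python/numpy (illustrative, not part of the notes):

```python
import numpy as np

def choi_matrix(phi, n):
    """Choi matrix sum_{i,j} E_ij ⊗ phi(E_ij) of a linear map
    phi : M_n -> M_k; by Choi's theorem, phi is CP iff it is PSD."""
    I = np.eye(n)
    blocks = [[phi(np.outer(I[i], I[j])) for j in range(n)]
              for i in range(n)]
    return np.block(blocks)

# compression by an isometry V : C^k -> C^n, a basic example of a UCP map
n, k = 4, 2
rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.normal(size=(n, k)))  # columns orthonormal: V*V = I_k
phi = lambda a: V.conj().T @ a @ V

unital = np.allclose(phi(np.eye(n)), np.eye(k))              # phi(I) = I
cp = np.linalg.eigvalsh(choi_matrix(phi, n)).min() > -1e-8   # Choi PSD
```

Both checks pass for any isometry V, reflecting the fact that compressions are always UCP.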

**Remark:** There is an alternative proof of the existence of pure UCP maps in Davidson and Kennedy’s paper which avoids using matrix convexity (hence avoids using Webster and Winkler’s result as well as Farenick’s). The reader will have to fill in some details on their own.

To prove Farenick’s theorem on pure UCP maps of an operator system, we shall first need to understand better pure UCP maps of C*-algebras. This task has already been carried out by Arveson in Section 1.4 of his first “Subalgebras” paper. It is interesting to note that in the beginning of that section Arveson wrote: “The results in the later parts of this section go somewhat beyond our immediate needs in this paper; we feel, however, that these results may be interesting, and that they will prove useful in future developments”.

**Theorem 8:** *Let be a C*-algebra, and let be a UCP map with minimal Stinespring representation . Then is pure if and only if is irreducible. *

**Remark:** Compare this with Proposition 6 in Lecture 7.

**Proof:** Define

.

If is positive, then one can define a CP map by

.

By the definition of the Stinespring representation, . The first part of the proof consists of proving that is in an order preserving one-to-one correspondence with , via the assignment .

Now, if is pure, then every has the form for some ; it follows that every satisfying must be of the form . This implies that , whence is irreducible.

Working the argument backwards gives the converse, and the proof is complete.

**Theorem 9:** *For every operator system , a UCP map into is pure if and only if it is a matrix extreme point of . Moreover, every pure UCP map extends to a pure UCP map .*

**Proof:** We shall outline the proof only in the direction which we need for the existence of boundary representations (that is, we’ll argue that matrix extreme UCP maps are pure). The converse is much simpler, is similar to an argument carried in the proof of Theorem 6, and is left as an exercise.

Assume that is a matrix extreme point in . It is required to prove that is pure, and that it extends to a pure UCP map . This will prove the “moreover” part of the theorem (thanks to the implication that we have left unproven).

Since is a matrix extreme point, its range must be an irreducible set of operators on . Indeed, otherwise, if is a nontrivial projection, then would be a direct sum of matrix states, in contradiction to the matrix-extremality of (direct sums are matrix convex combinations).

Now define

.

This set is weakly compact and convex, so it has extreme points (by the ordinary Krein-Milman theorem). It turns out that every extreme point in is actually a pure UCP map. To show this, we fix , and consider its minimal Stinespring representation . By Theorem 8, is pure if and only if . To show this, it suffices to prove that every strictly positive element is a scalar multiple of the identity. Now, the fact that is an extreme point in implies that the compression map is injective on (I am omitting the proof, which involves using the definitions).

To summarize where we are in the proof: to prove that the map that we fixed (an extreme point that extends ) is pure, we now take a strictly positive , and we aim to prove that is a scalar. Clearly there is no loss of generality in assuming that is also strictly positive.

We obtain a decomposition of as a sum

.

Put and . Then we have

,

and once we restrict the domain to we obtain

.

We have that , so this gives as a matrix convex combination of . Since is assumed to be a matrix extreme point, we obtain for some unitary . Writing , we find that

.

This means that the CP map agrees with the identity map on the irreducible set of operators . Recall that we proved several weeks ago that the *multiplicative domain* of a CP map is a C*-algebra. It follows therefore that is the identity CP map. By the uniqueness part of Choi’s theorem, must be in the linear span of (because is the canonical Choi-Kraus representation of the identity), and so are scalars. We find that is a scalar, and since , we find that is a scalar, as required.

If you don’t like the multiplicative domain argument, here is a nicer one: the operator system is irreducible, hence . But is simple, so . Now, by the general theory of the C*-envelope, the UCIS map must extend uniquely to a *-isomorphism between the C*-envelopes, hence it uniquely extends to the identity map (roughly speaking). We will return to such considerations in more detail in the next week of lectures, when we discuss the “boundary theorem”.

**Summary:** Up to here, we proved that every matrix extreme point extends to a pure map in . In particular, we proved that a matrix extreme point in is pure. It remains to show that itself is pure. This requires some work, but since the key ideas are similar, we finish the outline of the proof at this point.

The paper is a very long paper, so it has a very long introduction too. To help get into the hearts of editors and referees, we wrote, at some point, a shorter cover letter which attempts to briefly explain what the main achievements are. See below the fold for that.

**But first, a rant! **

This paper took *forever* to publish.

In January, 2015, we submitted our paper to a top journal (and I announced arxiv submission in a previous blog post). About a month later it was rejected. We then submitted it to another journal, one tier lower. Two years (!) later it was returned to us with no referee report (!!), just an explanation that they did not manage to get it refereed, and maybe it is in our best interest to try somewhere else. We then sent the paper to several other pretty good journals, and it was rejected, at times after the editor received several mixed “quick opinions”, or after a referee said the work is good but “lacks applications”, or after a full referee report that was positive about some aspects of the paper but overall had some criticism about the length and “disjointedness” of the paper. In the end we sent it to JMAA and we are very pleased with the treatment it got there.

The bottom line: the paper was accepted almost four years after the first submission. In summary, a lot of time was wasted for referees, editors, and us.

I don’t want this rant to be long, but I just want to scream that the way things work is so screwed up! The pressure to publish in top journals, and the fact that this pressure affects everyone and therefore also increases the pressure on the journals and in particular the editors, leads to a big waste of time that has nothing to do with mathematics. Not to mention the sad fragmentation and specialization of mathematics.

You can say: “Well, your paper was weak. *You* are to blame for wasting the editor’s and referees’ time. If you sent it to a low tier journal right away, it would have been accepted faster, and fewer referees would have wasted *their* precious time”.

To this I can only answer: I think that our paper is very strong, and yes, we wanted this to be recognized. Dammit, it is stronger than papers that I accepted (as referee), to journals that rejected it. Oh, and BTW, **referees who reject good papers are wasting their own time, as well as everyone else’s**.

Along the way people asked: “Eighty pages? Why didn’t you split it into three papers?” Well, we finished about a year’s worth of hard work, and were very proud of the result. We felt that everything fitted beautifully together. Splitting the paper might have gotten us more publications faster, but we thought it would involve repetition and be a disservice to anyone trying to study the literature.

Maybe we could have played this better. But should this really be played like a game?

**A little bit of constructive advice:** Here are a few pieces of advice for authors/editors when dealing with long, multi-area papers that are hard to referee:

- At some point a wise colleague suggested that we include a cover letter that anticipates the objections that referees had (long, too many topics, why did you include this or that theorem), and answers them *at the outset* instead of answering them after rejection.
- At some point, we shared the information about where our paper had been considered earlier, and also the names of editors. This led to editors communicating and possibly (I don’t know) using previous referee reports that were good enough for one journal but not great enough for another (maybe).
- I made myself a rule of thumb that I will strive whenever possible to write papers that are less than 40 pages long. After all, we all want our ideas to get through, don’t we? It isn’t always possible, but in case it is possible at all, then there is no dilemma.
- Here is another piece of advice which I don’t know if I can follow (because I am working with junior people) but I know that I want to. One strategy with publishing is to always try great journals, and who knows, maybe we’ll get lucky. This works to some extent. In general, it is very hard to get excellent papers in the wrong field into top journals; the competition is very fierce. I would like to be able to say: I’ll submit only to a place where the chances of acceptance are **high**. I have used this strategy with very good (but not top) journals and I was very happy with the results.
- Overall, **I think that the pressure we feel to publish in top journals is fake news**. Of course everyone who looks at your CV and sees an Annals paper will be automatically impressed, no matter how little they understand your field or care about it. But when you are evaluated for a position, tenure, or promotion, the really important things are the letters of evaluation. There need to be **more than a handful** of prominent mathematicians who will be able (and willing) to vouch for the merit of your work, and write something substantial about your achievements and their impact. If you have gotten to this stage, the journal names don’t matter that much any more, and the actual content of your work (and how well you communicated it) is what really matters. **So most of the effort and also most of the worries should be aimed at making great research.**
- On the other hand, don’t sell yourself too cheap. If you believe in your work, don’t give up.

*Along with the paper, we submitted a cover letter that contained the following overview.*

The paper concerns the study of operator algebras related to monomial ideals. We introduce several operator algebras arising from monomial ideals, and we study how these operator algebras relate to the initial data (that is, the actual monomials determining the ideal) and how these operator algebras relate to each other. This setting accommodates also algebras related to subshifts, from which we derive our motivation.

Our approach considers both C*-algebras and nonselfadjoint operator algebras that arise through the apparatus of C*-correspondences and subproduct systems. Results from C*-algebras feed into the examination of the nonselfadjoint ones, and vice versa.

Let us give a brief list of some connections.

Beginning with a monomial ideal we consider two nonselfadjoint algebras, namely:

- the nonselfadjoint algebra of the related subproduct system , and
- the *tensor algebra* of a certain C*-correspondence .

Along with that there are several C*-algebras that can be associated to and , namely:

- the C*-algebras , , and
- the *Toeplitz-Pimsner algebra* and the *Cuntz-Pimsner algebra* .

When the monomial ideal comes from a subshift then is the Matsumoto algebra on the Fock space of allowable finite words.

In contrast to C*-algebras of C*-correspondences, the C*-algebras of subproduct systems are not well understood in general. However, the C*-correspondence that we introduce is one of the novelties of the paper, and it is useful for completely describing them (Theorem 6.1).

This also allows us to compute the C*-envelope of , by using that it sits inside the tensor algebra of (Theorem 7.6). The C*-envelope question is wide open for general subproduct systems, yet here we offer a C*-correspondence link to a definite answer.

The nonselfadjoint operator algebras also contribute to the study of the arising C*-algebras, as in Theorem 7.11. We note that in general the Cuntz-Pimsner algebra is not always . In fact we give necessary and sufficient conditions on the forbidden word set for that to be the case (Proposition 5.8 and Theorem 6.1).

This raises the question, what is then ? We answer this in Proposition 10.2 where we show that is the graph algebra of the follower set graph of the subshift (a natural construction that gives a representation of the subshift).

The classification results (Corollary 8.12 and Theorem 9.2) are strikingly different: up to permutation of symbols for and , and up to local piecewise conjugacy for . The preparatory work on the quantised dynamics (Section 4) allows us to easily illustrate the differences (see Example 9.8). As further preparatory work, we give general results on the classification of general subproduct systems (Section 3), which are needed in Section 9 for the classification of the algebras . In particular, we show that isometric isomorphisms (resp. bounded isomorphisms) are equivalent to isomorphism (resp. similarity) for general subproduct systems, which improves previous work in the area.

Rigidity for classes of C*-correspondences has been under considerable examination in the past years. Hence the classification results for the algebras are follow-ups of the work of Davidson-Katsoulis and Davidson-Roydor. Part of our study addresses some subtle issues with their known results (in conjunction with Remarks 8.4 and 8.5), which called for some attention.

In the process we develop some alternative techniques that provide additional insight and extensions:

- they apply in the study of relevant C*-algebras (Corollary 8.19), and
- they handle more examples beyond those of Davidson-Katsoulis (Application 11.1).

Recall the definition of a boundary representation.

Our setting will be that of an operator system contained in a C*-algebra . Recall that earlier we discussed the situation of a unital operator algebra , and later we extended our attention to unital operator spaces. In this post we will consider only operator systems, but there is no loss of generality (because every unital completely contractive map extends to a unique unital completely positive map , and vice versa).

**Definition 1:** Let be a unital operator space. A **boundary representation** for is an irreducible representation such that the only UCP map that extends is itself.

The core of the above definition also makes sense when is not irreducible. Some authors have used the term “boundary representation” in this more general sense. We make the following definition:

**Definition 2:** Let be an operator system. A UCP map is said to have the *unique extension property* (UEP) if:

- There exists a **unique** UCP map such that .
- is a *-representation.

Clearly, an irreducible *-representation is a boundary representation for if and only if has the UEP.

**Remark:** One might imagine a situation where a UCP map has a unique UCP extension , but is not necessarily a *-representation. It might have made sense to say that **this** kind of map has the UEP, but the notion has been defined as above (by Arveson), and we stick with this definition. The alternative notion that we suggest has not been explored much.

How does one prove that a UCP map has the UEP? A very interesting solution, due to Dritschel and McCullough, and inspired by earlier work of Muhly and Solel, is the observation that the UEP is equivalent to a certain extremal property.

To define this extremal property, we need the notion of the *dilation order* on UCP maps. If and , then we say that is a *dilation* of .

A dilation is said to be **trivial** if there is a UCP map such that .

**Definition 3:** Let be an operator system. A UCP map is said to be **maximal** if every UCP dilation of is a trivial dilation.

The following proposition connects maximal maps with maps that have the UEP.

**Proposition 4:** *A UCP map is maximal if and only if it has the unique extension property. *

**Proof:** Let be a maximal UCP map. We wish to prove that has the UEP. Let be a UCP extension of to . Let be a minimal Stinespring dilation of . But then is a dilation of , so by maximality, , so is invariant under . By minimality of the Stinespring representation, . It follows that , that is , so is already multiplicative and has the UEP.

Conversely, assume that has the UEP. Suppose that is a dilation. Our goal is to prove that is a trivial dilation. Without loss of generality, we may assume that is a *minimal* dilation, in the sense that (because if it weren’t, the space on the LHS would be reducing for , and we could restrict attention to it).

Now let be an extension of to , which we know exists thanks to Arveson’s extension theorem. Then is a UCP map from to which restricts to , so by the UEP, the map is multiplicative. For every , we have (using the Kadison-Schwarz inequality)

.

Write . Then, after moving all the terms to one side, the above inequality can be rewritten as , which means that , and consequently is invariant under for all . Therefore . We have shown that every *minimal* dilation has to be *really* trivial, and consequently every dilation must be trivial.
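Since the formulas in this proof did not survive rendering, here is a hedged sketch of the standard computation, in notation assumed by me (not taken verbatim from the lecture): Ψ is the UCP extension, P the projection onto the original space, and π(a) = PΨ(a)P, which is multiplicative by the UEP.

```latex
% Assumed notation: \Psi the UCP extension, P the projection onto the
% original space, \pi(a) = P\Psi(a)P multiplicative by the UEP.
\begin{align*}
P\Psi(a)^*\Psi(a)P
  &\le P\Psi(a^*a)P
     && \text{(Kadison--Schwarz)}\\
  &= \pi(a^*a) \;=\; \pi(a)^*\pi(a)
     && \text{(multiplicativity of } \pi \text{)}\\
  &= P\Psi(a)^*P\,\Psi(a)P .
\end{align*}
% Writing c_a := (1-P)\Psi(a)P, subtraction gives
%   c_a^* c_a \;=\; P\Psi(a)^*(1-P)\Psi(a)P \;\le\; 0,
% hence c_a = 0, i.e. the original space is invariant under every \Psi(a).
```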

Dritschel and McCullough proved (essentially) that every UCP map of an operator system can be dilated to a maximal UCP map. When one begins with a completely isometric and UCP map (say, the identity representation of the operator system ), then the maximal dilation has the UEP, and the unique extension is a *-representation whose image is . This is enough to prove the existence of the C*-envelope and the Shilov ideal, but it does not settle the existence of boundary representations.

Davidson and Kennedy used the idea in Dritschel and McCullough’s proof to show the existence of sufficiently many boundary representations by dilating certain UCP maps to maximal dilations. This will be our topic in the next few sections. In the final section we will use the existence of boundary representations to prove the existence of the C*-envelope and the Shilov ideal.

By Proposition 4, we know that every maximal UCP map extends to a *-representation of on . To be a boundary representation, must be irreducible. How can we tell, looking at , whether will turn out to be irreducible? The following notion, introduced by Arveson in 1969, was shown by Davidson and Kennedy more than four decades later to be of key importance.

**Definition 5:** A CP map is said to be **pure** if the only CP map satisfying (that is, and are CP) is a scalar multiple of , i.e., .

**Examples:** The identity map of the matrices is pure. The identity map of the diagonal matrices is not pure.
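Since the matrices involved are small, the contrast in these two examples can be checked numerically. Below is a minimal numpy sketch (the names `theta`, `is_psd`, and the test matrices are my own choices, not from the lecture): compression to the (1,1) entry sits below the identity map of the diagonal algebra in the CP order without being a scalar multiple of it, while on the full matrix algebra it fails to sit below the identity.

```python
import numpy as np

def theta(x):
    """Compression to the (1,1) corner: theta(x) = E11 @ x @ E11."""
    e11 = np.diag([1.0, 0.0])
    return e11 @ x @ e11

def is_psd(m, tol=1e-10):
    """Check positive semidefiniteness via the smallest eigenvalue."""
    return np.min(np.linalg.eigvalsh((m + m.conj().T) / 2)) >= -tol

# On the diagonal algebra D_2, both theta and id - theta are positive
# (hence CP, since D_2 is commutative), yet theta is not a scalar
# multiple of the identity map -- so the identity of D_2 is not pure.
d = np.diag([2.0, 3.0])                      # a positive diagonal matrix
assert is_psd(theta(d)) and is_psd(d - theta(d))

# On all of M_2 the same theta does NOT sit below the identity map:
x = np.array([[1.0, 1.0], [1.0, 1.0]])       # positive, rank one
assert is_psd(x)
assert not is_psd(x - theta(x))              # id - theta fails positivity
```

This is consistent with (though of course far from a proof of) the purity of the identity map of the full matrix algebra.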

**Proposition 6:** *Every pure maximal UCP map extends to a boundary representation. *

**Proof:** As we noted before Definition 5, Proposition 4 implies that every maximal extends to a *-representation . We need to show that the purity assumption implies the irreducibility of . So let us show that if is not irreducible, then is not pure.

If is not irreducible, there is a subspace that reduces . Define for all . Then is clearly CP, and (using the fact that commutes with for all ) we verify that is CP. But is a non-trivial projection and not a scalar multiple of . Thus is not pure, and the proof is complete.

This simple proposition is the key to Davidson and Kennedy’s approach to showing that there exist *sufficiently many* boundary representations. There are two steps that now need to be carried out:

- Show that there exist sufficiently many pure UCP maps to completely norm .
- Show that every pure UCP map can be dilated to a maximal pure UCP map.

We will carry out these steps in the next sections.

The following “local” version of maximality facilitates the proof of the main result of this section.

**Definition 7:** Let be an operator system. A UCP map is said to be **maximal** at if for every dilation , we have .

It is easy to see that is maximal at if and only if for every dilation . Moreover, *using that is selfadjoint*, we see that is maximal if and only if it is maximal at for all ; it even suffices to know maximality at a dense subset of .

**Lemma 8:** *Let be a UCP map and . Then can be dilated to a UCP map such that is maximal at . *

**Proof:** Consider the supremum

.

The supremum is finite, since and are held fixed, and is assumed UCP, so in particular it is contractive. If we replace every in the supremum with its compression to the space spanned by and , then the same supremum is achieved. So we may just consider

.

By compactness of in the BW topology, the supremum is attained at some , which is the sought after dilation.

The really tricky part, and a key to Davidson and Kennedy’s approach, is to show that a pure map can be dilated to a pure map which is maximal at a given pair:

**Proposition 9:** *Let be a ***pure*** UCP map and . Then for every , can be dilated to a ***pure*** UCP map such that is maximal at . *

**Proof:** Fix as in the statement of the proposition. Define

and

,

where

.

By the lemma is non-empty, and it is clear that it is a BW-compact and convex face of . Thus, has an extreme point , which is also an extreme point of .

It is clear that is maximal at . The point is that this extreme point must be pure. The details will be left out of the lecture; I refer the interested student to the proof of Lemma 2.3 in the Davidson-Kennedy paper.

Now, the existence of a pure maximal dilation follows by a transfinite induction argument.

**Theorem 10:** *Let be a ***pure*** UCP map. Then can be dilated to a pure and maximal UCP map. *

**Proof:** As mentioned above, this is achieved using a transfinite induction; see the proof of Theorem 2.4 in the Davidson-Kennedy paper for full details. For brevity, we will just treat the case of separable and separable .

Let be a dense sequence in , and write . By the previous proposition, we can find inductively, for every , a pure dilation of which is maximal at . If we let be the space such that , then we can define the direct limit of these spaces

.

We can define a UCP map uniquely by insisting that . It is not hard to see that is pure and maximal on .

Are we done? No, we’re not done. To be maximal, must be maximal on . We therefore repeat the procedure above and obtain a pure UCP dilation , which is maximal on . Repeating this infinitely many times, we find a sequence such that is maximal on . Finally, defining

and , we have a pure UCP map that dilates and is maximal on . This completes the proof.

In the first version of their paper, Davidson and Kennedy proved the following theorem.

**Theorem 11:** *Let be an operator system. Then there are sufficiently many pure UCP maps of , in the sense that for all and all ,*

(*) is a pure UCP map .

Their proof uses a characterization of pure UCP maps due to Farenick, the notion of matrix convex sets and matrix extreme points, and a Krein-Milman type theorem by Webster and Winkler. We will turn to the proof of Theorem 11 in the next lectures, since it takes time to introduce these notions.

Now if (*) holds, then dilating every pure UCP map to a maximal one (as we have done in the previous section), and then extending the maximal dilation to a boundary representation, one obtains that there are ** sufficiently many boundary representations**, in the sense that for all and all ,

(**) is a boundary representation of .

**Theorem 12:** *Let be an operator system. Then there are sufficiently many boundary representations for relative to . *

In the final version of their paper, Davidson and Kennedy also present a quicker route (which they attribute to Craig Kleski) to obtaining (**). But I had some problems filling in the details. Bonus points to the student who will explain this to me!

**Proposition 13:** (Invariance principle). *Let , , be two operator systems, and suppose that is a unital completely isometric map of onto . Then a map has the UEP if and only if has the UEP. In particular, the boundary representations of and are in one-to-one correspondence.*

**Proof:** Since the UEP is equivalent to maximality (Proposition 4), this is quite easy to see. Indeed, if is a dilation of , then is a dilation of , and they both have the same “shape” (i.e., break up as a direct sum or not).

We can finally show that the Shilov boundary and the C*-envelope exist.

**Theorem 14:** *Let be an operator system. There exists a largest boundary ideal . Moreover, the quotient together with the completely isometric unital quotient map has the following universal property: for every unital completely isometric map , there exists a surjective *-homomorphism such that .*

**Proof:** Let be the direct sum of all boundary representations. We shall prove first that is the Shilov ideal. Because of (**), is completely isometric on , and since , the quotient map is completely isometric on . Thus is a boundary ideal.

Let be a boundary ideal. Our goal is to prove that . Now, the quotient map is completely isometric on . The map () is a well-defined unital completely isometric (hence also UCP) map from onto . The direct sum of maps with the UEP also has the UEP, and thus has the UEP. By the previous proposition, extends uniquely to a representation . In other words, we obtain the factorization , and it follows that must be contained in . This shows that is the Shilov ideal.

The existence of the C*-envelope follows from the invariance principle, similarly to the above. I’ll go through the details, but the readers should at least stop here and see that they can do it themselves.

Consider the map given by . By the invariance principle, has the UEP, so it extends to a *-representation onto . Composing with the *-isomorphism , we obtain a surjective *-homomorphism , and we are almost done. To finish, one needs to check that for all , which readily follows from the constructions.

**Note:** If you are following the notes of this course, be aware that the previous lecture has been updated with quite a lot of material.

**Definition 1:** A *uniform algebra* is a norm-closed subalgebra of the algebra of continuous functions on some compact space , which contains the constant functions and separates points.

Sometimes one uses the terminology **function algebra** instead. Note that we are talking about algebras of complex-valued functions; there is no point in introducing uniform algebras in the case of real-valued functions (why?).

The prototypical example of a uniform algebra is the closure of a nice algebra of functions – such as polynomials or rational functions – in the supremum norm of , where is a subset of . The *Disc Algebra* is defined to be the closure of the polynomials in the sup norm on the closed unit disc .

We will refer to the example of the Disc Algebra repeatedly in this discussion, to illustrate our ideas in a concrete way. But it is good to keep in mind that every notion that we illustrate in the case of the Disc Algebra, and every question we ask about it, makes sense for uniform algebras at large.

The Disc Algebra is also a Banach algebra, so it has an abstract life of its own. It is clear to us that one would be able to cook up different “representations” of this algebra that are isometrically isomorphic. Can we represent as a function algebra on some other topological space ?

Consider the unit circle , and consider the restriction map given by . The kernel of is quite big, but once it is modded out we have . Thanks to the maximum modulus principle, the restriction/quotient map is an isometry when restricted to . Said differently, we have a natural isometric inclusion

,

and we see that a particular Banach algebra can be isometrically isomorphically represented as a uniform algebra of functions living on different topological spaces. The *Shilov boundary* is introduced as a kind of compass to help us find our way among the different possibilities.
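The isometry of the restriction map rests on the maximum modulus principle, which is easy to illustrate numerically. A quick sketch (the polynomial is an arbitrary choice of mine): the sup of a polynomial over samples of the closed disc never exceeds its sup over the boundary circle.

```python
import numpy as np

# A sample polynomial p(z) = 3 + 2z - z^3, viewed as an element of A(D).
p = lambda z: 3 + 2 * z - z**3

# Sample the boundary circle and the closed disc (which includes r = 1).
theta = np.linspace(0, 2 * np.pi, 2000)
circle = np.exp(1j * theta)
radii = np.linspace(0, 1, 200)
disc = np.outer(radii, circle).ravel()

sup_disc = np.abs(p(disc)).max()
sup_circle = np.abs(p(circle)).max()

# Maximum modulus principle: the sup over the disc is attained on the circle.
assert sup_disc <= sup_circle + 1e-9
```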

**Definition 2:** Let be a uniform algebra. A **boundary** for is a closed subset such that the maximum of every function is attained on :

.

**Proposition 3 (Shilov): **The intersection of all boundaries for is a boundary.

This makes the following definition sound:

**Definition 4:** Let be a uniform algebra. The *Shilov boundary* of (relative to ) is the smallest boundary for . The Shilov boundary is denoted .

Thus, the Shilov boundary for satisfies the following:

- .
- The map is isometric on . In particular, .
- is the smallest closed subset of with this property.

Note that the Shilov boundary of relative to is simply . The Shilov boundary has a deeper property: it is independent of the original topological space which we used to represent as functions on. This deeper property (as well as the previous ones, and in particular the existence) will follow from the work we shall do in the noncommutative setting.

Let us compute the Shilov boundary of relative to . On the one hand, we know that is a boundary for , so . On the other hand, it is easy to find, for every , a function such that and for all . Such a point is said to be a **peak point** for . It is easy to see that, in general, any peak point of a function algebra is contained in its Shilov boundary; hence , and so the circle is indeed the Shilov boundary in this case.
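A quick numerical sketch of a peak point on the circle (the peaking function f(z) = (1 + λ̄z)/2 is the standard choice; the names below are mine): it takes the value 1 at λ and has modulus strictly less than 1 everywhere else on the closed disc.

```python
import numpy as np

lam = np.exp(1j * 0.7)                      # a point on the unit circle
f = lambda z: (1 + np.conj(lam) * z) / 2    # peaks exactly at z = lam

# f(lam) = 1 ...
assert abs(f(lam) - 1) < 1e-12

# ... and |f| < 1 at every other sampled point of the closed disc.
theta = np.linspace(0, 2 * np.pi, 1000)
disc = np.outer(np.linspace(0, 1, 100), np.exp(1j * theta)).ravel()
others = disc[np.abs(disc - lam) > 1e-6]
assert np.abs(f(others)).max() < 1
```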

The set of *peak points* has another characterization which generalizes more readily. In the case of the Disc Algebra, we know that every unital and contractive functional extends to a unital and positive functional . Now, we know from the Riesz representation theorem that there exists a probability measure on such that for all (and in particular for all ). Forgetting for a moment about , we find that every unital contractive functional on has a *representing measure*.

Now for any point , the point evaluation on is a well defined unital contractive functional, and there is at least one very obvious representing measure: the Dirac measure . A point in the interior of the disc has many more representing measures; for example,

,

for every .
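The abundance of representing measures at interior points is just the mean value property of analytic functions: normalized arc length on any circle around the point that stays inside the disc is a representing measure. A numerical sketch (the sample function and point are my own choices):

```python
import numpy as np

# Mean value property: for f in A(D) and an interior point z0,
# normalized arc length on any circle around z0 inside the disc
# is a representing measure for evaluation at z0.
f = lambda z: 1 + 2 * z + z**2        # a sample element of A(D)
z0 = 0.3 + 0.2j                       # an interior point

for r in (0.1, 0.3, 0.5):             # several circles, several measures
    theta = np.linspace(0, 2 * np.pi, 4001)[:-1]
    avg = np.mean(f(z0 + r * np.exp(1j * theta)))
    assert abs(avg - f(z0)) < 1e-8
```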

On the other hand, if is on the unit circle, then has only one representing measure – the Dirac measure. It is easy to see that this is so, because such a point is a peak point.

We see that in the case of the Disc Algebra, the Shilov boundary is the circle, which is also the set of peak points, which is also precisely the set of points that have a unique representing measure. In general, for a uniform algebra the set of peak points does not have to equal the Shilov boundary; however, it is a dense subset of it. Moreover, the set of peak points coincides with the set of points for which there exists a unique representing measure (equivalently, the set of points that have a unique positive extension to ).

**Definition 5:** Let be a uniform algebra. The *Choquet boundary* for is the set of all for which the evaluation functional has a unique representing measure.

Although the notions of peak points and the Choquet boundary are equivalent for uniform algebras, the definition of the Choquet boundary is made in terms of unique representing measures, since this notion generalizes well to the case of unital function **spaces**.

In the late 1960s William Arveson initiated a program to study non-selfadjoint operator algebras (here’s something I wrote on Arveson several years ago, which contains some details on his early papers; you might want to read that too because it contains a presentation of the ideas from a slightly different angle). Arveson realized that it would be fruitful to study the relationship between a unital operator algebra and the C*-algebra which it generates. Perhaps his greatest contribution in this context, is the recipe that he came up with for obtaining noncommutative generalizations of the Shilov and Choquet boundaries (which, significantly, he was able to implement in several important applications).

Let us first see how to generalize the notion of a boundary. In the setting of , there is a one-to-one correspondence between closed sets and closed ideals , given by

,

where . It is useful to note that the quotient algebra is isometrically isomorphic to , and the quotient map can be identified with the restriction map .

Now, is a boundary for a function algebra precisely when the restriction/quotient map is isometric when restricted to . So if we replace subsets of with ideals, then we are led to the following definition. In what follows, we will use the word *ideal *to mean a closed, two-sided ideal.

**Definition 6:** Let be a unital operator space. An ideal is said to be a *boundary ideal* for if the quotient map restricts to a complete isometry .

Note that we made the definition for unital operator *spaces, *to accommodate future applications. Note also that we switched from the quotient map being isometric to it being completely isometric; in the commutative case (that is, when is commutative) we know that these two notions are equivalent, and so when generalizing to the noncommutative setting one has to make a choice between merely isometric or completely isometric. It turns out that completely isometric is the correct notion to use.

The Shilov boundary was defined as the smallest boundary. In the noncommutative setting we are led to the following definition.

**Definition 7:** The largest boundary ideal is said to be the *Shilov boundary ideal* (or simply the *Shilov ideal*) for .

It is not clear that a Shilov boundary ideal exists for every unital operator space; we will prove later that it does indeed exist. After Arveson introduced the Shilov boundary he proved that it exists in certain cases, but its existence in general remained an open problem for about a decade, until Hamana proved it using different ideas (the history is explained briefly in the post I linked to earlier).

I outlined Hamana’s solution in class because I think it is interesting, and it makes use of *injective envelopes* as well as

Recall that in the commutative case, we mentioned (without any proof) that the Shilov boundary of a uniform algebra does not depend on the particular choice of inclusion . A similar phenomenon happens in the noncommutative setting.

Let be a unital operator space, and let be its Shilov ideal. Then it is clear that is the smallest *quotient* of into which is mapped completely isometrically. Now suppose that we have a surjective unital complete isometry , and consider the Shilov ideal for in . Then we can also form the quotient , which is the smallest quotient of that contains completely isometrically. It is not obvious, but we will later prove that . That is, the quotient of by the Shilov ideal is an invariant of . It is perhaps worth re-emphasizing that in general .

**Definition 8:** Let be a unital operator space and let be the Shilov ideal. The *C*-envelope* of is the quotient .

Proving that the quotient is invariant to the choice of inclusion is closely related to showing that it enjoys the following universal property.

**Theorem 9: **(Universal property of the C*-envelope). Let be a unital operator space, and let be the Shilov ideal for in . Write , and let be the quotient map. Then for every unital completely isometric map , there exists a unique surjective *-homomorphism , such that .

(A commutative diagram should be sketched at this point.)

Finally, we present the noncommutative analogues of Choquet boundary points. Recall that a point is said to be in the Choquet boundary of a uniform algebra if the point evaluation has a unique representing measure; equivalently (since positive unital maps on are in one-to-one correspondence with probability measures), if the point evaluation has a unique unital positive extension to . Now, what are the point evaluations ? They are precisely the irreducible representations of .

This leads to the following definition.

**Definition 10:** Let be a unital operator space. A *boundary representation* for is an irreducible representation such that the only UCP map that is an extension of is itself.

Arveson showed that if a unital operator space has *sufficiently many* boundary representations, then the intersection of the kernels of all boundary representations is the Shilov ideal, and the image of under the direct product of all boundary representations is the C*-envelope. We will explain this later. For the time being, I will just mention that the existence of boundary representations in general was an open problem for forty-five years, until it was solved by Davidson and Kennedy not long ago.

Let us see how the above strategy for finding the Shilov boundary works in the simple case of the Disc Algebra. Indeed, we knew that the Shilov boundary was contained in the circle thanks to the maximum modulus principle. To see that the circle is the Shilov boundary, we used the fact that every point in is a peak point, that is, a point in the Choquet boundary. This is precisely the step of “showing that there are sufficiently many boundary representations”.

In the next lecture we will begin to give proofs for the existence of boundary representations, from which the existence of the Shilov ideal and C*-envelope follows. Later we will see non-commutative examples and applications.

- Positive functionals and states on C*-algebras and the GNS construction.
- For a linear functional on a C*-algebra, .
- The Gelfand-Naimark theorem .
- A Hahn-Banach extension theorem: If is a unital C*-algebra and is a unital C*-subalgebra, then every state on extends to a state on .

From now on we will begin a systematic study of operator spaces, operator systems, and completely positive maps. I will be following my old notes, which for this part are based on Chapters 2 and 3 of Vern Paulsen’s book, and I will make no attempt at writing better notes.

As I start with some basic things this week, the students should brush up on tensor products of vector spaces and of Hilbert spaces.

**UPDATE DECEMBER 4th: **

I decided to record here in some more detail the material that I covered following Paulsen’s book, since my presentation did not follow the book exactly. In what follows, will denote a unital operator space, an operator system, and and are C*-algebras. Elements of these spaces will be denoted , etc.

**Proposition 1:** For a positive map , .
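The bound in Proposition 1 did not render above; my hedged guess, based on the standard result in Chapter 2 of Paulsen's book, is the following:

```latex
% A hedged reconstruction (the formula did not render above):
% for a positive map \varphi on a unital operator space or operator system,
\|\varphi\| \;\le\; 2\,\|\varphi(1)\| ,
% and Example A below indicates that the constant 2 cannot be improved.
```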

**Example A:** The map from into is unital and positive, and satisfies , so the previous proposition gives the best possible bound.

The reader is invited to use this example to check that several of the results below cannot be improved.

**Proposition 2: **For a completely positive (or just two-positive) map .

**Proposition 3:** (Kadison-Schwarz inequality) For a completely positive (or just two-positive) map and .
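The inequality in Proposition 3 did not render above; it is the standard Kadison-Schwarz inequality:

```latex
% For a unital completely positive (or just 2-positive) map \varphi
% and any element a of the domain,
\varphi(a)^*\varphi(a) \;\le\; \varphi(a^*a) .
```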

**Proposition 4: **If is a unital contraction, then the map given by

.

is well defined and positive.

**Proposition 5: **Consequently, if above is completely contractive, then is completely positive and completely contractive. In particular, a unital map is completely positive if and only if it is completely contractive.

**Proposition 6: **Let be a linear map. If is a commutative C*-algebra (that is ) then , so in particular it is completely contractive if and only if it is contractive. Likewise, if is an operator system (and still assumed to be a commutative C*-algebra), then is completely positive if and only if it is positive.

An analogue of the above proposition “from the other side” is as follows:

**Proposition 7:** If is a linear map from a commutative C*-algebra into an arbitrary C*-algebra , then is completely positive if and only if it is positive (and a similar statement about cb norms **does not** hold).

**Theorem 8: **(Stinespring’s theorem). Let be a completely positive map. Then there exists a Hilbert space , a *-representation , and a linear map , such that

for all .

**Remarks:** 1) Moreover, one can choose in such a way that . In this case, we say that is *the minimal Stinespring representation* of . As the terminology suggests, the minimal Stinespring representation is unique (up to unitary equivalence that respects ).

2) If is unital, then , that is, is an isometry. In that case, it is common to identify with and then the Stinespring representation is written as a dilation theorem:

.

1. As a first application of Stinespring’s theorem, we noted in class that every *-representation of has the form (up to unitary equivalence)

,

where is a finite family of isometries with mutually orthogonal ranges. (If is a normal *-representation of , then , where is now a finite or infinite family of isometries with mutually orthogonal ranges.) From Stinespring’s theorem we find that every CP map from (or on in the normal case) has a so-called **Choi-Kraus decomposition**

,

where .
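A Choi-Kraus decomposition can be computed numerically from the Choi matrix of a CP map. Here is a minimal sketch (the particular map `phi` and its Kraus operators are arbitrary choices of mine, and the Choi-matrix convention used is one of several in the literature):

```python
import numpy as np

# A CP map on M_2 given directly by two (arbitrary) Kraus operators.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.5, 0.0], [0.0, 0.5]])
phi = lambda x: A @ x @ A.conj().T + B @ x @ B.conj().T

# Choi matrix of phi: block (i, j) is phi(E_ij).
n = 2
C = np.zeros((n * n, n * n), dtype=complex)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = 1.0
        C[i*n:(i+1)*n, j*n:(j+1)*n] = phi(E)

# phi is CP iff C >= 0; eigenvectors of C recover Kraus operators.
vals, vecs = np.linalg.eigh(C)
assert vals.min() > -1e-10
kraus = [np.sqrt(v) * vecs[:, k].reshape(n, n).T
         for k, v in enumerate(vals) if v > 1e-10]

# The recovered decomposition reproduces phi on a test matrix.
x = np.array([[1.0, 2j], [-2j, 5.0]])
recon = sum(K @ x @ K.conj().T for K in kraus)
assert np.allclose(recon, phi(x))
```

The `.reshape(n, n).T` step depends on the block ordering chosen for the Choi matrix above; other conventions transpose or conjugate differently.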

2. As a second application of Stinespring’s theorem as well as of the above results, we considered a “dilation machine”. As an example, consider the unital operator space , which is just the closure of all polynomials in the supremum norm in . Given a contraction , the map (defined for polynomials, and extended by continuity to all ) is clearly a contraction, thanks to von Neumann’s inequality. By Proposition 4, extends to a well defined, unital and **positive** extension of from into , which we also call . By Proposition 1, is bounded. Now, since is dense in , we obtain a positive unital continuous extension . By Proposition 7, is **completely positive**. (Now by Proposition 2 we find that is in fact completely contractive, but we don’t require it directly). By Stinespring’s theorem, dilates to a *-representation , and

.

Defining , we see that is unitary (since it is the image of a unitary under a representation), and we have found a unitary dilation:

, for all .

**Remark:** There was a certain amount of cheating involved in the above application, since we used von Neumann’s inequality to pull it off, whereas we proved von Neumann’s inequality using the unitary dilation of a contraction. Don’t feel cheated: first, von Neumann’s inequality can be proved by other methods, which do not go through a unitary dilation. Moreover, this is an outline for a general dilation machine.
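The unitary dilation itself can be sketched in finite dimensions via the Halmos construction. Note the hedges: this gives a dilation of the first power only (the full power dilation, due to Schäffer, requires an infinite-dimensional space), and the contraction `T` below is an arbitrary choice of mine.

```python
import numpy as np

def psd_sqrt(m):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

# A sample contraction T on C^2 (spectral norm <= 1).
T = np.array([[0.3, 0.4], [0.0, 0.5]])
assert np.linalg.norm(T, 2) <= 1

I = np.eye(2)
DT = psd_sqrt(I - T.conj().T @ T)      # defect operator of T
DTs = psd_sqrt(I - T @ T.conj().T)     # defect operator of T*

# Halmos dilation: a unitary on C^2 (+) C^2 whose top-left corner is T.
U = np.block([[T, DTs], [DT, -T.conj().T]])
assert np.allclose(U @ U.conj().T, np.eye(4))
assert np.allclose(U.conj().T @ U, np.eye(4))

# Compression of U to the first copy of C^2 recovers T (first power only).
assert np.allclose(U[:2, :2], T)
```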

**Attempting to prove Ando’s dilation theorem using Stinespring’s theorem. **

Suppose that we have a pair of commuting contractions , and that we want to prove that there exists a unitary dilation for and . Let’s try to pull off the above trick. Using Ando’s inequality, we get a unital contractive map given by . We can extend this to a map . At this stage we get stuck: the space is **not** dense in . So we get a unital positive map , but we don’t know that it is CP, and we don’t know that it extends to . Without the extension we don’t know that is UCP. If we did know it, we would be able to apply Stinespring’s theorem, which would give unitaries and , the sought-after dilation.

Thus we are led to seek a Hahn-Banach type extension theorem for CP maps.

**Theorem 10: **(Arveson’s extension theorem). Let be an operator system in a C*-algebra , and let be a CP map. Then there exists a CP map that extends .

**Remark: **The extension also has the same norm as .

The proof of Arveson’s extension theorem involved some tools that are worth recording. The proof consists of three steps.

**STEP I. **

**Proposition 11:** (Krein’s extension theorem): Let be as above, and assume that is a positive map. Then extends to a positive map .

Recall that when the target C*-algebra is commutative, a positive map is automatically CP (Proposition 7); thus Krein’s extension theorem is actually Arveson’s extension theorem in the special case where . We then used Krein’s theorem to obtain Arveson’s extension theorem for the case where . This is:

**STEP II. **

A key gadget for this is as follows. Let be a linear map. Then we can define a linear functional by

(*) .

Now, one can check that the map is a bijection from onto $L(M_n(S),\mathbb{C})$, with inverse , where

.

As one of the students pointed out to me in class, although the above clunky definitions were set up so that the proof (of a statement below) boils down to , it might be better to view this in a coordinate-free manner. This is done as follows.

For linear spaces , we have

,

by associating with the map . More generally, see “Tensor-Hom Adjunction“.

If we let , and in the above, then (using that ), we get the isomorphism

.

Let us chase the identifications. A map is associated to the linear functional . Here, we used the identification of with the linear functionals on it. The standard way to do this is to identify every matrix with the linear functional , where is the normalized trace . Thus we see that under the above isomorphism, using this particular identification of with its dual, a map is identified with the linear functional

.

It is now easy to check (for example on elements of the form ) that this linear functional is precisely the one that we defined above in (*).

With these gadgets in place, the following Proposition is STEP II in the proof of Arveson’s extension theorem.

**Proposition 12: **With the above notation in the case , the following are equivalent:

- is CP.
- is -positive.
- is positive.

The proof actually goes through noting that (1) implies (2) implies (3) is immediate, and then proving that (3) implies

4. extends to a positive linear functional .

And then proving that the linear map corresponding to is CP. This is a CP extension of , so itself must be CP. In particular, this proves Arveson’s extension theorem in the case .

**STEP III. **

Finally, the last step of the proof of Arveson’s extension theorem uses the result in the finite-dimensional case to obtain the general case. The idea is that we can take a net of finite-dimensional projections and define a net of CP maps by . Every extends to by STEP II.

Now $\{\psi_\alpha\}$ is a bounded net of CP maps. It turns out that every closed ball in $B(A, B(H))$ can be equipped with a topology that makes it compact. This is the *BW topology* (bounded weak topology), which is the topology in which a bounded net $\{L_\alpha\}$ converges to $L$ if and only if $\langle L_\alpha(a) h, g \rangle \to \langle L(a) h, g \rangle$ for all $a \in A$ and $h, g \in H$. (For more information consult pp. 84-85 in Paulsen’s book.)

Now the extension $\psi$ of $\phi$ is simply any cluster point of the net $\{\psi_\alpha\}$ in the BW topology. That completes the idea of the proof of Arveson’s extension theorem.

To ease my life when typing this stuff up, I will denote by $\mathbb{D} = \{z \in \mathbb{C} : |z| < 1\}$ the open unit disc in the complex plane. Our goal is to prove the following theorem.

**Theorem 1 (Pick’s interpolation theorem):** *Let $z_1, \ldots, z_n \in \mathbb{D}$ and $w_1, \ldots, w_n \in \mathbb{C}$ be given. There exists a function $f \in H^\infty(\mathbb{D})$ satisfying $\|f\|_\infty \le 1$ and*

$f(z_i) = w_i \,\,, \quad i = 1, \ldots, n,$

*if and only if the following matrix inequality holds:*

(*) $\Big( \frac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}} \Big)_{i,j=1}^n \ge 0.$
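Before diving into the proof, the criterion is easy to play with numerically. The following sketch (the names and the sample data are mine) builds the Pick matrix and tests positive semi-definiteness; the two sample problems are the two sides of the Schwarz lemma.

```python
import numpy as np

def pick_matrix(z, w):
    """Pick matrix P_ij = (1 - w_i conj(w_j)) / (1 - z_i conj(z_j))."""
    z, w = np.asarray(z, dtype=complex), np.asarray(w, dtype=complex)
    return (1 - np.outer(w, w.conj())) / (1 - np.outer(z, z.conj()))

def solvable(z, w, tol=1e-12):
    """Pick's criterion: f(z_i) = w_i has a solution with ||f||_inf <= 1
    iff the Pick matrix is positive semi-definite."""
    return bool(np.linalg.eigvalsh(pick_matrix(z, w)).min() >= -tol)

# Schwarz's lemma as a sanity check: if f(0) = 0 and ||f||_inf <= 1,
# then necessarily |f(1/2)| <= 1/2.
assert solvable([0, 0.5], [0, 0.5])        # attainable by f(z) = z
assert not solvable([0, 0.5], [0, 0.9])    # violates Schwarz's lemma
```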

To make this problem into a problem in operator theory, we will need to introduce the Hardy space $H^2$ and a little bit of the theory of reproducing kernel Hilbert spaces. Fortunately, I wrote some notes on this topic a few years ago.

See this linked post for an introduction to the theory of reproducing kernel Hilbert spaces. You need to read that (if you don’t know this stuff), in order to continue reading.

In today’s post, I will prove Pick’s theorem using the commutant lifting theorem. In a previous post, I proved it using the “lurking isometry” argument, another Hilbert space approach, which has the advantage that it also gives rise to the “realization formula” for the interpolating function (in that old post I also said a few words about the commutant lifting approach). The commutant lifting approach has the advantage that it is very beautiful, elegant, and generalizes to a plethora of situations.

In the linked post that I asked you to read, we met the Hilbert function space

$H^2 = \Big\{ f(z) = \sum_{n=0}^\infty a_n z^n : \sum_{n=0}^\infty |a_n|^2 < \infty \Big\}$

and we saw that its multiplier algebra $\mathrm{Mult}(H^2)$ is $H^\infty(\mathbb{D})$ – the algebra of all bounded analytic functions on the unit disc with the supremum norm. We saw that $H^2$ is spanned by the collection of its kernel functions $\{k_w\}_{w \in \mathbb{D}}$, where

$k_w(z) = \frac{1}{1 - \overline{w}z}$

is the *Szegő kernel*. Note that the Szegő kernel appears in the statement of Pick’s theorem above. In fact, the “Pick matrix” appearing in (*) can be seen to be the matrix $\big( (1 - w_i \overline{w_j}) \langle k_{z_j}, k_{z_i} \rangle \big)_{i,j=1}^n$.
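To illustrate the last remark (a numerical aside of mine): computing inner products of Szegő kernels from their truncated Taylor coefficients recovers the Gram matrix $\big(\langle k_{z_j}, k_{z_i}\rangle\big)$, and the Pick matrix is its entrywise product with $\big(1 - w_i \overline{w_j}\big)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 4, 400                    # 4 points, Taylor series truncated at degree N
z = 0.6 * rng.random(n) * np.exp(2j * np.pi * rng.random(n))  # points in D

# Taylor coefficients of the Szego kernels: k_z(x) = sum_m conj(z)^m x^m,
# so in H^2, <k_{z_j}, k_{z_i}> = sum_m conj(z_j)^m z_i^m.
m = np.arange(N)
coeffs = z.conj()[:, None] ** m          # row i = coefficients of k_{z_i}
G_num = coeffs.conj() @ coeffs.T         # G_num[i, j] = <k_{z_j}, k_{z_i}>

G = 1 / (1 - np.outer(z, z.conj()))      # closed form: 1/(1 - z_i conj(z_j))
assert np.allclose(G_num, G)             # truncation error is negligible

# The Pick matrix is the Schur (entrywise) product with (1 - w_i conj(w_j)).
w = 0.9 * rng.random(n) * np.exp(2j * np.pi * rng.random(n))
P = (1 - np.outer(w, w.conj())) * G
```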

We haven’t yet computed a non-trivial example of an isometric dilation of an operator. Let us do this now.

The most important operator on $H^2$ is the shift

$S : H^2 \to H^2$

given by $(Sf)(z) = zf(z)$ (or $S : \sum_{n=0}^\infty a_n z^n \mapsto \sum_{n=0}^\infty a_n z^{n+1}$). It is easy to see that $S$ is unitarily equivalent to the unilateral shift of multiplicity one.

An important fact about the Hilbert function space $H^2$ is (Proposition 3 in the linked post) that

(**) $M_f^* k_w = \overline{f(w)} k_w$

for every $f \in \mathrm{Mult}(H^2)$ and every $w \in \mathbb{D}$. In particular, if we define $M = \mathrm{span}\{k_{z_1}, \ldots, k_{z_n}\}$, then $M$ is invariant under the adjoint of every multiplier, and in particular it is co-invariant under $S$. If we write $T = P_M S\big|_M$ for the compression (so $T^* = S^*\big|_M$), then $T^*$ is a diagonalizable operator, diagonal with respect to the **non**-orthonormal basis $\{k_{z_1}, \ldots, k_{z_n}\}$, with corresponding eigenvalues $\overline{z_1}, \ldots, \overline{z_n}$.
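The identity (**) for the shift can be seen concretely on Taylor coefficients, where $S^*$ acts as the backward shift; here is a small check (mine):

```python
import numpy as np

N = 50
w = 0.3 - 0.4j                        # a point in the unit disc

# Truncated Taylor coefficients of k_w: the m-th coefficient is conj(w)^m.
kw = w.conjugate() ** np.arange(N)

# On coefficients, S prepends a zero and S* drops the constant term.
Sstar_kw = kw[1:]

# (**) for f(z) = z: S* k_w = conj(w) k_w, i.e. k_w is an eigenvector of S*.
assert np.allclose(Sstar_kw, w.conjugate() * kw[:-1])
```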

**Proposition 2: ***With the notation as above, let $T = P_M S\big|_M$. Then $T$ is a contraction, and its minimal isometric dilation is equal to $S$.*

**Remark: **You might be thinking: this is obnoxious! We took an isometry, compressed it, and now we compute its minimal dilation – which is what we started with!! Patience, my friend.

**Proof: **As we remarked above, it is clear that $S$ is an isometric co-extension of $T$. Let us show that it is minimal, that is, that

(#) $H^2 = \bigvee_{k=0}^\infty S^k M$.

Now, $(I - \overline{z_1} S) k_{z_1} = 1$, and so

$1 \in \bigvee_{k=0}^\infty S^k M$.

Since the right hand side of (#) is invariant under $S$, we find that it contains every polynomial. Since the polynomials are dense in $H^2$, we have (#), and the proof is complete.
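The key identity $(I - \overline{z_1}S)k_{z_1} = 1$ is just a telescoping of the geometric series; on Taylor coefficients it can be checked directly (a small illustration of mine):

```python
import numpy as np

N = 30
z1 = 0.5 + 0.2j
kz1 = z1.conjugate() ** np.arange(N)     # coefficients of k_{z_1}

# S acts on coefficients by prepending a zero (multiplication by z).
S_kz1 = np.concatenate(([0], kz1[:-1]))

# (I - conj(z_1) S) k_{z_1} = 1: the geometric series telescopes.
one = kz1 - z1.conjugate() * S_kz1
assert np.allclose(one, np.eye(1, N)[0])  # coefficients (1, 0, 0, ...)
```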

Our proof strategy (for Theorem 1) will be to consider the Hilbert space $M = \mathrm{span}\{k_{z_1}, \ldots, k_{z_n}\}$ of the previous section. On this space we can define the operators $T^*$ and $R$ determined by

$T^* k_{z_i} = \overline{z_i} k_{z_i} \quad \textrm{and} \quad R k_{z_i} = \overline{w_i} k_{z_i} \,\, , \quad i = 1, \ldots, n$.

Since the kernel functions are linearly independent, the above equations well-define linear maps on the basis, and hence determine operators $T^*, R \in B(M)$.

**Proof of Theorem 1: **We start with the easy direction (which holds in every RKHS). Assume that there exists a multiplier $f$ such that $\|f\|_\infty \le 1$ and $f(z_i) = w_i$ for $i = 1, \ldots, n$; we need to prove that condition (*) holds. In the situation considered, $R$ is the co-restriction of $M_f^*$ to $M$, that is, $R = M_f^*\big|_M$. Indeed, by the very definition of $R$, and the “important fact” (**) recalled in the previous section, $M_f^* k_{z_i} = \overline{f(z_i)} k_{z_i} = \overline{w_i} k_{z_i} = R k_{z_i}$. Thus, $\|R\| \le \|M_f\| = \|f\|_\infty \le 1$, and so $\|v\|^2 - \|Rv\|^2 \ge 0$ for all $v \in M$. Writing $v = \sum_{i=1}^n c_i k_{z_i}$ for some $c_1, \ldots, c_n \in \mathbb{C}$, and expanding, we obtain

$\sum_{i,j=1}^n \overline{c_i} c_j \, \frac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}} \ge 0$,

and since this has to hold for all choices of $c_1, \ldots, c_n$, this is just the condition (*).

Now, for the converse, assume that the condition (*) holds, that is, the Pick matrix is positive semi-definite. We need to show the existence of a multiplier $f$ such that $\|f\|_\infty \le 1$ and $f(z_i) = w_i$ for $i = 1, \ldots, n$.

Now let $X = R^*$. Since $T^*$ and $R$ are both diagonal with respect to the same basis, they commute; taking adjoints, $X$ commutes with $T$. We know that $T$ is a contraction, and that its minimal isometric dilation is $S$. Now, the assumption (*) is precisely the statement that $R$ (whence $X$) is a contraction. Indeed, this follows from the computation that we carried out two paragraphs above, in the first part of the proof.
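The computation behind the last assertion can be verified numerically (my sketch, with truncated Taylor coefficients standing in for $H^2$): for $v = \sum_i c_i k_{z_i}$ one has $\|v\|^2 - \|Rv\|^2 = \sum_{i,j} \overline{c_i} c_j \frac{1 - w_i\overline{w_j}}{1 - z_i\overline{z_j}}$, so $R$ is a contraction exactly when the Pick matrix is positive semi-definite.

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 3, 600
z = 0.7 * rng.random(n) * np.exp(2j * np.pi * rng.random(n))   # points in D
w = 0.95 * rng.random(n) * np.exp(2j * np.pi * rng.random(n))  # target values
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)

m = np.arange(N)
K = z.conj()[:, None] ** m        # row i = truncated coefficients of k_{z_i}

v = c @ K                          # v  = sum_i c_i k_{z_i}
Rv = (c * w.conj()) @ K            # Rv = sum_i c_i conj(w_i) k_{z_i}

P = (1 - np.outer(w, w.conj())) / (1 - np.outer(z, z.conj()))  # Pick matrix

# ||v||^2 - ||Rv||^2 equals the quadratic form of the Pick matrix at c.
lhs = np.vdot(v, v) - np.vdot(Rv, Rv)
rhs = np.vdot(c, P @ c)
assert np.allclose(lhs, rhs)
```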

We know, by the commutant lifting theorem (Theorem 3 in the previous post), that there exists $Y \in B(H^2)$ such that $YS = SY$, $\|Y\| = \|X\| \le 1$, and $Y^*\big|_M = X^* = R$.

In the next section we will prove that $\{S\}' = \{M_f : f \in H^\infty\}$, in the sense that if $YS = SY$, then there exists $f \in H^\infty$ such that $Y = M_f$. Believing this for the moment, we have $Y = M_f$ with $\|f\|_\infty = \|M_f\| = \|Y\| \le 1$. But then $f$ is the required interpolant: since $M_f^* k_{z_i} = \overline{f(z_i)} k_{z_i}$, while $M_f^* k_{z_i} = Y^* k_{z_i} = R k_{z_i} = \overline{w_i} k_{z_i}$, we find that $f(z_i) = w_i$ for all $i = 1, \ldots, n$.

This completes the proof of Pick’s theorem, modulo the computation of the commutant of $S$.

**Question: **Note that the second part of the proof used features of the space $H^2$, and it does not hold in an arbitrary RKHS. What property of the RKHS $H^2$ was used to make the argument work?

**Theorem 3: **$\{S\}' = \{M_f : f \in H^\infty(\mathbb{D})\}$.

**Proof: **Clearly $\{M_f : f \in H^\infty\} \subseteq \{S\}'$ (the first algebra is commutative, and $S = M_z$). So let $Y \in \{S\}'$. We need to show that there exists a $f \in H^\infty$ such that $Y = M_f$. Since $1 \in H^2$, we may define $f = Y1$. Then $f$ is an analytic function on $\mathbb{D}$. If we show that $Yg = fg$ for every $g \in H^2$, then it will follow that $f$ is a multiplier and $Y = M_f$, and the proof will be complete.

For every $g = \sum_{n=0}^\infty a_n z^n \in H^2$, its Taylor series converges in norm, thus

$Yg = Y\Big( \sum_{n=0}^\infty a_n S^n 1 \Big) = \sum_{n=0}^\infty a_n S^n Y1 = \sum_{n=0}^\infty a_n S^n f$,

and the series converges in norm. If we apply the point evaluation at $w \in \mathbb{D}$ (a continuous functional on $H^2$), we get the convergent numerical series $Yg(w) = \sum_{n=0}^\infty a_n w^n f(w) = g(w) f(w)$. It follows that $f$ is a multiplier, $Y = M_f$, and $\|f\|_\infty = \|M_f\| = \|Y\|$.
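A finite dimensional shadow of Theorem 3 (an aside of mine): on $\mathbb{C}^N$ with the truncated shift $S_N$ (a nilpotent Jordan block), the same recipe $f = Y1$ identifies elements of the commutant with lower triangular Toeplitz matrices, i.e. polynomials in $S_N$ — the "multipliers" of this toy model.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 6
S = np.eye(N, k=-1)                   # truncated shift on C^N: e_m -> e_{m+1}

def mult(f):
    """The 'multiplier' M_f = f_0 I + f_1 S + ... : lower triangular Toeplitz."""
    return sum(f[m] * np.linalg.matrix_power(S, m) for m in range(N))

f = rng.standard_normal(N)
Mf = mult(f)
assert np.allclose(Mf @ S, S @ Mf)    # multipliers commute with the shift

# Conversely, mimic the proof: given Y in the commutant, set f = Y(1), i.e.
# read off the first column, and check that Y is multiplication by f.
Y = mult(rng.standard_normal(N))      # a sample element of the commutant
g = Y[:, 0]                           # "f = Y 1" (1 = e_0 here)
Mg = mult(g)
assert np.allclose(Y, Mg)
```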
