(Well, these are the usual highs and lows of being a mathematician, but since this is a survey paper and not a research paper, I feel comfortable enough to share these feelings).

This survey was submitted to (and will hopefully appear in) the Proceedings of the International Workshop on Operator Theory and its Applications (IWOTA) 2019, Portugal. It is an expanded version of the semi-plenary talk that I gave at that conference. I used a preliminary version of this survey as lecture notes for the mini-course that I gave at the recent workshop “Noncommutative Geometry and its Applications” at NISER Bhubaneswar.

I hope somebody finds it useful or entertaining.

I have two stories to tell about Janos.

When I came to do a postdoc at the University of Waterloo in 2009, I was already working in operator algebras, and my supervisor was Ken Davidson. However, I remembered my functional equations origins, and I was happy to find out that Janos Aczel was an Emeritus Professor there. I was invited to give a talk at the Analysis Seminar, and my plan was to talk about the triumph of my PhD work. However, my talk was planned quite badly, or maybe I was just too excited, and it was over in less than half an hour. For a young mathematician in a new department, fresh out of his PhD, this is a disaster. The seminar leader asked if there were any questions or comments, and there was a brief and awkward silence. But then Janos raised his hand, and when he got permission to make his comment he said: “a good talk should have a good beginning, a good end, *and the two should be close to one another*”, and when he said the word “close” he held his hands open with palms facing each other, indicating a small distance.

Speaking of beautiful acts of kindness, this brings me to the second story. Janos and I met several times to discuss functional equations. In one of our meetings he brought up the fact that his wife was Jewish. Janos was a student in Budapest during World War II. He told me that during the war, Jews in Hungary of a certain age were sent to forced labor camps. Hungary was an ally of Germany, and had several anti-semitic laws and measures (a number of Jews were deported to Poland, where they were murdered), but Jews were not confined to ghettos and for the most part were not sent to concentration camps until Germany invaded Hungary in 1944.

Janos told me that some of the Jewish students had to go to forced labor, but then some were released and allowed to go back to university. (About three fourths of the forced laborers did not survive.) But they missed a year of classes! So some top students, and Janos was among them, volunteered to teach the returning Jewish students the material that they missed. This might sound like a light anecdote (missing classes!), but you have to remember that Janos acted in a deeply anti-semitic country, and even though there were no deportations to concentration camps yet, there were quotas on how many Jews could work in certain jobs, and there were also “spontaneous” mass executions, even before the Germans invaded. I consider this a beautiful act of solidarity and resistance.

One of the students that Janos tutored was Susan; they fell in love, and she later became his wife. I don’t remember if he told me exactly how Susan survived the horrible years to come (did she hide her identity, hide herself, or did she flee?). She survived, and they lived together until Susan passed away in 2010.

I am sorry that I did not interrogate Janos for more details at the time and write them down, but I thought I should write down what I remember. I added some historical details using this Wikipedia article (and also the corresponding one in Hebrew).

(Lectures 1, 2, 3 were board talks).

Recall that the **matrix range** of a $d$-tuple $A = (A_1, \ldots, A_d)$ of operators on a Hilbert space $H$ is the noncommutative set $\mathcal{W}(A) = \bigsqcup_{n \geq 1} \mathcal{W}_n(A)$, where

$\mathcal{W}_n(A) = \{ (\phi(A_1), \ldots, \phi(A_d)) : \phi \colon B(H) \to M_n \text{ is UCP} \}$.

The matrix range appeared in several recent papers of mine (for example this one); it is a complete invariant for the unital operator space generated by $A$, and within some classes it is also a unitary invariant.

The idea for this paper came from my recent (last couple of years or so) flirtation with numerical experiments. It has dawned on me that choosing matrices randomly from some ensembles, for example by setting

`G = randn(N);`

`X = (G + G')/sqrt(2*N);`

(this is the GOE ensemble) is a rather bad way to test “classical” conjectures in mathematics, such as what is the best constant in some inequality. Rather, as $N$ increases, random matrices behave in a very “structured” way (at least in some sense). So we were driven to try to understand, roughly, what kind of operator theoretic phenomena we tend to observe when choosing random matrices.
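For the record, here is a numpy rendition of this sampling (a sketch of my own, not the project's code; the size `N` and the seed are arbitrary choices). With this normalization, Wigner's semicircle law says that for large `N` the eigenvalues fill out (roughly) the interval `[-2, 2]`:

```python
import numpy as np

def sample_goe(N, rng):
    # Same normalization as the Matlab snippet above: X = (G + G') / sqrt(2 N).
    G = rng.standard_normal((N, N))
    return (G + G.T) / np.sqrt(2 * N)

rng = np.random.default_rng(0)
X = sample_goe(200, rng)

# X is real symmetric, and its spectrum concentrates near [-2, 2].
eigs = np.linalg.eigvalsh(X)
print(eigs.min(), eigs.max())
```

Repeating this for several seeds shows how rigidly the extreme eigenvalues stick near $\pm 2$; this is the "structured" behavior referred to above.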

The above paragraph is a confession of the origin of our motivation, but at the end of the day we ask and answer honest mathematical questions with theorems and proofs. If $X^{(N)} = (X^{(N)}_1, \ldots, X^{(N)}_d)$ is a $d$-tuple of $N \times N$ matrices picked at random according to the Matlab code above, then experience with the law of large numbers, the central limit theorem, and Wigner’s semicircle law, suggests that $\mathcal{W}(X^{(N)})$ will “converge” to something. And by experience with free probability theory, if it converges to something, then it should be the matrix range of the free semicircular tuple. We find that this is indeed what happens.

**Theorem**: *Let $X^{(N)}$ be as above, and let $s = (s_1, \ldots, s_d)$ be a free semicircular family. Then for all $n$,*

$\lim_{N \to \infty} \mathcal{W}_n(X^{(N)}) = \mathcal{W}_n(s)$ almost surely,

*in the Hausdorff metric.*

The semicircular tuple $s$ is a certain $d$-tuple of operators that can be explicitly described (see our paper, for example).

We make heavy use of some fantastic results in free probability and random matrix theory, and our contribution boils down to finding the way to use existing results in order to understand what happens at the level of matrix ranges. This involves studying the continuity of matrix ranges for continuous fields of operators; in particular, we study the relationship between the convergence

(*) $\lim_{N \to \infty} \|p(X^{(N)})\| = \|p(s)\|$ for every noncommutative polynomial $p$

(which holds for $X^{(N)}$ and $s$ as above, by a result of Haagerup and Thorbjørnsen) and

(**) $\lim_{N \to \infty} \mathcal{W}_n(X^{(N)}) = \mathcal{W}_n(s)$ for all $n$.

To move from (*) to (**), we needed to devise a certain quantitative Effros–Winkler Hahn–Banach type separation theorem for matrix convex sets.

I am writing this post as notes for my upcoming *Pizza & Beer* seminar talk. The section at the end of the notes contains references and links to papers where this was used.

Since the disc trick is just a trick, we can explain it with an example. Let $\mathbb{B}_d$ be the open unit ball in complex $d$-space $\mathbb{C}^d$. We wish to prove the following theorem.

**Theorem A:** *Let $V, W \subseteq \mathbb{B}_d$ be two homogeneous analytic varieties. Suppose that there exists a biholomorphism $G : V \to W$. Then there exists a zero preserving biholomorphism $\tilde{G} : V \to W$.*

(The statement of the theorem should remain on the board the whole talk.)

The intuitive idea is that zero is contained in every homogeneous variety, and that it is a distinguished point in varieties. The theorem really should seem obvious. If you think it’s trivial, then please don’t waste your time giving me your proof. Instead, try to prove it for the case where *biholomorphism* is replaced by *homeomorphism.*

If we could prove the above theorem, then we could obtain the following corollary (with a simple proof):

**Corollary**: *Let $V, W \subseteq \mathbb{B}_d$ be two homogeneous analytic varieties. Suppose that there exists a biholomorphism $G : V \to W$. Then there exists an invertible linear map $A$ such that $A(V) = W$.*

This has the following striking corollary (with a complicated proof):

**Corollary:** *Let $V, W \subseteq \mathbb{B}_d$ be two irreducible homogeneous analytic varieties.*

These kinds of questions arose in my study of the isomorphism problem for universal operator algebras, and the varieties played the role of maximal ideal spaces of the algebras. That’s not so important to understand right now.

We will prove Theorem A in the next section. The rest of this section is devoted to a crash course in several complex variables. All of the following definitions are given in the form most convenient for my presentation, and are equivalent to the “usual” definitions (the equivalence might depend on **very** deep theorems in complex analysis; though I don’t remember whether I need Cartan’s Theorem A, Theorem B, or both).

(In the talk itself, I will begin by discussing Cartan’s Uniqueness Theorem, that a self map of a domain that fixes a point and has derivative equal to the identity at that point must be the identity map. Then I will show that a biholomorphism between circled domains that sends the origin to the origin must be a linear map, concluding that the ball and polydisc are not biholomorphic. Since these are well known results I will not type them up here.)

We shall write $\mathbb{B}_d$ for the open unit ball in $\mathbb{C}^d$, and we shall denote the unit disc $\mathbb{B}_1$ as $\mathbb{D}$. We will use the word “disc” to refer to any intersection of $\mathbb{B}_d$ with a one dimensional subspace. Likewise, the word “ball” will refer to the intersection of a unit ball with a linear subspace.

**Definition 1:** A function $f : \mathbb{B}_d \to \mathbb{C}$ is said to be **analytic** (or **holomorphic**) if near every point of $\mathbb{B}_d$ it is given by a convergent power series

$f(z) = \sum_{\alpha} c_\alpha z^\alpha$,

where we use the multi-index notation: if $\alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb{N}^d$ and $z = (z_1, \ldots, z_d) \in \mathbb{C}^d$, then we write $z^\alpha = z_1^{\alpha_1} \cdots z_d^{\alpha_d}$.

**Definition 2:** A map $F : \mathbb{B}_d \to \mathbb{C}^k$ is **holomorphic** (or **analytic**) if each of its coordinate functions is analytic.

(Sorry, I really do use the terms analytic/holomorphic interchangeably. Bear with me.)

**Definition 3:** An *analytic* **variety** in $\mathbb{B}_d$ (or an *analytic subvariety* of $\mathbb{B}_d$) is the joint zero set of a family of analytic functions on $\mathbb{B}_d$.

Usually we’ll just say “variety” or “subvariety”, omitting the word “analytic”. All of our varieties will be subsets of the ball.

**Definition 4:** A variety $V \subseteq \mathbb{B}_d$ is said to be **homogeneous** if for every $z \in V$ and every $\lambda \in \overline{\mathbb{D}}$, we have that $\lambda z \in V$, too.

It’s a fact that a homogeneous analytic subvariety in $\mathbb{B}_d$ is actually an **algebraic** subvariety, meaning that it is the zero set of a family of homogeneous polynomials.

Note that for every homogeneous variety $V$, the circle $\mathbb{T}$ acts on $V$ by *gauge automorphisms*

$\gamma_\lambda : z \mapsto \lambda z$

for $\lambda \in \mathbb{T}$ and $z \in V$.

**Definition 5:** A map $F : V \to W$ between two subvarieties in the ball is said to be *holomorphic* if it is the restriction of a holomorphic map on the ball. It is a **biholomorphism** if it is bijective and its inverse $F^{-1} : W \to V$ is also holomorphic.

**WARNING:** A biholomorphism between subvarieties of the ball need not be the restriction of an automorphism.

We will prove the theorem (and understand why it is not trivial) by considering a sequence of cases.

**1. Points. **The simplest kind of homogeneous variety is the singleton $\{0\}$. If $V = W = \{0\}$ and $G : V \to W$ is a biholomorphism, then $G(0) = 0$ and there is really nothing to show.

**2. Subspaces. **The second simplest kind of homogeneous variety is a subspace, or – if we replace a subspace by its intersection with the ball – a ball (or the point $\{0\}$). If $V, W$ are two balls inside $\mathbb{B}_d$, and $G : V \to W$ is a biholomorphism, then $V$ and $W$ have to be balls of the same dimension, and essentially we have an automorphism (by which I mean self-biholomorphism) of the unit ball $\mathbb{B}_k$ for some $k$. Now, $\operatorname{Aut}(\mathbb{B}_k)$ is well understood (see Chapter 2 in Rudin’s book “Function Theory in the Unit Ball of $\mathbb{C}^n$“). For every $a \in \mathbb{B}_k$, the following map defines an automorphism of $\mathbb{B}_k$:

$\varphi_a(z) = \dfrac{a - P_a z - s_a Q_a z}{1 - \langle z, a \rangle}$,

where $P_a$ is the orthogonal projection onto $\mathbb{C}a$, $Q_a = I - P_a$ is the orthogonal projection on the complement, and $s_a = \sqrt{1 - \|a\|^2}$. (It is interesting to spell out what this formula says when $k = 1$.) Note that $\varphi_a(0) = a$ and that $\varphi_a(a) = 0$. This immediately implies that $\operatorname{Aut}(\mathbb{B}_k)$ is transitive. In particular, if $G$ is an automorphism and $a = G^{-1}(0)$, then $G \circ \varphi_a$ is an automorphism that sends zero to zero. This essentially proves Theorem A for the case where $V$ and $W$ are both balls.

We note in passing – and this will be used below – that every automorphism of $\mathbb{B}_k$ has the form $U \circ \varphi_a$ for some $a \in \mathbb{B}_k$ and a unitary $U$. Indeed, if $G$ is an automorphism and $a = G^{-1}(0)$, then $G \circ \varphi_a$ is an automorphism mapping $0$ to $0$. Hence by Cartan’s Uniqueness Theorem $G \circ \varphi_a$ is equal to a unitary $U$, so $G = U \circ \varphi_a^{-1} = U \circ \varphi_a$ (recall that $\varphi_a$ is an involution).
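In the one dimensional case ($k = 1$, where $P_a z = z$ and $Q_a = 0$) the formula reduces to the familiar Möbius map $\varphi_a(z) = (a - z)/(1 - \bar{a} z)$, and the properties just listed are easy to check numerically. A small sketch (the sample points are arbitrary choices of mine):

```python
import numpy as np

# The disc automorphism phi_a(z) = (a - z) / (1 - conj(a) z), for |a| < 1.
def phi(a, z):
    return (a - z) / (1 - np.conj(a) * z)

a = 0.3 + 0.4j  # an arbitrary point in the unit disc

# phi_a swaps 0 and a, maps the unit circle to itself, and is an involution.
print(phi(a, 0))                 # a
print(phi(a, a))                 # 0
z = np.exp(1j * 1.234)           # a point on the unit circle
print(abs(phi(a, z)))            # 1.0 (up to roundoff)
w = 0.5 - 0.2j
print(phi(a, phi(a, w)))         # w
```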

**3.** **General case, first observation.** For a general homogeneous variety $V$, we do not know much about the group $\operatorname{Aut}(V)$. If we knew that $G(0)$ was in the orbit of $0$ under $\operatorname{Aut}(W)$, then we’d be done, because if there was an automorphism $\psi$ of $W$ such that $\psi(G(0)) = 0$, then $\psi \circ G$ would be a zero preserving biholomorphism. Our theorem will imply, as expected, that for every point $b$ in $W$ such that there exists a biholomorphism $G : V \to W$ with $G(0) = b$, there does exist an automorphism of $W$ mapping $b$ to $0$, but we do not know this in advance.

**4. A few more examples of homogeneous varieties.** Suppose that $V$ and $W$ are both a union of lines through the origin. (Draw two varieties, each one a union of two lines.) We see that $0$ is a special point, from a topological point of view (also in the complex case, removing it leaves a disconnected topological space). Since $G$ is a homeomorphism, it must send zero to zero.

Suppose that $V$ is a *cone*, say the zero set in the ball of a quadratic polynomial like $z_1^2 + z_2^2 - z_3^2$. (Draw a cone in $\mathbb{R}^3$.) We see that $0$ is a special point, a *singular point*. In fact it is the only singular point in $V$ (it is hard to imagine what happens in the complex three dimensional space, but we can simply check that the gradient of the defining polynomial vanishes only at the origin). Since $G$ preserves smoothness and tangent spaces (remember that $G$ is defined in a neighborhood of the variety), it must send $0$ to a singular point of $W$. So if $W$ also has $0$ as its unique singular point, we are done.

We see that when the varieties both have a unique singular point at the origin, the biholomorphism must preserve it. But not every homogeneous variety has a singular point: a linear subspace is a homogeneous variety, and it has no singular points. The **singular locus** (set of singular points) of a (complex) homogeneous variety that is not a linear subspace is itself a homogeneous variety of strictly smaller dimension.

For the simplest example, consider the union of two subspaces $V = V_1 \cup V_2$. The singular locus of $V$ is $V_1 \cap V_2$. This might be a point, but it might be a higher dimensional subspace (that is, a higher dimensional ball). Didn’t we solve this case in step 2 above? No! We know that the orbit of $0$ under biholomorphisms must be contained in the singular locus, but we don’t know that it is equal to all of it.

**5. The disc trick.** Above we saw examples in which it followed from topological or geometric reasons that any map must a priori preserve the origin. Let us consider the next simplest case, in which there exist discs $D_V \subseteq V$ and $D_W \subseteq W$, such that $G(D_V) = D_W$.

(Draw this.) If $G(0) = 0$ then we are done. Otherwise, let $b = G(0) \neq 0$. Consider the set

$\{ F(0) : F \colon V \to W \text{ is some biholomorphism} \}$.

Since $\gamma_\lambda \circ G \circ \gamma_\mu$ is a biholomorphism for all $\lambda, \mu \in \mathbb{T}$, we have that this set contains the circle of radius $|b|$ centered at $0$ (inside $D_W$), which is given by $\{ \lambda b : \lambda \in \mathbb{T} \}$.

Now let us define

for some automorphism .

Since for every biholomorphism , we have that is an automorphism of , it must hold that contains , which is a simple closed loop (in fact, a circle) passing through . Now, the point is contained in the interior of . But for all . In other words, we can rotate until we hit with a point of the form . Since , it follows that , and we are done.

(Explicitly, we found that there are such that . )

**6. Handling the general case (the singular nucleus).** We now finish the proof by showing that whenever there exists a biholomorphism $G : V \to W$, then either we can show that $G$ **has to** map zero to zero, or we can show that there exist discs $D_V \subseteq V$ and $D_W \subseteq W$, such that $G(D_V) = D_W$, in which case we can apply the disc trick and find another biholomorphism $\tilde{G} : V \to W$, such that $\tilde{G}$ is zero preserving.

As we started discussing above, a homogeneous variety $V$ is either a linear subspace (or a ball, if we restrict to the unit ball), or it has a singular locus which is again a homogeneous variety (the case $\{0\}$ is a special case of a linear subspace). Therefore the singular locus, if it is not a subspace (or $\{0\}$), also has a singular locus. We define **the singular nucleus** of a homogeneous variety $V$ to be the singular locus of the singular locus of … of the singular locus of $V$, applied until we get a linear subspace (or $\{0\}$). In case $V$ is a subspace, we define the singular nucleus to be $V$ itself. Note that the singular nucleus always contains $0$.

Now, if $G : V \to W$ is a biholomorphism, then it maps the singular locus of $V$ onto the singular locus of $W$ (if one of them exists), and hence it maps the singular nucleus of $V$ onto the singular nucleus of $W$. If the singular nucleus of $V$ is $\{0\}$ then we are done, because then the singular nucleus of $W$ is also $\{0\}$, and we must have that $G(0) = 0$.

Otherwise, $G$ maps the ball which is the singular nucleus of $V$ onto the ball which is the singular nucleus of $W$. They must be of the same positive dimension, and so $G$ restricts to what is essentially an automorphism of the unit ball $\mathbb{B}_k$ for some $k$. But for any automorphism of $\mathbb{B}_k$, there exists a disc which is mapped onto another disc (indeed, up to a unitary an automorphism has the form $\varphi_a$, which preserves the disc obtained as the span of $a$ intersected with the ball). Thus, we can apply the disc trick, and we are done.

Here is a nice application, due to Michael Hartz.

**Theorem B:** *The group $U(d)$ of unitaries is a maximal subgroup of the group $\operatorname{Aut}(\mathbb{B}_d)$ of conformal automorphisms of the unit ball. In fact, it is a maximal sub-semigroup.*

**Proof: **If I have time I’ll do this in the talk, but it looks like there won’t be enough time. So: exercise! You have to prove that every (not necessarily closed) subsemigroup of $\operatorname{Aut}(\mathbb{B}_d)$ which strictly contains the unitary group is all of $\operatorname{Aut}(\mathbb{B}_d)$. If you don’t feel like solving the exercise, you can find the proof in Hartz’s paper which I link to below.

Since it came up again and again in different but similar situations (isometric isomorphisms between operator algebras, bounded isomorphisms between operator algebras, bi-Lipschitz biholomorphisms between noncommutative varieties), we once tried to formulate it as a general theorem, so that it could be invoked in all possible situations, but we ended up with a very boring theorem. It seems that if ever a situation calling for the disc trick arises in the future, the easiest thing would be to simply use the trick again.

Here is a list of places where it has appeared.

The disc trick appeared first in the paper Subproduct systems, by Baruch Solel and myself (Section 11). It was used to classify, up to isometric isomorphism, the operator algebras in a rather limited class (all the tensor algebras that come from subproduct systems of a certain restricted type). Later, in the paper The isomorphism problem for some universal operator algebras, by Davidson, Ramsey and myself, we figured out how we can use the singular nucleus and the properties of automorphisms of the ball in order to find invariant discs (or points) in the maximal ideal spaces, thereby leading to the classification of **all** tensor algebras of subproduct systems with finite dimensional Hilbert space fibers. In the same paper, we also applied the idea to obtain the classification of tensor algebras of

The trick was used by Adam Dor-On and Daniel Markiewicz in their beautiful paper Operator algebras and subproduct systems arising from stochastic matrices (Theorem 7.24 there), where operator algebras of subproduct systems with fibers that are C*-correspondences over a commutative von Neumann algebra were treated.

The trick was also used in the important paper Classification of noncommutative domain algebras by Arias and Latremoliere. This paper was the first example where the trick was used for operator algebras not arising from subproduct systems.

The trick was also used by Michael Hartz, in his lovely paper On the isomorphism problem for multiplier algebras of Nevanlinna-Pick spaces, where the classification results I got with Davidson and Ramsey were extended to a significantly larger class of multiplier algebras. It is in this paper that Hartz presented the neat proof of the fact that the unitary group is a maximal subsemigroup of $\operatorname{Aut}(\mathbb{B}_d)$.

The trick reappeared in the paper Operator algebras of monomial ideals in noncommuting variables by Kakariadis and myself, where it was used to finish off the completely bounded isomorphism problem for operator algebras with finite dimensional Hilbert space fibers, and also in these two papers (one, two) by Salomon, Shamovich and myself, on the classification of operator algebras of bounded noncommutative analytic functions.

(I will repeat some background material from these two (one, two) posts from last year).

If $A$ is an operator on a Hilbert space $H$ (we will henceforth write this as $A \in B(H)$), another operator $B \in B(K)$ on a larger space (we assume $H \subseteq K$) is said to be a *dilation* of $A$ if

$B = \begin{pmatrix} A & * \\ * & * \end{pmatrix}$,

where we have written $B$ in block operator form with respect to the decomposition $K = H \oplus H^\perp$. In this case, $A$ is said to be a *compression* of $B$. We then write $A \prec B$. If $A = (A_1, \ldots, A_d)$ and $B = (B_1, \ldots, B_d)$ are $d$-tuples of operators, we say that $B$ is a *dilation* of $A$ if $B_i$ is a dilation of $A_i$ (with respect to the same decomposition) for all $i$.

A $d$-tuple $N = (N_1, \ldots, N_d)$ is said to be *normal* if each $N_i$ is normal and $N_i N_j = N_j N_i$ for all $i, j$. Normal tuples of operators are the best understood ones, thanks to the spectral theorem. When a normal tuple acts on a finite dimensional space, it is simultaneously unitarily diagonalizable.

For an operator $A \in B(H)$ we define its *norm* to be the operator norm of $A$, that is: $\|A\| = \sup_{\|h\| = 1} \|Ah\|$. An operator $A$ is said to be a *contraction* if $\|A\| \leq 1$.

If $A = (A_1, \ldots, A_d)$ is a tuple of contractions, we write $c(A)$ for the minimal constant $c$ such that there exists a normal tuple $N$ such that $A \prec cN$ and $\|N_i\| \leq 1$ for all $i$.

In other words, $c(A)$ is the price we have to pay (in terms of norm) in order to be able to dilate $A$ to a normal tuple. We call $c(A)$ the *dilation constant* of $A$.

Now we can define the universal dilation constant for $d$-tuples of contractions to be

$C_d = \sup \{ c(A) : A \text{ is a } d\text{-tuple of contractions} \}$.

That is, $C_d$ is a dilation constant that works for all $d$-tuples of contractions.

**The complex matrix cube problem:** What is $C_d$? In particular, what is $C_2$?

In other words, we ask: *what is the minimal constant $C$ such that, given a $d$-tuple of contractions $A$, one can find a normal dilation $N$ of $A$ such that $\|N_i\| \leq C$ for all $i$?*

It was proved that the constant* * always works if are all selfadjoint, and that this constant is optimal for selfadjoints; see this paper by Passer, Solel, and myself. Passer later proved that the constant holds for arbitrary tuples, however it is not known whether Passer’s constant is optimal. In particular we knew that . In last year’s project we found that is (at least slightly) strictly bigger than , and during the last year we improved that to , but there is a large interval where may be and we do not know what it is.

Suppose we are given a $d$-tuple of contractions $A = (A_1, \ldots, A_d)$. We wish to know whether it is true or false that $A$ has a normal dilation $N$ such that $\|N_i\| \leq C$ for all $i$ (for a given candidate constant $C$).

The first observation is that it is enough to consider only tuples of unitaries. Indeed, if $T$ is a contraction (meaning that $\|T\| \leq 1$) then

$U = \begin{pmatrix} T & (I - TT^*)^{1/2} \\ (I - T^*T)^{1/2} & -T^* \end{pmatrix}$

is a unitary dilation of $T$. So given a $d$-tuple of contractions $A$, we can find a $d$-tuple of unitaries $U$ such that $A \prec U$. Thus, we may as well assume that $A$ is a tuple of unitaries, and ask whether we can dilate it as required.
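This classical (Halmos) unitary dilation is easy to verify numerically. A sketch in numpy (the example matrix, the scaling, and the tolerances are my own choices; the square roots are computed via the spectral theorem):

```python
import numpy as np

def psd_sqrt(M):
    # Square root of a positive semidefinite Hermitian matrix via eigh.
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def halmos_dilation(T):
    # U = [[T, (I - T T*)^{1/2}], [(I - T* T)^{1/2}, -T*]] is unitary
    # whenever ||T|| <= 1, and its upper-left corner is T.
    n = T.shape[0]
    I = np.eye(n)
    return np.block([
        [T,                            psd_sqrt(I - T @ T.conj().T)],
        [psd_sqrt(I - T.conj().T @ T), -T.conj().T],
    ])

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
T = A / (1.25 * np.linalg.norm(A, 2))   # a (strict) contraction
U = halmos_dilation(T)

print(np.allclose(U @ U.conj().T, np.eye(6), atol=1e-8))  # True
print(np.allclose(U[:3, :3], T))                          # True
```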

In order to be able to carry out calculations on a computer, we approximated a universal normal tuple (which can dilate anything up to a scale factor) by normal tuples of matrices with joint eigenvalues at the vertices of the polytope $P \times \cdots \times P$ ($d$ times), where $P$ is a regular polygon with $k$ vertices that is circumscribed about the unit circle $\mathbb{T}$. When $k$ is moderately large, the boundary of $P$ is very close to $\mathbb{T}$. One can easily analyze bounds on the error that arises from this approximation.
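Here is a sketch of this approximation in numpy (my own illustration, assuming the polygon is the regular $k$-gon circumscribed about the unit circle, which matches the error margins quoted with the data below): the vertices sit at radius $1/\cos(\pi/k)$, so a dilation constant computed against the polygon carries a relative error of at most $1/\cos(\pi/k) - 1$.

```python
import numpy as np

def polygon_vertices(k):
    # Vertices of the regular k-gon circumscribed about the unit circle;
    # its edges are tangent to the circle, so it contains the closed disc.
    return (1 / np.cos(np.pi / k)) * np.exp(2j * np.pi * np.arange(k) / k)

def relative_error(k):
    # The polygon sits between the unit disc and (1 + eps) times the disc.
    return 1 / np.cos(np.pi / k) - 1

print(relative_error(12))  # ~0.0353, the "error margin" reported below
print(relative_error(23))
```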

Our first approach was to randomly select a tuple of unitaries $U = (U_1, \ldots, U_d)$ and to check whether it has a normal dilation of norm at most $c$. Up to a small error that arises from the approximation mentioned in the previous paragraph, the question is whether $U$ can be dilated to some ampliation of $cN$, where $N$ is the tuple of normals constructed above. Basically, modulo some equivalences within the theory, we know that $U$ has the required dilation of size at most $c$, if and only if there exists a UCP map sending $cN_i$ to $U_i$ for $i = 1, \ldots, d$. This, modulo some more equivalences (and as was noted in this paper of Helton, Klep and McCullough), is equivalent to the existence of positive semidefinite matrices $C_v$ such that

$\sum_v v_i C_v = \frac{1}{c} U_i$ for $i = 1, \ldots, d$,

where $v = (v_1, \ldots, v_d)$ ranges over the joint eigenvalues of $N$ (the vertices of the polytope), and

$\sum_v C_v = I$.

The existence of such semidefinite matrices can be interpreted as the feasibility of a certain semidefinite program (SDP). In fact, we decided to treat the full semidefinite program as follows:

minimize $c$

such that

$\frac{1}{c} U_i = \sum_v v_i C_v$, $\quad i = 1, \ldots, d$,

$\sum_v C_v = I$,

$C_v \succeq 0$ for all $v$.

Note that we substituted $t = 1/c$ and moved it to the right hand side, to make the equality constraint affine in the variables $C_v$ and $t$. Recall that the $U_i$ and the vertices $v$ are all fixed. In the implementation we actually defined this as a maximization problem:

maximize $t$

such that

$\sum_v v_i C_v = t U_i$, $\quad i = 1, \ldots, d$,

$\sum_v C_v = I$,

$C_v \succeq 0$ for all $v$.

This is the same semidefinite optimization problem that we used last year as well. (Last year we solved it using CVX on MATLAB. This year we used the package cvxpy in Python for the semidefinite problem. Our code is available on Google Colaboratory; see the links below.)

Currently the best known lower bound for $C_2$ is obtained from the dilation constant of a pair of unitaries $U, V$ such that $UV = e^{i\theta} VU$ for a suitable angle $\theta$. We say that $U$ and $V$ are **$q$-commuting unitaries** (with $q = e^{i\theta}$), or that they *$q$-commute*.
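Concrete finite dimensional $q$-commuting unitaries exist whenever $q$ is a root of unity; the standard example is the pair of clock and shift matrices. A sketch (with the convention $UV = qVU$; conventions in the literature vary):

```python
import numpy as np

n = 5
q = np.exp(2j * np.pi / n)

# Clock and shift matrices: U = diag(1, q, ..., q^{n-1}), V e_j = e_{j+1 mod n}.
U = np.diag(q ** np.arange(n))
V = np.roll(np.eye(n), 1, axis=0)

# Both are unitary, and they q-commute: U V = q V U.
print(np.allclose(U @ U.conj().T, np.eye(n)))  # True
print(np.allclose(V @ V.conj().T, np.eye(n)))  # True
print(np.allclose(U @ V, q * V @ U))           # True
```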

So the first easy-to-state goal for this year was to search long enough and find an example of a pair of unitaries with a larger dilation constant, or at least one as large as possible. Let me say straight away that we did not find such an example. In fact, for all the random pairs that we tested, we typically got a significantly lower value (see below).

However, already last year we observed that concentration of measure phenomena form an obstruction to finding high dilation constants. Roughly speaking, “concentration of measure” means that a Lipschitz function on a high dimensional probability space will attain values close to its mean with very high probability.

It turns out that the above heuristic has strong theoretical evidence behind it. A paper of Collins and Male (which relies on an earlier breakthrough paper of Haagerup and Thorbjørnsen) states that a $d$-tuple of independently sampled $n \times n$ random unitaries (sampled from the unitary group endowed with the Haar probability measure) converges *strongly*, as $n \to \infty$, to the tuple given by the generators of the reduced C*-algebra of the free group $F_d$. I won’t define what strong convergence is (see the attached papers), but it implies a corresponding asymptotic relation between the dilation constants, and we think that we should be able to prove that there is actually equality in the limit.
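For completeness, here is the standard recipe for sampling a Haar distributed unitary (QR of a complex Ginibre matrix, with the phases of the diagonal of $R$ corrected; this is Mezzadri's recipe, and the sketch is mine, not the project's code):

```python
import numpy as np

def haar_unitary(n, rng):
    # QR of a complex Ginibre matrix; multiplying each column of Q by the
    # phase of the corresponding diagonal entry of R makes the law Haar.
    A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(A)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(7)
U, V = haar_unitary(50, rng), haar_unitary(50, rng)
print(np.allclose(U @ U.conj().T, np.eye(50)))  # True
```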

One chief outcome of our project is to have collected significant evidence that for a random (Haar distributed) pair of unitaries, the dilation constant tends to concentrate around a fixed value (up to the error bounds in our finite dimensional approximation). As remarked at the end of the previous section, this is also strong evidence regarding the dilation constant of the free Haar unitaries. Indeed we found that as the size of the matrices increases, the standard deviation from the mean goes down. For example, during one of the nights of the project-week, we ran an experiment with the following outcome. (Recall that $k$ is the number of vertices in the polygon approximating the disc, so that the dilation constant that we calculate has a relative error of at most $\sec(\pi/k) - 1$. This means that for every pair for which we compute a dilation constant $C$, the true dilation constant lies between $C$ and $\sec(\pi/k) \cdot C$. Also, $d$ is just the number of operators in our $d$-tuples, and $n$ is the size of the sampled unitaries.)

k = 23, d = 2, no. of random samples for each n is 50
-----------------------------------------------------
n = 2 max_C = 1.397229, mean_C = 1.252607, std = 0.089505

n = 3 max_C = 1.435168, mean_C = 1.338410, std = 0.062456

n = 4 max_C = 1.442803, mean_C = 1.377033, std = 0.049362

n = 5 max_C = 1.431040, mean_C = 1.398623, std = 0.015876

n = 6 max_C = 1.433795, mean_C = 1.404585, std = 0.012104

n = 7 max_C = 1.428610, mean_C = 1.408119, std = 0.012655

n = 8 max_C = 1.439640, mean_C = 1.410235, std = 0.010174

n = 9 max_C = 1.425398, mean_C = 1.406535, std = 0.008882

n = 10 max_C = 1.427183, mean_C = 1.408760, std = 0.006846

n = 11 max_C = 1.425355, mean_C = 1.409243, std = 0.007134

Here are some more results, with smaller value of , which allows us to run more experiments with larger . Here we use the value , which gives a relative error of . The result below suggest that for large tends to be between and (and we believe it is ).

k = 12, d = 2, no. of random samples for each n is 50
---------------------------------------------------
n = 2 max_C = 1.389380, mean_C = 1.216731, std = 0.095579

n = 3 max_C = 1.426683, mean_C = 1.337595, std = 0.047033

n = 4 max_C = 1.417624, mean_C = 1.356411, std = 0.031138

n = 5 max_C = 1.418676, mean_C = 1.362433, std = 0.027081

n = 6 max_C = 1.397214, mean_C = 1.372076, std = 0.014774

n = 7 max_C = 1.406423, mean_C = 1.372238, std = 0.012304

n = 8 max_C = 1.412893, mean_C = 1.376826, std = 0.010334

n = 9 max_C = 1.395714, mean_C = 1.377344, std = 0.008770

n = 10 max_C = 1.390549, mean_C = 1.377275, std = 0.006920

n = 11 max_C = 1.395763, mean_C = 1.377383, std = 0.009049

n = 12 max_C = 1.398410, mean_C = 1.379734, std = 0.006276

n = 13 max_C = 1.398447, mean_C = 1.379338, std = 0.005721

n = 14 max_C = 1.390360, mean_C = 1.379050, std = 0.005425

n = 15 max_C = 1.393251, mean_C = 1.380928, std = 0.005420

n = 16 max_C = 1.389973, mean_C = 1.379776, std = 0.004730

n = 17 max_C = 1.393275, mean_C = 1.379678, std = 0.005481

n = 18 max_C = 1.392536, mean_C = 1.380155, std = 0.004127

n = 19 max_C = 1.389623, mean_C = 1.379491, std = 0.003916

n = 20 max_C = 1.390470, mean_C = 1.379793, std = 0.004657

n = 21 max_C = 1.391533, mean_C = 1.379091, std = 0.004251

n = 22 max_C = 1.390274, mean_C = 1.379855, std = 0.004138

n = 23 max_C = 1.385381, mean_C = 1.379334, std = 0.002905

Note that the maximal values of the dilation constant are (perhaps counterintuitively) attained for small values of $n$. This is because the standard deviation diminishes with $n$, so one is *less* likely to find a large counterexample by chance. On the other hand, since the standard deviation becomes smaller as $n$ increases, to get a good feel for what happens we don’t need to run many tests. We can sample a *single* pair of random unitaries of large size (say $n = 50$) and compute their dilation constant. With very high probability, you will get something that is equal to the typical value up to the error due to the fact that we are using a finite $k$. You can try it yourself, with the code linked to at the end of this post.

Here is a quick test I ran, with one sampled pair of unitaries for every value of $n$ (this took one hour to run):

`n = 10, C = 1.3991621945198882 `

`n = 20, C = 1.4022442529059544 `

`n = 30, C = 1.3974791375625744 `

`n = 40, C = 1.4001629040203714 `

`n = 50, C = 1.400105201371202 `

Although we have not found new examples with large dilation constants, the results we obtained have implications also for the dilation constant , because in joint work-in-progress with Gerhold, Pandey and Solel, we proved that , thus in particular would imply that , which is an improvement to the best known upper bound .

The numerical results mentioned above made us gain confidence in the conjecture, and hence in its consequences. We were happy to quickly guess the value of the next constant. We then ran some tests for the case $d = 3$ and found out that our guess is probably not true:

d = 3, k = 12 (error margin = 0.035276), no. samples for each n = 20
-------------------------------------------------------------------
n = 2 max_C = 1.596420, mean_C = 1.410624, std = 0.100716

n = 12 max_C = 1.603867, mean_C = 1.589800, std = 0.011682

n = 20 max_C = 1.605948, mean_C = 1.591090, std = 0.007812

Finally, instead of randomly sampling pairs of Haar distributed unitaries, we constructed finite dimensional compressions of the free Haar unitaries (we took the subspace of $\ell^2(F_2)$ spanned by all words of length less than or equal to some $m$) and computed the dilation constant of these compressions. The idea was that if the dilation constant of the free Haar unitaries is large, then we should be able to find this by checking a certain (large enough, but finite) compression. However, for all but very small $m$ the running time and memory requirements blow up, and the results do not teach us much:

m (maximal word length) = 1, n (dimension) = 5, k = 10
C = 0.951057 (runtime: 1 seconds)

m (maximal word length) = 2, n (dimension) = 17, k = 10
C = 1.164818 (runtime: 4 seconds)

m (maximal word length) = 3, n (dimension) = 53, k = 10
C = 1.242592 (runtime: 45 seconds)

m (maximal word length) = 4, n (dimension) = 161, k = 10
C = 1.279138 (runtime: 657 seconds)

For $m \geq 5$ the computer chokes. Perhaps it would be worth trying this with enlarged resources.

A final service that Matan and Ofer did for us was to help us understand the shape of a certain intriguing geometrical object that arises in the theory.

Given two operators $A_1, A_2 \in B(H)$, their **numerical range** is the set

$W(A_1, A_2) = \{ (\phi(A_1), \phi(A_2)) : \phi \text{ is a state} \}$.

This set contains significant information on the structure of the operator space generated by $A_1$ and $A_2$. We can prove (work-in-progress with Gerhold) that for pairs of unitaries sampled randomly and independently from the Haar measure on the unitary group, the numerical range converges in the Hausdorff metric, almost surely, to the numerical range of a pair of free Haar unitaries. Thus, the latter set is what a random numerical range of a pair of unitaries looks like in the limit. A paper of Collins, Gawron, Litvak and Zyczkowski showed that if one samples a series of two independent matrices from the Gaussian Unitary Ensemble (for example), then the numerical range converges almost surely to the ball. We asked: what does the numerical range of a pair of independent Haar unitaries converge to (almost surely)?

My first guess was that the limit (which we can prove is ) is a bidisc. But rather quickly, using formulas from a paper of Lehner, we realized that it is not a bidisc. What does it look like? Is it a ball?

This was computed for us by Matan and Ofer (who used the formulas from Lehner’s paper in the computation). Here is the projection onto the real parts:

This looks like the unit ball in the norm for (the “TV screen”), so they tested to see what ball this is closest to. It seems to be somewhere between and . However, we don’t really expect this to be an ball.

Actually, Lehner’s formulas do not directly give us but only its *polar dual* for all . So first Matan and Ofer computed , (the real part of) which is illustrated below, and then they computed the polar dual.
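To get a feel for such pictures, one can at least scatter-plot vector states of a single random pair of Haar unitaries; this numpy sketch (my illustration, not the code of Matan and Ofer) samples points of the projection of the joint numerical range onto the real parts.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def haar_unitary(n):
    """Haar-random unitary via phase-corrected QR of a Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

n = 100
U1, U2 = haar_unitary(n), haar_unitary(n)

# vector states v -> <U v, v>; each sample is a point of the joint numerical range
pts = np.empty((5000, 2))
for i in range(5000):
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v /= np.linalg.norm(v)
    pts[i] = [(v.conj() @ U1 @ v).real, (v.conj() @ U2 @ v).real]
# every point lies in [-1, 1]^2, since the U_i are contractions
```

Random vector states cluster near the center of the set; to trace the boundary accurately one should instead maximize linear functionals over states, which is what the real computation does.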

The outcomes (and non-outcomes) of the project are:

- We have not found with larger than , and in particular we do not have a bigger lower bound for than what we already knew.
- On the other hand, we have evidence that converges to as . Repeating our experiments gives the same result. Therefore,
- Using theoretically-based heuristic, we have evidence that . Therefore,
- Using proved theoretical results, we have evidence that .
- We have not been able to obtain non-random lower bounds for by compression to the subspace of words of length at most . The best we get from this, adding the error, is a lower bound of at most .
- We have a program to draw to any precision, and we know that it looks kind of like a “tv screen”, or “Android app”.
- We have open source (courtesy of Matan and Ofer) code that is able to reproduce our results and expand our data.

A short presentation of the project by Matan and Ofer can be found here (their presentation used a combination of slides and whiteboard, so the above text hopefully makes up for the missing oral part).

Our code for calculating the dilation constants can be found on colaboratory here. Our code for calculating the numerical range of the free Haar unitaries can be found on colaboratory here. NOTE: in both files one has to play with the parameters to see significant results; in the code files the values of , for example, are small, so that one can get the programs running fast. (The same remark holds for the resolution used to draw the numerical range in the second link.) If you want to carry out serious experimentation, you need to give some thought to the parameters.

To explain, I will need some notation.

Let be a field. We write – the algebra of all polynomials in (commuting) variables over the field .

Fix . For , let denote the set of all commuting -tuples of matrices over . We let . Now we are looking at all commuting -tuples of matrices of all sizes.

Points in can be plugged into any polynomial . In fact, points in can be naturally identified with the space of finite dimensional representations of , by

.

(We shall use the word “representation” to mean a linear algebraic homomorphism of an algebra into for some ).

Now, given an ideal , we can consider its zero set in :

for all .

(We will omit the subscript for brevity.) In the other direction, given a subset , we can define the ideal of functions that vanish on it

for all .

Clearly, for every . The interesting statement is the converse.

The following theorem appears as Corollary 11.7 in the paper “Algebras of bounded noncommutative analytic functions on subvarieties of the noncommutative unit ball” by Guy Salomon, Eli Shamovich and myself (though, as I explained already in several previous posts, this result has been known to algebraists for quite some time).

**Theorem (free commutative Nullstellensatz): ***For every , *

*.*

**Proof:** Originally, we intended to prove it only over the complex numbers; I didn’t guess that it could hold over the reals, for example. But an examination of the proof shows that it works when the field is replaced by an arbitrary field . For our proof, see this blog post. The only problematic point might be the lemma (there is only one lemma there), where one needs something called “Zariski’s lemma”, which says that a field extension of that is finitely generated **as an algebra** is actually finite dimensional over the base field, that is, it is a finite extension of . This fact is needed in the proof of the lemma, in order to see that the quotient of a commutative unital Noetherian algebra over k by a maximal ideal is a finite dimensional field extension of . Happily, Zariski’s lemma works for every field . Thus the original proof holds and we are done!

Since the statement of the theorem is interesting also for and , I will concentrate now on explaining why this case is true. In two words, the proof for the case over the complex numbers can be summarized as: Jordan form (see the end of this older post for why the theorem follows from the Jordan form). Now I will finish by explaining why the real theorem follows from the complex theorem.

So, assume that the theorem is proved over the complex numbers. One can see that the theorem holds over the reals quite easily, by using the fact that complex numbers can be modelled as matrices over the reals, so complex matrices can be thought of as matrices over the reals with blocks.

For example, in the one variable case, if are polynomials with real coefficients, and if vanishes at every real matrix zero of , then vanishes at every complex matrix zero of , so by the complex version of the theorem we have where is a polynomial with perhaps complex coefficients. But then you can see that actually has to be a polynomial with real coefficients. (This proves .)
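The block-matrix modelling used in this reduction is easy to demonstrate; here is a small numpy sketch (an illustration of mine) checking that replacing a complex matrix by the real block matrix [[Re A, -Im A], [Im A, Re A]] is multiplicative, which is what the argument relies on.

```python
import numpy as np

def realify(A):
    """Model a complex n x n matrix as a real 2n x 2n matrix.

    This is the familiar a + bi <-> [[a, -b], [b, a]] trick; up to a
    permutation of coordinates it is the entrywise 2 x 2 block picture."""
    return np.block([[A.real, -A.imag], [A.imag, A.real]])

rng = np.random.default_rng(seed=3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# realify is a homomorphism of real algebras
assert np.allclose(realify(A @ B), realify(A) @ realify(B))
assert np.allclose(realify(A + B), realify(A) + realify(B))
```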

I still can’t quite believe that the theorem holds over any field, so if any reader finds a bug in our proof I will be happy to hear about it.

[Several years ago I went to a conference in China and came back with the insight that in international conferences I should give a computer presentation and not a blackboard talk, because then people who cannot understand my accent can at least read the slides. It’s been almost six years since then, and indeed I have given only beamer talks since. My English has not improved over this period, I think, but I have several reasons for allowing myself to give an old fashioned lecture – the main ones are the nature of the workshop, the nature of the audience and the kind of things I have to say].

In the workshop Guy Salomon, Eli Shamovich and I will give a series of talks on our two papers (one and two). These two papers have a lot of small auxiliary results, which in a usual conference talk we don’t get the chance to speak about. This workshop is a wonderful opportunity for us to highlight some of these results and the ideas behind them, which we feel might be somewhat buried in our papers and have gone unnoticed.

I want to begin by discussing something very general that we mathematicians do. I want to say something about how we give birth to problems and how they then start having a life of their own.

Suppose we have an infinite countable discrete group . Then there is a very natural and well known construction of the **group von Neumann algebra** , which is an operator algebra on (for the construction see, e.g., the second section of this post). The operator algebra contains information on the group: for example the group has the ICC property if and only if is a factor; further, is amenable if and only if is hyperfinite. However, these fancy results are somewhat sophisticated, and the mathematical child is prone to ask a more basic question:

Even though this is the first and most naive question that a mathematical child might ask, it is very hard to solve, even for mathematical grown-ups. Notoriously, it is an open question whether and are isomorphic, where denotes the free group on generators (even though the question is open for such a basic pair of groups, there do exist known pairs of non-isomorphic groups and which give rise to isomorphic von Neumann algebras).

A mathematician might obsess over such a question for an entire lifetime. To solve this question one might introduce all kinds of strategies, involving disparate fields of mathematics. To carry out the strategies one will need to introduce new tools, and then one will have to develop and refine these tools. Soon enough, the development and refinement of the tools raise new questions, and answering these questions gives rise to the need for more new tools. Some very interesting results might be proved along the way, some very surprising applications to completely different problems are sometimes discovered. The original question was asked by the mathematical child out of curiosity and wonder more than anything else, but is never forgotten, even though it is sometimes regarded as a naive fancy and not as something that one can openly confess to be aiming at. The various technical issues that arise in analyzing the fine structure of the constructions defining the question in many cases reveal themselves as beautiful problems in their own right, filling the mathematician with new wonder and ambition.

The process described above is what I do for a living. I do not work on the free group factor problem nor on the problems in free probability that it gave birth to, nor on the problems that free probability gave birth to, etc. That example was given only for the sake of illustration.

As a mathematical child (and I still am) I asked a different question. But it is in the same spirit. If you want to get the gist of what I do – this is it. But to really tell you about my work, I need to say more.

Sometimes I feel the need to defend the choice of my own questions. The BIG PEOPLE asked different questions – maybe those are the important questions?

We observe nature and wish to understand it. We see animals. They are beautiful, they are interesting, they excite us. We wish to understand these animals. What does it mean to understand?

One aspect of understanding is classification. We know that animal A is different from animal B, because one is standing here and the other is sitting there. So they are not the same animal. This classification is somewhat *too fine* to be interesting. A coarser classification might be more interesting (but not too coarse, we are not satisfied by noting that they are both “animals”).

To be able to classify in a meaningful way, we need to study the properties of the animals. For example, suppose that we have never seen African animals before, but we walk into a savanna where there are two kinds: elephants and giraffes. If we have never seen and never heard of elephants and giraffes, we do not know that these are elephants and giraffes, but maybe we are clever enough to notice that there are two distinct *kinds of animals *in sight. Now, different people will find different properties interesting, and this will affect their classification scheme. Some will tell the elephants and the giraffes apart by noting that one kind is grey and the other is yellow with brown spots. Some will note that one kind has a long nose (or whatever that is!) and the other has a long neck! Someone else might notice that one has four knees while the other has two knees and two elbows. They are all classifying but they are using different kinds of properties.

Anyone who is studying the world in a creative way will have to ask their own questions, and will have to define to themselves what constitutes an answer.

Let , let be the open unit ball in . We let be the Hilbert space of all analytic functions such that the coefficients of the power series for given by satisfy . This space is known as the **Drury-Arveson space** (see, e.g., this post or this survey). Let be the multiplier algebra of , that is:

for all .

We can identify every multiplier with the multiplication operator that it gives rise to . In this way, becomes an operator algebra.
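For the record (the displayed formulas above were lost to formatting), the standard definitions that I have in mind here read:

```latex
H^2_d = \Big\{ f(z) = \sum_{\alpha \in \mathbb{N}^d} c_\alpha z^\alpha \;:\;
  \|f\|^2 := \sum_{\alpha} \frac{\alpha!}{|\alpha|!}\, |c_\alpha|^2 < \infty \Big\},
\qquad
\operatorname{Mult}(H^2_d) = \big\{ f \colon B_d \to \mathbb{C} \;:\; f h \in H^2_d
  \ \text{for all } h \in H^2_d \big\}.
```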

Now given an analytic variety , we can consider by which we mean all functions , for which there exists such that . It can be shown that is also a multiplier algebra, acting on the space of functions . It can also be shown that the algebra of multiplication operators is completely isometrically isomorphic to , where

for all .

We ask the mathematical child’s question: *do these algebras know the variety from which they came? *

With Davidson and Ramsey we studied this question, and found the following answer:

**Theorem:** * and are completely isometrically isomorphic if and only if is the image of under a conformal automorphism of the ball. *

However, it is more fun to ask questions in the wrong category. These algebras carry coarser structures that depend on the variety and are simply there. For example, there is the raw algebraic structure. Does the algebraic structure hold the variety in its memory? Following my work with Davidson and Ramsey, one can come up with the following statement.

**“Theorem”:** * and are isomorphic (as algebras) if and only if and are biholomorphically equivalent via a multiplier map. *

There are two problems with the above theorem.

The first one is that it only treats quotients of the form where is a “radical” ideal.

The second problem is that it is false! The forward implication is proved only under additional assumptions, and we don’t really know if it is true in general (though I suspect that it is). As for the backward implication, there are counter-examples showing that there are pairs of varieties and which are multiplier biholomorphic but the algebras are non-isomorphic.

So it’s not really a theorem.

As I learned from George Elliott, when your candidate for a classification fails, there are two things you can do: 1) you can try to work in a restricted class of algebras or varieties, in hope that the invariant is a complete invariant in that setting; 2) you can refine the invariant.

We have papers dealing with restricted classes of varieties (see this paper with Kerr and McCarthy, where we show that the “theorem” is true for reasonable one dimensional varieties; this paper with Davidson and Hartz studies the extent of the failure of the “theorem” when we consider discs embedded in an infinite dimensional ball). More recently I have been concentrating on studying the isomorphism problem through a refined invariant. The refined invariants are noncommutative (nc) varieties and their derivatives. These were described in depth in the talks by Eli Shamovich and by Guy Salomon.

To illustrate how noncommutative varieties arise even for commutative problems, I will describe a Nullstellensatz that we obtained apropos our work on the isomorphism problem for algebras of bounded nc analytic functions. (BTW, I blogged about this in the past, and what follows has some overlap with the previous post).

Let be the algebra of polynomials in commuting variables.

Let us define the zero locus as follows:

for all .

We also introduce the following notation: given , we write

for all .

The question that every mathematical child will ask immediately, is: *(to what extent) can we recover from ?*

Hilbert answered this question thusly:

**Theorem (Hilbert’s Nullstellensatz): **For every ,

.

Recall that the **radical** of is the ideal

there exists some such that .
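A one-variable example shows why the radical must appear. Take $I = (x^2) \subseteq \mathbb{C}[x]$; then

```latex
Z(I) = \{0\}, \qquad I(Z(I)) = (x) = \sqrt{(x^2)} \supsetneq (x^2),
```

since $x$ vanishes at the single point of $Z(I)$ but is not a multiple of $x^2$. Matrix points repair exactly this: the Jordan block $N = \left(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right)$ satisfies $N^2 = 0$ while $x$ does not vanish at $N$, which is why no radical appears in the matrix-level Nullstellensatz described below.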

To describe the Nullstellensatz that appears in my paper with Guy and Eli I will need to introduce some notation (after we proved it, we found that it can be dug out of a paper of Eisenbud and Hochster – but it does not seem to be well known, at least not in our transparent formulation).

Let denote the set of **all** -tuples of matrices. We let be the disjoint union of all -tuples of matrices, where runs from to . That is, we are looking at all -tuples of matrices of all sizes. Elements of can be plugged into polynomials in noncommuting variables.

Similarly, we let denote the set of all commuting -tuples of matrices. We let . Now we are looking at all commuting -tuples of matrices of all sizes. This can be considered as the “noncommutative variety” cut out in by the equations (in noncommuting variables)

, .

Points in can be plugged into any polynomial .

In fact, points in can be naturally identified with the space of finite dimensional representations of , by

.

(We shall use the word “representation” to mean a homomorphism of an algebra or ring into for some ).
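To make the notion of plugging in concrete, here is a tiny numpy sketch (my illustration, with an arbitrarily chosen polynomial): evaluating $p(x, y) = x^2 y - 3y + 1$ at a commuting pair, where commutativity guarantees that the value does not depend on how monomials are ordered.

```python
import numpy as np

# a commuting pair: any two polynomials in a fixed matrix commute
S = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])        # nilpotent Jordan block
X, Y = S, S @ S + 2 * np.eye(3)     # X Y = Y X
assert np.allclose(X @ Y, Y @ X)

def ev(X, Y):
    """Evaluate p(x, y) = x^2 y - 3 y + 1 at the pair (X, Y);
    the constant term 1 becomes the identity matrix."""
    return X @ X @ Y - 3 * Y + np.eye(X.shape[0])

P = ev(X, Y)                        # here P = -S^2 - 5 I
```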

Now, given an ideal , we can consider its zero set in :

for all .

(We will omit the subscript for brevity.) In the other direction, given a subset , we can define the ideal of functions that vanish on it:

for all .

Tautologically, for every ideal ,

,

because every polynomial in annihilates every tuple on which every polynomial in is zero, right? The beautiful (and maybe surprising) fact is the converse.

The following formulation is taken from Corollary 11.7 from the paper “Algebras of bounded noncommutative analytic functions on subvarieties of the noncommutative unit ball” by Guy Salomon, Eli Shamovich and myself.

**Theorem (free commutative Nullstellensatz): ***For every ,*

*.*

For a proof, see the paper, or if you want a version for dummies (== analysts) see this blog post.

The first thing to note about this theorem is that it is not trivial. One might say: “yeah, sure, is finite dimensional, so one can guess that everything is determined by the finite dimensional representations”, but please note that this theorem fails if we replace by the (finitely generated) algebra of polynomials in **non**-commuting variables. It also fails in the algebra of bounded analytic functions on the disc, even if one restricts attention to weak-* closed ideals (a setting in which becomes a (commutative) principal ideal domain).

Consider the following theorem.

**Theorem:** *Let be such that for every representation of that annihilates . Then . *

In a sense, this is the correct theorem, which holds for *every* algebra, not just for the algebra . Isn’t it a better theorem?

NO! Because this theorem is **too good to be wrong!** It is stated in such a way that it is true automatically. Indeed, just consider the representation . By assumption, goes to zero under this representation, and hence .

The commutative free Nullstellensatz is interesting precisely because it does not hold for all algebras, it is a special theorem true for the polynomial algebra .

Besides being non-trivial, the free commutative Nullstellensatz is pleasing and beautiful. Let us now see what we can do with it.

**Theorem: ***. *

**Proof: **The map is surjective, and by the commutative free Nullstellensatz its kernel is .

From here it is easy to show:

**Theorem:** * iff there exists a polynomial isomorphism of varieties . *

**Proof:** The existence of such a map implies the existence of an isomorphism between and . Conversely, an isomorphism gives rise to a map between the spaces of finite dimensional representations. But it is easy to see that the finite dimensional representations of are in bijection with via

.

That completes the proof.

We now return to operator algebras. The ideas in the last section help us find the correct invariant for quotients of by non-radical ideals. The completely general problem is still not solved – we need to assume that the ideals are homogeneous.

For a homogeneous ideal , we write . We write for the nc set of all commuting tuples which are also strict row contractions. Then we have the following result, which identifies our quotient algebras as algebras of nc functions on an nc variety.

**Theorem: **.

Using this, we showed:

**Theorem: *** if and only if there exists an appropriate nc analytic “isomorphism” . *

In my talk, the emphasis is on the fact that an isomorphism of the algebras gives rise to a map between the varieties. To make this work, we needed to show that if then the induced map is regular. The case of radical homogeneous ideals was previously solved in works with Davidson and Ramsey, and in a work of Hartz. The case of non-radical homogeneous ideals is solved by an argument that makes use of the proof of the radical case, essentially reducing things to the radical case using the following Nullstellensatz:

**Theorem: ***Let be a homogeneous ideal. There exists an such that for every , if , then . *

Recall that in Bernstein’s proof of the Weierstrass polynomial approximation theorem, one associates with every continuous function a *Bernstein polynomial*

.

The operators are clearly linear, positive and unital. It can be shown that and . Therefore

(*) uniformly for every .

To prove Weierstrass’ approximation theorem, one needs to show that uniformly for all . One can give a non-probabilistic proof of this fact, using just that is a sequence of positive and unital maps satisfying (*) (see Chapter 10.3 in Davidson and Donsig’s book “Real Analysis and Applications“).
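The Bernstein operators are easy to play with numerically. The sketch below (mine, for illustration) implements $(B_n f)(x) = \sum_{k=0}^n f(k/n) \binom{n}{k} x^k (1-x)^{n-k}$ and checks the uniform convergence for a continuous, non-smooth function.

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Evaluate the n-th Bernstein polynomial of f at the points x in [0, 1]."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for k in range(n + 1):
        total += f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
    return total

xs = np.linspace(0.0, 1.0, 201)
f = lambda t: abs(t - 0.5)                   # continuous but not differentiable
err = float(np.max(np.abs(bernstein(f, 400, xs) - f(xs))))
# err is small: B_n f -> f uniformly (roughly like 1/sqrt(n) for this f)
```

One can also verify numerically the classical identity $B_n(x^2) = x^2 + x(1-x)/n$, which exhibits the convergence on the test function $x^2$ at rate $1/n$.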

In fact, Korovkin proved that given a sequence of positive linear and unital operators on such that uniformly for , then uniformly for all . This implies that the generating set has a very “rigid” hold on the (closed) unital algebra that it generates, .

The above discussion should serve as background and motivation for the following definition.

**Definition 1:** Let be a generating subset of a C*-algebra . We say that is **hyperrigid in** if for every faithful nondegenerate representation and every sequence of UCP maps,

.

In the above definition I forced myself to overcome my pedantic self and use Arveson’s notation for different faithful (and nondegenerate) representations of . What we really mean is that is a certain fixed C*-algebra (either abstract or represented faithfully on *some* space, that’s not the point here) and that for every faithful and nondegenerate representation and sequence of UCP maps , the condition

(*) for all

implies the consequence

(**) for all .

**Conventions:** Unless emphasized otherwise, our C*-algebras will be unital and all the representations will be nondegenerate (hence unital). In his paper Arveson discussed the nonunital setting to some extent, and time permitting I will touch upon this briefly in class (for an additional discussion of hyperrigidity in the context of nonunital algebras see this paper by Guy Salomon). We also note that a generating set is hyperrigid if and only if the operator system that it generates is. Although it is more precise to say that a set of generators (or the operator algebra that it generates) is “*hyperrigid in B*“, we will sometimes just say that it is “*hyperrigid*“.

In Arveson’s paper it is always assumed that the generating set is finite or countably infinite, and that all Hilbert spaces are separable. I think the reason is that at the time, the existence of boundary representations was known only for separable operator systems. I will not make any countability assumptions here, I think they are not needed now that we know that boundary representations always exist (I will be grateful if an alert reader finds a gap in these notes). On the other hand, we may always stay within the realm of separable Hilbert spaces if our C*-algebras are separable – this is left as an exercise.

The definition of hyperrigidity (Definition 1) has an approximation theoretic flavor. The following theorem connects hyperrigidity to the unique extension property, which we studied in previous lectures.

**Theorem 2:** *Let be an operator system generating a C*-algebra . The following conditions are equivalent. *

1. *is hyperrigid in .*
2. *For every nondegenerate representation and every sequence , if for every , then for all .*
3. *For every nondegenerate representation , the UCP map has the unique extension property.*
4. *For every unital C*-algebra , every unital -homomorphism and every UCP map*

for all for all .

**Proof:** 1 ⇒ 2: Condition 2 is very similar in appearance to the definition of hyperrigidity given by “(*) implies (**)” in the previous section. Indeed, it follows readily by assuming that is represented faithfully as , and then considering the faithful representation

If for all , then for all . By the definition of hyperrigidity (summoned only after invoking Arveson’s extension theorem, so that we will be discussing UCP maps on ), we find that

for all ,

which implies that . Note how we really needed the assumption of hyperrigidity to address **every** faithful nondegenerate representation of .

2 ⇒ 3: This follows by taking for all .

3 ⇒ 4: Let and be as in Condition 4. Represent faithfully (and non-degenerately) on a Hilbert space as , and extend to a UCP map . Then can be considered as a representation of on . If does not fix , then is an extension of which is different from , in contradiction to Condition 3.

4 ⇒ 1: Suppose that faithfully and non-degenerately, and let be a sequence of UCP maps. Assume, as in the definition of hyperrigidity, that

for all .

Assuming that Condition 4 holds, we wish to prove that

for all .

Construct the C*-algebra of bounded sequences with values in with the obvious structure, and consider the UCP map given by

.

If is the ideal of all sequences tending to zero, then . Write and so induces a UCP map . By defining a *-homomorphism by

we are precisely in the situation of Condition 4. It follows that fixes , and this is the same as for all .
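Since the displayed formulas in this last step were lost to formatting, here is the construction spelled out in notation of my own choosing (a reconstruction, so the conventions may differ slightly from the original):

```latex
\mathcal{B} = \ell^\infty(\mathbb{N}, B(H)), \qquad
c_0 = \{ (T_k)_k \in \mathcal{B} : \|T_k\| \to 0 \}, \qquad
q \colon \mathcal{B} \to \mathcal{B}/c_0 .
```

The map $\Phi(b) = q\big((\varphi_k(b))_k\big)$ is UCP, while $\rho(b) = q\big((\pi(b))_k\big)$ is a unital $*$-homomorphism, and the hypothesis says precisely that $\Phi$ and $\rho$ agree on the operator system. Condition 4 then forces $\Phi = \rho$ on the whole C*-algebra, that is, $\|\varphi_k(b) - \pi(b)\| \to 0$ for every $b$.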

That concludes the proof.

The following is a simple corollary that is worth recording:

**Corollary 3:** *Let be a hyperrigid operator system in . Then is the C*-envelope of .*

**Proof:** Since for every representation , the restriction has the UEP, the Shilov ideal – which we know is the intersection of all the kernels of boundary representations – is trivial.

**Corollary 4:** Let be a hyperrigid operator system in , let be an ideal, and let be the quotient map. Then is hyperrigid in .

**Proof:** Condition 2 in the theorem is preserved under taking quotients.

It is important to note that the converse to Corollary 3 does not hold: an operator system with trivial Shilov ideal in the C*-algebra it generates need not be hyperrigid. Moreover, in the context of Corollary 4, we note that having trivial Shilov ideal is not preserved by quotients. Here is an example (due to Davidson; personal communication) that illustrates both of these statements.

**Example 1: **Let be an orthonormal basis for a Hilbert space , and let be a sequence of complex numbers such that and is a dense sequence in . Define and let be the unital (norm closed) operator algebra generated by . (We are suddenly discussing an operator algebra and not an operator system, but we can always pass from to and back).

One can check that is an irreducible operator algebra containing the compacts. The Calkin map is not isometric on , so by the boundary theorem, the Shilov ideal is trivial and .

On the other hand, is a normal operator with spectrum . It follows that and is the disc algebra. Thus, after passing to the quotient, the Shilov boundary ideal is not trivial.

This example shows that – unlike hyperrigidity – trivial Shilov ideal is a property that does not pass to quotients. It also shows that a trivial Shilov boundary does not imply hyperrigidity (why?).

By Theorem 2, if is hyperrigid, then, in particular, every irreducible representation is a boundary representation. Arveson conjectured that the converse also holds true.

**Arveson’s Hyperrigidity Conjecture:** * is hyperrigid in if and only if every irreducible representation of is a boundary representation for .*

**Example 2:** Suppose that is a selfadjoint operator with at least three points in the spectrum, and let be the (unital, as always) C*-algebra generated by . We will show that:

- is hyperrigid in .
- is not hyperrigid in .

(The assumption on the spectrum is no biggy: if the spectrum has two points or fewer, then is the C*-algebra generated by , and of course it is hyperrigid.)

Note that if we let be the multiplication operator by the identity function on , and if we identify as a C*-subalgebra of , then we obtain a **strengthened version** of Korovkin’s theorem that we mentioned in the first section. Indeed, in Korovkin’s theorem, the conclusion for all follows from this convergence on the elements only, and only for sequences of (completely) positive maps . Hyperrigidity shows that follows from the same assumption, but now for any sequence of UCP maps .

Let us first show that is not hyperrigid. Suppose that , , and is a point in the spectrum that lies strictly between and . Then point evaluation at is a representation of . However, does not have the unique extension property, since is a convex combination of and .

Now let . To show that is hyperrigid, we need to show that for every nondegenerate , the UCP map has the UEP.

To this end, suppose that is a UCP map that extends . This just means that and . We have to show that is multiplicative – this will imply that .

Let be a Stinespring representation of , where is a *-representation and is a space containing . We write , and compute:

.

This means that , or put differently, that is invariant under . It follows that is a reducing subspace for , and so is multiplicative, as required.
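Spelled out in notation of my own choosing (the formulas above were lost, so take this as a reconstruction: $P$ is the projection of the dilation space onto $H$, and the compression $\Phi(T) = P\pi(T)|_H$ agrees with the given map on $A$ and on $A^2$), the computation is:

```latex
P\,\pi(A)\,(I-P)\,\pi(A)\,P
  \;=\; P\pi(A)^2 P - \big(P\pi(A)P\big)^2
  \;=\; \Phi(A^2) - \Phi(A)^2
  \;=\; 0 .
```

Hence $\|(I-P)\pi(A)P\|^2 = \|P\pi(A)(I-P)\pi(A)P\| = 0$, so $\pi(A)H \subseteq H$; since $A = A^*$, the subspace $H$ reduces $\pi(A)$, and therefore reduces the C*-algebra that $\pi(A)$ generates.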

It is worth noting that under the assumption above on , Arveson proved that if is hyperrigid (where is continuous on the spectrum of ) then must be either strictly convex or strictly concave. He also proved that the converse holds, assuming that the Hyperrigidity Conjecture is true.

**Example 3:** Let be isometries generating a unital C*-algebra . The generating set

.

is hyperrigid in .

In particular, if are Cuntz isometries (i.e., isometries with pairwise orthogonal ranges, such that ), then it is well known that they generate the Cuntz algebra . Then we have that the operator system generated by is hyperrigid in .

Let’s prove the claimed hyperrigidity in the special case of Cuntz isometries; the general case is based on the same idea but slightly more tedious. Write for the operator system generated by . Let be a (nondegenerate, as always) representation. Let be a UCP map such that , and let be a representation that is a dilation of . Define for . Then for every ,

.

is a *-representation, so

.

Comparing the (1,1)-entry, we find . Since , we must have that , so that . On the other hand

.

As before, this implies that , which implies that for all .

We conclude that all reduce , and (since these operators generate ) it follows that is a reducing subspace for . As a consequence, is its own minimal Stinespring dilation, so it is multiplicative. Hence , as required.

Arveson established his hyperrigidity conjecture for the special case of *countable spectrum*. Recall that the **spectrum** of a C*-algebra is the set of all unitary equivalence classes of irreducible representations (if you are the kind of person who, like me, always worries about such things, let me tell you that it is OK to use the word “set” for . Hint: we are speaking about

**Theorem 5:** *Let be an operator system such that the generated C*-algebra has countable spectrum. If every is a boundary representation, then is hyperrigid in .*

**Proof:** Assume that . To prove that is hyperrigid, we need to prove that for **every** representation , the UCP map has the UEP. But every representation of is the direct sum of irreducible representations (here we are using the fact that is countable, and some non-trivial facts from the representation theory of (type I) C*-algebras). Since the direct sum of UCP maps with the UEP has the UEP (yet another fact that requires proof, but not as deep as the previous fact we used), it follows that has the UEP.

In this section I wish to discuss the connection between the notion of hyperrigidity and another conjecture of Arveson – the essential normality conjecture.

The problem of essential normality can take place in many spaces, but I like to view it in the Drury-Arveson space . See this old post (mostly Section 1) and this old post (mostly Sections 2 and 3) where I already discussed this space. In this old post (Section 1) I discuss the essential normality problem.

In class I plan to discuss the paper “*Essential normality, essential norms, and hyperrigidity*” by Kennedy and myself (here is a link to an arxiv version; here is a link to the corrigendum). I wrote about this problem and about this paper a few times before (for example, when announcing the preprint), so with the above pointers and links in place, we end these notes!

Thanks for listening! You all get a grade 100 in the course!
