Noncommutative Analysis

Category: Noncommutative function theory

The perfect Nullstellensatz just got more perfect

After giving a talk about the perfect Nullstellensatz (the commutative free Nullstellensatz) at the Technion Math department’s pizza and beer seminar, I had a revelation: I think it holds over other fields as well, not just over the complex numbers! (In particular, contrary to what I thought before, it holds over the reals.)

To explain, I will need some notation. 

Let k be a field. We write A = k[z_1, \ldots, z_d] – the algebra of all polynomials in d (commuting) variables over the field k.
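Just to fix ideas computationally, here is a small sketch (mine, not from the post) of working with A = k[z_1, \ldots, z_d] over a few different base fields, using sympy's coefficient domains; the polynomial p below is an arbitrary illustration.

```python
# A small illustrative sketch (mine, not from the post): the same polynomial
# algebra A = k[z_1, z_2] built over different base fields k, using sympy's
# coefficient domains QQ, RR and FF(7) (rationals, floating-point reals, and
# the field with 7 elements).
from sympy.polys.rings import ring
from sympy import QQ, RR, FF

for k in (QQ, RR, FF(7)):
    A, z1, z2 = ring("z1 z2", k)      # A = k[z_1, z_2], here with d = 2
    p = z1**2 + 3*z1*z2 - 1           # an arbitrary element of A
    print(k, "   p^2 =", p**2)        # arithmetic is carried out over k
```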


Around and under my talk at Fields

This week I am attending a Workshop on Developments and Technical Aspects of Free Noncommutative Functions at the Fields Institute in Toronto. Since I plan to give a chalk talk, I cannot post my slides online (and I cannot prepare for my talk by preparing slides), so I will write here about some of the ideas around what I want to say in my talk, and also some ramblings I won’t have time for in my talk.

[Several years ago I went to a conference in China and came back with the insight that at international conferences I should give a computer presentation and not a blackboard talk, because then people who cannot understand my accent can at least read the slides. It’s been almost six years since then, and indeed I have given only beamer talks since. My English has not improved over this period, I think, but I have several reasons for allowing myself to give an old-fashioned lecture – the main ones are the nature of the workshop, the nature of the audience, and the kind of things I have to say.]

In the workshop Guy Salomon, Eli Shamovich and I will give a series of talks on our two papers (one and two). These two papers have a lot of small auxiliary results, which in a usual conference talk we don’t get the chance to speak about. This workshop is a wonderful opportunity for us to highlight some of these results and the ideas behind them, which we feel might be somewhat buried in our papers and may have gone unnoticed.


Topics in Operator Theory, Lecture 8: matrix convexity

In this lecture we will encounter the notion of matrix convexity. Matrix convexity is an active area of research today, and an important tool in noncommutative analysis. We will define matrix convex sets, and we will see that closed matrix convex sets have matrix extreme points, which play a role similar to that of extreme points in classical convex analysis. As an example of a matrix convex set, we will study the set of all matrix states. We will use these notions to outline the proof that there are sufficiently many pure UCP maps, something that was left open from the previous lecture.
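To make the definition concrete, here is a small numpy sketch (mine, not from the lecture notes) of one matrix convex combination: given self-adjoint contractions X_1, X_2 of different sizes and matrices V_1, V_2 with V_1^*V_1 + V_2^*V_2 = I, the combination V_1^* X_1 V_1 + V_2^* X_2 V_2 is again a self-adjoint contraction, which is exactly the kind of closure property a matrix convex set (here, the graded set of self-adjoint contractions) must have.

```python
# Hedged illustration (mine): one matrix convex combination
# V_1^* X_1 V_1 + V_2^* X_2 V_2 with V_1^* V_1 + V_2^* V_2 = I,
# applied to self-adjoint contractions X_1, X_2 of different sizes.
import numpy as np

rng = np.random.default_rng(0)

def random_selfadjoint_contraction(n):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    X = X + X.conj().T                       # make it self-adjoint
    return X / np.linalg.norm(X, 2)          # normalize the spectral norm to 1

n1, n2, n = 2, 3, 2
X1, X2 = random_selfadjoint_contraction(n1), random_selfadjoint_contraction(n2)

# Cut an isometry C^n -> C^{n1+n2} into blocks V1, V2, so V1^*V1 + V2^*V2 = I_n.
Q, _ = np.linalg.qr(rng.standard_normal((n1 + n2, n)))
V1, V2 = Q[:n1, :], Q[n1:, :]

Y = V1.conj().T @ X1 @ V1 + V2.conj().T @ X2 @ V2
print(np.allclose(V1.conj().T @ V1 + V2.conj().T @ V2, np.eye(n)))    # True
print(np.allclose(Y, Y.conj().T), np.linalg.norm(Y, 2) <= 1 + 1e-12)  # True True
```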


New paper “Compressions of compact tuples”, and announcement of mistake (and correction) in old paper “Dilations, inclusions of matrix convex sets, and completely positive maps”

Ben Passer and I have recently uploaded our preprint “Compressions of compact tuples” to the arxiv. In this paper we continue to study matrix ranges, and in particular matrix ranges of compact tuples. Recall that the matrix range of a tuple A = (A_1, \ldots, A_d) \in B(H)^d is the free set \mathcal{W}(A) = \sqcup_{n=1}^\infty \mathcal{W}_n(A), where

\mathcal{W}_n(A) = \{(\phi(A_1), \ldots, \phi(A_d)) : \phi : B(H) \to M_n is UCP \}.
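For instance, when A is a tuple of matrices and n = 1, the UCP maps into M_1 = \mathbb{C} are just the states, so \mathcal{W}_1(A) is the convex hull of the joint numerical range \{(\langle A_1x, x \rangle, \ldots, \langle A_dx, x \rangle) : \|x\| = 1\}. Here is a small numpy sketch (my own illustration, not taken from the paper) sampling this first level for a pair of self-adjoint 3 \times 3 matrices, so that the sampled points are real pairs:

```python
# Hedged numerical sketch (mine, not from the paper): sampling the first level
# of the matrix range for a pair of self-adjoint matrices A = (A_1, A_2).
# Each unit vector x gives the point (<A_1 x, x>, <A_2 x, x>); W_1(A) is the
# convex hull of all such points.
import numpy as np

rng = np.random.default_rng(1)
A1 = np.diag([0.0, 1.0, 2.0])                          # an arbitrary example
A2 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, -1.0]])    # self-adjoint as well

points = []
for _ in range(2000):
    x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    x /= np.linalg.norm(x)
    points.append((np.vdot(x, A1 @ x).real, np.vdot(x, A2 @ x).real))
# `points` samples the joint numerical range; its convex hull approximates W_1(A).
print(points[:3])
```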

A tuple A is said to be minimal if there is no proper reducing subspace G \subset H such that \mathcal{W}(P_G A\big|_G) = \mathcal{W}(A). It is said to be fully compressed if there is no proper subspace whatsoever G \subset H such that \mathcal{W}(P_G A\big|_G) = \mathcal{W}(A).

In an earlier paper (“Dilations, inclusions of matrix convex sets, and completely positive maps”) that I wrote with co-authors, we claimed that if two compact tuples A and B are minimal and have the same matrix range, then A is unitarily equivalent to B; see Section 6 there (the printed version corresponds to version 2 of the paper on arxiv). This is false, as subsequent examples by Ben Passer showed (see this paper). A couple of other statements in that section are also incorrect, most obviously the claim that every compact tuple can be compressed to a minimal compact tuple with the same matrix range. All the problems with Section 6 of that earlier paper “Dilations,…” can be quickly fixed by throwing in a “non-singularity” assumption, and we posted a corrected version on the arxiv. (The results of Section 6 there do not affect the rest of the results in the paper, and are somewhat tangential to the main direction of that paper.)

In the current paper, Ben and I take a closer look at the non-singularity assumption that was introduced in the corrected version of “Dilations,…”, and we give a complete characterization of non-singular tuples of compacts. This characterization involves the various kinds of extreme points of the matrix range \mathcal{W}(A). We also make a serious investigation of the fully compressed tuples defined above. We find that a matrix tuple is fully compressed if and only if it is non-singular and minimal. Consequently, we get a clean statement of the classification theorem for compacts: if two tuples A and B of compacts are fully compressed, then they are unitarily equivalent if and only if \mathcal{W}(A) = \mathcal{W}(B).

 

Souvenirs from the Red River

Last week I attended the annual Canadian Operator Symposium, better known in its nickname: COSY. This conference happens every year and travels between Canadian universities, and this time it was held in the University of Manitoba, in Winnipeg. It was organized by Raphaël Clouâtre and Nina Zorboska, who altogether did a great job.

My first discovery: Winnipeg is not that bad! In fact I loved it. Example: here is the view from the window of my room in the university residence:

[Photo: the view from the window of my room in the university residence.]

Not bad, right? A very beautiful sight to wake up to in the morning. (I got the impression from Canadians that Winnipeg is nothing to look forward to. People of the world: don’t listen to Canadians when they say something bad about any place that just doesn’t quite live up to the standard of Montreal, Vancouver, or Banff.) Here is what you see if you look from the other side of the building:

[Photo: the view from the other side of the building.]

The conference was very broad and diverse in subjects, as it brings together people working in Operator Theory as well as in Operator Algebras (and neither of these fields is very well defined or compact). I have mixed feelings about mixed conferences. But since I haven’t really decided what I myself want to be working on when I grow up, I think they work for me.

I was invited to give a series of three talks that I devoted to noncommutative function theory and noncommutative convexity. My second talk was about my joint work with Guy Salomon and Eli Shamovich on the isomorphism problem for algebras of bounded nc functions on nc varieties, which we, incidentally, posted on the arxiv on the day that the conference began. May I invite you to read the introduction to that paper? (If you like it, also take a look at the previous post.)

On this page you can find the schedule, abstracts, and slides of most of the talks, including mine. Some of the best talks were (as so often happens) whiteboard talks, so you won’t find them there. For example, the beautiful series by Aaron Tikuisis was given like that and now it is gone (George Elliott remarked that a survey of the advances Tikuisis described would be very desirable, and I agree).

1. The “resolution” of Elliott’s conjecture

Aaron Tikuisis gave a beautiful series of talks on the rather recent developments in the classification theory of separable-unital-nuclear-simple C*-algebras (henceforth SUNS C*-algebras; the algebras are also assumed infinite dimensional, but let’s make that a standing hypothesis instead of complicating the acronym). I think it is fair to evaluate his series of talks as the most important talk(s) of this conference. In my opinion the work (due to many mathematicians, including himself) that Tikuisis presented can be described as the resolution of the Elliott conjecture; I am sure that some people will disagree with the last statement, including George Elliott himself.

Given a SUNS C*-algebra A, one defines its Elliott invariant, E\ell \ell(A), to be the K-theory of A, together with some additional data: the image of the unit of A in K_0(A), the space of traces T(A) of A, and the pairing between the traces and K-theory. It is clear, once one knows a little K-theory, that if A and B are isomorphic C*-algebras, then their Elliott invariants are isomorphic, in the sense that K_i(A) is isomorphic to K_i(B) for i=0,1 (in a unit preserving way), and that T(A) is affinely homeomorphic with T(B) in a way that preserves the pairing with the K-groups. Thus, if two C*-algebras are known to have a different K-group, or a different Elliott invariant, then these C*-algebras are not isomorphic. This observation was used to classify AF algebras and irrational rotation algebras (speaking of which, I cannot help but recommend my friend Claude Schochet’s recent Notices article on the irrational rotation algebras).
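In shorthand (my own way of packaging the data just listed, not notation from the talks), the invariant is the tuple

E\ell \ell(A) = \big(K_0(A), [1_A]_0, K_1(A), T(A), \langle \cdot, \cdot \rangle \big), where \langle \tau, [p]_0 \rangle = (\mathrm{Tr}_n \otimes \tau)(p) for a projection p \in M_n(A),

and an isomorphism of invariants is required to respect all of these ingredients at once.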

In the 1990s, George Elliott made the conjecture that two SUNS C*-algebras A and B are *-isomorphic if and only if E\ell \ell(A) \cong E\ell \ell(B). This conjecture became one of the most important open problems in the theory of operator algebras, and arguably THE most important open problem in C*-algebras. Dozens of people worked on it. There were many classes of C*-algebras that were shown to be classifiable – meaning that they satisfy the Elliott conjecture – but eventually this conjecture was shown to be false in 2002 by Rørdam, who built on earlier work by Villadsen.

Now, what does the community do when a conjecture turns out to be false? There are basically four things to do:

  1. Work on something else.
  2. Start classifying “clouds” of C*-algebras, for example, show that crossed products of a certain type are classifiable within this family (i.e. two algebras within a specified class are isomorphic iff their Elliott invariants are), etc.
  3. Make the class of algebras you are trying to classify smaller, i.e., add assumptions.
  4. Make the invariant bigger. For example, K_0(A) is not enough, so people used K_1(A). When that turned out to be not enough, people started looking at traces. So if the current invariant is not enough, maybe add more things, the natural candidate (I am told) being the “Cuntz Semigroup”.

The choice of what to do is a matter of personal taste, point of view, and also ability. George Elliott has made the point that choosing 4 requires one to develop new techniques, whereas choosing 3 is organized around the existing techniques, making the class of C*-algebras smaller until the currently known techniques can tackle it.

Elliott’s objections notwithstanding, the impression that I got from the lecture series was that most of the main forces in the field agreed that the third option above was the way to go. That is, they tried to prove the conjecture for a slightly more restricted class of algebras than SUNS. Over the past 15 years or so (or a bit more), they identified an additional condition – let’s call it Condition Z – that, once added to the standard SUNS assumptions, allows classification. And it’s not that adding the additional assumption made things really easy, it only made the proof possible – it still took first-class work to even identify what assumption needs to be added, and more work to prove that with this additional assumption the conjecture holds. They proved:

Theorem (lots of people): If A and B are infinite dimensional SUNS C*-algebras, which satisfy the Universal Coefficient Theorem and an additional condition Z, then E\ell \ell (A) \cong E \ell \ell (B) if and only if A \cong B.

I consider this the best possible resolution of the Elliott conjecture, given that it is false!

A major part of Aaron’s talks was to explain to us what this additional condition Z is. (What the Universal Coefficient Theorem is, though, was not explained, and, if I understand correctly, it is in fact not known whether it holds automatically for such algebras.) In fact, there are two conditions that one can take for “condition Z”: (i) finite nuclear dimension, and (ii) Z-stability. The notion of nuclear dimension corresponds to the regular notion of dimension (of the spectrum) in the commutative case. Z-stability means that the algebra in question absorbs the Jiang-Su algebra under tensor products in a very strong sense. Following a very long tradition in talks about the Jiang-Su algebra, Aaron did not define the Jiang-Su algebra. This is not so bad, since he did explain in detail what finite nuclear dimension means, and said that Z-stability and finite nuclear dimension are equivalent for infinite dimensional C*-algebras (this is the Toms-Winter conjecture).
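In symbols (my paraphrase; this notation was not used in the talks), Z-stability is the requirement that

A \otimes \mathcal{Z} \cong A,

where \mathcal{Z} denotes the Jiang-Su algebra; this much can be stated without ever defining \mathcal{Z}.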

What was very nice about Aaron’s series of talks was that he gave von Neumann algebraic analogues of the theorems, conditions, and results, and explained how the C*-algebra people got concrete inspiration from the corresponding results and proofs in von Neumann algebras. In particular he showed the parallels to Connes’s theorem that every injective type II_1 factor with separable predual is isomorphic to the hyperfinite II_1 factor. He made the point that separable predual in the von Neumann algebra world corresponds to separability for C*-algebras, hyperfiniteness corresponds to finite nuclear dimension, and factor corresponds to a simple C*-algebra. He then sketched the lines of the proof of the part of Connes’s theorem that says that injectivity of a II_1 factor M implies hyperfiniteness of M (which by Murray and von Neumann’s work implies that M is the hyperfinite II_1 factor). After that he repeated a similar sketch for the proof that Z-stability implies finite nuclear dimension.

This lecture series was very inspiring, and I think that the organizers made an excellent choice in inviting Tikuisis to give it.

 

2. Residually finite operator algebras and a new trick

Christopher Ramsey gave a short talk on “residually finite dimensional (RFD) operator algebras”. This talk is based on the paper that Chris and Raphaël Clouâtre recently posted on the arxiv. The authors take the notion of residual finite dimensionality, which is quite well studied and understood in the case of C*-algebras, and develop it in the setting of nonselfadjoint operator algebras. It is worth noting that even a finite dimensional nonselfadjoint operator algebra might fail to be representable as a subalgebra of a matrix algebra. So it is worth specifying that an operator algebra is said to be RFD if it can be completely isometrically embedded in a direct sum of matrix algebras (and so it is not immediate that a finite dimensional algebra is RFD, though they prove that it is).

What I want to share here is a neat and simple observation that Chris and Raphael made, which seemed to have been overlooked by the community.

When we study operator algebras, there are several natural relations by which to classify them: completely isometric isomorphism, unitary equivalence, completely bounded isomorphism, and similarity. Clearly, unitary equivalence implies completely isometric isomorphism, and similarity implies completely bounded isomorphism. The converses do not hold. However, in practice, operator algebras are many times (for example in my recent paper with Guy and Eli) shown to be completely boundedly isomorphic by exhibiting a similarity between them. That happens because we are often interested in the “multiplicity free” case.

[Added on June 11, following Yemon’s comment: We say that A \subseteq B(H) is similar to B \subseteq B(K) if there is an invertible T \in B(H,K) such that A = T^{-1}BT. Likewise, two maps \rho : A \to B(H) and \phi: A \to B(K) are said to be similar if there is an invertible T \in B(H,K) such that \rho(a) = T^{-1} \phi(a) T for all a \in A. Paulsen’s theorem says that if \rho : A \to B(H) is a completely bounded representation then it is similar to a completely contractive representation \phi : A \to B(H).]
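As a toy numerical illustration of the definitions just recalled (my own sketch, not from the paper or from the comment): conjugation by an invertible, non-unitary T is an algebra isomorphism, hence completely bounded, but it need not preserve norms, which is why a similarity is in general weaker than a complete isometry.

```python
# Hedged toy example (mine): a similarity a |-> T^{-1} a T that distorts norms.
import numpy as np

T = np.array([[1.0, 5.0],
              [0.0, 1.0]])                 # invertible, but far from unitary
Tinv = np.linalg.inv(T)

def rho(a):
    return Tinv @ a @ T                    # the similarity a |-> T^{-1} a T

a = np.array([[1.0, 0.0],
              [0.0, 0.0]])                 # a rank-one projection, norm 1
b = np.array([[0.0, 2.0],
              [0.0, 3.0]])                 # another arbitrary matrix

print(np.allclose(rho(a @ b), rho(a) @ rho(b)))          # True: rho is multiplicative
print(np.linalg.norm(a, 2), np.linalg.norm(rho(a), 2))   # 1.0 vs. roughly 5.1
```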

Raphael and Chris observed that, in fact, completely bounded isomorphism is the same as similarity, modulo completely isometric isomorphisms. To be precise, they proved:

Theorem (the Clouatre-Ramsey trick): If A and B are completely boundedly isomorphic, then A and B are both completely isometrically isomorphic to algebras that are similar. 

Proof: Suppose that A \subseteq B(H) and B \subseteq B(K). Let \phi : A \to B be a c.b. isomorphism. By Paulsen’s theorem, \phi is similar to a completely contractive isomorphism \psi. So we get that the map

a \mapsto a \oplus \psi(a) \mapsto a \oplus \phi(a) \in B(H) \oplus B(K)

decomposes as a product of a complete isometry and a similarity (the first arrow is a complete isometry because \psi is completely contractive, so \|a \oplus \psi(a)\| = \max\{\|a\|, \|\psi(a)\|\} = \|a\|, and similarly at every matrix level; the second arrow is conjugation by an invertible of the form 1_H \oplus T, where T implements the similarity between \psi and \phi). Likewise, the completely bounded isomorphism \phi^{-1} is similar to a complete contraction \rho, and we have that

\phi^{-1}(b) \oplus b \mapsto \rho(b) \oplus b \mapsto b

decomposes as the product of a similarity and a complete isometry. Since the composition of all these maps is \phi, the proof is complete.
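The norm fact driving the two complete isometries above is that \|a \oplus c\| = \max\{\|a\|, \|c\|\}, so direct-summing a with a completely contractive image of it does not change norms (at any matrix level). A quick numerical sanity check of this fact, offered as my own illustration only:

```python
# Hedged sanity check of ||a (+) c|| = max(||a||, ||c||), the fact behind the
# complete-isometry steps in the proof above.
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal((3, 3))
c = 0.4 * a                                    # stands in for a contractive image of a
zero = np.zeros((3, 3))
block = np.block([[a, zero], [zero, c]])       # the direct sum a (+) c
print(np.linalg.norm(block, 2),
      max(np.linalg.norm(a, 2), np.linalg.norm(c, 2)))   # the two numbers agree
```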