## Category: Conference

### My talks at NCGA Bhubaneswar

This week I am giving a series of five lectures on dilation theory at the workshop Noncommutative Geometry and its Applications, at NISER Bhubaneswar (India). I am putting my talks up here in case anybody would like to see them (and also as a backup, in case my USB stick doesn’t work).

(Lectures 1, 2, and 3 were board talks.)

### Souvenirs from the Red River

Last week I attended the annual Canadian Operator Symposium, better known by its nickname: COSY. The conference happens every year and travels between Canadian universities; this time it was held at the University of Manitoba, in Winnipeg. It was organized by Raphaël Clouâtre and Nina Zorboska, who did a great job.

My first discovery: Winnipeg is not that bad! In fact I loved it. Example: here is the view from the window of my room in the university residence:

Not bad, right? A very beautiful sight to wake up to in the morning. (I had gotten the impression from Canadians that Winnipeg is nothing to look forward to. People of the world: don’t listen to Canadians when they say something bad about any place that just doesn’t quite live up to the standard of Montreal, Vancouver, or Banff.) Here is what you see if you look out from the other side of the building:

The conference was very broad and diverse in its subjects, as it brings together people working in operator theory as well as in operator algebras (and neither of these fields is very well defined or compact). I have mixed feelings about mixed conferences, but since I haven’t really decided what I myself want to be working on when I grow up, I think they work for me.

I was invited to give a series of three talks that I devoted to noncommutative function theory and noncommutative convexity. My second talk was about my joint work with Guy Salomon and Eli Shamovich on the isomorphism problem for algebras of bounded nc functions on nc varieties, which, incidentally, we posted on the arXiv on the day the conference began. May I invite you to read the introduction to that paper? (If you like it, also take a look at the previous post.)

On this page you can find the schedule, abstracts, and slides of most of the talks, including mine. Some of the best talks were (as happens so often) whiteboard talks, so you won’t find them there. For example, the beautiful series by Aaron Tikuisis was given like that, and now it is gone (George Elliott remarked that a survey of the advances Tikuisis described would be very desirable, and I agree).

#### 1. The “resolution” of Elliott’s conjecture

Aaron Tikuisis gave a beautiful series of talks on the rather recent developments in the classification theory of separable, unital, nuclear, simple C*-algebras (henceforth SUNS C*-algebras; the algebras are also assumed to be infinite dimensional, but let’s make that a standing hypothesis instead of complicating the acronym). I think it is fair to call this series the most important talk(s) of the conference. In my opinion, the work (due to many mathematicians, including Tikuisis himself) that he presented can be described as the resolution of the Elliott conjecture; I am sure that some people will disagree with that statement, including George Elliott himself.

Given a SUNS C*-algebra $A$, one defines its Elliott invariant, $E\ell \ell(A)$, to be the K-theory of $A$ together with some additional data: the image of the unit of $A$ in $K_0(A)$, the space of traces $T(A)$ of $A$, and the pairing between the traces and K-theory. It is clear, once one knows a little K-theory, that if $A$ and $B$ are isomorphic C*-algebras, then their Elliott invariants are isomorphic, in the sense that $K_i(A)$ is isomorphic to $K_i(B)$ for $i=0,1$ (in a unit preserving way), and $T(A)$ is affinely homeomorphic to $T(B)$ in a way that preserves the pairing with the K-groups. Thus, if two C*-algebras are known to have different K-groups, or different Elliott invariants, then these C*-algebras are not isomorphic. This observation was used to classify AF algebras and irrational rotation algebras (speaking of which, I cannot help but recommend my friend Claude Schochet’s recent Notices article on the irrational rotation algebras).

In the 1990s, George Elliott conjectured that two SUNS C*-algebras $A$ and $B$ are *-isomorphic if and only if $E \ell \ell (A) \cong E \ell \ell (B)$. This conjecture became one of the most important open problems in the theory of operator algebras, and arguably THE most important open problem in C*-algebras. Dozens of people worked on it. Many classes of C*-algebras were shown to be classifiable – meaning that they satisfy the Elliott conjecture – but eventually the conjecture was shown to be false in 2002 by Rordam, who built on earlier work of Villadsen.

Now, what does the community do when a conjecture turns out to be false? There are basically four things to do:

1. Work on something else.
2. Start classifying “clouds” of C*-algebras: for example, show that crossed products of a certain type are classifiable within that class (i.e., two algebras within the specified class are isomorphic iff their Elliott invariants are), etc.
3. Make the class of algebras you are trying to classify smaller, i.e., add assumptions.
4. Make the invariant bigger. For example, $K_0(A)$ was not enough, so people used $K_1(A)$; when that turned out to be not enough, people started looking at traces. So if the current invariant is not enough, maybe add more things, the natural candidate (I am told) being the Cuntz semigroup.

The choice of what to do is a matter of personal taste, point of view, and also ability. George Elliott has made the point that choosing 4 requires one to develop new techniques, whereas choosing 3 is organized around the existing techniques: one makes the class of C*-algebras smaller until the currently known techniques can tackle it.

Elliott’s objections notwithstanding, the impression I got from the lecture series was that most main forces in the field agreed that the third option above was the way to go. That is, they tried to prove the conjecture for a slightly more restricted class of algebras than SUNS. Over the past 15 years or so (or a bit more), they identified an additional condition – let’s call it Condition Z – that, once added to the standard SUNS assumptions, allows classification. It’s not that the additional assumption made things easy; it only made the proof possible – it still took first class work to even identify what assumption needed to be added, and more work to prove that with this additional assumption the conjecture holds. They proved:

Theorem (lots of people): If $A$ and $B$ are infinite dimensional SUNS C*-algebras which satisfy the Universal Coefficient Theorem and an additional Condition Z, then $E\ell \ell (A) \cong E \ell \ell (B)$ if and only if $A \cong B$.

I consider this the best possible resolution of the Elliott conjecture, given that the conjecture is false!

A major part of Aaron’s talks was devoted to explaining what this additional Condition Z is. (What the Universal Coefficient Theorem is, though, was not explained; if I understand correctly, it is in fact not known whether it holds automatically for such algebras.) In fact, there are two conditions that one can take for “Condition Z”: (i) finite nuclear dimension, and (ii) Z-stability. The notion of nuclear dimension corresponds to the usual notion of dimension (of the spectrum) in the commutative case. Z-stability means that the algebra in question absorbs the Jiang-Su algebra under tensor products in a very strong sense. Following a very long tradition in talks about the Jiang-Su algebra, Aaron did not define the Jiang-Su algebra. This is not so bad, since he did explain in detail what finite nuclear dimension means, and said that Z-stability and finite nuclear dimension are equivalent for such infinite dimensional C*-algebras (this is part of the Toms-Winter conjecture).

What was very nice about Aaron’s series of talks was that he gave von Neumann algebraic analogues of the theorems, conditions, and results, and explained how the C*-algebra people got concrete inspiration from the corresponding results and proofs in von Neumann algebras. In particular, he showed the parallels to Connes’s theorem that every injective type $II_1$ factor with separable predual is isomorphic to the hyperfinite $II_1$ factor. He made the point that separable predual in the von Neumann algebra world corresponds to separability for C*-algebras, hyperfiniteness corresponds to finite nuclear dimension, and factor corresponds to a simple C*-algebra. He then sketched the lines of the proof of the part of Connes’s theorem that says that injectivity of a $II_1$ factor $M$ implies hyperfiniteness of $M$ (which, by Murray and von Neumann’s work, implies that $M$ is the hyperfinite $II_1$ factor). After that he gave a similar sketch for the proof that $Z$-stability implies finite nuclear dimension.

This lecture series was very inspiring, and I think that the organizers made an excellent choice in inviting Tikuisis to give it.

#### 2. Residually finite operator algebras and a new trick

Christopher Ramsey gave a short talk on residually finite dimensional (RFD) operator algebras, based on the paper that he and Raphaël Clouâtre recently posted on the arXiv. The authors take the notion of residual finite dimensionality, which is quite well studied and understood in the case of C*-algebras, and develop it in the setting of nonselfadjoint operator algebras. It is worth noting that even a finite dimensional nonselfadjoint operator algebra might fail to be representable as a subalgebra of a matrix algebra. So it is worth specifying the definition: an operator algebra is said to be RFD if it can be completely isometrically embedded in a direct sum of matrix algebras (and so it is not immediate that a finite dimensional algebra is RFD, though they prove that it is).

What I want to share here is a neat and simple observation that Chris and Raphael made, which seems to have been overlooked by the community.

When we study operator algebras, there are several natural relations by which to classify them: completely isometric isomorphism, unitary equivalence, completely bounded isomorphism, and similarity. Clearly, unitary equivalence implies completely isometric isomorphism, and similarity implies completely bounded isomorphism. The converses do not hold. In practice, however, operator algebras are often shown to be completely boundedly isomorphic by exhibiting a similarity between them (this is what happens, for example, in my recent paper with Guy and Eli). That is because we are often interested in the “multiplicity free” case.

[Added on June 11, following Yemon’s comment: We say that $A \subseteq B(H)$ is similar to $B \subseteq B(K)$ if there is an invertible $T \in B(H,K)$ such that $A = T^{-1}BT$. Likewise, two maps $\rho : A \to B(H)$ and $\phi: A \to B(K)$ are said to be similar if there is an invertible $T \in B(H,K)$ such that $\rho(a) = T^{-1} \phi(a) T$ for all $a \in A$. Paulsen’s theorem says that if $\rho : A \to B(H)$ is a completely bounded representation, then it is similar to a completely contractive representation $\phi : A \to B(H)$.]

Raphael and Chris observed that, in fact, completely bounded isomorphism is the same as similarity, modulo completely isometric isomorphisms. To be precise, they proved:

Theorem (the Clouatre-Ramsey trick): If $A$ and $B$ are completely boundedly isomorphic, then $A$ and $B$ are both completely isometrically isomorphic to algebras that are similar.

Proof: Suppose that $A \subseteq B(H)$ and $B \subseteq B(K)$. Let $\phi : A \to B$ be a c.b. isomorphism. By Paulsen’s theorem, $\phi$ is similar to a completely contractive isomorphism $\psi$. So we get that the map

$a \mapsto a \oplus \psi(a) \mapsto a \oplus \phi(a) \in B(H) \oplus B(K)$

decomposes as a product of a complete isometry and a similarity. Likewise, the completely bounded isomorphism $\phi^{-1}$ is similar to a complete contraction $\rho$, and we have that

$\phi^{-1}(b) \oplus b \mapsto \rho(b) \oplus b \mapsto b$

decomposes as the product of a similarity and a complete isometry. Since the composition of all these maps is $\phi$, the proof is complete.
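To make the trick concrete, here is a toy numpy sketch (entirely my own; the matrices and names are illustrative assumptions) of the first half of the proof in the simplest case, where $\psi$ is the identity and $\phi$ itself is a similarity: the map $a \mapsto a \oplus \phi(a)$ factors as the complete isometry $a \mapsto a \oplus a$ followed by conjugation by the invertible $I \oplus S$.

```python
import numpy as np

rng = np.random.default_rng(0)

# An invertible (non-unitary) S implementing a similarity phi(a) = S^{-1} a S.
S = np.array([[2.0, 1.0], [0.0, 1.0]])
S_inv = np.linalg.inv(S)

def phi(a):
    """A completely bounded isomorphism given by a similarity."""
    return S_inv @ a @ S

def direct_sum(x, y):
    """Block-diagonal direct sum x ⊕ y."""
    z = np.zeros((x.shape[0] + y.shape[0], x.shape[1] + y.shape[1]))
    z[: x.shape[0], : x.shape[1]] = x
    z[x.shape[0]:, x.shape[1]:] = y
    return z

a = rng.standard_normal((2, 2))

# Step 1 (a complete isometry, since psi = id is completely contractive): a -> a ⊕ a.
step1 = direct_sum(a, a)

# Step 2 (a similarity, by T = I ⊕ S): a ⊕ a -> a ⊕ phi(a).
T = direct_sum(np.eye(2), S)
step2 = np.linalg.inv(T) @ step1 @ T

assert np.allclose(step2, direct_sum(a, phi(a)))
```

Of course, the point of Paulsen’s theorem is that this factorization survives when $\phi$ is merely completely bounded, with $\psi$ a genuine completely contractive map rather than the identity.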

### Souvenirs from San Diego

Every time I fly to a conference, I think about the airport puzzle that I once read on Terry Tao’s blog. Suppose that you are trying to get quickly from point A to point B in an airport, and that part of the way has moving walkways and part of it doesn’t. Suppose that you can either walk or run, but you can only run for a certain small amount of time. Where is it better to spend that running time: on the moving walkways or in between them? Does it matter?
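Out of curiosity, here is a quick numerical experiment with the puzzle; all the speeds, distances, and the running budget below are made-up assumptions of mine, not from Tao’s post.

```python
# Toy model of the airport puzzle. All numbers are made-up assumptions.
walk, boost, belt = 1.0, 1.0, 1.0   # walking speed, extra speed from running, walkway speed
d_walkway, d_floor = 100.0, 100.0   # distance on and off the moving walkways
t_run = 10.0                        # total time you are able to run

def total_time(run_on_walkway: bool) -> float:
    """Total travel time if the running budget is spent on/off the walkway."""
    if run_on_walkway:
        run_dist = (walk + boost + belt) * t_run       # ground covered while running on the belt
        time_on = t_run + (d_walkway - run_dist) / (walk + belt)
        time_off = d_floor / walk
    else:
        run_dist = (walk + boost) * t_run              # ground covered while running on the floor
        time_off = t_run + (d_floor - run_dist) / walk
        time_on = d_walkway / (walk + belt)
    return time_on + time_off

print(total_time(run_on_walkway=True), total_time(run_on_walkway=False))
```

With these particular numbers, spending the running budget between the walkways comes out faster; whether (and why) that is always the case is exactly the puzzle, so I won’t spoil the argument.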

### Souvenirs from Haifa

The “Multivariable operator theory workshop at the Technion, on occasion of Baruch Solel’s 65th birthday” is over. Overall I think it was successful, and I enjoyed meeting old and new friends, and seeing the plan materialize. Everything ran very smoothly – mostly thanks to the Center for Mathematical Sciences, and in particular Maya Shpigelman. It was a pleasure to have an occasion to thank Baruch, and I was proud to see my colleagues acknowledge Baruch’s contribution and wish him the best.

If you are curious about the talks, here is the book of abstracts. Most of the presentations can be found at the bottom of the workshop webpage. Here is a bigger version of the photo.

I will not blog about the workshop any further – I don’t feel like I participated in it as a mathematician. I miss being a regular participant! Luckily I don’t have to wait long: next week I am going to Athens, to participate in the Sixth Summer School in Operator Theory.

### Souvenirs from Bangalore 2015

Last week I attended the conference “Complex Geometry and Operator Theory” at the Indian Statistical Institute, Bangalore. The conference was also an occasion to celebrate Gadadhar Misra‘s 60th birthday.

As usual for me at conferences, I played a game with myself in which my goal was to find the most interesting new thing I learned, and then follow up on it to some modest extent. Although every day of the three-day conference had at least two excellent lectures that I enjoyed, I have to pick one or two things, so here goes.

#### 1. Noncommutative geometric means

The most exciting new-thing-I-learned was something that I heard not in a lecture but rather in a conversation I had with Rajendra Bhatia in one of the generously long breaks.

A very nice exposition of what I will briefly discuss below appears in this expository paper of Bhatia and Holbrook.

The notion of arithmetic mean generalizes easily to matrices. If $A,B$ are matrices, then we can define

$M_a(A,B) = \frac{A+B}{2}$.

When restricted to hermitian matrices, this mean has some expected properties of a mean. For example,

1. $M_a(A,B) = M_a(B,A)$,
2. If $A \leq B$, then $A \leq M_a(A,B) \leq B$,
3. $M_a(A,B)$ is monotone in its variables.
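As a quick sanity check (a minimal numpy sketch of my own, not from the conversation), one can verify properties 1 and 2 numerically for hermitian matrices, where $A \leq B$ means $B - A$ is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(1)

def M_a(A, B):
    """Arithmetic mean of two matrices."""
    return (A + B) / 2

def is_psd(X, tol=1e-10):
    """Check X >= 0 in the Loewner (positive semidefinite) order."""
    return np.all(np.linalg.eigvalsh((X + X.conj().T) / 2) >= -tol)

# A random hermitian A, and B with A <= B.
A = rng.standard_normal((3, 3))
A = A + A.T
B = A + np.eye(3)                    # B - A = I >= 0, so A <= B

M = M_a(A, B)
assert np.allclose(M, M_a(B, A))     # property 1: symmetry
assert is_psd(M - A) and is_psd(B - M)   # property 2: A <= M <= B
```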

A natural question – which one may ask simply out of curiosity – is whether the geometric mean $(x,y) \mapsto \sqrt{xy}$ can also be generalized to pairs of positive definite matrices. One runs into problems immediately: if $A$ and $B$ are positive definite, one cannot extract a “positive square root” from $AB$, since when $A$ and $B$ do not commute their product $AB$ need not even be a positive matrix.

It turns out that one can define a geometric mean as follows. For two positive definite matrices $A$ and $B$, define

(*) $M_g(A,B) = A^{1/2} \sqrt{A^{-1/2} B A^{-1/2}} A^{1/2}$ .

Note that when $A$ and $B$ commute (for example, when they are scalars) then $M_g(A,B)$ reduces to $\sqrt{AB}$, so this is indeed a generalisation of the geometric mean. No less importantly, it has all the nice properties of a mean, in particular properties 1-3 above (it is not evident that it is symmetric in $A$ and $B$ – the first property – but the other two follow readily).
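Here is a minimal numpy sketch of formula (*) (my own, with made-up matrices), checking the commuting case and the non-evident symmetry:

```python
import numpy as np

def psd_sqrt(X):
    """Square root of a positive definite matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.sqrt(w)) @ V.conj().T

def M_g(A, B):
    """Geometric mean A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2} from (*)."""
    Ah = psd_sqrt(A)
    Ahi = np.linalg.inv(Ah)
    return Ah @ psd_sqrt(Ahi @ B @ Ahi) @ Ah

# Commuting case: diagonal matrices, where M_g reduces to sqrt(AB).
A = np.diag([1.0, 4.0])
B = np.diag([9.0, 16.0])
assert np.allclose(M_g(A, B), psd_sqrt(A @ B))

# Symmetry in a non-commuting case (not evident from the formula).
C = np.array([[2.0, 1.0], [1.0, 2.0]])
D = np.array([[3.0, 0.0], [0.0, 1.0]])
assert np.allclose(M_g(C, D), M_g(D, C))
```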

Now suppose that one needs to consider the mean of more than two – say, three – matrices. The arithmetic mean generalises painlessly:

$M_a(A,B,C) = \frac{A + B + C}{3}$.

As for the geometric mean, no appropriate algebraic expression generalising equation (*) above has been found. About a decade ago, Bhatia, Holbrook and (separately) Moakher found a geometric way to define the geometric mean of any number of positive definite matrices.

The key is that they view the set $\mathbb{P}_n$ of positive definite $n \times n$ matrices as a Riemannian manifold, where the length of a curve $\gamma : [0,1] \rightarrow \mathbb{P}_n$ is given by

$L(\gamma) = \int_0^1 \|\gamma(t)^{-1/2} \gamma'(t) \gamma(t)^{-1/2}\|_2 dt$,

where $\|\cdot\|_2$ denotes the Hilbert-Schmidt norm $\|A\|_2 = \sqrt{\mathrm{trace}(A^*A)}$. The length of the geodesic (i.e., the curve of minimal length) connecting two matrices $A, B \in \mathbb{P}_n$ then defines a distance function $\delta(A,B)$ on $\mathbb{P}_n$.

Now, the connection to the geometric mean is that $M_g(A,B)$ turns out to be equal to the midpoint of the geodesic connecting $A$ and $B$! That’s neat, but more importantly, it gives an insight into how to define the geometric mean of three (or more) positive definite matrices: simply define $M_g(A,B,C)$ to be the unique point $X_0$ in the manifold $\mathbb{P}_n$ that minimises, over $X$, the quantity

$\delta(A,X)^2 + \delta(B,X)^2 + \delta(C,X)^2$.

This “geometric” definition of the geometric mean of positive definite matrices turns out to have all the nice properties that a mean should have (monotonicity was an open problem, but it was resolved a few years ago by Lawson and Lim).
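For experimentation, the geodesic distance has a well-known closed form, $\delta(A,B) = \|\log(A^{-1/2} B A^{-1/2})\|_2$ (see the Bhatia-Holbrook paper). A small numpy sketch (with matrices of my own choosing) checks that $M_g(A,B)$ indeed sits at the midpoint of the geodesic:

```python
import numpy as np

def psd_fun(X, f):
    """Apply a scalar function to a positive definite matrix via eigh."""
    w, V = np.linalg.eigh(X)
    return (V * f(w)) @ V.conj().T

def delta(A, B):
    """Geodesic distance: the Hilbert-Schmidt norm of log(A^{-1/2} B A^{-1/2})."""
    Ahi = psd_fun(A, lambda w: 1 / np.sqrt(w))
    return np.linalg.norm(psd_fun(Ahi @ B @ Ahi, np.log), "fro")

def M_g(A, B):
    """Geometric mean, as in (*)."""
    Ah = psd_fun(A, np.sqrt)
    Ahi = psd_fun(A, lambda w: 1 / np.sqrt(w))
    return Ah @ psd_fun(Ahi @ B @ Ahi, np.sqrt) @ Ah

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[4.0, 0.0], [0.0, 1.0]])
M = M_g(A, B)

# The geometric mean is equidistant from A and B, halfway along the geodesic.
assert np.isclose(delta(A, M), delta(M, B))
assert np.isclose(delta(A, M) + delta(M, B), delta(A, B))
```

The same `delta` could in principle be fed to a numerical minimiser to approximate the three-matrix mean $M_g(A,B,C)$, though computing it properly is its own subject.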

This is a really nice mathematical story, but I was especially happy to hear that these noncommutative geometric means have found highly nontrivial (and important!) applications in various areas of engineering.

In various engineering applications, one makes a measurement whose result is a matrix. Since measurements are noisy, a first approximation for obtaining a clean estimate of the true value of the measured matrix is to repeat the measurement and take the average, or mean, of the measurements. In many applications the mean that is most successful in practice turned out to be the geometric mean described above. Although the problem of generalising the geometric mean to pairs of matrices, and then to tuples of matrices, was pursued by Bhatia and his colleagues mostly out of mathematical curiosity, it turned out to be very useful in practice.

#### 2. The Riemann hypothesis and a Schauder basis for $\ell^2$.

I also have to mention Bhaskar Bagchi’s talk, which stimulated me to go and read his paper “On Nyman, Beurling and Baez-Duarte’s Hilbert space reformulation of the Riemann hypothesis“. The main result (which is essentially an elegant reformulation of a quite old result of Nyman and Beurling; see this old note of Beurling) is as follows. Let $H$ be the weighted $\ell^2$ space of all sequences $(x_n)_{n=1}^\infty$ such that

$\sum_n \frac{|x_n|^2}{n^2} < \infty$.

In $H$ consider the sequence of vectors:

$\gamma_2 = (1/2, 0, 1/2, 0, 1/2, 0,\ldots)$

$\gamma_3= (1/3, 2/3, 0, 1/3, 2/3, 0, 1/3, 2/3, 0,\ldots)$

$\gamma_4 = (1/4, 2/4, 3/4, 0, 1/4, 2/4, 3/4, 0, \ldots)$

$\gamma_5 = (1/5, 2/5, 3/5, 4/5, 0, 1/5, \ldots)$,

etc. Then Bagchi’s main result is

Theorem: The Riemann Hypothesis is true if and only if the sequence $\{\gamma_2, \gamma_3, \ldots \}$ is total in $H$.
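The pattern behind the vectors $\gamma_k$ is simply the fractional part of $n/k$ as $n$ runs over the positive integers; a tiny sketch (my own) reproduces the lists above:

```python
from fractions import Fraction

def gamma(k, length=12):
    """Entries of gamma_k: the fractional part of n/k, for n = 1, 2, 3, ...."""
    return [Fraction(n % k, k) for n in range(1, length + 1)]

# Reproduce the vectors from the post.
assert gamma(2, 6) == [Fraction(1, 2), 0, Fraction(1, 2), 0, Fraction(1, 2), 0]
assert gamma(3, 6) == [Fraction(1, 3), Fraction(2, 3), 0,
                       Fraction(1, 3), Fraction(2, 3), 0]
```

Since all entries lie in $[0,1)$, each $\gamma_k$ trivially satisfies the weighted $\ell^2$ condition defining $H$.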

This is interesting, though such results can always be interpreted simply as a proof that the necessary and sufficient condition is hard. Clearly, nobody expects this to open up a fruitful path by which to approach the Riemann hypothesis, but it gives a nice perspective, as Bagchi writes in his paper:

[The theorem] reveals the Riemann hypothesis as a version of the central theme of harmonic analysis: that more or less arbitrary sequences (subject to mild growth restrictions) can be arbitrarily well approximated by superpositions of a class of simple periodic sequences (in this instance, the sequences $\gamma_k$).