### Where have all the functional equations gone (part II)

I’ll start off exactly where I stopped in the previous post: I will tell you my solution to the problem my PDEs lecturer (and later master’s thesis advisor) Paneah gave us:

Problem: Find all continuously differentiable solutions to the following functional equation:

(FE) $f(t) = f\left(\frac{t+1}{2} \right) + f \left(\frac{t-1}{2} \right) \,\, , \,\, t \in [-1,1] .$

Before writing a solution, let me say that I think it is a fun exercise for undergraduate students, and only calculus is required for solving it, so if you want to try it, now is your chance.

#### 1. Solution of Problem

Solution: Well, the assumption of continuous differentiability (which we were left to impose ourselves) begs us to differentiate the equation to get

(*) $g(t) = \frac{1}{2}g\left(\frac{t+1}{2}\right) + \frac{1}{2}g\left(\frac{t-1}{2}\right) \,\, , \,\, t \in [-1,1] .$

where $g = f'$, so $g$ is continuous. This can be considered as a functional equation in its own right, and our goal is now to find all continuous solutions to (*).

Claim 1: If $g$ is a continuous solution of (*), then $g$ is constant.

We will prove the claim below. Assuming the claim for the moment, we deduce that $f' = c$ for some $c \in \mathbb{R}$, thus $f(x) = cx + b$. Plugging this into the original functional equation (FE), we find that $ct + b = ct + 2b$, so $b = 0$. Thus, all continuously differentiable solutions of (FE) are of the form $f(x) = cx$, for some $c \in \mathbb{R}$. That any function of this form satisfies (FE) is obvious. Thus it remains to prove Claim 1. At this point it is convenient to introduce some terminology.
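The deduction above is easy to check numerically. Here is a small sketch (not part of the proof): $f(x) = cx$ satisfies (FE) identically, while $f(x) = cx + b$ leaves the constant residual $-b$, forcing $b = 0$. The constants and the grid are arbitrary choices for the check.

```python
# Sanity check (not a proof): f(x) = c*x satisfies (FE), while
# f(x) = c*x + b leaves the constant residual -b.
def fe_residual(f, t):
    """f(t) - f((t+1)/2) - f((t-1)/2), the defect in (FE) at t."""
    return f(t) - f((t + 1) / 2) - f((t - 1) / 2)

c, b = 3.0, 1.0                          # arbitrary constants for the check
ts = [-1 + 0.05 * i for i in range(41)]  # grid on [-1, 1]

linear = lambda t: c * t
affine = lambda t: c * t + b

assert all(abs(fe_residual(linear, t)) < 1e-12 for t in ts)
assert all(abs(fe_residual(affine, t) + b) < 1e-12 for t in ts)  # residual is -b
```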

#### 2. Brief on dynamical systems

A dynamical system is a topological space $X$ together with a family of continuous maps $\delta_1, \ldots, \delta_N : X \rightarrow X$. The main problem in topological dynamics is: given a point $x \in X$, how does it move around $X$ under the influence of the maps $\delta_1, \ldots, \delta_N$?

Definition 1: Given a dynamical system $(X, \delta_1, \ldots, \delta_N)$ and a point $x \in X$, the orbit of $x$ is the set

$O(x) = \{x\} \cup \{\delta_{i_1} \circ \cdots \circ \delta_{i_k} (x) : k \in \mathbb{N}, i_1, \ldots, i_k \in \{1, \ldots, N\}\}.$

Thus, the orbit of $x$ is the set of all points in $X$ which can be reached from $x$ by iterating the maps $\delta_1, \ldots, \delta_N$.
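For concreteness, here is a small Python sketch of Definition 1, computing a truncated orbit (all words of length at most a fixed depth). The two maps below are illustrative choices; they happen to be the halving maps that reappear in the Example further down.

```python
from itertools import product

# Truncated orbit of x: images of x under all words of length <= depth
# in the given maps (Definition 1, cut off at a finite word length).
def orbit(x, maps, depth):
    points = {x}
    for k in range(1, depth + 1):
        for word in product(maps, repeat=k):
            y = x
            for delta in word:
                y = delta(y)
            points.add(y)
    return points

d1 = lambda t: (t + 1) / 2  # delta_1
d2 = lambda t: (t - 1) / 2  # delta_2
pts = orbit(0.0, [d1, d2], depth=3)
```

Already at depth 3 the orbit of $0$ contains points spread across $[-1,1]$, e.g. $\pm 0.5$, $0.25$, $0.75$.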

Definition 2: A dynamical system $(X, \delta_1, \ldots, \delta_N)$ is said to be minimal if for all $x \in X$, the orbit of $x$ is dense in $X$, i.e.,

$\overline{O(x)} = X.$

Minimal dynamical systems turn out to be very useful for proving uniqueness of solutions to certain functional equations, as we shall see.

Example: Consider the dynamical system $([-1,1], \delta_1, \delta_2)$, where

$\delta_1(t) = \frac{t+1}{2}$  and  $\delta_2(t) = \frac{t-1}{2}$.

One can use induction on $k$ to show that given any points $t_0, t_1 \in [-1,1]$, there are $i_1, \ldots, i_k \in \{1,2\}$ such that

$|\delta_{i_k} \circ \cdots \circ \delta_{i_1}(t_0) - t_1| \leq 2^{1-k}$.

Letting $k \rightarrow \infty$, we see that the orbit of every point is dense, so this dynamical system is minimal.
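One can illustrate this numerically: each word of length $k$ in $\delta_1, \delta_2$ maps $[-1,1]$ onto a dyadic subinterval of length $2^{1-k}$, and these intervals tile $[-1,1]$, so the depth-$k$ images of any $t_0$ come within $2^{1-k}$ of every target. A sketch (starting point, depth, and target grid are arbitrary):

```python
from itertools import product

# Depth-k orbit points of t0: images of t0 under all 2**k words of
# length exactly k in delta_1, delta_2.
d1 = lambda t: (t + 1) / 2
d2 = lambda t: (t - 1) / 2

def depth_k_points(t0, k):
    pts = []
    for word in product([d1, d2], repeat=k):
        y = t0
        for delta in word:
            y = delta(y)
        pts.append(y)
    return pts

t0, k = 0.3, 8
pts = depth_k_points(t0, k)
targets = [-1 + 0.01 * i for i in range(201)]  # grid of targets in [-1, 1]
# every target is within 2**(1-k) of some depth-k orbit point
assert all(min(abs(p - t) for p in pts) <= 2 ** (1 - k) for t in targets)
```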

#### 3. Proof of Claim 1

Proof of Claim 1: It will be useful to consider the dynamical system $([-1,1],\delta_1, \delta_2)$ of the above Example. Let $t_0 \in [-1,1]$ be a point where $g$ attains its maximum. Then the functional equation (*) can hold at $t = t_0$ only if $g$ attains its maximum at $\delta_1(t_0)$ and at $\delta_2(t_0)$: this readily follows from the fact that the functional equation merely says that $g(t_0)$ is the mean of $g(\delta_1(t_0))$ and $g(\delta_2(t_0))$. Repeating the argument, we find that the maximum of $g$ is attained at the points $\delta_1\circ \delta_1 (t_0), \delta_1 \circ \delta_2(t_0), \delta_2 \circ \delta_1(t_0)$ and $\delta_2 \circ \delta_2(t_0)$. By induction, the maximum of $g$ is attained at all points in the orbit of $t_0$. By the Example above, the orbit of $t_0$ is dense in $[-1,1]$, so $g$ attains its maximum on a dense set. Since $g$ is continuous, it must therefore be constant. That concludes the proof of the claim, and therefore also concludes the solution of the problem.

#### 4. Alternative solution to Problem

Here is a somewhat different solution to the problem, which (I later learned) is a simplification of Paneah’s approach to uniqueness in $P$-configurations.

As before, let $t_0$ be a point where $g$ attains its maximum. Then $g$ must attain its maximum also at $\delta_1(t_0)$. It follows inductively that $g$ attains its maximum at the sequence of points $\{ \delta_1^{(n)}(t_0)\}_{n=1}^\infty$, where $\delta_1^{(n)}$ denotes $\delta_1$ composed with itself $n$ times. But clearly $\delta_1^{(n)}(t_0) \rightarrow 1$, thus (as $g$ is continuous) the maximum of $g$ is achieved at the point $1$. But the same argument works for the minimum of $g$, so the minimum of $g$ must also be achieved at $1$. This is possible only if $g$ is constant, since then $\max g = g(1) = \min g$.
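The convergence $\delta_1^{(n)}(t_0) \rightarrow 1$ is immediate from $1 - \delta_1(t) = (1-t)/2$: the distance to $1$ halves at every step. A quick numerical check (starting point arbitrary):

```python
# Iterating delta_1 alone: since 1 - delta_1(t) = (1 - t) / 2,
# the distance to 1 halves at each step, so delta_1^(n)(t0) -> 1.
d1 = lambda t: (t + 1) / 2

t = -0.7  # arbitrary starting point in [-1, 1]
for n in range(30):
    t = d1(t)
assert abs(t - 1) < 1e-8  # 1 - t is (1 - (-0.7)) / 2**30, up to rounding
```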

#### 5. Complications and guided dynamical systems

Before moving forward, let me briefly indicate the trickier problem that Paneah treated in his papers on this subject. Suppose that the functional equation (*) is replaced with

(**)  $g(t) = a_1(t) g\left( \frac{t+1}{2} \right) + a_2(t) g\left(\frac{t-1}{2} \right) \,\, , \,\, t \in [-1,1],$

where $a_1, a_2$ are two non-negative functions satisfying $a_1(t) + a_2(t) = 1$. Then both of the above approaches fail to prove that the only solutions to (**) are constants: if, say, $a_1(t_0) = 0$ (where $g(t_0) = \max g$), then we cannot conclude that the maximum of $g$ is attained at $\delta_1(t_0)$; in fact, it might not be. Indeed, there could be non-constant solutions to (**). To decide when this happens one has to consider guided dynamical systems (as I call them), which were introduced by Paneah for this purpose. In a nutshell, a guided dynamical system is a dynamical system $(X,\delta_1, \ldots, \delta_N)$ together with closed sets $\Lambda_1, \ldots, \Lambda_N \subset X$ such that “one may use $\delta_i$ only on points $x \notin \Lambda_i$”. In other words, a guided dynamical system is a dynamical system whose evolution has some obstructions: in a sense the evolution is generated not by a semigroup, but by something more like a semigroupoid. As I’ve said, Paneah used these systems to study functional equations (and he was able to apply them to problems in integral geometry and PDEs), and I also developed the theory to a small extent in my master’s thesis. As far as I know this interesting notion has not been studied by others (though I have once seen a work in a similar spirit, see this paper). I am not going to talk about generalizations in this direction any further. I will go on, but in another direction.
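To make the notion a bit more concrete, here is a toy sketch of a guided orbit computation in the spirit of the definition above: $\delta_i$ may be applied only at points outside $\Lambda_i$. The obstruction sets below ($\Lambda_1 = [0.9, 1]$, $\Lambda_2 = \emptyset$) are invented purely for illustration and have nothing to do with Paneah's actual examples.

```python
# Toy guided orbit: maps[i] may be applied only at points where
# blocked[i] is False (i.e. points outside Lambda_i).
def guided_orbit(x, maps, blocked, depth):
    points = {x}
    frontier = {x}
    for _ in range(depth):
        nxt = set()
        for p in frontier:
            for delta, is_blocked in zip(maps, blocked):
                if not is_blocked(p):
                    nxt.add(delta(p))
        frontier = nxt - points
        points |= frontier
    return points

d1 = lambda t: (t + 1) / 2
d2 = lambda t: (t - 1) / 2
blocked = [lambda t: 0.9 <= t <= 1,  # Lambda_1 = [0.9, 1] (invented)
           lambda t: False]          # Lambda_2 empty: delta_2 always allowed
pts = guided_orbit(0.0, [d1, d2], blocked, depth=6)
# 0.96875 = delta_1(0.9375) is now unreachable from 0: its only preimage
# in [-1, 1] under either map is 0.9375, where delta_1 is blocked.
assert 0.9375 in pts and 0.96875 not in pts
```

The obstruction punches a hole in the orbit, which is exactly why the maximum-propagation arguments above can break down for (**).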

#### 6. Another problem

Let me stop with mathematics and return to my storytelling. After I showed Paneah my solution to the Problem above, I was very pleased. To recap, the answer to the problem is

All continuously differentiable solutions $f$ to the equation

(FE) $f(t) = f\left(\frac{t+1}{2}\right) + f\left( \frac{t-1}{2}\right) \,\, , \,\, t \in [-1,1]$

are of the form $f(x) = cx$.

Then Paneah challenged me further, and asked: well, what about the continuous solutions?

Well, what about them? Could there be continuous solutions to the functional equation (FE) which are not of the form $f(x) = cx$?