## Duration of a trajectory

Suppose we launch an object straight up at a speed $v_0$, and we wish to find how long it will take to return. It is well known that the classical escape velocity is given by $v_e=\sqrt{\frac{2MG}{R}}$ By examining the energy equation, we find that the speed when the object is a distance r from the center of the planet is given by: $v(r)=v_e\sqrt{\frac{R}{r}-\gamma}$ Where $\gamma=1-\frac{v_0^2}{v_e^2}$ The object rises until $v=0$, i.e. to $r=R/\gamma$. To find the travel time, we integrate (substituting $s=R/r$): $T=2\int_{R}^{R/\gamma}\frac{dr}{v(r)}=\frac{2}{v_e}\int_{R}^{R/\gamma}\frac{dr}{\sqrt{\frac{R}{r}-\gamma}}=\frac{2R}{v_e}\int_{\gamma}^{1}\frac{ds}{s^2\sqrt{s-\gamma}}$ $T=\frac{2R}{v_e}\frac{\tan^{-1}\left ( \sqrt{\frac{1}{\gamma}-1} \right )+\sqrt{\gamma-\gamma^2}}{\gamma^{3/2}}=\frac{2R}{v_e}\frac{\sin^{-1}(u)+u\sqrt{1-u^2}}{(1-u^2)^{3/2}}$ Where $u=v_0/v_e$.
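The closed form can be sanity-checked against direct numerical integration of $T=2\int dr/v(r)$; a minimal sketch (function names are ours, and the midpoint rule is used to step over the integrable singularity at the top of the trajectory):

```python
import math

def travel_time(v0, ve, R):
    # closed form: T = (2R/ve) * (arcsin(u) + u*sqrt(1-u^2)) / (1-u^2)^(3/2), u = v0/ve
    u = v0 / ve
    return (2 * R / ve) * (math.asin(u) + u * math.sqrt(1 - u * u)) / (1 - u * u) ** 1.5

def travel_time_numeric(v0, ve, R, steps=200_000):
    # midpoint rule on T = 2 * integral_R^{R/gamma} dr / (ve*sqrt(R/r - gamma));
    # the integrand has an integrable 1/sqrt singularity at r = R/gamma
    gamma = 1 - (v0 / ve) ** 2
    h = (R / gamma - R) / steps
    total = 0.0
    for i in range(steps):
        r = R + (i + 0.5) * h
        total += h / (ve * math.sqrt(R / r - gamma))
    return 2 * total
```

The two agree to a few parts in a thousand, and the time grows with $v_0$, diverging as $v_0 \to v_e$.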

## Optimal Path through a Planet

We want to find the fastest path through a planet of radius R, connecting two points $2\alpha$ radians apart (great-circle angle). We assume the planet is of uniform density. As is well known, the acceleration due to gravity at a radius r from the center of such a planet is given by: $a=-gr/R$ Where $g$ is the surface gravitational acceleration. Thus, if an object falls from the surface along a path through the planet, its speed at a distance r from the center will be given by $\tfrac{1}{2}mv^2=\tfrac{1}{2}m\frac{g}{R}\left ( R^2-r^2 \right )$ $v(r)=\sqrt{\frac{g}{R}} \sqrt{R^2-r^2}$ Let us suppose it falls along the path specified by the function $r(\theta)$, where r is even and $r(\pm\alpha)=R$. The total time is given by $T=2\int_{0}^{\alpha}\frac{d\ell}{v}=2\sqrt{\frac{R}{g}}\int_{0}^{\alpha}\frac{\sqrt{r^2(\theta)+r'^2(\theta)}}{\sqrt{R^2-r^2(\theta)}}d\theta$ In order to obtain conditions for the optimal path, we use the calculus of variations. The Lagrangian is $L(r,r',\theta)=\frac{\sqrt{r^2+r'^2}}{\sqrt{R^2-r^2}}$ Using the Beltrami identity, we find: $\frac{\sqrt{r^2+r'^2}}{\sqrt{R^2-r^2}}-\frac{r'^2}{{\sqrt{r^2+r'^2}}{\sqrt{R^2-r^2}}}=\frac{r^2}{{\sqrt{r^2+r'^2}}{\sqrt{R^2-r^2}}}=C$ Let $1+ \tfrac{1}{C^2}=1/q^2$. Rearranging, we find: $r'=r\sqrt{\frac{\left (1+ \tfrac{1}{C^2} \right )r^2-R^2}{R^2-r^2}}=\frac{r}{q}\sqrt{\frac{r^2-R^2q^2}{R^2-r^2}}$ As $r'(0)=0$, this implies that $r(0)=Rq$, i.e. $q=r(0)/R$. Let us make the change of variables $u=r^2/R^2$.
This then gives: $u'=2u\sqrt{\frac{\tfrac{1}{q^2}u-1}{1-u}}$ $u(0)=q^2$ In order to determine q, we can integrate the differential equation: $\frac{1}{2u} \sqrt{\frac{1-u}{\tfrac{1}{q^2}u-1}}du=d\theta$ $\int_{q^2}^{1}\frac{1}{2u} \sqrt{\frac{1-u}{\tfrac{1}{q^2}u-1}}du=\frac{\pi}{2}(1-q)=\int_{0}^{\alpha}d\theta=\alpha$ Thus $q=1-\frac{2\alpha}{\pi}$ We can then find the total travel time: $T=2\sqrt{\frac{R}{g}}\int_{0}^{\alpha}\frac{\sqrt{r^2(\theta)+r'^2(\theta)}}{\sqrt{R^2-r^2(\theta)}}d\theta=2\sqrt{\frac{R}{g}}\int_{Rq}^{R}\frac{\sqrt{r^2+r'^2}}{\sqrt{R^2-r^2}}\frac{dr}{r'}$ $T=2\sqrt{\frac{R}{g}}\int_{Rq}^{R} \frac{q}{r} \sqrt{r^2+\frac{r^2}{q^2}{\frac{r^2-R^2q^2}{R^2-r^2}}}\frac{dr}{\sqrt{r^2-R^2q^2}}$ $T=\sqrt{\frac{R}{g}}\sqrt{1-q^2}\int_{Rq}^{R} \frac{2rdr}{\sqrt{R^2-r^2} \sqrt{r^2-R^2q^2}}$ Substituting $x=r^2/R^2$: $T=\sqrt{\frac{R}{g}}\sqrt{1-q^2}\int_{q^2}^{1} \frac{dx}{\sqrt{1-x} \sqrt{x-q^2}}$ $T=\pi \sqrt{\frac{R}{g}}\sqrt{1-q^2}=2\sqrt{\frac{R}{g}}\sqrt{\pi\alpha-\alpha^2}$ Below we show several trajectories along the optimal path for several values of $\alpha$:

In fact, these solutions are hypocycloids.
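Both the half-angle integral that fixes $q$ and the final expression for $T$ are easy to verify numerically; a quick sketch (function names ours):

```python
import math

def tunnel_time(alpha, R=6.371e6, g=9.81):
    # T = 2*sqrt(R/g)*sqrt(pi*alpha - alpha^2); alpha is half the great-circle angle
    return 2 * math.sqrt(R / g) * math.sqrt(math.pi * alpha - alpha ** 2)

def half_angle(q, steps=200_000):
    # midpoint rule on alpha = integral_{q^2}^{1} (1/(2u)) * sqrt((1-u)/(u/q^2 - 1)) du,
    # which should come out to (pi/2)*(1-q)
    lo, hi = q * q, 1.0
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        u = lo + (i + 0.5) * h
        total += h / (2 * u) * math.sqrt((1 - u) / (u / (q * q) - 1))
    return total
```

For the antipodal case $\alpha=\pi/2$ (so $q=0$, a straight fall through the center) this gives $T=\pi\sqrt{R/g}\approx 42$ minutes, the familiar gravity-train figure.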

## Definition

Golomb's sequence, named after Solomon Golomb, is a curious sequence of whole numbers that describes itself. It is defined in the following way: it is a non-decreasing sequence of whole numbers whose nth term gives the number of times n occurs in the sequence, and whose first term is 1. From this we can begin constructing it: the second element must be greater than 1, as there is only one 1; it must be 2, and so must be the third element. Given this, there must be two 3's, and from here on each new term can be read off from the values already generated. The first several terms of the sequence are: $1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, \\ 9, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12,...$

## Recurrence Relation

The sequence can be given an explicit recurrence relation by exploiting the self-describing property: to determine the next term in the sequence, go back the number of times that the previous term's value occurs (this will put you at the next-smallest value), then add one. For example, to determine the 12th term (6): the value of the 11th term is 5, which occurs 3 times. Step back that many terms from the 12th position (to the 9th term: 5), then add one to that value (6). This then gives the recurrence relation: $a(n+1)=1+a\left ( n+1-a(a(n)) \right )$ Where $a(1)=1$.
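The recurrence translates directly into code; a minimal sketch (1-indexed, to match the formula):

```python
def golomb(n):
    # a(k+1) = 1 + a(k+1 - a(a(k))), with a(1) = 1
    a = [0, 1]  # a[0] is a dummy entry so that a[k] is the k-th term
    for k in range(1, n):
        a.append(1 + a[k + 1 - a[a[k]]])
    return a[1:]
```

`golomb(15)` reproduces the opening terms listed above.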

## Asymptotic Behavior

The recurrence relation allows us to give an asymptotic expression for the value of the sequence. Let us suppose the sequence grows like $a(n)=A n^\alpha$ Let us put this into the recurrence relation: $A(n+1)^\alpha=1+A\left ( n+1-A(A n^\alpha)^\alpha \right )^\alpha$ Simplifying and rearranging, we obtain: $1=\frac{1}{A(n+1)^\alpha}+\left (1-A^{1+\alpha}\frac{n^{\alpha^2}}{n+1} \right )^\alpha$ As $\alpha<1$, $\frac{n^{\alpha^2}}{n+1}$ goes to zero. For small x, $(1+x)^b\rightarrow 1+bx$. Thus, asymptotically: $1\approx\frac{1}{A(n+1)^\alpha}+1-\alpha A^{1+\alpha}\frac{n^{\alpha^2}}{n+1}$ $\alpha A^{2+\alpha}n^{\alpha^2}(n+1)^{\alpha-1} \approx 1$ For this to hold for all large n, the powers of n must cancel and the constant factor must equal 1. Thus it must be the case that $\alpha^2+\alpha-1=0$ $A=\alpha^{-\frac{1}{2+\alpha}}$ The solutions to the first equation are $\alpha=\left \{\varphi-1,-\varphi \right \}$ Where $\varphi$ is the golden ratio. As the exponent is clearly positive, we take $\alpha=\varphi-1=1/\varphi$; then $2+\alpha=\varphi^2$ and $A=\varphi^{1/\varphi^2}=\varphi^{2-\varphi}$, so we find the sequence is asymptotic to: $a(n)\rightarrow \varphi^{2-\varphi}n^{\varphi-1}$ Below we plot the ratio of these two expressions:
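The asymptotic can be checked against the sequence itself; a small sketch reusing the recurrence (with the caveat that convergence is slow and oscillatory):

```python
import math

phi = (1 + math.sqrt(5)) / 2

def golomb_term(n):
    # n-th term via a(k+1) = 1 + a(k+1 - a(a(k))), a(1) = 1
    a = [0, 1]
    for k in range(1, n):
        a.append(1 + a[k + 1 - a[a[k]]])
    return a[n]

n = 100_000
ratio = golomb_term(n) / (phi ** (2 - phi) * n ** (phi - 1))
```

The ratio hovers near 1, within a few percent at this n.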

## Definition and Background

A continued fraction is a representation of a number $x$ in the form $x=a_0+\cfrac{b_0}{a_1+\cfrac{b_1}{a_2+\cfrac{b_2}{a_3+\cfrac{b_3}{\ddots}}}}$ Often, the b's are taken to be all 1's and the a's are integers. This is called the canonical or simple form. There are numerous ways of representing continued fractions. For instance, $x=a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cfrac{1}{\ddots}}}}$ can be represented as $x=a_0+\overset{\infty}{\underset{k=1}{\mathrm{K}}}\frac{1}{a_k}$ Or as $\left [ a_0;a_1,a_2,a_3,... \right]$

## Construction Algorithm

The continued fraction terms can be determined as follows: given $x$, set $x_0=x$. Then $a_k=\left \lfloor x_k \right \rfloor$ $x_{k+1}=\frac{1}{x_k-a_k}$ Continue until $x_k=a_k$, which happens precisely when $x$ is rational; for irrational $x$ the expansion never terminates.
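A direct transcription of the algorithm (exact if x is given as a `Fraction`; with floats, roundoff limits the reliable depth to roughly 15 terms):

```python
import math
from fractions import Fraction

def cf_terms(x, nmax=30):
    # a_k = floor(x_k); x_{k+1} = 1/(x_k - a_k); stop when x_k is an integer
    terms = []
    for _ in range(nmax):
        a = math.floor(x)
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return terms
```

For example, `cf_terms(Fraction(649, 200))` gives `[3, 4, 12, 4]`, and the golden ratio yields a run of 1's.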

## Convergents

The convergents of a continued fraction are the rational numbers resulting from taking the first n terms of the continued fraction. Let $P_n$ and $Q_n$ be the numerators and denominators respectively of the nth convergent (the one that includes $a_n$). It is not difficult to show that $P_n=a_nP_{n-1}+P_{n-2}$ $Q_n=a_nQ_{n-1}+Q_{n-2}$ An alternate way of saying this is that $\begin{bmatrix} a_n & 1\\ 1 & 0 \end{bmatrix} \begin{bmatrix} P_{n-1} & Q_{n-1}\\ P_{n-2} & Q_{n-2} \end{bmatrix} = \begin{bmatrix} P_{n} & Q_{n}\\ P_{n-1} & Q_{n-1} \end{bmatrix}$ Where $\begin{bmatrix} P_{-1} & Q_{-1}\\ P_{-2} & Q_{-2} \end{bmatrix}= \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}$ And therefore ${}^L\prod^n_{k=0} \begin{bmatrix} a_k & 1\\ 1 & 0 \end{bmatrix} = \begin{bmatrix} P_{n} & Q_{n}\\ P_{n-1} & Q_{n-1} \end{bmatrix}$ Let $p_n=\frac{P_n}{P_{n-1}}$ $q_n=\frac{Q_n}{Q_{n-1}}$ Then $p_n=a_n+\frac{1}{p_{n-1}}$ $q_n=a_n+\frac{1}{q_{n-1}}$ We find that $\frac{P_{n+1}}{Q_{n+1}}=a_0+\sum_{k=0}^{n}\frac{(-1)^k}{Q_kQ_{k+1}}$ And thus $\left | x- \frac{P_{n}}{Q_{n}}\right |<\frac{1}{Q_nQ_{n+1}}$ As $a_n \geq 1$, $Q_n \geq F_n$, the nth Fibonacci number. This line of reasoning leads to Hurwitz's theorem: for any irrational number x, there exist infinitely many ratios $P/Q$ such that $\left | x-\frac{P}{Q} \right |<\frac{k}{Q^2}$ whenever $k \geq 1/\sqrt{5}$, and the constant $1/\sqrt{5}$ cannot be improved in general (the golden ratio is the extremal case).
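The recurrences for $P_n$, $Q_n$ and the error bound can be exercised on the continued fraction of $\pi=[3;7,15,1,292,...]$; a short sketch:

```python
import math
from fractions import Fraction

def convergents(terms):
    # P_n = a_n*P_{n-1} + P_{n-2},  Q_n = a_n*Q_{n-1} + Q_{n-2}
    P, Pm = 1, 0   # P_{-1}, P_{-2}
    Q, Qm = 0, 1   # Q_{-1}, Q_{-2}
    out = []
    for a in terms:
        P, Pm = a * P + Pm, P
        Q, Qm = a * Q + Qm, Q
        out.append(Fraction(P, Q))
    return out

cs = convergents([3, 7, 15, 1, 292])
```

This recovers 22/7 and 355/113, and $|\pi - 355/113|$ indeed beats $1/(Q_3 Q_4) = 1/(113\cdot 33102)$.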

## Periodic Continued Fractions

Suppose that for $k \geq N$, $a_{k+M}=a_k$. Let $[a_0;a_1,a_2,...a_{N-2}]=\frac{P_{Y1}}{Q_{Y1}}$ $[a_0;a_1,a_2,...a_{N-1}]=\frac{P_{Y2}}{Q_{Y2}}$ $\left [a_N;a_{N+1},a_{N+2},...a_{N+M-2} \right ]=\frac{P_{Z1}}{Q_{Z1}}$ $\left [a_N;a_{N+1},a_{N+2},...a_{N+M-1} \right ]=\frac{P_{Z2}}{Q_{Z2}}$ Then x satisfies the formula $x=\frac{P_{Y2}\cdot y+P_{Y1}}{Q_{Y2} \cdot y+Q_{Y1}}$ Where y, the periodic tail $[a_N;a_{N+1},a_{N+2},...]$, satisfies $y=\frac{P_{Z2}\cdot y+P_{Z1}}{Q_{Z2} \cdot y+Q_{Z1}}$ Since the latter is a quadratic in y, a continued fraction will be eventually periodic if and only if its value is an irrational root of some quadratic polynomial with integer coefficients (Lagrange's theorem).
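As a worked example of these fixed-point equations, take $x=\sqrt{2}=[1;2,2,2,...]$: the periodic tail $y=[2;2,2,...]$ satisfies $y=2+1/y$, a quadratic, and the head contributes $x=1+1/y$:

```python
import math

# the periodic tail y = [2; 2, 2, ...] satisfies y = 2 + 1/y, i.e. y^2 - 2y - 1 = 0
y = 1 + math.sqrt(2)
# attach the non-periodic head a_0 = 1 to recover x = [1; 2, 2, 2, ...]
x = 1 + 1 / y
```

And x comes out as $\sqrt{2}$, consistent with $x^2-2=0$.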

## Generic Continued Fractions

Let x be uniformly chosen between 0 and 1. We define a sequence of random variables as follows: $\xi_0=x$ $\xi_{n+1}=\frac{1}{\xi_n}-\left \lfloor \frac{1}{\xi_n} \right \rfloor$ Clearly, if $x=[0;a_1,a_2,a_3,...]$ Then $\xi_n=[0;a_{n+1},a_{n+2},a_{n+3},...]$ Let us assume that, asymptotically, the $\xi$'s approach a single distribution. Based on our definitions (note that $\xi_{n+1}<z$ with $\left \lfloor 1/\xi_n \right \rfloor=k$ exactly when $\tfrac{1}{k+z}<\xi_n<\tfrac{1}{k}$), this would imply that $P(\xi_{n+1} < z)=\sum_{k=1}^{\infty} P \left (\frac{1}{k+z} < \xi_n < \frac{1}{k} \right )$ Differentiating both sides gives the required relationship: $f_\xi(z)=\sum_{k=1}^{\infty}\frac{f_\xi\left ( \tfrac{1}{k+z} \right )}{(k+z)^2}$ Let us test the function $f_\xi(z)=\frac{A}{1+z}$ $\sum_{k=1}^{\infty}\frac{A}{1+\tfrac{1}{k+z}}\frac{1}{(k+z)^2}=\sum_{k=1}^{\infty}\frac{A}{(1+k+z)(k+z)}$ $\sum_{k=1}^{\infty}\frac{A}{(1+k+z)(k+z)}=\sum_{k=1}^{\infty}A\left (\frac{1}{k+z}-\frac{1}{k+z+1} \right )=\frac{A}{1+z}$ It can be proved more rigorously that this is indeed the asymptotic probability density function, with $A=1/\ln(2)$ fixed by normalization. Thus $P(\xi_{n} < z)=\log_2(1+z)$ From this we can easily find the asymptotic density function for the continued fraction terms. The probability that $a_{n+1}=k$ is the same as the probability that $\left \lfloor \tfrac{1}{\xi_n} \right \rfloor=k$. This is then $P(a_{n+1}=k)=P\left ( \frac{1}{k+1} < \xi_n \leq \frac{1}{k} \right )=\log_2(1+\tfrac{1}{k})-\log_2(1+\tfrac{1}{k+1})$ $P(a_{n+1}=k)=\log_2\left ( \frac{(k+1)^2}{k(k+2)} \right )=\log_2\left ( 1+\frac{1}{k(k+2)} \right )$ This is called the Gauss-Kuzmin Distribution.
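The Gauss-Kuzmin law is easy to see empirically: push uniform samples through a few iterations of the map $\xi \mapsto 1/\xi - \lfloor 1/\xi \rfloor$ (enough to approach the limiting distribution) and read off the next term. A minimal simulation sketch (sample counts and seed are arbitrary):

```python
import math
import random

def gauss_kuzmin_prob(k):
    # P(a_n = k) -> log2(1 + 1/(k(k+2)))
    return math.log2(1 + 1 / (k * (k + 2)))

random.seed(1)
N = 200_000
hits = 0
for _ in range(N):
    xi = random.random()
    ok = True
    for _ in range(5):  # a few Gauss-map steps get close to the limiting distribution
        if xi == 0.0:
            ok = False
            break
        xi = 1 / xi - math.floor(1 / xi)
    if ok and xi > 0.0 and math.floor(1 / xi) == 1:
        hits += 1
freq = hits / N
```

The empirical frequency of the term 1 lands near $\log_2(4/3) \approx 0.415$, as predicted; the probabilities also sum to 1, since the product inside the logarithm telescopes to 2.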

From this we can then easily find the asymptotic geometric mean of the terms $\underset{n \to \infty}{\lim}\sqrt[n]{\prod_{k=1}^{n}a_k}=\exp\left (\underset{n \to \infty}{\lim} \frac{1}{n}\sum_{k=1}^{n} \ln(a_k)\right )=\exp\left ( E(\ln(a_k)) \right )$$\underset{n \to \infty}{\lim}\sqrt[n]{\prod_{k=1}^{n}a_k}= \exp\left (\sum_{j=1}^{\infty}P(a_k=j)\ln(j) \right )$$\underset{n \to \infty}{\lim}\sqrt[n]{\prod_{k=1}^{n}a_k}= \prod_{j=1}^\infty \left ( 1+\frac{1}{j(j+2)} \right )^{\log_2(j)}=2.685452001...=K_0$ This value is called Khinchin's Constant.
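The product converges slowly but steadily, so a direct partial product already gets several digits (tail error is roughly $(\log_2 J)/J$ after $J$ factors):

```python
import math

# ln K0 = sum_j log2(j) * ln(1 + 1/(j(j+2)))
ln_K = 0.0
for j in range(2, 200_000):  # the j = 1 factor is 1 and can be skipped
    ln_K += math.log2(j) * math.log(1 + 1 / (j * (j + 2)))
khinchin = math.exp(ln_K)
```

This reproduces $K_0 \approx 2.6854$ to about three decimal places.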

Let us now look at the asymptotic behavior of the convergents. Namely, we wish to examine the asymptotic behavior of the denominators. First note that $\xi_n=\frac{1}{\xi_{n-1}}-a_n$ If we let $y_n=1/\xi_n$, we then have $y_{n-1}=a_n+\frac{1}{y_n}$ From above we have that $q_n=a_n+\frac{1}{q_{n-1}}$ As, asymptotically, $\xi_n \sim \xi_{n-1}$, this implies that, asymptotically, $y_n \sim y_{n-1} \sim 1/\xi_n$ and therefore $q_n \sim q_{n-1} \sim 1/\xi_n$. Thus $f_q(z)=\begin{cases} \frac{1}{\ln(2)}\frac{1}{z^2}\frac{1}{1+1/z} & z > 1 \\ 0 & z \leq 1 \end{cases}$ As $Q_n=\prod_{k=1}^{n}q_k$ We have $\underset{n \to \infty}{\lim}\sqrt[n]{Q_n}=\underset{n \to \infty}{\lim}\sqrt[n]{\prod_{k=1}^{n}q_k}=\exp\left (\underset{n \to \infty}{\lim}\frac{1}{n}\sum_{k=1}^{n}\ln(q_k) \right )$$\underset{n \to \infty}{\lim}\sqrt[n]{Q_n}=\exp\left (E(\ln(q_n)) \right )= \exp\left (\int_{1}^{\infty}\ln(z)\frac{1}{z^2}\frac{1}{\ln(2)}\frac{1}{1+1/z}dz \right )$$\underset{n \to \infty}{\lim}\sqrt[n]{Q_n}= \exp\left (-\frac{1}{\ln(2)}\int_{0}^{1}\frac{\ln(z)}{1+z}dz \right )$$\underset{n \to \infty}{\lim}\sqrt[n]{Q_n}=\exp\left ( \frac{\pi^2}{12\ln(2)} \right )=3.275823...$ This value (or sometimes its natural log) is called Lévy's constant.
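Both the closed form and the defining integral can be checked numerically; a brief sketch:

```python
import math

levy = math.exp(math.pi ** 2 / (12 * math.log(2)))

# midpoint rule on E(ln q) = -(1/ln 2) * integral_0^1 ln(z)/(1+z) dz
steps = 200_000
h = 1.0 / steps
integral = 0.0
for i in range(steps):
    z = (i + 0.5) * h
    integral += math.log(z) / (1 + z) * h
E_ln_q = -integral / math.log(2)
```

Exponentiating `E_ln_q` reproduces 3.275823..., and the integral itself is $-\pi^2/12$.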

We want to know how efficient continued fractions are for representing numbers relative to place-value expansions. Suppose we are working in base b, and we ask how many terms in the continued fraction expansion are required to obtain an approximation good to m base-b digits. We will have obtained such an approximation when the error is less than $b^{-m}$ but greater than $b^{-(m+1)}$. From above we have $\left | x- \frac{P_{n}}{Q_{n}}\right |<\frac{1}{Q_nQ_{n+1}}$ Thus, roughly, $b^{-(m+1)} < \left | x- \frac{P_{n}}{Q_{n}}\right | < \frac{1}{Q_nQ_{n+1}} < \frac{1}{Q_n^2} \leq b^{-m}$ Rearranging, we have $b^m \leq Q_n^2 < b^{m+1}$ $b^{\frac{m}{2n}} \leq \sqrt[n]{Q_n} < b^{\frac{m+1}{2n}}$ Thus, as the center expression approaches a limit for large n, it follows that $m/n$ does as well. Namely, by rearranging, we find that for n the number of continued fraction terms needed to express x in base b up to m digits, $\underset{m,n \to \infty}{\lim}\frac{m}{n}=\frac{\pi^2}{6\ln(2)\ln(b)}$ This is known as Lochs' theorem. In particular, for base 10, this implies that each continued fraction term provides on average 1.03064... digits of precision. In fact, base 10 is the largest integer base for which the continued fraction is the more efficient representation.
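The digits-per-term rate can also be seen empirically by expanding random high-precision rationals exactly and tracking the denominator growth; a sketch (sample counts, seed, and precision are arbitrary, and the finite depth introduces a bias of a percent or two):

```python
import math
import random
from fractions import Fraction

lochs = math.pi ** 2 / (6 * math.log(2) * math.log(10))  # ~ 1.0306 digits per term

random.seed(2)
total_log10_Q, total_terms = 0.0, 0
for _ in range(100):
    # a "generic" number known to 120 digits, expanded exactly
    x = Fraction(random.randrange(1, 10 ** 120), 10 ** 120)
    Q, Qm = 0, 1   # Q_{-1}, Q_{-2}
    n = 0
    while n < 100:
        a = math.floor(x)
        Q, Qm = a * Q + Qm, Q
        n += 1
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    total_log10_Q += math.log10(Q)
    total_terms += n
# after n terms the convergent is good to about m = 2*log10(Q_n) decimal digits
digits_per_term = 2 * total_log10_Q / total_terms
```

The measured rate comes out near 1.03.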

## The Case of Square Roots

We wish to examine the behavior of the iterated radical expression $R_a(n)=\underbrace{\sqrt{a+\sqrt{a+...\sqrt{a+R_a(0)}}}}_{n\textrm{ radicals}}$ Let $A=\lim_{n \rightarrow \infty} R_a(n)$ Then clearly $A^2=a+A$ And so $A=\tfrac{1+\sqrt{1+4a}}{2}$ In order to determine the nature of the convergence to this limit, let us examine a function defined as follows: $f(x/q)=\sqrt{a+f(x)}$ Where q is a value yet to be determined. Clearly $f(0)=A$, and it is not hard to see that $R_a(n)=f\left ( \tfrac{f^{-1}(R_a(0))}{q^n} \right )$ Thus the behavior of f, as well as the value of q, will determine the convergence of $R_a(n)$. We rearrange the above relation to get $f^2(x)=a+f(qx)$ Let us expand f in a Taylor series: $f(x)=A+b_1 x +b_2 x^2 +b_3 x^3+...$ We can substitute this into our functional equation to get $A^2+2Ab_1 x+(2A b_2+b_1^2)x^2+(2Ab_3+2b_1b_2)x^3+...=a+A+qb_1x+q^2b_2x^2+q^3b_3x^3+...$ By equating the linear coefficients, we find that $q=2A$. Note that changing $b_1$ only affects the scaling of the function. Assuming we want the inverse to be positive as we approach from below, $b_1$ must be negative, so we simply set $b_1=-1$. Now the rest of the coefficients can be found algorithmically in sequence. In general, the coefficient of $x^k$ will be $b_k=\frac{1}{(2A)^k-2A}\sum_{j=1}^{k-1}b_jb_{k-j}$ And thus $f\left ( \tfrac{x}{2A} \right )=\sqrt{a+f(x)}$, $f^2(x)=a+f(2Ax)$, and $R_a(n)=f\left ( \tfrac{f^{-1}(R_a(0))}{(2A)^n} \right )$ Where f is defined by the power series with the given coefficients. It follows that $\lim_{n \rightarrow \infty} (2A)^n(A-R_a(n))=\lim_{n \rightarrow \infty} (2A)^n(f(0)-f(f^{-1}(R_a(0))/(2A)^n))=-f'(0)f^{-1}(R_a(0))=f^{-1}(R_a(0))$ That is, $\lim_{n \rightarrow \infty} (2A)^n \left (A-\underbrace{\sqrt{a+\sqrt{a+...\sqrt{a+z}}}}_{n\textrm{ radicals}} \right )=f^{-1}(z)$ Another way to construct $f(x)$ is by the following approach, which converges fairly quickly: Let $f_0(x)=A-x$.
We define $f_{k+1}(x)= f_k^2\left (\frac{x}{2A} \right )-a$ Then $\lim_{k \rightarrow \infty}f_k(x)=f(x)$
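This iterative construction is convenient numerically: unroll it by pushing the argument down by $(2A)^k$ and then squaring back up. In floating point the depth k must stay moderate, since the squaring steps amplify roundoff, so this is a sketch rather than a robust implementation:

```python
import math

def f_approx(x, a, k=12):
    # f_k(x): start from f_0(t) = A - t at t = x/(2A)^k, then apply t -> t^2 - a, k times
    A = (1 + math.sqrt(1 + 4 * a)) / 2
    val = A - x / (2 * A) ** k
    for _ in range(k):
        val = val * val - a
    return val
```

For a = 2 this matches the closed form $f(x)=2\cos(\sqrt{x})$ of the next subsection to high accuracy, and for any a it fixes $f(0)=A$.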

### A Special Trigonometric Case

For the case of $a=2$, it is easy to show by induction that $b_k=\frac{2(-1)^k}{(2k)!}$, which implies that $f(x)=2\cos(\sqrt{x})$. Therefore $\lim_{n \rightarrow \infty} 4^n \left( 2-\underbrace{\sqrt{2+\sqrt{2+...\sqrt{2}}}}_{n \textrm{ radicals}}\right)=\pi^2/4$
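This classic nested-radical limit for π is easy to confirm numerically:

```python
import math

def nested_sqrt2(n):
    # R(n) = sqrt(2 + sqrt(2 + ... + sqrt(2))), n radicals, starting from R(0) = 0
    r = 0.0
    for _ in range(n):
        r = math.sqrt(2 + r)
    return r

approx = [4 ** n * (2 - nested_sqrt2(n)) for n in (5, 10, 15)]
```

The values approach $\pi^2/4 \approx 2.4674$ (for large n, catastrophic cancellation in $2-R(n)$ eventually limits the attainable accuracy in floating point).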

### An Infinite Product

Beginning with $f^2(x)=a+f(2Ax)$ Let us differentiate to obtain $f(x)f'(x)=Af'(2Ax)$ Thus, if we define $g(x)=-xf'(x)$ Then we easily see that $g(2Ax)=2g(x)f(x)$ Clearly $g(0)=0, g'(0)=1$. Then $g(x)=2f\left (\tfrac{x}{2A} \right )g\left (\tfrac{x}{2A} \right )=2^2f\left (\tfrac{x}{2A} \right ) f\left (\tfrac{x}{(2A)^2} \right ) g\left (\tfrac{x}{(2A)^2} \right )$$g(x)=2^N g\left (\tfrac{x}{(2A)^N} \right )\prod_{k=1}^{N} f\left (\tfrac{x}{(2A)^k} \right )$$g(x)=(2A)^N g\left (\tfrac{x}{(2A)^N} \right )\prod_{k=1}^{N} \tfrac{1}{A}f\left (\tfrac{x}{(2A)^k} \right )$ Taking the limit $g(x)=\underset{N \to \infty}{\lim}(2A)^N g\left (\tfrac{x}{(2A)^N} \right )\prod_{k=1}^{N} \tfrac{1}{A}f\left (\tfrac{x}{(2A)^k} \right )=x\prod_{k=1}^{\infty} \tfrac{1}{A}f\left (\tfrac{x}{(2A)^k} \right )$ Thus $-f'(x)=\prod_{k=1}^{\infty} \tfrac{1}{A}f\left (\tfrac{x}{(2A)^k} \right )$ Thus we need only examine the zeros of f to find the zeros of f'. In fact, if f has zeros $\left \{z_1,z_2,z_3,... \right \}$ Then f will have extrema at $\bigcup_{k=1}^{\infty}\left \{(2A)^kz_1,(2A)^kz_2,(2A)^kz_3,... \right \}$
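For a = 2 the product identity reduces to Viète's formula: with $f(x)=2\cos(\sqrt{x})$ and $A=2$, we have $-f'(x)=\sin(\sqrt{x})/\sqrt{x}$, while each factor is $\cos(\sqrt{x}/2^k)$. A quick check:

```python
import math

x = 2.0
lhs = math.sin(math.sqrt(x)) / math.sqrt(x)  # -f'(x) for f(x) = 2*cos(sqrt(x))
prod = 1.0
for k in range(1, 60):
    prod *= math.cos(math.sqrt(x) / 2 ** k)  # (1/A) * f(x/(2A)^k) with A = 2
```

The truncated product agrees with $-f'(x)$ to machine precision, since the remaining factors differ from 1 by $O(4^{-k})$.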

#### An Associated Infinite Series

Differentiating the log of both sides of the result above, we find the infinite series: $\frac{d}{dx}\ln\left (-f'(x) \right )=\frac{d}{dx}\ln\left (\prod_{k=1}^{\infty} \tfrac{1}{A}f\left (\tfrac{x}{(2A)^k} \right ) \right )$$\frac{f''(x)}{f'(x)}=\sum_{k=1}^{\infty}\frac{1}{(2A)^k}\frac{f'\left (\tfrac{x}{(2A)^k} \right )}{f\left (\tfrac{x}{(2A)^k} \right )}$

### Zeros of $f(x)$

Below is a plot of the zeros of $f$ for different values of $a$ on the vertical axis, plotted semi-logarithmically.

Below is a plot of the sign of $f$ (yellow is positive, blue is negative), from which the zero contours can be seen. However, we can also see that some zeros of $f$ for certain values of $a$ are multiple roots, as $f$ goes to zero without changing sign.

#### Special Cases

Two special cases bear mentioning. In the case $a=1$, the zeros are given by $z_n=2.1973\cdot (1+\sqrt{5})^{2n}$ for $n \geq 0$. In fact, in this case, after the first zero, f is always between -1 and 0. f is -1 at $x_n=2.1973\cdot (1+\sqrt{5})^{2n+1}$ for $n \geq 0$. For $a=2$, the zeros are at $z_n=\left ( (2n+1)\frac{\pi}{2} \right )^2$ And, in fact, $f(x)=2$ at $x_n=\left ( 2n\pi \right )^2$, and $f(x)=-2$ at $x=\left ( (2n+1)\pi \right )^2$, for $n\geq0$.

### Periodic and Possible Fractal Structure

Although f is generally not very interesting close to zero, it exhibits remarkable behavior on larger scales. We find, namely, that if we take $h(x)=\left | f(x) \right |^{x^{-\log_{2A}(2)}}$ Then h is exponentially periodic, asymptotically. We define $J(x)=h((2A)^x)$ This function has period 1, asymptotically. Below we show the behavior of $J$ for several values of $a$.

Note that the number of zeros remains constant, and all seem to be single roots. In fact, the locations of the dominant maxima seem constant as well. However, within each period, $J$ appears to have a fractal structure. Below we show a zoom of $J(x)$ for $a=3$.

## Complex Behavior

We can take the series and functional definitions of the function and use them to extend the function to the entire complex plane. Below we plot the complex sign of $f(Cz|z|)$ for different values of a, and a certain value of C (this rescaling done to make the regularities more evident). The complex sign is given by the color:
• Dark Blue$\Leftrightarrow\textrm{Re}< 0 ,\textrm{Im} < 0$
• Light Blue$\Leftrightarrow\textrm{Re} < 0 ,\textrm{Im} > 0$
• Orange$\Leftrightarrow\textrm{Re} > 0,\textrm{Im} < 0$
• Yellow$\Leftrightarrow\textrm{Re} > 0,\textrm{Im} > 0$
This allows us to find zeros, which correspond to points where all four colors meet.

We note several remarkable features:
• The function is conjugate-symmetric.
• The function displays remarkable regularity away from the real line. Note the persistent ripples which reach total regularity at $a=2$. There is a structure of "fingers" that gradually join, each finger corresponding to one zero. The position of certain features on the real line remains fixed, e.g. the prominent feature at about 0.8.
• The evolution of the function over a can be broken into three eras.
1. Pre-Saturating: For $a< 1$, there is exactly one real zero.
2. Saturating: For $1\leq a < 2$, zeros join to form pairs of real zeros.
3. Saturated: For $2 \leq a$, all zeros are real.
• The number and larger-scale density of zeros remains roughly constant.
• The function displays quasi-fractal properties, as it becomes increasingly self-similar on larger scales. In a sense, a cross between periodic and fractal behavior, as seen in the other figures.
• The process of the fusing of complex zeros into pairs of real zeros can also be seen in the plots of the real zeros above, giving a new view of the branching features.
• The fingers coalesce along elliptical paths. In fact, these ellipses are of the form $x^2+2y^2=C'^2$

## The Case of Arbitrary Roots

More generally, suppose we examine $R_a(n)=\underbrace{\sqrt[p]{a+\sqrt[p]{a+...\sqrt[p]{a+R_a(0)}}}}_{n\textrm{ radicals}}$ Let $A=\lim_{n \rightarrow \infty} R_a(n)$ Then $A^p=a+A$, and we again define f by $f(x/q)=\sqrt[p]{a+f(x)}$ Clearly $f(0)=A$, and it is not hard to see that, again, $R_a(n)=f(f^{-1}(R_a(0))/q^n)$ If we do the same analysis as before, we find that $q=pA^{p-1}=p(1+a/A)$ (using $A^p=a+A$). Let $f_0(x)=A-x$. We define $f_{k+1}(x)= f_k^p\left (\frac{x}{q} \right )-a$ Then $\lim_{k \rightarrow \infty}f_k(x)=f(x)$ Then similarly we have $\lim_{n \rightarrow \infty} q^n(A-R_a(n))=f^{-1}(R_a(0))$ MATLAB code for evaluating the function for a given a and given radical can be found here.
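The linked MATLAB code is not reproduced here; the following Python sketch plays a similar role, checking the fixed point and the geometric convergence rate $q=pA^{p-1}$ for arbitrary sample values of a and p:

```python
import math

def nested_root(a, p, n, r0=0.0):
    # R(n): apply R -> (a + R)^(1/p), n times, starting from r0
    r = r0
    for _ in range(n):
        r = (a + r) ** (1.0 / p)
    return r

a, p = 5.0, 3
A = nested_root(a, p, 200)   # fixed point: A^p = a + A
q = p * A ** (p - 1)         # = p * (1 + a/A)
# q^n * (A - R(n)) should approach the constant f^{-1}(R(0))
c1 = q ** 8 * (A - nested_root(a, p, 8))
c2 = q ** 10 * (A - nested_root(a, p, 10))
```

The two rescaled errors `c1` and `c2` agree closely, confirming the $q^{-n}$ convergence rate.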