Monday, September 14, 2020

A Fairly Rigorous Derivation of Euler's Formula

 


Exponential Functions


The general exponential function \(b^x\), for base \(b > 0\) and real number \(x\) (*), is defined as the function that satisfies the conditions \[ b^x > 0 \\ b^x\cdot b^y=b^{x+y} \\ b^1=b \] together with monotonicity in \(x\) (which pins the function down for irrational exponents). It follows that: \[ \prod_{k=1}^{N}b^{x_k}=b^{\sum_{k=1}^{N}x_k} \\ b^0=1 \\ b^{-x}=1/b^x \\ (ab)^x=a^x b^x \\ b^{m/n}=\sqrt[n]{b^m}=\left ( \sqrt[n]{b} \right )^m \\ b^x=\underset{n \to \infty}{\lim}b^{\left \lfloor xn \right \rfloor/n} \]
(*) We will extend this definition to complex \(x\), for which, as we will find, \(b^x>0\) may not hold. Moreover, there is some ambiguity for non-integer \(x\): for example, \(4^{1/2}\) may be taken to be \(2\) or \(-2\).
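As a quick numerical sanity check (not part of the derivation), the following Python sketch compares the limit expression \(b^{\left\lfloor xn\right\rfloor/n}\), at a large fixed \(n\), against the built-in power operator, and spot-checks the product law. The helper name pow_via_rational is just an illustrative choice, and floating-point arithmetic stands in for exact reals.

import math

def pow_via_rational(b, x, n=10**6):
    # b raised to the rational approximation floor(x*n)/n of the exponent x
    return b ** (math.floor(x * n) / n)

b, x, y = 3.0, 0.7, 1.9
print(pow_via_rational(b, x), b ** x)   # agree to roughly six digits
print(b ** x * b ** y, b ** (x + y))    # product law b^x * b^y = b^(x+y)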


Some Exponential Inequalities


Let \(b>0\). By a simple argument we find: \[ 0 \leq \left ( b^{(y-x)/2}-1 \right )^2 \\ b^{(y-x)/2} \leq \frac{b^{(y-x)}+1}{2} \\ b^xb^{(y-x)/2}\leq b^x\left (\frac{b^{y-x}+1}{2} \right ) \\ b^{(y+x)/2} \leq \tfrac{1}{2}b^y+\tfrac{1}{2}b^x \] Suppose that \(0 \leq \alpha,\beta \leq 1\) and that \[ b^\alpha\leq\alpha b + (1-\alpha) \\ b^\beta\leq\beta b + (1-\beta) \] Then \[ b^{(\alpha+\beta)/2} \leq \tfrac{1}{2}b^\alpha+\tfrac{1}{2}b^\beta \\ b^{(\alpha+\beta)/2} \leq \tfrac{1}{2}(\alpha b + (1-\alpha))+\tfrac{1}{2} (\beta b + (1-\beta)) \\ b^{(\alpha+\beta)/2} \leq \tfrac{\alpha+\beta}{2}b+(1-\tfrac{\alpha+\beta}{2}) \] As \(b^0=1\leq 0 \cdot b + (1-0)=1\) and \(b^1=b\leq 1 \cdot b + (1-1)=b\), it follows by induction on \(N\) that, for all dyadic fractions \(x=M/2^N\) with whole numbers \(M\) and \(N\) and \(0 \leq M \leq 2^N\): \[ b^x \leq x b + (1-x) \] Moreover, as every real number \(0 \leq x \leq 1\) can be written as the limit \[ x=\underset{N \to \infty}{\lim} \frac{\left \lfloor x \cdot 2^N \right \rfloor}{2^N} \] it follows that \[ b^x \leq x b + (1-x) \] holds for all real \(x\) in the interval \( [0,1]\) and all \(b>0\), with equality (when \(b \neq 1\)) holding only at the endpoints. In particular, \(2^x < 1+x\) for \(0 < x < 1\), and likewise \((1/2)^x < 1-x/2\). We may then make the following argument: for \(0 < x < 1\), \[ x^2 > 0 \\ 1-x^2=(1+x)(1-x) < 1 \\ 1+x < \frac{1}{1-x} \\ (1/2)^x < 1-x/2 \\ 2^x > \frac{1}{1-x/2} \\ 2^x >{1+x/2} \\ 4^x >(1+x/2)^2 > 1+x \] Thus, for \(0 < x < 1\), we have \(2^x < 1+x < 4^x\).
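As a numerical illustration (a sketch only, with floating-point exponentiation standing in for the exact power function), the following Python snippet spot-checks the chord bound \(b^x \leq xb+(1-x)\) on \([0,1]\) and the sandwich \(2^x < 1+x < 4^x\) on a grid of sample points.

for b in (0.3, 2.0, 10.0):
    # chord bound b^x <= x*b + (1-x) on [0,1]; small tolerance for rounding
    assert all(b ** (k / 1000) <= (k / 1000) * b + (1 - k / 1000) + 1e-12
               for k in range(1001))

for k in range(1, 1000):
    x = k / 1000
    assert 2 ** x < 1 + x < 4 ** x   # the sandwich used repeatedly below

print("chord bound and 2^x < 1 + x < 4^x hold at all sampled points")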


Derivatives and Derivatives of Exponentials


The definition of the derivative of a function is: \[ \frac{\mathrm{d} }{\mathrm{d} x}f(x)=f'(x) \triangleq \underset{h \to 0}{\lim}\frac{f(x+h)-f(x)}{h} \] Thus, for an exponential, the derivative would be given by: \[ \frac{\mathrm{d} }{\mathrm{d} x}b^x\triangleq \underset{h \to 0}{\lim}\frac{b^{x+h}-b^x}{h}=b^x\underset{h \to 0}{\lim}\frac{b^{h}-1}{h}=b^x L(b) \] where \(L(b)=\underset{h \to 0}{\lim}\frac{b^{h}-1}{h}\), provided this limit exists. This limit can be proven to exist as follows: for \(0 < q < 1 \) and \(0 < x\), setting \(y = q x < x\), the inequality derived above gives \[ (b^x)^q = b^{qx} < (b^x -1) q +1 \\ \frac{b^{qx}-1}{qx} < \frac{b^x -1}{x} \\ \frac{b^{y}-1}{y} < \frac{b^x -1}{x} \] Thus the difference quotient \(\frac{b^h-1}{h}\) is increasing in \(h\), so it decreases monotonically as \(h \to 0\) from the right (and increases as \(h \to 0\) from the left). Moreover, it is bounded below and above (for \(|h| < 1\)), as \[ 1-\tfrac{1}{b} < \frac{b^h -1}{h} < b-1 \] Thus the one-sided limits exist, and they agree, since \(\frac{b^{-h}-1}{-h}=b^{-h}\cdot\frac{b^{h}-1}{h}\) and \(b^{-h} \to 1\). So \(L(b)\) exists, and \(b^x\) is everywhere differentiable, and thus continuous. From the inequality \(2^h < 1+h < 4^h\) for \(0 < h < 1\), we have, for every such \(h\), \[ \frac{2^{h}-1}{h} < \frac{1+h-1}{h}=1 < \frac{4^{h}-1}{h} \] Since the quotients decrease as \(h \to 0^+\), it follows that \(L(2) < 1 \leq L(4)\). As \(L\) is clearly continuous, by the intermediate value theorem it follows that there is some real number \(2 < e \leq 4\) such that \( L(e)=1 \). Let us define this number \(e\) to be the number that satisfies \[L(e)=\underset{h \to 0}{\lim}\frac{e^{h}-1}{h}=1\] This implies that \[ \frac{\mathrm{d} }{\mathrm{d} x}e^x=e^x \] This is a defining feature of the number \(e\). We may also notice that, for any real \(x \neq 0\), if \(h \to 0\) then \(xh \to 0\). Thus \[ \underset{h \to 0}{\lim}\frac{e^{xh}-1}{xh}=1 \\ \underset{h \to 0}{\lim}\frac{e^{xh}-1}{h}=x \] Thus \(L(e^x)=x\). This implies that, by definition, \(L(x)=\log_e (x)=\ln (x)\). Moreover, given the chain rule \(\frac{\mathrm{d} }{\mathrm{d} x}f(g(x))=f'(g(x))g'(x)\), we find \[ \frac{\mathrm{d} }{\mathrm{d} x} L(e^x)=L'(e^x)e^x=1 \] and thus \(L'(x)=1/x\). This is a very helpful result. For example, by rewriting and using the chain rule, we find: \[ \frac{\mathrm{d} }{\mathrm{d} x} x^a=\frac{\mathrm{d} }{\mathrm{d} x} e^{aL(x)}=e^{aL(x)} \frac{a}{x}=a x^{a-1} \] a result that is otherwise difficult to establish in the general case. We may write the limit derived above in the equivalent form \[ \underset{n \to \infty}{\lim}n \cdot (e^{x/n}-1)=x \] which implies (the vanishing correction \(e^{x/n}-1-\tfrac{x}{n}\) does not affect the limit, by an argument of the same kind as the lemma proved below) that \[ e^x=\underset{n \to \infty}{\lim} \left ( 1+\frac{x}{n} \right )^n \] Let us expand the above expression using the binomial theorem: \[ e^x=\underset{n \to \infty}{\lim} \left ( 1+\frac{x}{n} \right )^n \\ e^x= \underset{n \to \infty}{\lim}\sum_{k=0}^{n}\binom{n}{k}\left ( \frac{x}{n} \right )^k \\ e^x= \underset{n \to \infty}{\lim}1+\sum_{k=1}^{n}\frac{x^k}{k!}\prod_{j=1}^{k}\left ( 1-\frac{j-1}{n} \right ) \] In the limit, each factor in the product from \(1\) to \(k\) goes to \(1\). Thus, we find: \[ e^x=1+\sum_{k=1}^{\infty}\frac{x^k}{k!} \] It can be checked by the ratio test that this series converges for all real \(x\). This is an extremely useful formula, and can be taken as a more robust and easier-to-work-with definition of \(e^x=\exp(x)\).
Note this formula directly implies that: \[ e=\sum_{k=0}^{\infty}\frac{1}{k!} \] Note that, as a verification, we can check that \(e^0=1+\sum_{k=1}^{\infty}\frac{0^k}{k!}=1\), and that differentiating the series term by term gives \[ \frac{\mathrm{d} }{\mathrm{d} x} e^x=1+\sum_{k=2}^{\infty}\frac{kx^{k-1}}{k!}=1+\sum_{k=2}^{\infty}\frac{x^{k-1}}{(k-1)!}=1+\sum_{k=1}^{\infty}\frac{x^{k}}{k!}=e^x \] which verifies the differentiation formula.
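A short numerical cross-check (a sketch, with a fixed large \(n\) and a finite number of series terms standing in for the true limits) comparing the two expressions for \(e^x\) derived above against the library exponential:

import math

def exp_limit(x, n=10**7):
    # the limit expression (1 + x/n)^n at a large but finite n
    return (1 + x / n) ** n

def exp_series(x, terms=30):
    # partial sum 1 + x + x^2/2! + ... built up incrementally
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= x / k            # term is now x^k / k!
        total += term
    return total

for x in (1.0, -2.5, 3.0):
    print(x, exp_limit(x), exp_series(x), math.exp(x))
print("e =", exp_series(1.0))    # 2.718281828..., the sum of 1/k!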


Trigonometric Functions and Inequalities


The definitions of the basic trigonometric functions are given by Figure 1. The curve between points C and D is the set of points equidistant from \(A\) lying between the line segments \(AC\) and \(AD\), i.e. a circular arc. Let us call the length of this curve \(L\). Then the standard definition of the basic trigonometric functions is given by: \[ \theta=\frac{L}{\overline{AD}} \\ \\ \sin(\theta)\triangleq \frac{\overline{BD}}{\overline{AD}}, \; \;\; \cos(\theta)\triangleq\frac{\overline{AB}}{\overline{AD}}, \; \;\; \tan(\theta)\triangleq\frac{\overline{BD}}{\overline{AB}} \] Using these, let us look at Figure 2, which will serve to establish bounds on the trigonometric functions for small angles (\(0 < \theta < 1 \)). Let us denote the length of the curve \(BE\), which is a circular arc, by \(L\). It is clear that \[ \overline{BD} < L < \overline{BF} \] (An alternative way to demonstrate this is through areas, as triangle ABD is a strict subset of sector ABE, which is a strict subset of triangle ABF.) Using the definitions above, and defining \(\theta=L/\overline{AB}\), we have: \[ \frac{\overline{BD}}{\overline{AB}}=\sin(\theta) < \frac{L}{\overline{AB}}=\theta < \frac{\overline{BF}}{{\overline{AB}}}=\tan(\theta)=\frac{\sin(\theta)}{\cos(\theta)} \] And so it follows that \[ \theta\cdot\cos(\theta) < \sin(\theta) <\theta \] It follows from the Pythagorean theorem that \[ \sin(\theta)^2+\cos(\theta)^2=1 \] from which we find: \[ \cos(\theta)^2 > 1-\theta^2 > (1-\theta^2)^2 \] the last inequality following from the fact that \(0 < \theta < 1 \). We thus find \[ 1-\theta^2 < \cos(\theta) < 1 \\ \theta-\theta^3 < \sin(\theta) <\theta \] Let us now find the summation formulas for sine and cosine. These are easily found using the construction in Figure 3. \[ RB=QA \;\;\;\;\;\;\;\;\;\; RQ=BA \] \[ \frac{RQ}{PQ}=\frac{QA}{OQ}=\sin(\alpha) \;\;\;\;\;\;\;\; \frac{PR}{PQ}=\frac{OA}{OQ}=\cos(\alpha) \] \[ \frac{PQ}{OP}=\sin(\beta) \;\;\;\;\;\;\;\; \frac{OQ}{OP}=\cos(\beta) \] \[ \frac{PB}{OP}=\sin(\alpha+\beta) \;\;\;\;\;\;\;\; \frac{OB}{OP}=\cos(\alpha+\beta) \] \[ PB=PR+RB=\frac{OA}{OQ}PQ+QA \] \[ \frac{PB}{OP}=\frac{OA}{OQ}\frac{PQ}{OP}+\frac{QA}{OP}=\frac{OA}{OQ}\frac{PQ}{OP}+\frac{QA}{OQ}\frac{OQ}{OP} \] \[ \sin(\alpha+\beta)=\cos(\alpha)\sin(\beta)+\sin(\alpha)\cos(\beta) \] \[ OB=OA-BA=\frac{OA}{OQ}OQ-\frac{BA}{PQ}PQ \] \[ \frac{OB}{OP}=\frac{OA}{OQ}\frac{OQ}{OP}-\frac{BA}{PQ}\frac{PQ}{OP} \] \[ \cos(\alpha+\beta)=\cos(\alpha)\cos(\beta)-\sin(\alpha)\sin(\beta) \]
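As a numerical spot-check (using the library sine and cosine as a stand-in for the geometric definitions above), the following Python snippet verifies the small-angle bounds and the two addition formulas at sampled angles:

import math, random

for k in range(1, 1000):
    t = k / 1000                               # 0 < theta < 1
    assert t * math.cos(t) < math.sin(t) < t   # theta*cos(theta) < sin(theta) < theta
    assert 1 - t * t < math.cos(t) < 1
    assert t - t ** 3 < math.sin(t) < t

for _ in range(1000):
    a, b = random.uniform(0, 1), random.uniform(0, 1)
    assert math.isclose(math.sin(a + b),
                        math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b),
                        abs_tol=1e-12)
    assert math.isclose(math.cos(a + b),
                        math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b),
                        abs_tol=1e-12)

print("small-angle bounds and addition formulas verified at sampled angles")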


Complex Numbers


Complex numbers can be defined and used in the usual way, namely, as algebraic objects with the symbol \(i\) having the property that \(i^2=-1\). Additionally, we can define the norm of a complex number as \(|a+bi|^2=a^2+b^2\). Some simple theorems we will make use of: \[ (a+bi)\cdot (c+di)=(ac-bd)+i(ad+bc) \\ |(a+bi)\cdot (c+di)|=|a+bi|\cdot|c+di| \\ \frac{1}{a+bi}=\frac{a-bi}{a^2+b^2} \] Let us define the function \[ \mathrm{cis}(x)=\cos(x)+i\sin(x) \] This function has the property that \[ \mathrm{cis}(\alpha)\cdot \mathrm{cis}(\beta)= \left (\cos(\alpha)+i\sin(\alpha) \right ) \cdot \left(\cos(\beta)+i\sin(\beta) \right ) \\ \mathrm{cis}(\alpha)\cdot \mathrm{cis}(\beta)= \left (\cos(\alpha)\cos(\beta)-\sin(\alpha)\sin(\beta) \right ) + i\left(\sin(\alpha)\cos(\beta)+\sin(\beta)\cos(\alpha) \right ) \\ \mathrm{cis}(\alpha)\cdot \mathrm{cis}(\beta)= \cos(\alpha+\beta) + i\sin(\alpha+\beta) \] And thus \(\mathrm{cis}(\alpha)\cdot \mathrm{cis}(\beta)=\mathrm{cis}(\alpha+\beta)\). It follows by induction from this addition rule that, for any natural number \(n\): \[ (\mathrm{cis}(x))^n=\mathrm{cis}(nx) \] And, thus \[ \mathrm{cis}(x)=(\mathrm{cis}(x/n))^n \] Importantly, this holds for every \(n\), so we may take \(n\) as large as we like; in particular, we can always pick \(n\) large enough to make \(x/n\) as small as needed. Thus, we can use the inequalities derived above: write \[ \mathrm{cis}\left ( \frac{x}{n} \right )=1+i\frac{x}{n}-\frac{x^2}{n^2}g(x) \] (this defines \(g\), which also depends on \(n\)), and let \(g(x)=g_r(x)+i g_i(x)\) where \(g_r, g_i\) are real. The bounds on sine and cosine then give \(0 < g_r(x) < 1\) and \(|g_i(x)| < \tfrac{|x|}{n} \). Clearly, then, for \(n > |x|\), \[ |g(x)|^2 < 1+\frac{x^2}{n^2} < 2 \] and so \(|g(x)| < 2\). Also important to note is that a generic complex number can be written as \[ z=a+bi=r \cdot\mathrm{cis}(\theta) \] where \(r=|z|\) and \(\theta\) satisfies \(r \cos(\theta)=a, \;\;\; r \sin(\theta)=b\). From the above geometric argument, assuming \(a,b > 0\), we have \[ \sin(\theta)=\frac{b}{|z|} < \theta < \tan(\theta)=\frac{b}{a} \] From the fact that \((\mathrm{cis}(x))^n=\mathrm{cis}(nx)\), we find that \[ z^n=(a+bi)^n=r^n \cdot\mathrm{cis}(n\theta) \]
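A brief numerical check using Python's built-in complex type (a sketch, with the library trigonometric functions standing in for the geometric definitions): it confirms the product rule for \(\mathrm{cis}\), de Moivre's relation, and the bound \(|g| < 2\) at one sample point.

import math

def cis(t):
    return complex(math.cos(t), math.sin(t))

a, b, x, n = 0.4, 0.9, 0.8, 1000
print(abs(cis(a) * cis(b) - cis(a + b)))    # ~1e-16: cis(a)cis(b) = cis(a+b)
print(abs(cis(x / n) ** n - cis(x)))        # tiny: (cis(x/n))^n = cis(x)

# g defined by cis(x/n) = 1 + i*x/n - (x^2/n^2)*g; here |g| is about 1/2
g = (1 + 1j * x / n - cis(x / n)) / (x * x / (n * n))
print(abs(g), abs(g) < 2)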


A Lemma for a Family of Limits


From the above we have, for \(0 < x < 1\): \[ 2^{x} < 1+x < 4^{x} \] Let \(x=B/n^2 \), for \(B>0\) and \(n\) large enough that \(B/n^2 < 1\). Then \[ 2^{B/n^2} < 1+\frac{B}{n^2} < 4^{B/n^2} \\ 2^{B/n} < \left (1+\frac{B}{n^2} \right )^n < 4^{B/n} \] In the limit of large \(n\), \(B/n \to 0\). As \(2^0=4^0=1\), we have \[ \underset{n \to \infty}{\lim} \left (1+\frac{B}{n^2} \right )^n=1 \] A similar argument applies to the case \(B < 0\). In fact, suppose \(B\) is complex; then: \[ \underset{n \to \infty}{\lim} \left (1+\frac{B}{n^2} \right )^n =\underset{n \to \infty}{\lim} \left |1+\frac{B}{n^2} \right |^n \mathrm{cis}\left ( n\theta \right ) \] where \[ \frac{1+\frac{B}{n^2}}{\left | 1+\frac{B}{n^2} \right |}=\mathrm{cis}(\theta) \] For sufficiently large \(n\), the real part \(a\) of \(1+\frac{B}{n^2}\) is positive (indeed \(a > \tfrac{1}{2}\)), while the imaginary part \(b\) satisfies \(-\frac{|B|}{n^2} \leq b \leq \frac{|B|}{n^2}\). By the bound \(|\theta| \leq |\tan(\theta)| = |b|/a\), it follows that \(|\theta| \leq \frac{2|B|}{n^2}\), and so \(|n\theta| \leq \frac{2|B|}{n}\). Thus, in the limit of large \(n\), \(n\theta \to 0\), and so \(\mathrm{cis}(n\theta)\to 1\). The modulus also tends to \(1\): \[ \left |1+\frac{B}{n^2} \right |^{n}=\left ( 1+\frac{2\,\mathrm{Re}(B)+|B|^2/n^2}{n^2} \right )^{n/2} \] and the quantity \(2\,\mathrm{Re}(B)+|B|^2/n^2\) is bounded, so the same squeeze as in the real case shows that this tends to \(1\). Therefore, for all complex \(B\): \[ \underset{n \to \infty}{\lim} \left (1+\frac{B}{n^2} \right )^n=1 \] Finally, let us note that \[ 1+\frac{A}{n}+\frac{B}{n^2}=\left ( 1+\frac{A}{n} \right )\frac{1+\frac{A}{n}+\frac{B}{n^2}}{1+\frac{A}{n}}= \left ( 1+\frac{A}{n} \right )\left ( 1+\frac{1}{n^2}\frac{B}{1+\frac{A}{n}} \right ) \] For sufficiently large \(n\), we then have \[ \left |\frac{B}{1+\frac{A}{n}} \right | < 2|B| \] It follows from the above that \[ \underset{n \to \infty}{\lim}\left (1+\frac{A}{n}+\frac{B}{n^2} \right )^n=\underset{n \to \infty}{\lim}\left ( 1+\frac{A}{n} \right )^n \] Clearly this applies to any \(B(n)\) for which there exist \(M\) and \(K>0\) such that \(|B(n)| < K\) whenever \(n>M\).
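A numerical illustration of the lemma (a sketch using Python complex arithmetic; the particular values of A and B are arbitrary): the factor \((1+B/n^2)^n\) tends to \(1\), and the \(B/n^2\) term stops mattering as \(n\) grows.

A, B = 0.3 + 1.2j, -2.0 + 0.7j   # arbitrary complex constants

for n in (10, 1000, 100000):
    with_correction = (1 + A / n + B / n ** 2) ** n
    without_correction = (1 + A / n) ** n
    drift = abs((1 + B / n ** 2) ** n - 1)
    print(n, abs(with_correction - without_correction), drift)
# both printed differences shrink roughly like 1/n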


Euler's Formula and Identity


We recall the following from a previous section: \[ \mathrm{cis}(x)=(\mathrm{cis}(x/n))^n \] and, for sufficiently large \(n\): \[ \mathrm{cis}\left ( \frac{x}{n} \right )=1+i\frac{x}{n}-\frac{x^2}{n^2}g(x) \] where \(|g(x)| < 2\). Combining yields: \[ \mathrm{cis}(x)=\left ( 1+i\frac{x}{n}-\frac{x^2}{n^2}g(x) \right )^n \] This equality holds for every sufficiently large \(n\), hence also in the limit; and since the correction term \(-x^2 g(x)\) is bounded, the above lemma (with \(A=ix\) and \(B(n)=-x^2 g(x)\)) gives: \[ \mathrm{cis}(x)=\underset{n \to \infty}{\lim}\left ( 1+i\frac{x}{n} \right )^n \] Using the limit definition of \(e^x\), this yields, at last, Euler's celebrated formula: \[ e^{ix}=\cos(x)+i\sin(x) \] Setting \(x=\pi\), by the definition of \(\pi\) and the trigonometric functions, gives the special case \[ e^{i\pi}+1=0 \] Using the power series expansion for the exponential function and equating real and imaginary parts yields the two power series expansions: \[ \cos(x)=1+\sum_{k=1}^{\infty}\frac{(-x^2)^k}{(2k)!} \\ \sin(x)=\sum_{k=0}^{\infty}\frac{(-1)^k x^{2k+1}}{(2k+1)!} \]
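Finally, a numerical verification sketch (using Python's cmath for the complex exponential): the finite products \((1+ix/n)^n\) approach \(\cos(x)+i\sin(x)\), which matches the library's \(e^{ix}\), and \(e^{i\pi}+1\) is zero to machine precision.

import cmath, math

x = 2.0
target = complex(math.cos(x), math.sin(x))
for n in (10, 1000, 100000):
    print(n, abs((1 + 1j * x / n) ** n - target))   # shrinks as n grows
print(abs(cmath.exp(1j * x) - target))              # ~1e-16
print(cmath.exp(1j * math.pi) + 1)                  # ~1.2e-16j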
