## Deriving the formula: $\sin(2x)=2\sin(x)\cos(x)$

### Way 1: From Geometry

These relations follow from the usual angle-addition diagram: right triangle $OQA$ with angle $\alpha$ at $O$, right triangle $OPQ$ (right-angled at $Q$) with angle $\beta$ at $O$, so that $\angle POA=\alpha+\beta$; $B$ is the foot of the perpendicular from $P$ to $OA$, and $R$ is the foot of the perpendicular from $Q$ to $PB$. Then

$RB=QA \;\;\;\;\;\;\;\;\;\; RQ=BA$

$\frac{RQ}{PQ}=\frac{QA}{OQ}=\sin(\alpha) \;\;\;\;\;\;\;\; \frac{PR}{PQ}=\frac{OA}{OQ}=\cos(\alpha)$

$\frac{PQ}{OP}=\sin(\beta) \;\;\;\;\;\;\;\; \frac{OQ}{OP}=\cos(\beta)$

$\frac{PB}{OP}=\sin(\alpha+\beta) \;\;\;\;\;\;\;\; \frac{OB}{OP}=\cos(\alpha+\beta)$

Hence

$PB=PR+RB=\frac{OA}{OQ}PQ+QA$

$\frac{PB}{OP}=\frac{OA}{OQ}\frac{PQ}{OP}+\frac{QA}{OP}=\frac{OA}{OQ}\frac{PQ}{OP}+\frac{QA}{OQ}\frac{OQ}{OP}$

$\sin(\alpha+\beta)=\cos(\alpha)\sin(\beta)+\sin(\alpha)\cos(\beta)$

In particular, if $\alpha=\beta=x$, then $\sin(2x)=2\sin(x)\cos(x)$.
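The identity is easy to spot-check numerically. The sketch below (plain Python, a sanity check rather than part of the derivation) verifies the addition formula and the double-angle special case on a grid of sample angles:

```python
import math

# Sanity check of sin(a+b) = cos(a)sin(b) + sin(a)cos(b), and of the
# special case sin(2x) = 2 sin(x) cos(x). A numerical spot-check only.
def check_addition(a, b, tol=1e-12):
    lhs = math.sin(a + b)
    rhs = math.cos(a) * math.sin(b) + math.sin(a) * math.cos(b)
    return abs(lhs - rhs) < tol

# Check on a grid of angles, including the alpha = beta case
assert all(check_addition(0.1 * i, 0.2 * j)
           for i in range(-10, 11) for j in range(-10, 11))
assert all(abs(math.sin(2 * x) - 2 * math.sin(x) * math.cos(x)) < 1e-12
           for x in [0.1, 0.7, 1.3, 2.9])
```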

### Way 2: From the Product Formula

Recall that the product formulas for sine and cosine are, respectively:

$\sin(x)=x\prod_{n=1}^{\infty}\left ( 1-\frac{x^2}{\pi^2 n^2} \right )$

and

$\cos(x)=\prod_{n=1}^{\infty} \left (1-\frac{x^2}{\pi^2 (n-1/2)^2 } \right )$

Thus, splitting the product over even and odd $n$:

$\sin(2x)=2x\prod_{n=1}^{\infty}\left ( 1-\frac{4x^2}{\pi^2 n^2} \right ) =2x\prod_{n\;\mathrm{even}}\left ( 1-\frac{4x^2}{\pi^2 n^2} \right ) \prod_{n\;\mathrm{odd}}\left ( 1-\frac{4x^2}{\pi^2 n^2} \right )$

Substituting $n=2m$ in the even product and $n=2m-1$ in the odd product gives

$\sin(2x) =2x\prod_{m=1}^{\infty}\left ( 1-\frac{x^2}{\pi^2 m^2} \right ) \cdot \prod_{m=1}^{\infty}\left ( 1-\frac{x^2}{\pi^2 (m-1/2)^2} \right )$

$\sin(2x)=2\cdot \sin(x) \cdot \cos(x)$
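We can also watch truncated versions of the two products converge. In the sketch below (my own illustration; `N` is an arbitrary cutoff), the first `N` factors reproduce sine and cosine to several digits:

```python
import math

# Truncated versions of the two product formulas above; N is an
# arbitrary cutoff, and convergence in N is fairly slow (error ~ 1/N).
def sin_product(x, N=20000):
    p = x
    for n in range(1, N + 1):
        p *= 1 - x * x / (math.pi ** 2 * n ** 2)
    return p

def cos_product(x, N=20000):
    p = 1.0
    for n in range(1, N + 1):
        p *= 1 - x * x / (math.pi ** 2 * (n - 0.5) ** 2)
    return p

x = 0.8
assert abs(sin_product(x) - math.sin(x)) < 1e-4
assert abs(cos_product(x) - math.cos(x)) < 1e-4
# The double-angle identity, via the even/odd split of the product:
assert abs(2 * sin_product(x) * cos_product(x) - math.sin(2 * x)) < 1e-3
```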

### Way 3: From the Taylor Series

The Taylor series for sine and cosine can be written as, respectively:

$\frac{\sin(\sqrt{x})}{\sqrt{x}}=\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}x^k \;\;\;\;\;\;\;\; \cos(\sqrt{x})=\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k)!}x^k$

Thus

$\frac{\sin(\sqrt{x})\cos(\sqrt{x})}{\sqrt{x}}=\sum_{j=0}^{\infty}\frac{(-1)^j}{(2j+1)!}x^j \sum_{k=0}^{\infty}\frac{(-1)^k}{(2k)!}x^k$

Using a Cauchy product, we find:

$\frac{\sin(\sqrt{x})\cos(\sqrt{x})}{\sqrt{x}}=\sum_{j=0}^{\infty}c_j x^j$

where

$c_m=\sum_{n=0}^{m} \frac{(-1)^n}{(2n+1)!}\frac{(-1)^{m-n}}{(2(m-n))!} =\frac{(-1)^m}{(2m+1)!}\sum_{n=0}^{m} \binom{2m+1}{2n+1} =\frac{(-1)^m}{(2m+1)!}\sum_{n=0}^{m}\left[ \binom{2m}{2n+1}+\binom{2m}{2n}\right]$

by Pascal's rule. Since $\binom{2m}{2m+1}=0$, the bracketed terms cover every entry of row $2m$ exactly once:

$c_m=\frac{(-1)^m}{(2m+1)!}\sum_{n=0}^{2m} \binom{2m}{n}=\frac{(-1)^m}{(2m+1)!}2^{2m}$

And thus

$\frac{\sin(\sqrt{x})\cos(\sqrt{x})}{\sqrt{x}}=\sum_{m=0}^{\infty}\frac{(-1)^m}{(2m+1)!}(4x)^m=\frac{\sin(\sqrt{4x})}{\sqrt{4x}}=\frac{\sin(2\sqrt{x})}{2\sqrt{x}}$

Substituting $x=y^2$ and rearranging, we find:

$2\sin(y)\cos(y)=\sin(2y)$
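The combinatorial step is the crux of this derivation, so here is a small check (an illustration, not part of the proof) that the Cauchy-product coefficients really collapse to $(-1)^m 4^m/(2m+1)!$:

```python
import math

# c_m as defined by the Cauchy product above
def c(m):
    return sum((-1) ** n / math.factorial(2 * n + 1)
               * (-1) ** (m - n) / math.factorial(2 * (m - n))
               for n in range(m + 1))

for m in range(8):
    # the odd entries of row 2m+1 of Pascal's triangle sum to 4^m ...
    assert sum(math.comb(2 * m + 1, 2 * n + 1) for n in range(m + 1)) == 4 ** m
    # ... so c_m = (-1)^m 4^m / (2m+1)!
    assert abs(c(m) - (-1) ** m * 4 ** m / math.factorial(2 * m + 1)) < 1e-15
```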

### Way 4: From Euler's Formula

Euler's formula is:

$e^{ix}=\cos(x)+i\sin(x)$

Thus

$e^{i2x}=\cos(2x)+i\sin(2x)=\left ( e^{ix} \right)^2=\left (\cos(x)+i\sin(x) \right )^2$

$e^{i2x}=\left [\cos^2(x)-\sin^2(x) \right ]+i\left [ 2\sin(x)\cos(x) \right ]$

Thus, by equating real and imaginary parts, $\sin(2x)=2\sin(x)\cos(x)$ and $\cos(2x)=\cos^2(x)-\sin^2(x)$.
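The same computation can be checked with Python's complex arithmetic (a numerical illustration only):

```python
import cmath
import math

# Squaring e^{ix} must reproduce cos(2x) + i sin(2x)
x = 0.9
square = cmath.exp(1j * x) ** 2
assert abs(square - cmath.exp(2j * x)) < 1e-12
# Imaginary part: sin(2x) = 2 sin(x) cos(x)
assert abs(square.imag - 2 * math.sin(x) * math.cos(x)) < 1e-12
# Real part: cos(2x) = cos^2(x) - sin^2(x)
assert abs(square.real - (math.cos(x) ** 2 - math.sin(x) ** 2)) < 1e-12
```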

## The Half-Angle Formulas

From the last demonstration, we have

$\cos(2x)=\cos^2(x)-\sin^2(x)=2\cos^2(x)-1=1-2\sin^2(x)$

Substituting $2x=y$ and solving, we find:

$\sin\left ( \frac{y}{2} \right )=\sqrt{\frac{1-\cos(y)}{2}} \;\;\;\;\;\;\;\; \cos\left ( \frac{y}{2} \right )=\sqrt{\frac{1+\cos(y)}{2}}$

(taking the positive square roots; in general the correct sign depends on the quadrant in which $y/2$ lies).
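A quick numerical check of the positive-root branch (my own sketch; the formulas hold as written when $y/2$ is in the first quadrant):

```python
import math

# Half-angle formulas on the positive-root branch: both hold as written
# when y/2 lies in the first quadrant, i.e. 0 <= y <= pi here.
def sin_half(y):
    return math.sqrt((1 - math.cos(y)) / 2)

def cos_half(y):
    return math.sqrt((1 + math.cos(y)) / 2)

for y in [0.0, 0.5, 1.0, 2.0, 3.0]:
    assert abs(sin_half(y) - math.sin(y / 2)) < 1e-12
    assert abs(cos_half(y) - math.cos(y / 2)) < 1e-12
```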

## An Infinite Product Formula

We can write the double-angle formula as

$\sin(x)=2\sin\left ( \frac{x}{2} \right )\cos\left ( \frac{x}{2} \right )$

Iterating this, we then have

$\sin(x)=2^n\sin\left ( \frac{x}{2^n} \right ) \prod_{k=1}^{n}\cos\left ( \frac{x}{2^k} \right )$

However, as $n$ gets large, $2^n\sin\left ( \frac{x}{2^n} \right )\rightarrow x$. Thus, letting $n$ go to infinity, we have

$\sin(x)=x \prod_{k=1}^{\infty}\cos\left ( \frac{x}{2^k} \right )$

A simple corollary of this general result, taking $x=\pi/2$, is

$\frac{\pi}{2}=\frac{1}{\cos(\tfrac{\pi}{4})\cos(\tfrac{\pi}{8})\cos(\tfrac{\pi}{16})\cdots } =\frac{1}{\sqrt{\tfrac{1}{2}}\sqrt{\tfrac{1}{2}+\tfrac{1}{2}\sqrt{\tfrac{1}{2}}}\sqrt{\tfrac{1}{2}+\tfrac{1}{2}\sqrt{\tfrac{1}{2}+\tfrac{1}{2}\sqrt{\tfrac{1}{2}}}}\cdots }=\frac{2}{\sqrt{2}}\frac{2}{\sqrt{2+\sqrt{2}}}\frac{2}{\sqrt{2+\sqrt{2+\sqrt{2}}}}\cdots$

This is known as Viète's formula.
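Viète's formula makes a nice little computation. The sketch below (my own illustration; `terms` is an arbitrary cutoff) builds the nested radicals iteratively and recovers $\pi$ to machine precision:

```python
import math

# Each pass extends the nested radical: sqrt(2), sqrt(2+sqrt(2)), ...
# and radical/2 gives the successive cosines cos(pi/4), cos(pi/8), ...
def viete_pi(terms=30):
    product, radical = 1.0, 0.0
    for _ in range(terms):
        radical = math.sqrt(2 + radical)
        product *= radical / 2
    return 2 / product          # the product converges to 2/pi

assert abs(viete_pi() - math.pi) < 1e-12
```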

We note that

$2\cos(x/2)=\sqrt{2+2\cos(x)}$

(valid for $|x|\leq\pi$, so that $\cos(x/2)\geq 0$). Thus, by iterating, we find

$2\cos(x/2^n)=\underset{n\;\, \mathrm{radicals}}{\underbrace{\sqrt{2+\sqrt{2+...\sqrt{2+2\cos(x)}}}}}$

Thus

$2\sin(x/2^{n+1})=\sqrt{2-\underset{n\;\, \mathrm{radicals}}{\underbrace{\sqrt{2+\sqrt{2+...\sqrt{2+2\cos(x)}}}}}}$

Since $2^{n+1}\sin(x/2^{n+1})\rightarrow x$, we can conclude that

$x=\underset{n\rightarrow \infty}{\lim} 2^n\sqrt{2-\underset{n\;\, \mathrm{radicals}}{\underbrace{\sqrt{2+\sqrt{2+...\sqrt{2+2\cos(x)}}}}}}$

For example, taking $x=\pi/3$ (where $2\cos(x)=1$) and $x=\pi/2$ (where $2\cos(x)=0$):

$\pi/3=\underset{n\rightarrow \infty}{\lim} 2^n\sqrt{2-\underset{n\;\, \mathrm{radicals}}{\underbrace{\sqrt{2+\sqrt{2+...\sqrt{2+1}}}}}}$

$\pi/2=\underset{n\rightarrow \infty}{\lim} 2^n\sqrt{2-\underset{n\;\, \mathrm{radicals}}{\underbrace{\sqrt{2+\sqrt{2+...\sqrt{2}}}}}}$
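This limit can be tested numerically, with one caveat: for large $n$ the subtraction $2-\sqrt{2+\cdots}$ loses precision in floating point, so a moderate $n$ works best. A sketch:

```python
import math

# Recover x from cos(x) via x ~ 2^n * sqrt(2 - (n nested radicals)).
# n is kept moderate: too large and the "2 - radical" subtraction
# dissolves into floating-point noise.
def angle_from_cos(cos_x, n=12):
    radical = 2 * cos_x
    for _ in range(n):
        radical = math.sqrt(2 + radical)   # now equals 2*cos(x / 2^n)
    return 2 ** n * math.sqrt(2 - radical)

assert abs(angle_from_cos(0.5) - math.pi / 3) < 1e-5   # 2cos(x) = 1
assert abs(angle_from_cos(0.0) - math.pi / 2) < 1e-5   # 2cos(x) = 0
```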

## An Infinite Series

Above, we derived

$\sin(x)=x \prod_{k=1}^{\infty}\cos\left ( \frac{x}{2^k} \right )$

Taking the log of both sides and differentiating term by term,

$\frac{\mathrm{d} }{\mathrm{d} x}\ln\left (\sin(x) \right )=\frac{\mathrm{d} }{\mathrm{d} x}\ln\left (x \prod_{k=1}^{\infty}\cos\left ( \frac{x}{2^k} \right ) \right )$

$\cot(x)=\frac{1}{x}-\sum_{k=1}^{\infty}\frac{1}{2^k}\tan \left ( \frac{x}{2^k} \right )$

$\frac{1}{x}-\cot(x)=\sum_{k=1}^{\infty}\frac{1}{2^k}\tan \left ( \frac{x}{2^k} \right )$

Setting $x=\pi/2$ (where $\cot(x)=0$) and re-indexing, we can easily derive

$\frac{1}{\pi}=\sum_{k=2}^{\infty}\frac{1}{2^k}\tan \left ( \frac{\pi}{2^k} \right )$
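The partial sums converge quickly, since the terms decay like $\pi/4^k$. A quick check (my own sketch):

```python
import math

# Partial sums of 1/pi = sum_{k>=2} tan(pi/2^k) / 2^k; a few dozen
# terms already reach machine precision.
def inv_pi(terms=40):
    return sum(math.tan(math.pi / 2 ** k) / 2 ** k
               for k in range(2, terms + 2))

assert abs(inv_pi() - 1 / math.pi) < 1e-12
```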

## A Definite Integral

Let

$I=\int_{0}^{\pi/2}\ln\left ( \sin(x) \right )dx =\int_{\pi/2}^{\pi}\ln\left ( \sin(x) \right )dx =\int_{0}^{\pi/2}\ln\left ( \cos(x) \right )dx$

(the latter two by the substitutions $x\mapsto \pi - x$ and $x\mapsto \pi/2 - x$, respectively). Then

$2I=\int_{0}^{\pi}\ln\left ( \sin(x) \right )dx =2\int_{0}^{\pi/2}\ln\left ( \sin(x) \right )dx =\int_{0}^{\pi/2}\left[\ln\left ( \sin(x) \right )+\ln\left ( \cos(x) \right )\right]dx$

$2I=\int_{0}^{\pi/2}\ln\left ( \sin(x) \cos(x) \right )dx=\int_{0}^{\pi/2}\ln\left (\tfrac{1}{2} \sin(2x) \right )dx=-\frac{\pi}{2}\ln(2)+\int_{0}^{\pi/2}\ln\left (\sin(2x) \right )dx$

By the substitution $u=2x$, we then have

$2I=-\frac{\pi}{2}\ln(2)+\tfrac{1}{2}\int_{0}^{\pi}\ln\left (\sin(u) \right )du=-\frac{\pi}{2}\ln(2)+I$

Therefore

$I=\int_{0}^{\pi/2}\ln\left (\sin(x) \right )dx=-\frac{\pi}{2}\ln(2)$
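The result is easy to confirm with a crude numerical integration. The midpoint rule below (a rough check, not a proof) never samples the endpoints, so the logarithmic singularity at $0$ causes no trouble:

```python
import math

# Midpoint rule for I = integral of ln(sin x) over (0, pi/2)
def log_sin_integral(N=200000):
    h = (math.pi / 2) / N
    return h * sum(math.log(math.sin((i + 0.5) * h)) for i in range(N))

# Compare against the closed form -(pi/2) ln 2
assert abs(log_sin_integral() - (-math.pi / 2) * math.log(2)) < 1e-4
```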

## Tuesday, December 15, 2015

### Some Introductory Quantum Mechanics: Mathematico-Theoretical Background

Quantum mechanics (QM), being a novel and revolutionary framework for describing phenomena, requires a substantially different mathematical tool-set and way of thinking about physical systems and objects. There is dispute over how exactly to interpret the mathematical system used, but we will not discuss here the various interpretations. Rather, we will just describe and examine the framework and how it can be used to make predictions, all of which is agreed upon.

This will be a multi-part series giving a general introduction to quantum theory. This is part 2.

## Hilbert, State, and Dual Spaces

A Hilbert space is a generalized vector space: a sort of extended analog of the usual Euclidean space, equipped with an inner product (described below). Elements of a Hilbert space are sorts of vectors, and are denoted using a label (basically just a name) and some indication of vector-hood. We will use "bra-ket notation", in which elements of the vector space are denoted as $\left | \phi \right >$ (a ket; $\phi$ is merely a label, and we may sometimes use numbers or other symbols, but these are all merely labels). Every such element has a corresponding "sister" in what is called the dual space, which is denoted by $\left < \phi \right |$ (a bra). (The names are basically a joke: two halves of the word "bracket".) The use of the dual space will become apparent in our later discussion. In general, and in QM especially, the vector space is complex, meaning the vectors' "components" (loosely speaking) are complex numbers.

## Inner Products

To be a Hilbert space, there must also be an inner product: a way of associating a complex number to each ordered pair of vectors (the order may be important: the inner product of A and B need not be the same as that of B and A). The inner product of $\left | \phi \right >$ and $\left | \psi \right >$ is denoted by $\left \langle \psi \right | \left. \phi \right \rangle$, that is, the dual of $\left | \psi \right >$ acting on $\left | \phi \right >$. In particular, to be a Hilbert space, we must have that if $\left \langle \psi \right | \left. \phi \right \rangle = z$, then $\left \langle \phi \right | \left. \psi \right \rangle = \bar{z}$, the complex conjugate. If $\left | \phi \right \rangle= r \left | \psi \right \rangle$, then $\left \langle \phi \right |= \bar{r} \left \langle \psi \right |$. Also, we must have $\left \langle \psi \right | \left. \psi \right \rangle \geq 0$, with equality holding iff $\left | \psi \right >$ is the zero vector. (Note that $\left \langle \psi \right | \left. \psi \right \rangle$ is necessarily real, since it equals its own conjugate.)

Beyond this, the inner product is linear. In general, if $\left | \phi \right \rangle= a\left | \alpha \right \rangle+b\left | \beta \right \rangle$ and $\left | \psi \right \rangle= c\left | \gamma \right \rangle+d\left | \delta \right \rangle$, then we have:

$\left \langle \psi \right | \left. \phi \right \rangle =a\bar{c}\left \langle \gamma \right | \left. \alpha \right \rangle + a\bar{d}\left \langle \delta \right | \left. \alpha \right \rangle + b\bar{c}\left \langle \gamma \right | \left. \beta \right \rangle + b\bar{d}\left \langle \delta \right | \left. \beta \right \rangle$

We can also prove the famous Cauchy-Schwarz inequality, namely, that:

$\left |\left \langle \psi \right | \left. \phi \right \rangle \right |^2 \leq \left \langle \psi \right | \left. \psi \right \rangle \left \langle \phi \right | \left. \phi \right \rangle$

Two vectors $\left | \phi \right >$ and $\left | \psi \right >$ are said to be orthogonal if $\left \langle \psi \right | \left. \phi \right \rangle=0$. A vector is said to be normal or normalized if $\left \langle \phi \right | \left. \phi \right \rangle =1$. If we have a set of vectors ${| \left. \phi_1 \right \rangle} , {| \left. \phi_2 \right \rangle} , {| \left. \phi_3 \right \rangle},...$ such that $\left \langle \phi_j \right. | \left. \phi_k \right \rangle = 0$ for all $j \neq k$, then the set is called an orthogonal set. If it is also the case that $\left \langle \phi_k \right. | \left. \phi_k \right \rangle = 1$ for all $k$, then the set is called orthonormal.
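These rules become concrete in finite dimensions. Below is a minimal sketch (my own illustration, with kets as plain Python lists of complex components):

```python
# Kets as lists of complex components; the bra conjugates its entries.
def inner(bra, ket):
    return sum(b.conjugate() * k for b, k in zip(bra, ket))

phi = [1 + 1j, 2j]
psi = [3 + 0j, 1 - 1j]

# <psi|phi> is the complex conjugate of <phi|psi>
assert inner(psi, phi) == inner(phi, psi).conjugate()
# <psi|psi> is real and non-negative
assert inner(psi, psi).imag == 0 and inner(psi, psi).real >= 0
# Cauchy-Schwarz: |<psi|phi>|^2 <= <psi|psi><phi|phi>
assert abs(inner(psi, phi)) ** 2 <= (inner(psi, psi) * inner(phi, phi)).real
```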

## Operators

An operator is something which acts on a vector to produce another vector: $A \left | \phi \right \rangle= \left | \phi' \right \rangle$. The operator $A$ is linear if, for any $\left | \phi \right \rangle= a\left | \alpha \right \rangle+b\left | \beta \right \rangle$, we have $A\left | \phi \right \rangle= a A\left | \alpha \right \rangle+b A\left | \beta \right \rangle$.
Let $A \left | \phi \right \rangle= \left | \phi' \right \rangle$ and $B \left | \psi \right \rangle= \left | \psi' \right \rangle$. If $\left \langle \psi \right | \left. \phi' \right \rangle=\left \langle \psi' \right | \left. \phi \right \rangle$ for all pairs of vectors, then A and B are called adjoints (or Hermitian conjugates) of one another, denoted $A=B^{\dagger}$ and $B=A^{\dagger}$, so $A=\left (A^{\dagger} \right )^\dagger$. We also have $\left \langle \phi' \right |= \left \langle \phi \right | A^\dagger$. If $A=A^\dagger$, then A is called Hermitian. If $A=-A^\dagger$, then A is called anti-Hermitian. If, taking $B=A$, we have $\left \langle \psi' \left | \right. \phi'\right \rangle = \left \langle \psi \left | \right. \phi\right \rangle$ for all pairs of vectors (i.e. A preserves inner products), then A is called unitary.
We also have the following properties: $(A+B)\left | \phi \right \rangle= A\left | \phi \right \rangle+B\left | \phi \right \rangle$ $AB\left | \phi \right \rangle= A\left (B\left | \phi \right \rangle \right )$ Note that it is not necessarily the case that $AB\left | \phi \right \rangle= BA\left | \phi \right \rangle$ That is, operators need not commute. In fact, we commonly use the notation $[A,B]=AB-BA$ (this is called the commutator of A and B). Non-commutativity will play an important role in the theory.
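As a concrete finite-dimensional illustration (my own example, not from the post itself), here are two 2×2 matrices that fail to commute, with the commutator computed directly:

```python
# 2x2 complex matrices as nested lists
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

# Pauli matrices sigma_x and sigma_y: [sigma_x, sigma_y] = 2i sigma_z, not 0
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
assert commutator(sx, sy) == [[2j, 0], [0, -2j]]
# Any operator commutes with itself
assert commutator(sx, sx) == [[0, 0], [0, 0]]
```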

For a given A, in some cases, for certain $\left | \phi \right \rangle$, we have that $A\left | \phi \right \rangle= \lambda \left | \phi \right \rangle$ for some constant $\lambda$. In this case, we call $\lambda$ an eigenvalue of the operator A and $\left | \phi \right \rangle$ the corresponding eigenvector.
Often it is the case that we can find a set of orthonormal vectors that are the eigenvectors of a given linear operator, such that we can also write any vector as a linear combination of the eigenvectors. In that case, $| \left. \psi \right \rangle = a_1 | \left. \phi_1 \right \rangle +a_2 | \left. \phi_2 \right \rangle+a_3 | \left. \phi_3 \right \rangle+...$ where $a_k=\left \langle \phi_k \right. | \left. \psi \right \rangle$ ($a_k$ is called the projection of $\psi$ in the $\phi_k$ direction). Then

$\left \langle \psi\left. \right | \psi \right \rangle=|a_1|^2+|a_2|^2+|a_3|^2+...$

$A\left| \psi \right \rangle = \lambda_1 a_1 | \left. \phi_1 \right \rangle + \lambda_2 a_2 | \left. \phi_2 \right \rangle+\lambda_3 a_3 | \left. \phi_3 \right \rangle+...$

$\left \langle \psi \right | A\left| \psi \right \rangle = \lambda_1 \left |a_1 \right |^2 + \lambda_2 \left |a_2 \right |^2+\lambda_3 \left |a_3 \right |^2 +...$

If the operator is also Hermitian, then we call it an observable. In particular, if an operator is Hermitian, all its eigenvalues are real.
If $| \left. \psi \right \rangle$ is normalized, then we can use the notation $\left \langle A \right \rangle_\psi=\left \langle \psi\left | A \right |\psi \right \rangle$ and $\sigma^2_A=\left \langle A^2 \right \rangle_\psi-\left \langle A \right \rangle^2_\psi$.

## Postulates of Quantum Mechanics

Given that mathematical background, we can now lay out the fundamental postulates of QM. Exactly how to interpret these postulates will be left for later discussion.
1. Wavefunction Postulate
The state of a physical system at a given time is defined by a wavefunction which is a ket vector in the Hilbert space of possible states. Generally, the vector is required to be normalized.
2. Observable Postulate
Every physically measurable quantity corresponds to an observable operator that acts on the vectors in the Hilbert space of possible states.
3. Eigenvalue Postulate
The possible results of a measurement of a physically measurable quantity are the eigenvalues of the corresponding observable.
4. Probability Postulate
Suppose ${| \left. \phi_{k_1} \right \rangle} , {| \left. \phi_{k_2} \right \rangle} , {| \left. \phi_{k_3} \right \rangle},...$ are the orthonormal eigenvectors of observable A that have eigenvalue $\lambda$. Suppose the initial wavefunction can be written as $| \left. \psi \right \rangle = a_1 | \left. \phi_1 \right \rangle +a_2 | \left. \phi_2 \right \rangle+a_3 | \left. \phi_3 \right \rangle+...$ (i.e. a linear combination of the orthonormal eigenvectors of A). Note that $\psi$ is a superposition of eigenstates: a sort of combination of states that each have definite properties. Each eigenstate has a well-defined value for the observable, but $\psi$ does not.
The probability of measuring the observable to have the value $\lambda$ is given by $P(\lambda)=\left | a_{k_1} \right |^2+\left | a_{k_2} \right |^2+\left | a_{k_3} \right |^2+...$. More simply, if no two eigenvectors have the same eigenvalue, then the probability that we will measure the observable to have value $\lambda_k$ is $| \left \langle \phi_k\left | \right. \psi\right \rangle |^2$. This is called the Born Rule.
Given this, it is easy to see that $\left \langle A\right \rangle_\psi=\left \langle \psi \left | A \right | \psi\right \rangle$ is the expected value of the operator A.
5. Collapse Postulate
Immediately after measurement, the wavefunction becomes the normalized projection of the prior wavefunction onto the sub-space of values that give the measured eigenvalue. That is, using the above description, the wavefunction immediately after measurement becomes $\alpha \cdot( a_{k_1}| \left. \phi_{k_1}\right \rangle +a_{k_2}| \left. \phi_{k_2}\right \rangle+a_{k_3}| \left. \phi_{k_3}\right \rangle +...)$ where $\alpha$ is a suitable normalization constant, chosen to make the resulting vector normalized. More simply, if no two eigenvectors have the same eigenvalue, then the wavefunction immediately after we measure the observable to have value $\lambda_k$ is $| \left. \psi \right \rangle=| \left. \phi_k \right \rangle$.
6. Evolution Postulate
The time-evolution of the wavefunction, in the absence of measurement, is given by the time-dependent Schrödinger equation: $\hat{E} \left.|\psi \right \rangle=\hat{H}\left.|\psi \right \rangle$ where $\hat{E}$ is the energy operator, given by $i \hbar \frac{\partial }{\partial t}$, and $\hat{H}$ is the Hamiltonian operator, defined analogously to classical mechanics: it is the sum of the kinetic and potential energy operators.

## Spatial Dimensions

A common Hilbert space to use is that of functions of one spatial dimension and time. This is an example of an infinite-dimensional Hilbert space (at any x-coordinate, the wavefunction could take on a completely independent value). We often speak of eigenfunctions instead of eigenvectors in such a space. In this Hilbert space, we define the inner product of two wavefunctions to be $\left \langle \phi\left | \right. \psi\right \rangle =\int_{-\infty}^{\infty}\bar{\phi}(x,t)\psi(x,t)dx$. The momentum operator in the x-direction is given by $P_x=\frac{\hbar}{i}\frac{\partial }{\partial x}$. The position operator is quite simply $X=x$. The (un-normalizable) eigenfunctions for each are easily found to be, respectively,

$\left. | \psi\right \rangle_p=e^{ipx/\hbar} \;\;\;\;\;\;\;\; \left. | \psi\right \rangle_{x_0}=\delta(x-x_0)$
The classical kinetic energy is given by $E_k=\frac{1}{2}mv^2=\frac{p^2}{2m}$. The potential energy is given simply by $E_p=V(x,t)$, that is, merely a specification of the potential energy as a function of position and possibly time. Thus, the time-dependent Schrödinger equation can be written as

$i \hbar \frac{\partial }{\partial t} \left.|\psi \right \rangle=\left ( \frac{-\hbar ^2}{2m} \frac{\partial^2 }{\partial x^2}+V(x,t) \right)\left.|\psi \right \rangle$

If the wavefunction is an eigenfunction of energy, with eigenvalue E, then its energy does not change with time and we can write the time-independent Schrödinger equation:

$E \left.|\psi \right \rangle=\left ( \frac{-\hbar ^2}{2m} \frac{\partial^2 }{\partial x^2}+V(x) \right)\left.|\psi \right \rangle$

That is, $\psi$ is an eigenfunction of the Hamiltonian (here the potential must itself be time-independent). We can often then solve this to find not only the wavefunction solutions, but the energy solutions: often such an equation will only be soluble for a discrete set of possible energies. The conditions of normalizability and normalization, as well as boundary conditions, contribute toward determining the energies and solutions.
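To make the "discrete set of possible energies" concrete, here is a sketch (my own example, not from the post) for the simplest case: the infinite square well on $(0,1)$, with $V=0$ inside and $\hbar=m=1$. We integrate the time-independent equation from $\psi(0)=0$ and bisect on $E$ until $\psi(1)=0$; the exact levels are $E_n=n^2\pi^2/2$.

```python
import math

# Shooting method for psi'' = -2 E psi with psi(0) = 0 (hbar = m = 1,
# well of width 1). Returns psi(1); allowed energies are its zeros.
def psi_at_1(E, steps=2000):
    h = 1.0 / steps
    psi, dpsi = 0.0, 1.0              # initial slope only sets the scale
    for _ in range(steps):
        # one midpoint (RK2) step
        psi_m = psi + 0.5 * h * dpsi
        dpsi_m = dpsi + 0.5 * h * (-2 * E * psi)
        psi += h * dpsi_m
        dpsi += h * (-2 * E * psi_m)
    return psi

def ground_state_energy(lo=1.0, hi=8.0):
    for _ in range(40):               # bisect on the sign change of psi(1)
        mid = 0.5 * (lo + hi)
        if psi_at_1(lo) * psi_at_1(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Exact ground-state energy: pi^2 / 2
assert abs(ground_state_energy() - math.pi ** 2 / 2) < 1e-3
```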
The extension to multiple dimensions follows analogously.

## Spin

The Hilbert space used to describe the spin state of an electron (or other spin-1/2 particle) is typically that of two-component column vectors. That is, a ket will be of the form

$\left. |\psi \right \rangle= \begin{pmatrix} a\\ b \end{pmatrix}$

and the corresponding bra will be

$\left \langle \psi | \right.= \begin{pmatrix} \bar{a} & \bar{b} \end{pmatrix}$

The condition for normalization is that $|a|^2+|b|^2=1$. A similar description can be used for the polarization of photons. The operators for spin in the x, y and z directions are, respectively:

$S_x=\frac{\hbar}{2}\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \;\;\;\; S_y=\frac{\hbar}{2}\begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix} \;\;\;\; S_z=\frac{\hbar}{2}\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}$

All of these have eigenvalues $+\frac{\hbar}{2}$ and $-\frac{\hbar}{2}$, with corresponding eigenvectors:

$\left. |+x \right \rangle=\left. |+ \right \rangle=\frac{1}{\sqrt{2}}\begin{pmatrix} 1\\ 1 \end{pmatrix},\; \; \left. |-x \right \rangle=\left. |- \right \rangle=\frac{1}{\sqrt{2}}\begin{pmatrix} 1\\ -1 \end{pmatrix}$

$\left. |+y \right \rangle=\left. |\rightarrow \right \rangle=\frac{1}{\sqrt{2}}\begin{pmatrix} -i\\ 1 \end{pmatrix},\; \; \left. |-y \right \rangle=\left. |\leftarrow \right \rangle=\frac{1}{\sqrt{2}}\begin{pmatrix} 1\\ -i \end{pmatrix}$

$\left. |+z \right \rangle=\left. |\uparrow \right \rangle=\begin{pmatrix} 1\\ 0 \end{pmatrix},\; \; \left. |-z \right \rangle=\left. |\downarrow \right \rangle=\begin{pmatrix} 0\\ 1 \end{pmatrix}$
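These eigenvalue claims are easy to verify directly. The sketch below (with $\hbar = 1$, so the eigenvalues are $\pm 1/2$) applies each matrix to its claimed eigenvectors:

```python
import math

# 2x2 matrices and 2-component kets as plain Python lists; hbar = 1
def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def close(u, v, tol=1e-12):
    return all(abs(a - b) < tol for a, b in zip(u, v))

s = 1 / math.sqrt(2)
Sx = [[0, 0.5], [0.5, 0]]
Sy = [[0, -0.5j], [0.5j, 0]]
Sz = [[0.5, 0], [0, -0.5]]

# (matrix, claimed eigenvector, claimed eigenvalue)
checks = [
    (Sx, [s, s], +0.5), (Sx, [s, -s], -0.5),
    (Sy, [-1j * s, s], +0.5), (Sy, [s, -1j * s], -0.5),
    (Sz, [1, 0], +0.5), (Sz, [0, 1], -0.5),
]
for M, v, lam in checks:
    assert close(apply(M, v), [lam * c for c in v])   # S v = lambda v
```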

## Multiple Particles

In the case of more than one particle, we can construct a total wavefunction by composing those of each particle. For instance, if we have two particles, the first with spin up and the second with spin down, we can write that in a variety of ways. For instance:

$\left. |\uparrow \right \rangle_1 \otimes \left. |\downarrow \right \rangle_2=\left. |\uparrow \right \rangle_1\left. |\downarrow \right \rangle_2=\left. |\uparrow \downarrow \right \rangle$

Clearly this case can be described in a way that treats each particle separately: the first particle is in one state and the second particle is in another state. However, sometimes the total wavefunction cannot be described in such a way. For instance:

$\left. |\psi \right \rangle=\frac{1}{\sqrt{2}}\left ( \left. |\uparrow \downarrow \right \rangle +\left. | \downarrow \uparrow \right \rangle \right )$

In this case, if we measure the first particle to have spin up, the wavefunction collapses to the state $\left. |\uparrow \downarrow \right \rangle$, so the second particle is then certain to have spin down. This is an example of entanglement: a situation in which the states of two objects cannot be described independently.
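The measurement behavior can be sketched in code (a toy illustration of the postulates above, with my own labels `"u"`/`"d"` for up and down):

```python
import random

# Two-particle spin state as amplitudes on basis labels; here the
# entangled state (|ud> + |du>)/sqrt(2) from above.
state = {"ud": 1 / 2 ** 0.5, "du": 1 / 2 ** 0.5}

def measure_first(state, rng):
    # Born rule for particle 1, then collapse onto the matching terms
    p_up = sum(abs(a) ** 2 for label, a in state.items() if label[0] == "u")
    outcome = "u" if rng.random() < p_up else "d"
    kept = {l: a for l, a in state.items() if l[0] == outcome}
    norm = sum(abs(a) ** 2 for a in kept.values()) ** 0.5
    return outcome, {l: a / norm for l, a in kept.items()}

outcome, collapsed = measure_first(state, random.Random(0))
# Whichever way the measurement goes, particle 2 is now fully determined
assert list(collapsed) == (["ud"] if outcome == "u" else ["du"])
```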