Solving Master Equations

A master equation

(22)\[\frac{d P_m}{dt} = \sum_n R_{mn} P_n - \sum_n R_{nm} P_m\]

can be rewritten as

(23)\[\frac{d P_m}{dt} + \sum_n A_{mn}P_n = 0 .\]

To determine the elements of this new matrix \(\mathbf A\), we need to see how the two sums on the RHS of the master equation combine. The matrix form of the master equation (22) is

\[\begin{split}\partial_t \begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_N \end{pmatrix} = \begin{pmatrix} 0 & R_{12} & \cdots & R_{1N} \\ R_{21} & 0 & \cdots & R_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ R_{N1} & R_{N2} & \cdots & 0 \end{pmatrix} \begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_N \end{pmatrix} - \begin{pmatrix} \sum_n R_{n1} & 0 & \cdots & 0 \\ 0 & \sum_n R_{n2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sum_n R_{nN} \end{pmatrix} \begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_N \end{pmatrix}\end{split}\]

Writing the diagonal entries as \(A_{mm} = \sum_n R_{nm}\), we can combine the two matrices on the RHS,

\[\begin{split}\partial_t \begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_N \end{pmatrix} + \begin{pmatrix} A_{11} & -R_{12} & \cdots & -R_{1N} \\ -R_{21} & A_{22} & \cdots & -R_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ -R_{N1} & -R_{N2} & \cdots & A_{NN} \end{pmatrix} \begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_N \end{pmatrix} = 0 .\end{split}\]

The \(\mathbf A\) matrix is then defined as

\[\begin{split}A_{mn} = \begin{cases} -R_{mn} &\quad n \neq m \\ \sum_{n'} R_{n'm} & \quad n=m \end{cases}\end{split}\]

The abstract form of the master equation (23) is

(24)\[\partial_t \mathbf P + \mathbf A \mathbf P = 0 .\]

To solve it, we need to diagonalize \(\mathbf A\) so that the solution can be written down as

\[\mathbf P_{\text{diag}}(t) = e^{- \mathbf A_{\text{diag}} t} \mathbf P_{\text{diag}}(0) .\]

In general, the diagonalization is achieved by using an invertible matrix \(\mathbf S\),

\[\partial_t \left( \mathbf S^{-1} \mathbf P \right) + \left( \mathbf S^{-1}\mathbf A\mathbf S \right) \left( \mathbf S^{-1} \mathbf P \right) = 0 ,\]

in which \(\mathbf A_{\text{diag}} = \mathbf S^{-1} \mathbf A \mathbf S\) and \(\mathbf P_{\text{diag}} = \mathbf S^{-1} \mathbf P\).
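As a concrete illustration of this procedure, here is a minimal NumPy sketch that builds \(\mathbf A\) from a hypothetical rate matrix \(\mathbf R\) (the rates below are made up), diagonalizes it, and propagates an initial distribution; it is a sanity check rather than a production solver.

    import numpy as np

    # Hypothetical 3-state rate matrix: R[m, n] is the rate from state n to m.
    # All numbers are made up for illustration.
    R = np.array([[0.0, 1.0, 0.5],
                  [2.0, 0.0, 1.5],
                  [0.5, 0.5, 0.0]])

    # A: flip the sign of the off-diagonal elements, put the column sums of R
    # on the diagonal (the total rate of leaving each state).
    A = -R
    np.fill_diagonal(A, R.sum(axis=0))

    # Diagonalize A = S diag(w) S^{-1} and propagate P(t) = S e^{-w t} S^{-1} P(0).
    w, S = np.linalg.eig(A)
    P0 = np.array([1.0, 0.0, 0.0])          # start fully in state 1
    t = 2.0
    P_t = (S @ np.diag(np.exp(-w * t)) @ np.linalg.inv(S) @ P0).real

    print(P_t, P_t.sum())                   # total probability stays 1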

Food for Thought

Is there a mechanism that ensures \(\mathbf A\) is diagonalizable, i.e., that an invertible \(\mathbf S\) exists? If \(\mathbf A\) is defective, none of this can be done. Do all physical systems have a diagonalizable \(\mathbf A\)?

Quantum Mechanics

The master equation (24) is similar to the Schrödinger equation of quantum mechanics,

\[i\hbar \frac{\partial }{\partial t} \ket{\psi} = \hat H \ket{\psi}.\]

It is also quite similar to the Liouville equation,

\[i\partial_t \rho^N = \hat L^N \rho^N .\]

Solving Master Equations: Fourier Transform

The Fourier transform is a fast and efficient method of diagonalizing the \(\mathbf A\) matrix.

We consider a coarse-grained system with translational symmetry. The elements of the \(\mathbf A\) matrix then depend only on \(l := m-n\), i.e. \(A_{mn} = A_{m-n}\), and the master equation reads

\[\partial_t P_m + \sum_n A_{m-n} P_n = 0.\]

For a translationally symmetric system, a discrete Fourier transform is applied to find the normal modes. Define the kth mode as

(25)\[P^k = \sum_m P_m e^{ikm} .\]

Multiplying both sides of the master equation by \(e^{ikm}\) and summing over \(m\), we get

\[\sum_m \partial_t P_m e^{ikm} + \sum_m \sum_n e^{ikm}A_{m-n} P_n = 0.\]

With the kth mode defined in (25), the master equation can be written as

\[\partial_t P^k + \sum_n\sum_m e^{ik(m-n)}A_{m-n} e^{ikn}P_n = 0,\]

which “accidentally” diagonalizes the matrix \(\mathbf A\). Define the kth mode of \(\mathbf A\) as \(A^k = \sum_{l} e^{ikl}A_{l}\), where \(l = m-n\). The master equation

\[\partial_t P^k + \sum_{l}e^{ikl}A_{l} \sum_n e^{ikn}P_n = 0\]

is reduced to

\[\partial_t P^k + A^k P^k = 0 \qquad\text{(no summation over } k \text{!)}\]

Note

Note that summation over \(n\) and \(m\) is equivalent to summation over \(n\) and \(l = m-n\).

Finally we have the solution for the normal modes,

\[P^k(t) = P^k(0) e^{-A^k t} .\]

To find out the final solution, perform an inverse Fourier transform on the kth mode,

\[P_m(t) = \frac{1}{N} \sum_k P^k(t) e^{-ikm} .\]
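To see this at work numerically, here is a small NumPy sketch assuming a ring of \(N\) sites with periodic boundary conditions (the Born–von Karman condition discussed below) and made-up hopping rates. The FFT of the first column of the translation-invariant \(\mathbf A\) reproduces its eigenvalues \(A^k\), and the modes then evolve independently.

    import numpy as np
    from scipy.linalg import circulant

    # Ring of N sites; the hopping rates below are made up for illustration.
    N = 8
    col = np.zeros(N)
    col[1] = col[-1] = -0.7        # A_{±1} = -R_{±1} (nearest neighbours)
    col[2] = col[-2] = -0.2        # A_{±2} = -R_{±2} (next-nearest neighbours)
    col[0] = -col.sum()            # A_0 = total escape rate; columns sum to zero

    A = circulant(col)             # A[m, n] = col[(m - n) % N]

    # The DFT of the first column gives the normal-mode rates A^k.
    Ak = np.fft.fft(col).real      # real because the rates are symmetric
    print(np.allclose(np.sort(Ak), np.sort(np.linalg.eigvals(A).real)))  # True

    # Mode-by-mode evolution: P^k(t) = P^k(0) e^{-A^k t}.
    P0 = np.zeros(N); P0[0] = 1.0
    t = 1.5
    P_t = np.fft.ifft(np.fft.fft(P0) * np.exp(-Ak * t)).real
    print(P_t.sum())               # total probability stays 1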

Important

Due to the Born–von Karman boundary condition, we choose

\[e^{ikm} = e^{ik(m+N)} ,\]

which leads to the quantization

\[k=\frac{2\pi}{N} n, \qquad n \in \mathbb{Z} .\]

The discrete sum over modes becomes an integral when \(N \rightarrow \infty\) and the allowed values of \(k\) become dense. This is achieved using the replacement

\[\frac{1}{N}\sum_k \rightarrow \frac{1}{2\pi} \int dk,\]

which applies wherever the factor \(\frac{1}{N}\sum_k\) appears in the discrete solution.

Finite Chain with Nearest-Neighbor Interactions

Fig. 22 Finite chain with nearest-neighbour interactions

On a 1D finite chain, the transfer rate is

\[R_{mn} = F(\delta_{m,n-1}+ \delta_{m,n+1}),\]

which leads to the following master equation,

(26)\[\partial_t P_m = F(P_{m+1} + P_{m-1}) -2F P_m .\]

Perform a discrete Fourier transform on equation (26),

\[\partial_t P^k = F\left(\sum_m e^{ikm} P_{m+1} + \sum_m e^{ikm} P_{m-1} - 2 P^k\right) = F\left(e^{-ik} + e^{ik} - 2\right) P^k .\]

Collecting terms, we get

\[\partial_t P^k = -4F\sin^2\frac{k}{2}\, P^k .\]

The solution for the kth mode follows,

\[P^k(t) = P^k(0) e^{-4F \sin^2\frac{k}{2} t} .\]

To retrieve the solution to the original master equation, an inverse Fourier transform is applied,

\[P_m(t) = \frac{1}{N} \sum_ {k} P^k(t) e^{-i km} .\]

Applying the Born–von Karman boundary condition, we find that \(k\) is quantized,

\[k = \frac{2\pi}{N} n, \qquad n=0,1,2, \cdots, N-1 .\]
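As a cross-check, the following sketch (assuming \(N=6\), \(F=1\), and an arbitrarily chosen time) compares the Fourier-mode solution with the direct matrix exponential \(e^{-\mathbf A t}\mathbf P(0)\); the two agree to machine precision.

    import numpy as np
    from scipy.linalg import expm, circulant

    N, F, t = 6, 1.0, 0.8                     # made-up parameters

    # A for the N-site ring with nearest-neighbour hops at rate F.
    col = np.zeros(N)
    col[0], col[1], col[-1] = 2 * F, -F, -F
    A = circulant(col)

    # Direct solution: P(t) = e^{-At} P(0), with the walker starting at site 0.
    P0 = np.zeros(N); P0[0] = 1.0
    P_direct = expm(-A * t) @ P0

    # Fourier solution with quantized modes k = 2 pi n / N.
    k = 2 * np.pi * np.arange(N) / N
    Pk0 = np.exp(1j * np.outer(k, np.arange(N))) @ P0     # P^k(0) = sum_m P_m e^{ikm}
    Pk_t = Pk0 * np.exp(-4 * F * np.sin(k / 2) ** 2 * t)  # P^k(t)
    P_fourier = (np.exp(-1j * np.outer(np.arange(N), k)) @ Pk_t).real / N

    print(np.allclose(P_direct, P_fourier))               # True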

Matrix Form

The matrix form of the equations makes them easier to understand. Here we work out the 1D finite chain problem explicitly in matrix form.

First of all, recalling the master equation (26), we have

\[\partial_t P_m = F(P_{m+1} + P_{m-1}) -2F P_m .\]

For \(N = 6\) sites with periodic boundary conditions, we rewrite it in the matrix form

\[\begin{split}\partial_t \begin{pmatrix} P_1 \\ P_2 \\ P_3 \\ P_4 \\ P_5 \\ P_6 \end{pmatrix} + F \begin{pmatrix} 2 & -1 & 0 & 0 & 0 & -1 \\ -1 & 2 & -1 & 0 & 0 & 0 \\0 & -1 & 2 & -1 & 0 & 0 \\ 0 & 0 & -1 & 2 & -1 & 0 \\ 0 & 0 & 0 & -1 & 2 & -1 \\ -1 & 0 & 0 & 0 & -1 & 2 \end{pmatrix} \begin{pmatrix} P_1 \\ P_2 \\ P_3 \\ P_4 \\ P_5 \\ P_6 \end{pmatrix} = 0\end{split}\]

Matrix Form Makes a Difference

An easy method to get the matrix form is to first write down the \(\mathbf R\) matrix, whose diagonal elements are all 0. We then construct the \(\mathbf A\) matrix by flipping the sign of every element and placing the sum of each column of \(\mathbf R\) on the corresponding diagonal entry (see the sketch below). One should pay attention to the signs.

The additivity of matrices makes it possible to decompose a complicated matrix into several simpler ones.
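A minimal NumPy sketch of this recipe, reproducing the \(6\times 6\) ring matrix above:

    import numpy as np

    def a_from_r(R):
        """Flip the sign of every element of R (zero diagonal assumed),
        then place the column sums of R on the diagonal."""
        R = np.asarray(R, dtype=float)
        A = -R
        np.fill_diagonal(A, R.sum(axis=0))
        return A

    # The 6-site ring: R[m, n] = F for nearest neighbours, periodic.
    F, N = 1.0, 6
    R = np.zeros((N, N))
    for m in range(N):
        R[m, (m + 1) % N] = R[m, (m - 1) % N] = F

    print(a_from_r(R))             # reproduces the 6x6 matrix above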

To solve this equation, we diagonalize the \(6 \times 6\) matrix. Similar to the discrete Fourier transform used in the previous method, we have

\[\begin{split}\partial_t \begin{pmatrix} P^{k_1} \\ P^{k_2}\\ P^{k_3}\\ P^{k_4}\\ P^{k_5}\\ P^{k_6} \end{pmatrix} + 4F \begin{pmatrix} \sin^2\frac{k_1}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & \sin^2\frac{k_2}{2} & 0 & 0 & 0 & 0 \\ 0 & 0 & \sin^2\frac{k_3}{2} & 0 & 0 & 0 \\ 0 & 0 & 0 & \sin^2\frac{k_4}{2} & 0 & 0 \\ 0 & 0 & 0 & 0 & \sin^2\frac{k_5}{2} & 0 \\ 0 & 0 & 0 & 0 & 0 & \sin^2\frac{k_6}{2} \end{pmatrix} \begin{pmatrix} P^{k_1} \\ P^{k_2}\\ P^{k_3}\\ P^{k_4}\\ P^{k_5}\\ P^{k_6} \end{pmatrix} = 0\end{split}\]

We immediately recognize the solution,

\[P^k(t) = P^k(0) e^{-4F\sin^2(k/2) t} .\]

Eigenvalue Problem

Note that the elements of the diagonalized \(\mathbf A_{\text{diag}}\) matrix are just the eigenvalues of the \(\mathbf A\) matrix, and the columns of \(\mathbf S\) are the corresponding eigenvectors. The discrete master equation is equivalent to the eigenvalue problem of the \(\mathbf A\) matrix.

Infinite Chain with Nearest-Neighbor Interactions

For an infinite chain, we have exactly the same master equation; the difference lies in the boundary conditions. As \(N\rightarrow \infty\),

\[\frac{1}{N}\sum_k \rightarrow \frac{1}{2\pi}\int dk .\]

Validate the Relation

The way to check this replacement is to apply both operations to unity; each side should give 1,

\[\frac{1}{N}\sum_{k=1}^{N} 1 = 1 \Leftrightarrow \frac{1}{2\pi}\int_{-\pi}^{\pi} 1 dk = 1\]

For a particle initially localized at the origin, \(P_m(0) = \delta_{m,0}\), we have \(P^k(0) = 1\), and the solution is

\[\begin{split}P_m(t) &= \frac{1}{2\pi} \int _{-\pi}^{\pi} e^{-2F(1-\cos{k})t} e^{-ikm} dk \\ &= e^{-2Ft} \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{2Ft\cos{k}} e^{-ikm} dk \\ & = e^{-2Ft} \mathrm{I_m}(2Ft) .\end{split}\]

For a general initial condition, the solution is the convolution

\[P_m(t) = \sum_{m'} P_{m'}(0)\, \mathrm{I_{m-m'}}(2Ft)\, e^{-2Ft} ,\]

where \(\mathrm{I_m}(2Ft)e^{-2Ft}\) is the propagator.
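Numerically, the propagator is convenient to evaluate with the exponentially scaled Bessel function `scipy.special.ive`, which computes exactly the combination \(I_m(z)e^{-z}\) for \(z>0\) and avoids overflow. The parameters below are arbitrary.

    import numpy as np
    from scipy.special import ive   # ive(m, z) = I_m(z) * exp(-|z|)

    F, t = 1.0, 3.0                 # made-up rate and time
    m = np.arange(-20, 21)

    # Walker starting at site 0: P_m(t) = I_m(2Ft) e^{-2Ft} = ive(m, 2Ft).
    P = ive(m, 2 * F * t)

    print(P.sum())                  # ~1: probability is conserved
    print((m ** 2 * P).sum())       # mean-square displacement, ~2Ft = 6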

Modified Bessel Function

Vocabulary: The modified Bessel function is defined as

\[\mathrm{I_m} (z) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-ikm} e^{z\cos{k}} dk .\]

Since \(\mathrm{I_m}(z) = i^{-m} J_m(iz)\), i.e. it is the ordinary Bessel function evaluated at an imaginary argument, it is also called the Bessel function of imaginary argument.

2D Lattice

A 2D lattice is shown in Fig. 24.

Fig. 24 An equilateral triangle lattice. Source: Equilateral Triangle Lattice by Jim Belk @ Wikipedia

We require it to have translational symmetry in both the x and y directions. The solution is then the product of the solutions in the two directions,

\[P_{m_x, m_y}(t) = e^{-2(F_x+F_y)t} \mathrm{I_{m_x}}(2F_xt) \mathrm{I_{m_y}}(2F_y t) .\]
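A quick numerical check of this product form (with made-up rates \(F_x\), \(F_y\)): the 2D distribution built from two 1D propagators stays normalized.

    import numpy as np
    from scipy.special import ive

    Fx, Fy, t = 1.0, 0.5, 2.0                 # made-up rates and time
    mx, my = np.arange(-15, 16), np.arange(-15, 16)

    # ive already contains the exponential factors e^{-2Ft}.
    P2d = np.outer(ive(mx, 2 * Fx * t), ive(my, 2 * Fy * t))
    print(P2d.sum())                          # ~1: normalization is preserved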

Continuum Limit

For a continuum system,

\[\begin{split}\partial_t P_m &= F(P_{m-1} + P_{m+1}) - 2FP_m \\ & = F\left((P_{m+1}-P_m) -(P_m-P_{m-1})\right) \\ & = F \epsilon^2 \frac{ (P_{m+1}-P_m)/\epsilon -(P_m-P_{m-1})/\epsilon }{\epsilon}\end{split}\]

We can identify the second derivative on the right-hand side, but it vanishes in the limit \(\epsilon \rightarrow 0\) if \(F\) is held constant.

On the other hand, \(F\) should increase as the two neighbouring sites get closer. To reconcile the vanishing right-hand side, we assume that

\[F = \frac{D}{\epsilon^2} .\]

The continuum limit of our master equation becomes the diffusion equation,

\[\frac{\partial P(x,t)}{\partial t} = D\frac{\partial^2 P(x,t)}{\partial x^2}.\]
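As a sanity check on this limit, the sketch below (with \(\epsilon = 1\), so \(D = F\), and made-up parameters) compares the lattice propagator \(I_m(2Ft)e^{-2Ft}\) with the Gaussian solution of the diffusion equation; for \(Ft \gg 1\) the two coincide.

    import numpy as np
    from scipy.special import ive

    F, t = 1.0, 50.0                          # made-up rate and (large) time
    m = np.arange(-60, 61)

    lattice = ive(m, 2 * F * t)               # I_m(2Ft) e^{-2Ft}
    gauss = np.exp(-m ** 2 / (4 * F * t)) / np.sqrt(4 * np.pi * F * t)

    print(np.max(np.abs(lattice - gauss)))    # small: the lattice walk diffuses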

