# Master Equation¶

## Master Equation¶

The master equation method allows us to ignore the details of the microscopic equations while still capturing the properties of the system. The master equation describes the probability of states $$P_\xi(t)$$ by tracking the gains and losses,

$\frac{dP_\xi}{d t} = \sum_{\mu} \left( T_{\xi\mu} P_\mu - T_{\mu\xi} P_\xi \right).$

What are the problems with this master equation?

1. It’s linear and first order. Higher-order effects are ignored.

2. It comes from nowhere. The transition rates $$T_{\xi\mu}$$ still have to be derived and interpreted.

## “Derive” Master Equation¶

Suppose the states of a system are denoted $$\xi$$, and the probabilities of the states at time $$t$$ are denoted $$P_\xi(t)$$. We assume that the probabilities at time $$t$$ depend only on the probabilities a moment earlier,

(15)$P_\xi(t) = \sum_\mu Q_{\xi\mu} P_\mu(t-\tau) .$

This is the Chapman-Kolmogorov equation. The equation describes a random-walk process, a.k.a. a Markov process.

Markov Process and State Machine

The assumption that the current state doesn’t depend on the full history indicates that the system follows a fixed state diagram. In other words, we are using a state machine to approximate a statistical system, except that our physical system has an enormous number of possible states.

It may sound weird that we only need the most recent history. However, a Markov process allows the system to reach a desired equilibrium quickly.

The form of equation (15) reminds us of the time derivative,

$\partial_t P_\xi(t) = \lim_{\tau\rightarrow 0} \frac{P_\xi(t) - P_\xi(t-\tau)}{\tau} .$

Notice that

(16)$\sum_\mu Q_{\mu\xi} = 1.$

Why the normalization?

It is required that the probabilities of all the states at time $$t$$ sum to 1, $$\sum_\xi P_\xi(t) = 1$$. Preserving this normalization under equation (15) for any distribution requires equation (16).
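As a quick numerical illustration (not from the text, with a made-up 3-state matrix `Q`), a transition matrix whose columns sum to 1 preserves the total probability under a discrete update like equation (15):

```python
import numpy as np

# Hypothetical 3-state transition matrix Q[xi, mu]: probability of
# jumping from state mu to state xi during one time step tau.
# Each column sums to 1, i.e. the normalization sum_xi Q[xi, mu] = 1.
Q = np.array([
    [0.8, 0.1, 0.3],
    [0.1, 0.7, 0.2],
    [0.1, 0.2, 0.5],
])
assert np.allclose(Q.sum(axis=0), 1.0)

P = np.array([0.5, 0.3, 0.2])   # P_mu(t - tau)
P_next = Q @ P                  # equation (15): P_xi(t) = sum_mu Q[xi, mu] P_mu(t - tau)
print(P_next.sum())             # total probability is still 1
```

Summing the update over $$\xi$$ pulls out the column sums of $$Q$$, which is exactly why the normalization in equation (16) is needed.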

Subtracting the term $$P_\xi(t-\tau)$$ from the Chapman-Kolmogorov equation, we get

(17)$P_\xi(t) - P_\xi(t-\tau) = \sum_\mu Q_{\xi\mu}P_\mu(t-\tau) - \sum_\mu Q_{\mu\xi} P_\xi(t-\tau) ,$

in which we have applied a variation of equation (16), i.e.,

$P_\xi(t-\tau) = \sum_\mu Q_{\mu\xi} P_\xi(t-\tau) .$

We divide equation (17) by $$\tau$$ on both sides and take the limit $$\tau\rightarrow 0$$,

$\lim_{\tau\rightarrow 0} \frac{P_\xi(t) - P_\xi(t-\tau)}{\tau} = \lim_{\tau\rightarrow 0} \frac{ \sum_\mu Q_{\xi\mu}P_\mu(t-\tau) - \sum_\mu Q_{\mu\xi} P_\xi(t-\tau) }{\tau}$

At first glance, the right-hand side of the above equation diverges. However, we do not expect the time derivative of a probability distribution to be infinite. That being said, there should be a specific relation between the two terms in the numerator. If we assume that

$Q_{\xi\mu} = R_{\xi\mu}\tau + O(\tau^n)$

with $$n > 1$$, the divergence is removed. (The diagonal terms $$\mu = \xi$$ cancel between the two sums in equation (17), so this assumption is only needed for $$\xi \neq \mu$$.) This is a self-consistency condition so that our initial assumption that the system doesn’t depend on the full history holds even for the infinitesimal relations.

It is weird.

By enforcing a system to obey Chapman-Kolmogorov equation, we admit that the system loses information before a short time interval $$\tau$$. Now we take the limit $$\tau\rightarrow 0$$, which means the system has no memory of the past at all! How is this possible?

We get the following differential form

$\partial_t P_\xi(t) = \sum_\mu \left( R_{\xi\mu}P_\mu(t) - R_{\mu\xi}P_\xi(t) \right) .$
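This differential form can be integrated directly; a minimal sketch, assuming a made-up rate matrix `R` for a 3-state system and a simple forward-Euler step:

```python
import numpy as np

# Hypothetical rate matrix R[xi, mu]: transition rate from state mu to state xi.
# The diagonal entries are set to zero; they cancel in the master equation anyway.
R = np.array([
    [0.0, 1.0, 0.5],
    [2.0, 0.0, 1.5],
    [1.0, 0.5, 0.0],
])

def master_rhs(P):
    """Right-hand side: sum_mu ( R[xi, mu] P[mu] - R[mu, xi] P[xi] )."""
    gain = R @ P                  # flow into each state xi
    loss = R.sum(axis=0) * P      # flow out of each state xi
    return gain - loss

P = np.array([1.0, 0.0, 0.0])     # start entirely in the first state
dt = 1e-3
for _ in range(20000):            # integrate to t = 20
    P = P + dt * master_rhs(P)

print(P, P.sum())                 # a stationary distribution; total probability stays 1
```

Because the gain and loss terms balance when summed over $$\xi$$, the Euler update conserves $$\sum_\xi P_\xi$$ exactly, and the distribution relaxes to the stationary point where the right-hand side vanishes.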

Derivation of Master Equation “Rigorously”

Derivation of the master equation can be more rigorous. 1 This note references Reichl’s chapter 6 B and Irwin Oppenheim and Kurt E. Shuler’s paper. 2

First of all we need the following conditional probability,

$P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_2) = P_2(y_1,t_1;y_2,t_2)$

which means the probability that variable Y has value $$y_1$$ at time $$t_1$$ and value $$y_2$$ at time $$t_2$$ is given by the probability that Y has value $$y_1$$ at time $$t_1$$ times the probability that it has value $$y_2$$ at time $$t_2$$ given that it had value $$y_1$$ at time $$t_1$$.

Assuming that the probability at $$t_n$$ only depends on that at $$t_{n-1}$$, we have

$P_{n-1|1}(y_1,t_1;\cdots;y_{n-1},t_{n-1}|y_n,t_n) = P_{1|1}(y_{n-1},t_{n-1}|y_n,t_n) ,$

if the indices are arranged so that $$t_1<t_2< \cdots <t_n$$.

This assumption indicates that the system is chaotic enough. This is called a Markov process.

Similar to the transition coefficients $$T_{\xi\mu}$$ we defined previously, this $$P_{1|1}(y_{n-1},t_{n-1}|y_n,t_n)$$ is the transition probability.

To find the time derivative of $$P_1(y_2,t_2)$$, we need to write down its time dependence,

$P_2(y_1,t_1;y_2,t_2) = P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_2)$

We integrate over $$y_1$$,

$P_1(y_2,t_2) = \int P_1(y_1,t_1)P_{1|1}(y_1,t_1|y_2,t_2)dy_1$

As we can write $$t_2=t_1+\tau$$,

$P_1(y_2,t_1+\tau) = \int P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_1+\tau) dy_1$

Next we can construct time derivatives of these quantities.

$\partial_{t_1} P_1(y_2,t_1) = \lim_{\tau\rightarrow 0} \frac{1}{\tau} \int \left( P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_1+\tau) - P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_1) \right) dy_1$

We expand the right hand side using Taylor series, the details of which can be found in Reichl’s book 1 , then we get a time derivative,

(18)$\partial_{t} P_1(y_2,t) = \int dy_1 \left( W(y_1,y_2)P_1(y_1,t) - W(y_2,y_1)P_1(y_2,t) \right) .$

Equation (18) is the master equation.

The reason that $$W(y_1,y_2)$$ is a transition rate is that it represents “the probability per unit time that the system changes from state $$y_1$$ to $$y_2$$ in the time interval $$t_1\rightarrow t_1 +\tau$$”. 1

Is master equation the same as a Markov process?

In this derivation, we have used the Markov process. However, a master equation is not equivalent to a Markov process.

Read Irwin Oppenheim and Kurt E. Shuler’s paper for more details. 2

Patterns in the Probabilities

We can find out the Chapman-Kolmogorov equation

$P_{1|1}(y_1,t_1|y_3,t_3) = \int P_{1|1}(y_1,t_1|y_2,t_2)P_{1|1}(y_2,t_2|y_3,t_3)dy_2$

by comparing the following three equations.

$P_2(y_1,t_1;y_3,t_3) = \int P_3(y_1,t_1;y_2,t_2;y_3,t_3) dy_2$
$P_3(y_1,t_1;y_2,t_2;y_3,t_3) = P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_2) P_{1|1}(y_2,t_2|y_3,t_3)$
$\frac{P_2(y_1,t_1;y_3,t_3)}{P_1(y_1,t_1)} = P_{1|1}(y_1,t_1|y_3,t_3)$
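For a discrete state space, the Chapman-Kolmogorov equation is just matrix multiplication of transition matrices: the sum over the intermediate state $$y_2$$ becomes a matrix product. A sketch with a made-up two-state stochastic matrix `Q`:

```python
import numpy as np

# Hypothetical one-step transition matrix (columns sum to 1).
Q = np.array([
    [0.9, 0.2],
    [0.1, 0.8],
])

# Chapman-Kolmogorov: the two-step transition probability is obtained by
# summing over all intermediate states, i.e. a matrix product.
Q_two_step = Q @ Q
assert np.allclose(Q_two_step.sum(axis=0), 1.0)  # still a stochastic matrix
print(Q_two_step)
```

The product of stochastic matrices is again stochastic, which is the discrete analogue of the consistency of the $$P_{1|1}$$ hierarchy above.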

## Examples of Master Equation¶

There are many applications of the master equation

$\partial_t P_\xi(t) = \sum_\mu \left( R_{\xi\mu}P_\mu(t) - R_{\mu\xi}P_\xi(t) \right) .$

### Two States Degenerate System¶

The master equations for a two-state degenerate system are

$\partial_t P_1 = R (P_2 - P_1)$

and

$\partial_t P_2 = R (P_1 - P_2) .$

To solve the equations, we choose a set of “canonical coordinates”,

$P_+ = P_1+P_2$

and

$P_- = P_1 - P_2 .$

By adding the two original master equations, we have

$\partial_t P_+ = 0$

and

$\partial_t P_- = -2R P_- .$

Obviously, the solutions to these equations are

$P_+(t) = P_+(0), \qquad P_-(t) = P_-(0)e^{-2Rt} .$

This result shows that whatever state the system was in initially, it reaches equilibrium, since the term $$e^{-2R t}$$ describes a decay. This is a relaxation process.
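The relaxation can be checked numerically; a sketch comparing direct integration of the two master equations with the analytic solution, assuming an arbitrary rate `R = 1.0` and arbitrary initial probabilities:

```python
import numpy as np

R = 1.0                        # assumed transition rate
P1, P2 = 0.9, 0.1              # arbitrary initial probabilities
P_plus0, P_minus0 = P1 + P2, P1 - P2

dt, steps = 1e-4, 50000        # integrate to t = 5
for _ in range(steps):
    dP1 = R * (P2 - P1) * dt
    dP2 = R * (P1 - P2) * dt
    P1, P2 = P1 + dP1, P2 + dP2

t = dt * steps
# Analytic prediction: P_+ is conserved, P_- decays as exp(-2 R t).
assert abs((P1 + P2) - P_plus0) < 1e-10
assert abs((P1 - P2) - P_minus0 * np.exp(-2 * R * t)) < 1e-3
print(P1, P2)                  # both close to 0.5: the degenerate equilibrium
```

The “canonical coordinates” $$P_\pm$$ decouple the two equations, which is why the conserved sum and the decaying difference fall out so cleanly.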

Von Neumann equation

In QM, the solution of the von Neumann equation is

$\hat \rho(t) = e^{-i \hat H t/\hbar} \hat\rho(0) e^{i \hat H t/\hbar},$

which is very similar to the solution to the stat mech Liouville equation,

$P(t) = e^{-A t} P(0),$

where A is a matrix

$A_{\xi\mu} = -R_{\xi\mu} \; (\xi \neq \mu), \qquad A_{\xi\xi} = \sum_{\mu\neq\xi} R_{\mu\xi} .$

The difference here is the $$i$$ in the exponential. With $$i$$, we get rotation or oscillation, while without $$i$$, the process is a decay process.

### Non Degenerate Two State System¶

In a non-degenerate two-state system, the transfer matrix is

$\begin{split}\mathbf A = \begin{pmatrix}A_{11} & A_{12} \\ A_{21} & A_{22}\end{pmatrix}\end{split}$

The master equation is

$\begin{split}\partial_t \begin{pmatrix}P_1 \\ P_2 \end{pmatrix} = \begin{pmatrix}A_{11} & A_{12}\\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix}P_1 \\ P_2 \end{pmatrix}\end{split}$

The non-degenerate system shows exponential relaxation behavior similar to the degenerate system. However, the equilibrium point, or fixed point, is different.

$\partial_t P_1 = R_{12} P_2 - R_{21} P_1$

shows that at equilibrium, i.e., $$\partial_t P_1 = 0$$, the ratio of the probabilities is determined by the coefficients

$\frac{R_{12}}{R_{21}} = \frac{P_1(\infty)}{P_2(\infty)}.$
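This fixed-point ratio can be verified numerically; a sketch with assumed rates `R12 = 2.0` and `R21 = 1.0`:

```python
# Assumed rates for illustration: R12 is the gain rate 2 -> 1,
# R21 is the loss rate 1 -> 2.
R12, R21 = 2.0, 1.0

P1, P2 = 1.0, 0.0
dt = 1e-3
for _ in range(20000):              # integrate to t = 20
    dP1 = (R12 * P2 - R21 * P1) * dt
    P1, P2 = P1 + dP1, P2 - dP1     # probability is exchanged; the total is conserved

# At the fixed point, P1/P2 approaches R12/R21.
print(P1 / P2, R12 / R21)
```

With these rates the system relaxes at rate $$R_{12}+R_{21}$$, so by $$t=20$$ the distribution has settled at $$P_1 = 2/3$$, $$P_2 = 1/3$$, matching the predicted ratio.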

## Footnotes¶

1. Linda E. Reichl. “A Modern Course in Statistical Physics”.

2. Irwin Oppenheim and Kurt E. Shuler. “Master Equations and Markov Processes”. Phys. Rev. 138, B1007 (1965).
