Master Equation
===============

.. index:: Master Equation

.. role:: highlit

Master Equation
---------------

The **master equation** method allows us to ignore the details of the
microscopic equations while still capturing the properties of the system. The
master equation describes the probability of states :math:`P_\xi(t)` by
accounting for the gains and losses,

.. math::
   :label: eq-master-eq-component-form

   \frac{dP_\xi}{d t} = \sum_{\mu} \left( T_{\xi\mu} P_\mu - T_{\mu\xi} P_\xi \right).

.. admonition:: Food for thought
   :class: warning

   1. The equation is linear and first order; higher-order effects are ignored.
   2. It comes from nowhere. A proof and an interpretation of the transfer
      rates are required.

.. admonition:: Matrix Form
   :class: info

   The above formulation can obviously be rewritten as a matrix equation,

   .. math::
      :label: eq-master-eqn-matrix-orign

      \partial_t \mathbf P = \hat{\mathbf T} \mathbf P.

   But what is :math:`\hat{\mathbf T}`? We rewrite
   :eq:`eq-master-eqn-matrix-orign` using the indices,

   .. math::
      \begin{align}
      \partial_t P_\xi &= \sum_{\mu}\hat{T}_{\xi\mu} P_\mu \\
      &= \sum_{\mu \neq \xi}\hat{T}_{\xi\mu} P_\mu + \hat{T}_{\xi\xi} P_\xi.
      \end{align}

   Comparing the above component form with :eq:`eq-master-eq-component-form`,
   we can define :math:`\hat{\mathbf T}` as

   .. math::
      \hat{T}_{\xi\mu} =\begin{cases}
      T_{\xi\mu} & \text{if } \xi \neq \mu, \\
      - \sum_{\nu \neq \xi} T_{\nu\xi} & \text{if } \xi = \mu.
      \end{cases}

   To summarize, if we force the sum of each column (i.e., over the first
   index, with the second index fixed) to be zero by redefining the diagonal
   elements, we get a nice matrix form of the master equation.

.. index:: Chapman-Kolmogorov equation

"Derive" Master Equation
------------------------

Suppose the states of a system are denoted as :math:`\xi`, and the probability
of the states at time :math:`t` is denoted as :math:`P_\xi(t)`. We assume that
the probability of the states at time :math:`t` only depends on the
probability of the states a moment ago,
.. math::
   :label: eqn-chapman-kolmogorov-eqn

   P_\xi(t) = \sum_\mu Q_{\xi\mu} P_\mu(t-\tau) .

This is the :highlit:`Chapman-Kolmogorov equation`. The equation describes a
random-walk process, a.k.a. a Markov process.

.. admonition:: Markov Process and State Machine
   :class: note

   The assumption that the current state doesn't depend on the full history
   indicates that the system follows a fixed state diagram. In other words, we
   are using a state machine to approximate a statistical system, except that
   our physical system has an enormous number of possible states.

   It may sound weird that we only need the most recent history. However, a
   Markov process allows the system to reach the desired equilibrium quickly.

The form of equation :eq:`eqn-chapman-kolmogorov-eqn` reminds us of the time
derivative,

.. math::
   \partial_t P_\xi(t) = \lim_{\tau\rightarrow 0} \frac{P_\xi(t) - P_\xi(t-\tau)}{\tau} .

Notice that

.. math::
   :label: eqn-chapman-kolmogorov-eqn-normalization-relation

   \sum_\mu Q_{\mu\xi} = 1.

.. admonition:: Why the normalization?
   :class: note

   The total probability of all the states at time :math:`t` must be 1,
   :math:`\sum_\xi P_\xi(t) = 1`. Summing :eq:`eqn-chapman-kolmogorov-eqn`
   over :math:`\xi` then requires :math:`\sum_\xi Q_{\xi\mu} = 1` for every
   :math:`\mu`, which is the relation above with the indices relabeled.

Subtracting the term :math:`P_\xi(t-\tau)` from the Chapman-Kolmogorov
equation, we get

.. math::
   :label: eqn-chapman-kolmogorov-eqn-sub-norm-rel

   P_\xi(t) - P_\xi(t-\tau) = \sum_\mu Q_{\xi\mu}P_\mu(t-\tau) - \sum_\mu Q_{\mu\xi} P_\xi(t-\tau) ,

in which we have applied a variation of equation
:eq:`eqn-chapman-kolmogorov-eqn-normalization-relation`, i.e.,

.. math::
   P_\xi(t-\tau) = \sum_\mu Q_{\mu\xi} P_\xi(t-\tau) .

We divide equation :eq:`eqn-chapman-kolmogorov-eqn-sub-norm-rel` by
:math:`\tau` on both sides and take the limit :math:`\tau\rightarrow 0`,

.. math::
   \lim_{\tau\rightarrow 0} \frac{P_\xi(t) - P_\xi(t-\tau)}{\tau} = \lim_{\tau\rightarrow 0} \frac{ \sum_\mu Q_{\xi\mu}P_\mu(t-\tau) - \sum_\mu Q_{\mu\xi} P_\xi(t-\tau) }{\tau} .

At first glance, the right-hand side of the above equation goes to infinity.
However, **we do not expect the time derivative of a probability distribution
to be infinite.** That being said, there should be a specific relation between
the two terms in the numerator. If we assume that

.. math::
   Q_{\mu\xi} = R_{\mu\xi}\tau + O(\tau^n)

with :math:`n > 1` for :math:`\mu \neq \xi`, the infinity is removed. This is
a self-consistency condition, so that our initial assumption that the system
doesn't depend on the full history holds even for our infinitesimal relations.

.. admonition:: It is weird.
   :class: warning

   By enforcing a system to obey the :highlit:`Chapman-Kolmogorov equation`,
   we admit that the system loses information older than a short time interval
   :math:`\tau`. Now we take the limit :math:`\tau\rightarrow 0`, which means
   the system has no memory of the past at all! How is this possible?

We get the following differential form,

.. math::
   \partial_t P_\xi(t) = \sum_\mu \left( R_{\xi\mu}P_\mu(t) - R_{\mu\xi}P_\xi(t) \right) .

.. admonition:: Derivation of Master Equation "Rigorously"
   :class: note

   The derivation of the master equation can be made more rigorous. [1]_ This
   note references Reichl's chapter 6 B and Irwin Oppenheim and Kurt E.
   Shuler's paper. [2]_

   First of all we need the conditional probability defined by

   .. math::
      P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_2) = P_2(y_1,t_1;y_2,t_2) ,

   which means that the probability that variable Y has value :math:`y_1` at
   time :math:`t_1` and value :math:`y_2` at time :math:`t_2` is given by the
   probability that Y has value :math:`y_1` at time :math:`t_1` times the
   probability that it has value :math:`y_2` at time :math:`t_2` given that it
   had value :math:`y_1` at time :math:`t_1`.

   Assuming that :highlit:`the probability at` :math:`t_n` :highlit:`only depends on that at` :math:`t_{n-1}`, we have

   .. math::
      P_{n-1|1}(y_1,t_1;\cdots;y_{n-1},t_{n-1}|y_n,t_n) = P_{1|1}(y_{n-1},t_{n-1}|y_n,t_n) ,

   if the indices are arranged so that :math:`t_1 \leq t_2 \leq \cdots \leq t_n`.
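To make the component form concrete, the following minimal sketch integrates
:math:`\partial_t \mathbf P = \hat{\mathbf T} \mathbf P` for a three-state
system with made-up illustrative rates (the numbers are assumptions, not from
any physical model). The diagonal of :math:`\hat{\mathbf T}` is redefined as
minus the column sums, exactly as in the matrix-form box above, which is what
guarantees that total probability is conserved under the evolution.

.. code-block:: python

   # Minimal sketch: integrate the master equation dP/dt = T_hat @ P
   # for a three-state system. The off-diagonal rates are illustrative only.

   N = 3

   # Off-diagonal transfer rates T_hat[xi][mu]: rate of mu -> xi transitions.
   T_hat = [
       [0.0, 0.5, 0.2],
       [0.3, 0.0, 0.4],
       [0.1, 0.6, 0.0],
   ]

   # Redefine the diagonal so that every column sums to zero:
   # T_hat[xi][xi] = -sum_{mu != xi} T_hat[mu][xi].
   for xi in range(N):
       T_hat[xi][xi] = -sum(T_hat[mu][xi] for mu in range(N) if mu != xi)

   def step(P, dt):
       """One explicit Euler step of dP/dt = T_hat @ P."""
       return [
           P[xi] + dt * sum(T_hat[xi][mu] * P[mu] for mu in range(N))
           for xi in range(N)
       ]

   P = [1.0, 0.0, 0.0]    # start fully in state 0
   dt = 0.01
   for _ in range(5000):  # evolve to t = 50, long enough to relax
       P = step(P, dt)

   print(sum(P))  # total probability stays ~1.0 (zero column sums)
   print(P)       # the stationary distribution of these rates

Note that probability conservation holds at every Euler step, not just in the
continuum limit: summing the update over :math:`\xi` picks up the column sums
of :math:`\hat{\mathbf T}`, which vanish by construction.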