Introduction: Beyond Spectral Methods - Why We Need Field Theory¶
In Lecture 34 we studied Directed Percolation (DP), a prototypical nonequilibrium system with an absorbing-state transition. By analyzing the Master Equation's transition matrix \(\mathcal{W}\), spectral methods revealed long-time behavior: zero eigenvalues correspond to steady states (absorbing), and as control parameters cross criticality a slow mode approaches zero, with a diverging relaxation time that signals a phase transition. The core of spectral methods lies in solving the eigenvalues and eigenvectors of the transition matrix.
However, spectral methods face intrinsic limitations. For systems with multiple species, many reaction channels, or complex interactions, the state space grows exponentially; the transition matrix \(\mathcal{W}\) becomes prohibitively large and complicated, making diagonalization or full spectral analysis analytically and computationally impractical. More fundamentally, while spectral tools can identify slow modes at criticality, they struggle to systematically treat fluctuations, spatial correlations, and derive universal behavior such as critical exponents. The physical essence of critical phenomena lies in collective fluctuations, whereas spectral analysis is essentially linear and struggles with nonlinear, multiscale interactions.
We therefore seek a stronger, more extensible framework that starts from microscopic stochastic rules and systematically yields macroscopic, universal behavior, especially near criticality.
Our goal is to construct a path-integral representation for general stochastic processes governed by Master Equations. This methodological shift is profound: it converts the problem of solving a large-scale system of coupled ordinary differential equations (the Master Equation) into a problem of summing over all possible "histories" of the system, i.e., a statistical field theory over histories.
This enables mature tools from quantum and statistical field theory: saddle-point approximation (corresponding to mean-field theory), perturbation expansion (Feynman diagrams), and the renormalization group. These tools are designed precisely for handling collective behavior and critical phenomena arising from interactions among many degrees of freedom.
This approach is not conceived out of thin air; it forms a perfect parallel with the theoretical framework established in previous lectures. In Lectures 31 and 32, a field-theory framework was built for continuous stochastic processes described by Langevin equations, namely the Martin–Siggia–Rose (MSR) or Janssen–De Dominicis (JDD) response functional. That theory describes systems where state variables change continuously and are driven by Gaussian white noise.
However, many physical, chemical, and biological systems involve fundamentally discrete "birth" and "death" or state-jump processes: molecule counts in chemical reactions, infection counts in epidemic spreading, or protein copy numbers in gene expression. The natural language for these processes is the Master Equation, not the Langevin equation. Our task here is therefore to construct a parallel field-theoretic form for these discrete, count-based stochastic processes, culminating in a path-integral action functionally equivalent to the JDD functional, thus completing another crucial piece of the field-theoretic description of stochastic processes.
1. Poisson Representation: From Discrete Particles to Continuous Fields¶
The Langevin-to-path-integral construction works for continuous variables, but the Master Equation describes integer-valued variables such as the particle number \(n\). Functional calculus (functional derivatives and integrals) does not apply directly on discrete spaces, so before constructing a path integral we must map the discrete number space to an equivalent continuous-field space. The Poisson representation achieves this.
1.1 Master Equation: The Starting Point¶
Consider a well-mixed system with state vector \(\vec{n}\), whose components \(n_i\) represent the count of species \(i\). The probability distribution \(P(\vec{n},t)\) evolves by the Master Equation $$ \partial_t P(\vec{n}, t) = \sum_{\vec{n}' \ne \vec{n}} \big[ W(\vec{n}|\vec{n}') P(\vec{n}',t) - W(\vec{n}'|\vec{n}) P(\vec{n},t) \big]. $$
This equation embodies probability conservation. The left side is the rate of change of the probability of state \(\vec{n}\). The right side consists of two parts: gain and loss.
Gain Term: \(\sum_{\vec{n}'} W(\vec{n}|\vec{n}')P(\vec{n}',t)\) represents the total probability flow from all other states \(\vec{n}'\) into state \(\vec{n}\). \(W(\vec{n}|\vec{n}')\) is the transition rate from state \(\vec{n}'\) to state \(\vec{n}\).
Loss Term: \(-\sum_{\vec{n}'} W(\vec{n}'|\vec{n})P(\vec{n},t)\) represents the total probability flow out of state \(\vec{n}\) to any other state \(\vec{n}'\).
For convenience, write this in compact matrix form: $$ \frac{d}{dt}|\vec{P}(t)\rangle = \hat{Q} |\vec{P}(t)\rangle $$
where \(|\vec{P}(t)\rangle\) is a column vector with components being the probabilities \(P(\vec{n},t)\) over all possible states \(\vec{n}\), and \(\hat{Q}\) is the transition matrix (the Q matrix introduced in Lecture 7).
To simplify the derivation, first consider a system with only one species, so the state vector \(\vec{n}\) simplifies to a scalar \(n\). The core quantity of interest is the conditional probability \(p(n,t|n_0,t_0)\), namely the probability of finding the system in state \(n\) at time \(t\), given that it was in state \(n_0\) at time \(t_0\).
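To make the gain-loss structure concrete, here is a minimal numerical sketch (not from the lecture): it builds a truncated transition matrix \(\hat{Q}\) for a simple birth-death process with assumed rates, checks that every column sums to zero (probability conservation), and evolves \(P(n,t)\) with a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch: truncated transition matrix Q for a birth-death process
# (birth rate lam, per-capita death rate mu). Rates and truncation n_max
# are illustrative choices, not from the lecture.
lam, mu, n_max = 2.0, 0.5, 60
Q = np.zeros((n_max + 1, n_max + 1))
for n in range(n_max + 1):
    if n < n_max:
        Q[n + 1, n] += lam        # gain for state n+1 from a birth in n
        Q[n, n]     -= lam        # corresponding loss from state n
    if n > 0:
        Q[n - 1, n] += mu * n     # gain for state n-1 from a death in n
        Q[n, n]     -= mu * n     # corresponding loss from state n

# Probability conservation: every column of Q sums to zero.
assert np.allclose(Q.sum(axis=0), 0.0)

# Evolve a deterministic initial condition P(n,0) = delta_{n,10}.
P0 = np.zeros(n_max + 1); P0[10] = 1.0
P = expm(Q * 3.0) @ P0            # P(n, t=3)
print("total probability:", P.sum())              # stays 1
print("mean particle number:", P @ np.arange(n_max + 1))
```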
1.2 The Challenge of Discreteness and the Poisson Transform¶
The Master Equation is essentially an infinite-dimensional system of coupled ordinary differential equations, whose state variable \(n\) is a discrete integer. This constitutes a fundamental obstacle to directly applying field-theory methods. We need a method to "bridge" this gap, mapping the discrete particle-number space onto a continuous field space.
The key technique for solving this problem is the Poisson Representation. Its core idea is to no longer directly handle individual conditional probabilities \(p(n,t|n_0,t_0)\), but instead construct a new object \(p(n,t|x,t_0)\). This new object is obtained by averaging (marginalizing) over a special initial condition distribution: assume that at the initial time \(t_0\), the particle number \(n_0\) of the system is not a fixed value, but rather follows a Poisson distribution with mean \(x\).
By taking a weighted sum of the original conditional probabilities \(p(n,t|n_0,t_0)\) with this Poisson distribution, we define the new probability function: $$ p(n, t | x, t_0) = \sum_{n_0=0}^{\infty} \frac{x^{n_0}}{n_0!} e^{-x}\; p(n, t | n_0, t_0). $$
This transformation is mathematically called the Poisson transform. It transforms a family of functions \(\{p(n, t | n_0, t_0)\}_{n_0=0}^\infty\) that depend on discrete initial values \(n_0\), through integration (summation) with the Poisson kernel function \(\frac{x^{n_0}}{n_0!} e^{-x}\), into a single function \(p(n, t | x, t_0)\) that depends on the continuous variable \(x\).
In Dirac notation, we will introduce Poisson basis states \(|n_0\rangle_x\) with weights \(x^{n_0} e^{-x}/n_0!\) and assemble them into a transformed state \(|\Psi(x,t)\rangle\); an inverse projection at \(x=0\) then recovers the original conditionals, ensuring no loss of information. Section 1.4 makes both operations precise.
1.3 Physical Meaning of the Continuous Field \(x\)¶
The field \(x\) parametrizes Poisson distributions peaked near the underlying count; it is a continuous proxy for the discrete population and supports exact inversion. This transformation is elegant for three reasons:
- Information encoding: It encodes all information about the discrete variable \(n_0\) (i.e., all values of \(p(n,t|n_0,t_0)\)) into a single function \(p(n,t|x,t_0)\) of the continuous variable \(x\). This is consistent with the idea in Lecture 4 of using the generating function \(\sum_n P_n x^n\) to encode the entire probability distribution \(\{P_n\}\).
- Physical intuition: The choice of the Poisson distribution is not arbitrary. Imagine randomly placing particles in a large volume such that the average particle count is \(x\); the probability of finding exactly \(n_0\) particles in that volume then follows a Poisson distribution. The variable \(x\) can therefore be understood as the mean of the initial particle-number distribution.
- Algebraic convenience: The Poisson distribution is chosen as the "basis" because of its very simple algebraic relationship with differential operators, which plays a crucial role in the derivations below.
Therefore, \(x\) should be understood as a "generating field"; its introduction is purely for mathematical convenience, aiming to transform a discrete random variable problem into a continuous field theory problem, thereby paving the way for using calculus and path integrals.
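The "algebraic convenience" point can be checked symbolically. The following sketch (an illustration, verifying small \(n\) only) confirms the two shift identities of the Poisson kernel \(\pi_{n}(x) = x^{n}e^{-x}/n!\) that the derivations below rely on.

```python
import sympy as sp

# Symbolic check of the Poisson-kernel shift identities:
#   d/dx pi_n = pi_{n-1} - pi_n      (differentiation shifts n down)
#   x * pi_n  = (n+1) * pi_{n+1}     (multiplication shifts n up)
x = sp.symbols('x', positive=True)
pi = lambda n: x**n * sp.exp(-x) / sp.factorial(n)

for n in range(1, 6):
    assert sp.simplify(sp.diff(pi(n), x) - (pi(n - 1) - pi(n))) == 0
    assert sp.simplify(x * pi(n) - (n + 1) * pi(n + 1)) == 0
print("Poisson-kernel shift identities verified for n = 1..5")
```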
1.4 Operator Form and Information Recovery¶
To make this transformation more formal, we can introduce the Dirac notation (bra-ket) commonly used in quantum mechanics. Define a "Poisson basis state" \(|n_0\rangle_x\) that depends on the continuous variable \(x\), whose functional form is exactly a Poisson distribution: $$ |n_0\rangle_x = \frac{x^{n_0}}{n_0!}\, e^{-x}. $$
Then, the system state after the Poisson transformation can be regarded as a "state vector" \(|\Psi(x,t)\rangle\), a linear superposition of these basis states with coefficients given by the corresponding conditional probabilities: $$ |\Psi(x,t)\rangle = \sum_{n_0=0}^{\infty} p(n, t | n_0, t_0)\, |n_0\rangle_x. $$
This \(|\Psi(x,t)\rangle\) is the core object of the theory. It is a function of the continuous variable \(x\) and completely carries all initial condition information of the system at time \(t_0\).
Of course, this transformation must be reversible; otherwise information would be lost. We must be able to recover from \(|\Psi(x,t)\rangle\) the original probability \(p(n,t|n_0,t_0)\) corresponding to any specified initial condition \(n_0\). This can be achieved through a clever operator: $$ p(n, t | n_0, t_0) = \left. (\partial_x + 1)^{n_0}\, |\Psi(x,t)\rangle \right|_{x=0}. $$
This formula is the inverse transform of the Poisson transformation. The operator \((\partial_x + 1)^{n_0}\) acts on the function \(|\Psi(x,t)\rangle\) and is then evaluated at \(x=0\); its effect is equivalent to "projecting" or "screening" out the term corresponding to the initial particle number \(n_0\) from the generating function. This ensures that the new representation does not lose any physical information, laying a solid foundation for subsequent kinetic evolution derivations.
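As a sanity check, the short symbolic sketch below (with arbitrary test coefficients) applies \((\partial_x+1)^{n_0}\) to a finite Poisson superposition and confirms that evaluation at \(x=0\) returns exactly the coefficient \(p_{n_0}\).

```python
import sympy as sp

# Sketch of the inverse Poisson transform: starting from
# Psi(x) = sum_n p_n * x**n * exp(-x) / n!, the operator (d/dx + 1)**n0
# evaluated at x = 0 should return exactly p_{n0}.
# The coefficients p_n below are arbitrary test values.
x = sp.symbols('x')
p = [sp.Rational(1, 7), sp.Rational(2, 7), sp.Rational(4, 7)]
Psi = sum(p_n * x**n * sp.exp(-x) / sp.factorial(n) for n, p_n in enumerate(p))

for n0 in range(len(p)):
    expr = Psi
    for _ in range(n0):                 # apply (d/dx + 1) exactly n0 times
        expr = sp.diff(expr, x) + expr
    recovered = expr.subs(x, 0)
    assert sp.simplify(recovered - p[n0]) == 0
print("inverse transform recovers p_0, p_1, p_2")
```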
2. Schrödinger-Like Form of the Master Equation¶
The ultimate goal of introducing the Poisson representation is to transform the Master Equation itself from a matrix equation acting on discrete probability distributions \(P(n,t)\) into a partial differential equation (PDE) acting on continuous functions \(|\Psi(x,t)\rangle\). This step is crucial for reaching the path integral, as it transforms the problem into the realm of calculus and functional analysis.
2.1 Transforming the Generator and Normal Ordering¶
The mapping yields a differential "Hamiltonian" \(\hat{Q}(x,\partial_x)\). Normal ordering (all \(\partial_x\) to the left of \(x\)) ensures the operator acts correctly on the Poisson basis states and encodes gain and loss consistently.
- Physical motivation: In particle systems, the fundamental processes are particle creation and annihilation. In the Poisson representation, these two basic operations are represented by the multiplication operator \(x\) and the differential operator \(\partial_x\), respectively (this will be formalized in the Doi-Peliti formalism in the next lecture). Normal ordering is a prescription that ensures this operator substitution faithfully reproduces the discrete particle-number changes described by the original Master Equation.
- Mathematical rule: When constructing the operator \(\hat{Q}(x, \partial_x)\), all operators related to particle creation (i.e., \(x\)) must be written to the right of all operators related to particle annihilation (i.e., \(\partial_x\)); in other words, all differential operators \(\partial_x\) are moved to the left of all \(x\) operators.
This rule is a crucial technical prerequisite for subsequent derivations to proceed smoothly.
2.2 Example: Particle Decay (\(A \xrightarrow{k} \emptyset\))¶
For the decay process, the Master Equation is \(\dot P_n = k[(n+1)P_{n+1} - n P_n]\). The generator must include a number operator (measuring \(n\)) and an annihilation operation (remove one particle). With normal ordering one finds $$ \hat{Q}(x, \partial_x) = k\,(\partial_x - 1)\, x, $$ whose terms reproduce gain \(k\partial_x x\) and loss \(-k x\) contributions.
Now, to translate this process into the operator \(\hat{Q}(x, \partial_x)\) in the Poisson representation, we establish a basic "translation dictionary":
- Annihilating one particle \((n \to n-1)\) corresponds to the operator \(\partial_x\).
- Creating one particle \((n \to n+1)\) corresponds to the operator \(x\).
- Measuring the particle number \(n\) corresponds to the operator \(x\partial_x\).
In the decay process, \(n \rightarrow n-1\), and the rate of this transition is proportional to the current particle number \(n\). The corresponding operator \(\hat{Q}\) is therefore composed of the operator for "measuring the particle number" and the operator for "annihilating one particle". Applying the normal-ordering rule, we obtain: $$ \hat{Q}(x, \partial_x) = k\,(\partial_x - 1)\, x. $$
Although this operator is concise, it completely encodes the decay process:
- The \(k\,\partial_x x\) part corresponds to the gain term \(k(n+1)P_{n+1}\).
- The \(-kx\) part corresponds to the loss term \(-knP_n\).
It can be rigorously proven that acting this operator on the Poisson basis states precisely reproduces the discrete difference structure of the Master Equation.
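A quick numerical illustration of why the Poisson basis is so natural for this process (parameters are assumed for illustration): integrating the decay Master Equation from a Poisson initial distribution keeps it exactly Poisson, with the decayed mean \(x_0 e^{-kt}\).

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

# Sketch with assumed parameters: the decay process A -> 0 maps a Poisson
# initial distribution onto another Poisson distribution with mean
# x0 * exp(-k t), which is why Poisson kernels suit this dynamics.
k, x0, t, n_max = 0.7, 8.0, 1.5, 80
Q = np.zeros((n_max + 1, n_max + 1))
for n in range(n_max + 1):
    Q[n, n] = -k * n                 # loss term  -k n P_n
    if n < n_max:
        Q[n, n + 1] = k * (n + 1)    # gain term  k (n+1) P_{n+1}

P0 = poisson.pmf(np.arange(n_max + 1), mu=x0)   # Poisson(x0) start
Pt = expm(Q * t) @ P0                            # master-equation evolution
P_theory = poisson.pmf(np.arange(n_max + 1), mu=x0 * np.exp(-k * t))
print("max deviation from Poisson(x0*exp(-kt)):", np.abs(Pt - P_theory).max())
```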
2.3 Evolution Equation: Schrödinger Analogy¶
With \(\hat{Q}\) in hand, the Master Equation becomes a PDE for \(|\Psi(x,t)\rangle\): $$ \frac{\partial}{\partial t} |\Psi(x,t)\rangle = \hat{Q}(x,\partial_x)\, |\Psi(x,t)\rangle. $$ This is formally isomorphic to a Schrödinger equation in imaginary time.
Physical meaning and analogy: This correspondence is not coincidental; it reveals deep mathematical connections between classical stochastic processes and quantum systems:
- State vector \(|\Psi(x,t)\rangle\) plays the role of the "wave function" in quantum mechanics, containing all information about the system's probability distribution.
- Operator \(\hat{Q}(x, \partial_x)\) plays the role of the "Hamiltonian", generating the system's time evolution.
- Time \(t\) corresponds to "imaginary time" \(\tau = it\) in quantum mechanics.
The significance of this analogy is revolutionary. It means that the entire mathematical arsenal developed for solving quantum systems—especially Feynman's path integral method—can now be directly "transplanted" to solve Master Equation problems.
An important distinction: non-Hermiticity
Note that the "Hamiltonian" \(\hat{Q}\) here is generally not Hermitian. In quantum mechanics, the Hermiticity of the Hamiltonian guarantees conservation of the norm \(\int |\psi|^2 dx\). In classical stochastic processes, what is conserved is instead the total probability \(\sum_n P(n,t) = 1\); the norm of the constructed "wave function" \(|\Psi(x,t)\rangle\) need not be conserved. The non-Hermiticity of \(\hat{Q}\) reflects the dissipative nature of stochastic processes and the one-way flow of probability (e.g., toward absorbing states); this is one of the essential differences between classical stochastic processes and standard quantum mechanics.
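The non-Hermiticity is easy to see explicitly. For the truncated decay generator (illustrative size and rate), the matrix is upper triangular rather than symmetric; its eigenvalues \(0, -k, -2k, \dots\) are non-positive, and the zero mode is the absorbing state \(n=0\):

```python
import numpy as np

# Sketch: the decay generator is manifestly non-Hermitian, its spectrum
# here is real and non-positive, and the zero eigenvalue corresponds to
# the absorbing state n = 0. Truncation size and rate are illustrative.
k, n_max = 1.0, 6
Q = np.diag(-k * np.arange(n_max + 1)) \
  + np.diag(k * np.arange(1, n_max + 1), 1)     # upper bidiagonal

print("Hermitian?", np.allclose(Q, Q.T))        # False: one-way probability flow
evals, evecs = np.linalg.eig(Q)
order = np.argsort(-evals.real)
print("eigenvalues:", np.round(evals[order].real, 6))   # 0, -k, -2k, ...
steady = evecs[:, order[0]].real
steady /= steady.sum()
print("steady state:", np.round(steady, 6))     # all weight on n = 0
```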
3. Derivation of the Path Integral¶
We now follow Feynman's route: discretize time, express evolution over a small step, insert Fourier identities to turn derivatives into algebraic factors, exponentiate, iterate, and then take the continuum limit to obtain a path integral over histories.
3.1 Time Discretization and Single-Step Evolution¶
Divide \([t_0, t]\) into \(N\) steps of size \(\Delta t\). For a small step \(t_0 \to t_1=t_0+\Delta t\), the backward evolution reads $$ |\Psi(x_0, t_0)\rangle \approx \big[ 1 + \Delta t\, \hat{Q}(x_0, \partial_{x_0}) \big] \, |\Psi(x_0, t_1)\rangle + \mathcal{O}(\Delta t^2). $$ This defines the infinitesimal propagator linking adjacent time slices.
3.2 Fourier Identity: Algebraizing the Operator¶
Insert the identity \(f(x_0) = \int dx_1\, \delta(x_1 - x_0) f(x_1)\), with the Fourier representation \(\delta(x) = \int \frac{dq_1}{2\pi} e^{-iq_1 x}\), to turn the derivative \(\partial_{x_0}\) into multiplication by \(iq_1\) inside the kernel: $$ \partial_{x_0} |\Psi(x_0, t_1)\rangle = \int dx_1 \int \frac{dq_1}{2\pi}\, (iq_1)\, e^{-iq_1(x_1-x_0)}\, |\Psi(x_1, t_1)\rangle. $$
Physical meaning: This operation introduces a new integration variable \(q_1\). This variable can be viewed as the "response field" or "momentum field" conjugate to the field \(x\), whose role is to "measure" the temporal changes of the field \(x\). This parallels the idea of introducing the response field \(\tilde{\phi}\) for the Langevin equation in Lecture 32.
Applying this technique to the evolution equation, the operator \(\hat{Q}(x_0, \partial_{x_0})\) becomes the function \(\hat{Q}(x_0, iq_1)\). We thus obtain: $$ |\Psi(x_0, t_0)\rangle \approx \int dx_1 \int \frac{dq_1}{(2\pi)^M} \big[ 1 + \Delta t\, \hat{Q}(x_0, iq_1) \big]\, e^{-iq_1(x_1-x_0)}\, |\Psi(x_1, t_1)\rangle. $$
Here \(M\) is the number of field components (in the single-particle example \(M=1\)). Through this transformation, the originally complex expression containing differential operators becomes an easier-to-handle form involving integrals over ordinary functions.
3.3 Exponentiation and the Single-Step Kernel¶
Notice that for infinitesimal \(\Delta t\), the term in square brackets is a first-order approximation of an exponential: \(1 + \Delta t \hat{Q}(x_0, iq_1) \approx \exp[\Delta t \hat{Q}(x_0, iq_1)]\). Substituting this back and combining the two exponential terms, we obtain the integral representation of the single-step propagator from time \(t_1\) to \(t_0\): $$ |\Psi(x_0, t_0)\rangle \approx \int dx_1 \int \frac{dq_1}{(2\pi)^M}\, \exp\!\big[ -iq_1(x_1-x_0) + \Delta t\, \hat{Q}(x_0, iq_1) \big]\, |\Psi(x_1, t_1)\rangle. $$
3.4 Iteration and the Continuum Limit¶
Now we iterate this single-step process \(N\) times, propagating the state from \(t_N=t\) all the way back to \(t_0\). Each step inserts a similar integral, ultimately yielding an expression containing \(N\) integrals: $$ |\Psi(x_0, t_0)\rangle \approx \int \prod_{i=1}^{N} \frac{dx_i\, dq_i}{(2\pi)^M}\, \exp\!\Big[ \sum_{j=1}^{N} \big( -iq_j (x_j - x_{j-1}) + \Delta t\, \hat{Q}(x_{j-1}, iq_j) \big) \Big]\, |\Psi(x_N, t_N)\rangle. $$
Physical meaning: This expression is, in the case of time discretization, a summation over all possible intermediate paths \((x_1, x_2, \dots, x_{N-1})\) and all possible histories of response fields \((q_1, q_2, \dots, q_N)\).
Next, we take the continuum limit \(N \to \infty\), \(\Delta t \to 0\), while keeping the total time \(t - t_0\) fixed. In this limit:
- The product of integral measures \(\prod_{i=1}^{N} \frac{dx_i dq_i}{(2\pi)^M}\) becomes the functional integral measure \(\int \mathcal{D}[x]\mathcal{D}[q]\).
- The sum \(\sum_{j=1}^{N} \dots \Delta t\) becomes the time integral \(\int_{t_0}^{t} d\tau \dots\).
- The difference quotient \(\frac{x_j - x_{j-1}}{\Delta t}\) becomes the time derivative \(\partial_\tau x(\tau)\).
3.5 Final Path Integral and Action¶
After the above steps, we finally obtain the path-integral representation for the probability of evolving from the initial Poisson distribution (parameterized by \(x_0\)) to the final state \(|\Psi(x_N, t_N)\rangle\): $$ |\Psi(x_0, t_0)\rangle = \int \mathcal{D}[x]\, \mathcal{D}[q]\; e^{-S'[x,q]}\, |\Psi(x_N, t_N)\rangle, $$
where the integral is over all field \(x(\tau)\) and response-field \(q(\tau)\) history paths satisfying the boundary condition \(x(t_0) = x_0\). The exponent \(S'[x,q]\) is the path-integral action: $$ S'[x,q] = \int_{t_0}^{t} d\tau \Big[\, iq(\tau)\, \partial_\tau x(\tau) - \hat{Q}\big(x(\tau), iq(\tau)\big) \Big]. $$
Physical meaning of the action: This action is the heart of the entire theory. It is a functional that assigns a complex weight to each possible system history \((x(\tau), q(\tau))\). It completely encodes the microscopic stochastic kinetic rules.
Kinetic term \(iq(\tau) \partial_\tau x(\tau)\): This term, universally present in such path integrals, is the "Berry phase" term that couples the time derivative of the field \(x\) with its conjugate field \(q\).
Hamiltonian term \(-\hat{Q}(x(\tau), iq(\tau))\): This term originates from the Master Equation's evolution operator and contains the system's specific physical processes (such as birth, death, interactions).
To help understand this complex derivation, the table below summarizes the key steps and their conceptual purposes.
| Step | Mathematical Tool | Conceptual Purpose |
|---|---|---|
| 1. Time discretization | \(e^{\Delta t \hat{Q}} \approx 1 + \Delta t \hat{Q}\) | Decompose continuous time evolution into a series of small, manageable steps. |
| 2. Fourier identity | \(\partial_x f(x) \to \int (iq) e^{\dots}\) | Convert differential operator \(\partial_x\) into an algebraic variable \(iq\), allowing it to be placed into an exponential. |
| 3. Exponentiation | \(1+\epsilon \approx e^\epsilon\) | Write single-step evolution in exponential form, forming the kernel of the propagator. |
| 4. Iteration | \(\int \prod_i dx_i dq_i\) | Sum over all intermediate states, "linking" single-step propagators into finite-time evolution. |
| 5. Continuum limit | \(\sum \Delta t \to \int d\tau\) | Sum over all possible field evolution history paths, obtaining the complete functional integral and action. |
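Steps 1, 4, and 5 can be checked numerically without any field-theory machinery: composing \(N\) single-step propagators \(1 + \Delta t\,\hat{Q}\) converges to the exact propagator \(e^{\hat{Q}T}\) as \(N \to \infty\). The sketch below uses the truncated decay generator with assumed parameters.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of steps 1 and 4 of the table: iterating the infinitesimal
# propagator (1 + dt*Q) reproduces the exact finite-time propagator
# expm(Q*T) as N grows. Generator size, rate, and T are illustrative.
k, n_max, T = 1.0, 5, 2.0
Q = np.diag(-k * np.arange(n_max + 1)) + np.diag(k * np.arange(1, n_max + 1), 1)

exact = expm(Q * T)
for N in [10, 100, 1000, 10000]:
    step = np.eye(n_max + 1) + (T / N) * Q       # single-slice propagator
    approx = np.linalg.matrix_power(step, N)     # compose N slices
    print(f"N = {N:6d}   max error = {np.abs(approx - exact).max():.2e}")
```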
4. Code Practice: Stochastic Switching in a Bistable System¶
To demonstrate the power of the path-integral framework for complex nonlinear stochastic processes, this code practice explores a classic model in nonequilibrium statistical physics—the Schlögl model. This model describes a chemical reaction system that can exhibit bistability—two distinct macroscopic stable states.
This model is exactly described by the Master Equation. Stochastic switching events between the two states have a profound counterpart in path-integral theory: "instantons", the most probable paths by which the system overcomes the barrier between them. This example therefore builds intuitive connections with more advanced field-theory concepts.
4.1 Schlögl Reaction Model¶
The model consists of two pairs of forward and reverse chemical reactions:
- Three-body autocatalysis and decay: \(A + 2X \underset{k_2}{\stackrel{k_1}{\rightleftharpoons}} 3X\)
- Generation and linear decay: \(B \underset{k_4}{\stackrel{k_3}{\rightleftharpoons}} X\)
Here, the concentrations of \(A, B\) are considered constants maintained by the external environment, while the particle count \(n\) of \(X\) is the random variable of interest. By adjusting the reaction rates, this system can be designed to have two stable fixed points and one unstable fixed point at the mean-field level.
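As a preliminary check of this statement, the following sketch locates the mean-field fixed points as roots of the cubic rate equation \(\dot n = \tfrac{k_1}{2}n^2 - \tfrac{k_2}{6}n^3 + k_3 - k_4 n\) (concentrations absorbed into the constants). The rate values here are illustrative choices tuned for bistability and are not necessarily those used in the simulation below.

```python
import numpy as np

# Sketch: mean-field fixed points of the Schlogl rate equation
#   dn/dt = k1*n**2/2 - k2*n**3/6 + k3 - k4*n
# with [A], [B] absorbed into k1, k3. These rate values are illustrative
# choices tuned for bistability, not the simulation's k_params.
k1, k2, k3, k4 = 0.03, 1e-4, 200.0, 3.5
coeffs = [-k2 / 6, k1 / 2, -k4, k3]       # cubic in n, highest power first
roots = np.roots(coeffs)
real_roots = np.sort(roots[np.isreal(roots)].real)
print("mean-field fixed points:", np.round(real_roots, 1))
# For a bistable choice one finds three positive roots:
# low and high stable states separated by an unstable one.
```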
For the Gillespie simulation, we need the propensity function of each reaction channel: $$ a_1 = k_1 [A]\, \frac{n(n-1)}{2}, \qquad a_2 = k_2\, \frac{n(n-1)(n-2)}{6}, \qquad a_3 = k_3 [B], \qquad a_4 = k_4\, n. $$ In the code below, the constant concentrations \([A]\) and \([B]\) are absorbed into the rate constants.
4.2 Implementation¶
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.animation as animation
from matplotlib.colors import LightSource
# --- Use dark theme ---
plt.style.use('dark_background')
def gillespie_schlogl_single(n0, k, t_max):
"""Single Gillespie simulation of Schlogl model"""
t, n = 0.0, float(n0) # Ensure n is a float
times, populations = [t], [n]
k1, k2, k3, k4 = k
while t < t_max:
if n < 0: n = 0 # *** Added robustness: prevent negative particle count ***
# Propensity calculations (for simplicity, constant concentrations are absorbed into rate constants)
a1 = k1 * n * (n - 1) / 2
a2 = k2 * n * (n - 1) * (n - 2) / 6
a3 = k3
a4 = k4 * n
a_total = a1 + a2 + a3 + a4
if a_total <= 1e-9:
# If total rate is extremely small, jump forward a small step to avoid infinite loop
t += 0.1
if t > t_max: break
times.append(t)
populations.append(n)
continue
dt = -np.log(np.random.rand()) / a_total
t += dt
if t > t_max: break
# Determine which reaction occurs
rand_val = np.random.rand() * a_total
if rand_val < a1:
n += 1 # A + 2X -> 3X
elif rand_val < a1 + a2:
n -= 1 # 3X -> A + 2X
elif rand_val < a1 + a2 + a3:
n += 1 # B -> X
else:
n -= 1 # X -> B
times.append(t)
populations.append(n)
return times, populations
k_params = [3e-7, 1e-4, 1e-3 * 105, 1.0]
N_TRAJECTORIES = 10000
T_MAX_SHORT = 50.0
TIME_SLICES = 100
N_BINS = 500
N0_short = 100
print("Running large-scale simulations for probability landscape...")
final_trajectories = np.zeros((N_TRAJECTORIES, TIME_SLICES))
time_points = np.linspace(0, T_MAX_SHORT, TIME_SLICES)
for i in range(N_TRAJECTORIES):
if i % 1000 == 0:
print(f"Simulating trajectory {i}/{N_TRAJECTORIES}")
t, p = gillespie_schlogl_single(N0_short, k_params, T_MAX_SHORT)
interp_p = np.interp(time_points, t, p, right=p[-1])
final_trajectories[i, :] = interp_p
print("Building probability distribution surface P(n,t)...")
prob_surface = np.zeros((N_BINS, TIME_SLICES))
n_values = np.arange(N_BINS)
for i in range(TIME_SLICES):
counts, _ = np.histogram(final_trajectories[:, i], bins=np.arange(N_BINS + 1) - 0.5)
prob_surface[:, i] = counts / N_TRAJECTORIES
# --- 3D Visualization (corrected) ---
fig2 = plt.figure(figsize=(16, 12))
ax2 = fig2.add_subplot(111, projection='3d')
# *** Fix: Adjust meshgrid input order to match prob_surface dimensions ***
# prob_surface dimensions are (N_BINS, TIME_SLICES) -> (n, t)
# We need T_mesh and N_mesh to have the same dimensions
T_mesh, N_mesh = np.meshgrid(time_points, n_values)
ls = LightSource(azdeg=20, altdeg=45)
def update_surface(frame):
ax2.clear()
# Slice to get current frame and previous data
current_T = T_mesh[:, :frame+1]
current_N = N_mesh[:, :frame+1]
current_P = prob_surface[:, :frame+1]
# Apply lighting effects only when there is sufficient data
if current_P.shape[1] > 1:
rgb = ls.shade(current_P, cmap=plt.cm.magma, vert_exag=0.1, blend_mode='soft')
surf = ax2.plot_surface(current_T, current_N, current_P,
rstride=1, cstride=1, facecolors=rgb,
linewidth=0, antialiased=True, alpha=1.0)
else: # For the first frame, use colormap without lighting
surf = ax2.plot_surface(current_T, current_N, current_P,
rstride=1, cstride=1, cmap=plt.cm.magma,
linewidth=0, antialiased=True, alpha=1.0)
ax2.set_title(f'Emergence of Bistability: $P(n,t)$ at t={time_points[frame]:.2f}', fontsize=20, pad=20)
# Swap X and Y axis labels to match the new grid
ax2.set_xlabel('Time (t)', fontsize=15, labelpad=20)
ax2.set_ylabel('Particle Number (n)', fontsize=15, labelpad=20)
ax2.set_zlabel('Probability P(n,t)', fontsize=15, labelpad=20)
ax2.set_xlim(0, T_MAX_SHORT)
ax2.set_ylim(0, N_BINS)
ax2.set_zlim(0, np.max(prob_surface) * 1.1 if np.max(prob_surface) > 0 else 0.1)
ax2.view_init(elev=25, azim=-150 + frame * 0.5)
ax2.xaxis.set_pane_color((0.0, 0.0, 0.0, 1.0))
ax2.yaxis.set_pane_color((0.0, 0.0, 0.0, 1.0))
ax2.zaxis.set_pane_color((0.1, 0.1, 0.1, 1.0))
ani = animation.FuncAnimation(fig2, update_surface, frames=TIME_SLICES, interval=80)
print("Saving animation... (This may take a moment)")
ani.save('schlogl_bistability_evolution.gif', writer='pillow', fps=15)
```
Initial state (t=0): The probability distribution is a sharp "pulse" at \(n=100\) (manifested as a sharp "ridge" at the origin of the time axis in the figure), representing that all simulated trajectories start from the same deterministic initial condition.
Evolution and bifurcation: As time progresses, this single probability "mountain" rapidly broadens and shifts. Most strikingly, after a short period, this unimodal distribution splits into two independent peaks. This is exactly the manifestation of bistability at the probability distribution level.
Emergence of bistability: One peak gradually forms and stabilizes in the low particle number region (\(n \approx 85\)), while another forms in the high particle number region (\(n \approx 380\)). Trajectories launched from the same initial point thus end up distributed between these two states with definite probabilities.
Stationary distribution: As the animation progresses, the heights and shapes of the two peaks gradually stabilize, forming the system's final stationary probability distribution—a typical bimodal distribution. The peak of the lower stable state is higher, indicating that this state is more stable (its "potential well" is deeper), and the system has a higher probability of remaining in this state after a long time.
The Master Equation describes how probability "flows" and redistributes in state space. The path integral is precisely the fundamental theoretical tool for computing the propagators of this "flow" and ultimately depicting the entire evolutionary landscape.
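Because every Schlögl reaction changes \(n\) by \(\pm 1\), the stationary distribution can also be computed exactly from the one-step birth-death balance \(P(n+1)/P(n) = b(n)/d(n+1)\), giving an independent check on the simulated landscape. The sketch below uses the illustrative bistable rate constants from the mean-field sketch above (an assumption, not the simulation's `k_params`).

```python
import numpy as np

# Sketch: the Schlogl model is a one-step birth-death chain, so its
# stationary distribution follows the detailed-balance recursion
#   P(n+1)/P(n) = b(n) / d(n+1),
# with birth rate b(n) = a1(n) + a3 and death rate d(n) = a2(n) + a4(n).
# Rate constants are the illustrative bistable set used above.
k1, k2, k3, k4 = 0.03, 1e-4, 200.0, 3.5
N = 1000
n = np.arange(N)
b = k1 * n * (n - 1) / 2 + k3                  # propensity for n -> n+1
d = k2 * n * (n - 1) * (n - 2) / 6 + k4 * n    # propensity for n -> n-1

logP = np.zeros(N)
for m in range(1, N):                          # accumulate log P(n) stably
    logP[m] = logP[m - 1] + np.log(b[m - 1]) - np.log(d[m])
P = np.exp(logP - logP.max())
P /= P.sum()

peaks = [m for m in range(1, N - 1) if P[m] > P[m - 1] and P[m] > P[m + 1]]
print("local maxima of the stationary distribution at n =", peaks)
# For a bistable parameter set, the two maxima sit near the stable
# mean-field fixed points, with a trough at the unstable one.
```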
Conclusion¶
The process from a basic Master Equation to a path-integral representation explored in this lecture centers on using the Poisson representation to cleverly map a discrete particle-count system to a field-theory framework based on continuous fields \(x(t)\) and their conjugate fields \(q(t)\).
Key steps:
- Poisson transform: Encode discrete probability distributions \(P(n,t)\) as continuous functions \(|\Psi(x,t)\rangle\). This step transforms the discrete transition matrix \(\hat{Q}\) in the original Master Equation into a differential operator \(\hat{Q}(x, \partial_x)\) acting on continuous space.
- Schrödinger analogy: From this we obtain an evolution equation formally isomorphic to the imaginary-time Schrödinger equation: \(\frac{\partial}{\partial t} |\Psi(x,t)\rangle = \hat{Q}(x,\partial_x) |\Psi(x,t)\rangle\). This correspondence reveals deep mathematical connections between classical stochastic processes and quantum systems, laying the foundation for borrowing quantum-field-theory tools.
- Path-integral derivation: Using time discretization, Fourier identities, and the continuum limit, the solution of the differential equation is ultimately expressed as an integral over all field history paths, weighted by the action \(S'[x,q] = \int d\tau\, [iq(\tau)\partial_\tau x(\tau) - \hat{Q}(x(\tau), iq(\tau))]\). This action completely encodes the microscopic stochastic kinetic rules: a kinetic term \(iq(\tau)\partial_\tau x(\tau)\) describing temporal evolution and a Hamiltonian term \(-\hat{Q}(x(\tau), iq(\tau))\) containing the system's specific physical processes.
This provides a systematic, first-principles starting point for analyzing complex nonequilibrium stochastic processes. With this action \(S'[x,q]\), one can employ the full toolbox of statistical field theory:
Saddle-point approximation: readily yields the system's mean-field behavior (rate equations). For example, in stochastic logistic growth, the saddle-point approximation recovers the deterministic logistic equation.
Gaussian integration (one-loop approximation): By expanding the action around the saddle point, one can systematically calculate fluctuations and correlation functions around the mean field.
Renormalization group: For critical phenomena, renormalization group methods can be used to analyze the action, thereby computing universal quantities such as critical exponents, overcoming the limitations of traditional spectral methods in handling fluctuations and spatial correlations.
This field-theory framework brings complex random problems that previously seemed treatable only through numerical simulation into the realm of analytical computation, opening paths to understanding collective behavior and phase transition phenomena in fields ranging from chemical reaction networks to epidemic spread and ecosystem dynamics. For example, this lecture's visualization of the Schlögl model vividly demonstrates how a system evolves from a single stable state to bistability driven by random fluctuations, forming a dynamic probability landscape—a rich phenomenon that mean-field theory cannot fully capture.
The Poisson representation and its derived path-integral form introduced in this lecture are actually a concrete realization of a deeper, more universal algebraic structure. This underlying structure bears remarkable similarity to the algebra of creation and annihilation operators used in quantum mechanics for treating harmonic oscillators and many-body systems.
In the next lecture (Lecture 36), we will delve into this algebraic foundation, namely the so-called Doi–Peliti formalism. There we will discover:
- The generating field \(x\) can be strictly identified with a creation operator \(a^\dagger\).
- The derivative operator \(\partial_x\) can be identified with an annihilation operator \(a\).
- Their basic commutation relation \([\partial_x, x] = 1\) is exactly the manifestation of the basic commutation relation \([a, a^\dagger] = 1\) for bosonic creation–annihilation operators.
Starting from this more fundamental operator algebra, we will re-derive the path integral, obtaining the so-called coherent-state path integral. This approach is not only more elegant and powerful in form, but also reveals the theory's deeper structure more clearly and naturally explains phenomena such as the so-called "imaginary noise" that appears in certain problems.