
Introduction: From Onsager-Machlup to the J-D Action

In the previous lecture (Lecture 31), we constructed a beautiful path-integral formulation for stochastic processes centered on the Onsager-Machlup (OM) functional. This functional plays the role of an "action" for random paths and provides deep physical insight into detailed balance and the fluctuation-dissipation theorem (FDT) from a "sum over histories" viewpoint.

However, as noted at the outset, the OM functional is inconvenient for practical calculations: its action is quadratic in the difference between the time derivative of the field and the drift. Such a squared form becomes cumbersome for perturbation theory and more elaborate analyses.

To overcome this limitation, we need to introduce a mathematically more tractable path-integral framework. The central goal is to construct a new, linear action that is physically equivalent to the OM functional. A linear action greatly simplifies perturbative calculations and theoretical analysis. This line of thought naturally leads to the derivation of the Janssen-De Dominicis (J-D) action, the core of this lecture.

It is worth emphasizing that the key trick—imposing the dynamical constraint via a Fourier representation by introducing an auxiliary response field to linearize the problem—is not new. This idea is methodologically consistent with the MSRJD (Martin-Siggia-Rose-Janssen-De Dominicis) formalism introduced in Lecture 22 for handling more complex multiplicative noise systems. This consistency in theoretical tools not only demonstrates the internal harmony and self-consistency of stochastic field theory, but also highlights the response field as a powerful and universal mathematical tool.

Ultimately, the more powerful J-D formalism is not merely a cosmetic rephrasing. It provides a precise "scalpel" to dissect response functions and correlation functions and, in a strikingly transparent way, derives one of the most profound relations in statistical physics—the fluctuation-dissipation theorem (FDT)—from first principles.

1. Recap: Onsager-Machlup Functional and Detailed Balance

To pave the way for the J-D action, we briefly recap the core of the previous lecture: the Onsager-Machlup functional as the action of random paths, and its deep equilibrium symmetry, detailed balance.

1.1 Statistical Weight of Paths

From the path-integral perspective, the transition amplitude for a stochastic system to evolve from an initial configuration \(\phi_0\) to a final one \(\phi_f\) is a functional integral over all histories connecting these endpoints:

\[ \langle \phi_f | \phi_0 \rangle = \int_{\phi(0)=\phi_0}^{\phi(t_f)=\phi_f} \mathcal{D}[\phi] \, \exp\bigl[-G(\phi; 0, t_f)\bigr]. \]

This is formally identical to the Feynman path integral in quantum mechanics. The key quantity is the OM functional \(G[\phi]\), which assigns a statistical weight to each path \(\phi(t)\). Smaller \(G[\phi]\) means higher probability; \(G\) thus plays the role of an action for stochastic dynamics.

1.2 OM Functional: The "Cost" of a Path

For a system governed by multicomponent Langevin equations \(\partial_t \phi_\alpha = A_\alpha[\phi] + \xi_\alpha\), the OM functional reads

\[ G[\phi] = \frac{1}{2} \int_{x,t} \bigl(\dot{\phi}_\alpha - A_\alpha[\phi]\bigr) \, N_{\alpha\beta}^{-1} \, \bigl(\dot{\phi}_\beta - A_\beta[\phi]\bigr) + (\text{functional determinant}). \]

The physical meaning of this functional is very clear:

  • The term \((\dot{\phi}_\alpha - A_\alpha)\) is precisely the noise \(\xi_\alpha\) required (via the Langevin equation) to drive the system along this specific path \(\phi(t)\).

  • The entire functional therefore measures the "cost" or "magnitude" of this noise history. A path that requires intense, rare noise to realize will have a large \(G[\phi]\) value, and its probability \(e^{-G[\phi]}\) will be exponentially suppressed. The inverse covariance matrix \(N^{-1}\) acts as a metric, weighting this "cost" according to the strength of each noise component.
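
This cost interpretation can be made concrete with a short numerical sketch. The snippet below is a minimal illustration (not from the lecture): a single scalar variable with an assumed drift \(A(\phi) = -\phi\), unit noise strength \(N = 1\), and the functional determinant dropped. A path that follows the deterministic flow costs almost nothing, while a path that fights the drift requires a strong noise history and is exponentially suppressed.

```python
import numpy as np

def om_action(phi, dt, drift=lambda p: -p, N=1.0):
    """Discretized OM action: G = (dt / 2N) * sum_i (dphi/dt - A)_i^2."""
    dphi_dt = np.diff(phi) / dt
    residual = dphi_dt - drift(phi[:-1])  # the noise needed to realize this path
    return 0.5 * dt * np.sum(residual**2) / N

dt = 0.01
t = np.arange(0.0, 1.0 + dt, dt)
relaxing = np.exp(-t)     # follows phi_dot = -phi: essentially noiseless
uphill = 1.0 + t          # moves against the drift: needs strong noise
print(om_action(relaxing, dt))   # tiny
print(om_action(uphill, dt))     # O(1): exponentially suppressed path
```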

1.3 Detailed Balance: Time Symmetry

The true power of the OM functional emerges when combined with the fundamental principles of thermodynamics. For a system in thermal equilibrium, its microscopic dynamics must satisfy time-reversal invariance. This microscopic symmetry manifests at the macroscopic level of path probabilities as the detailed balance principle.

In the language of path integrals, detailed balance is expressed as a profound relationship between the path action and the system's free energy \(F[\phi]\):

\[ \beta F[\phi_0] + G[\phi; 0, t_f] = \beta F[\phi_f] + G_R[\phi; 0, t_f] \, , \]

where \(\beta = 1/(k_B T)\) is the inverse temperature and \(G_R\) is the action of the time-reversed path. The left-hand side is, up to normalization, the negative logarithm of the probability of evolving forward from \(\phi_0\) to \(\phi_f\) starting from equilibrium; the right-hand side corresponds to the reversed process. Detailed balance requires these to be connected precisely via the free-energy difference between the endpoints.

This seemingly abstract symmetry relation imposes extremely stringent constraints on the dynamics. As derived at the end of Lecture 31, this relation requires that the system's deterministic drift term \(A_\alpha\) must be related to dissipative processes (e.g., in Model A, \(A_\alpha = -L \frac{\delta F}{\delta \phi_\alpha}\)), and that the noise strength \(N\) must be directly related to the system's dissipation coefficient \(L\) and temperature \(T\). For Model A, this constraint ultimately manifests as the famous Einstein-Onsager relation:

\[ N = 2 L k_B T \]

This is a specific form of the fluctuation-dissipation theorem, revealing that the system's random fluctuations (quantified by \(N\) or \(T\)) and its dissipative response to driving forces (quantified by \(L\)) are two sides of the same microscopic process.
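
The Einstein-Onsager relation can be checked directly in the simplest setting. The sketch below (an illustration with arbitrarily chosen parameter values, not from the lecture) integrates a single Langevin mode with \(F = r\phi^2/2\) and noise strength \(N = 2Lk_BT\); equipartition then demands the equilibrium variance \(\langle \phi^2 \rangle = k_B T / r\).

```python
import numpy as np

# Single-mode check of N = 2 L k_B T: with F = r*phi^2/2, the Langevin
# equation is phi_dot = -L*r*phi + xi, and equilibrium should give
# <phi^2> = k_B T / r.  (Parameter values are arbitrary; k_B absorbed in kT.)
rng = np.random.default_rng(42)
L, r, kT = 0.7, 2.0, 1.3
N = 2.0 * L * kT                       # Einstein-Onsager noise strength
dt, nsteps, nens = 1e-3, 20_000, 2_000
phi = np.zeros(nens)                   # an ensemble of independent trajectories
for _ in range(nsteps):
    phi += -L * r * phi * dt + np.sqrt(N * dt) * rng.normal(size=nens)
print(phi.var(), kT / r)               # the two numbers should agree
```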

2. Building the J-D Generating Functional: MSRJD Formalism

While the OM functional offers a beautiful picture near equilibrium, its scope is limited by the detailed balance condition. To treat more general, potentially far-from-equilibrium problems, we need a more universal framework that does not rely on this condition. The goal of this section is to construct such a universal path-integral formulation that applies to any system described by an additive-noise Langevin equation \(\partial_t \phi = A[\phi] + \xi\).

This construction is a direct application of the core idea of "assigning weights to paths" (Lecture 19), and the mathematical techniques used are identical to the MSRJD (Martin-Siggia-Rose-Janssen-De Dominicis) formalism introduced in Lecture 22 for handling more complex multiplicative noise systems. This highlights the universality of this approach.

2.1 Formal Averaging and the Philosophy of Change of Variables

We consider the ensemble average of an arbitrary observable \(\Theta[\phi]\). From a physical perspective, since the field trajectory \(\phi(t)\) is uniquely determined by the random noise history \(\xi(t)\), \(\Theta[\phi]\) is ultimately also a functional of the noise. The average of \(\Theta[\phi]\) is essentially a weighted average over all possible noise histories \(\xi(t)\).

However, the noise \(\xi\) itself is usually not directly observable, while the field \(\phi\) is the macroscopic physical quantity of interest. The core strategy is therefore a change of variables: transform the integration variable from the unobservable noise field \(\xi\) to the observable physical field \(\phi\). The mathematical tool that makes this transformation legitimate inside a functional integral is the functional Dirac \(\delta\) function, with which the averaging can be written formally as:

\[ \langle \Theta[\phi] \rangle_{\xi} = \Bigl\langle \int \mathcal{D}[\phi] \, \Theta[\phi] \, \underbrace{\delta\bigl[\phi(t) = \text{solution of the Langevin equation}\bigr]}_{\text{hard constraint: only physical paths}} \Bigr\rangle_{\xi} \, . \]

The path integral \(\int \mathcal{D}[\phi]\) scans all possible field configuration histories, regardless of whether they conform to physical laws. The \(\delta\) function acts like a "filter" or hard constraint: it selects from all possible paths only the one that uniquely satisfies the Langevin equation driven by a specific noise \(\xi\), ensuring that only physically meaningful paths contribute to the final average.

2.2 MSRJD: Making the Constraint Explicit

While the above expression is formally correct, it is difficult to operate directly. The MSRJD method is a powerful and standard technique that expresses this abstract constraint condition using the specific dynamical equation itself:

\[ \delta\bigl[\phi(t) = \text{solution}\bigr] \to \underbrace{J[\phi]}_{\text{Jacobian}} \, \underbrace{\delta\bigl(\partial_t \phi - A[\phi] - \xi\bigr)}_{\text{constraint in dynamical form}} \, . \]

The two newly appearing terms have clear mathematical and physical meanings:

  • Functional \(\delta\) function: Now its argument is the Langevin equation itself. This function enforces that for any contributing path, its time evolution rate \(\partial_t \phi\) must precisely equal the sum of the deterministic drift \(A[\phi]\) and the random noise \(\xi\).

  • Jacobian functional determinant \(J[\phi]\): In any integral change of variables, the Jacobian is necessary as it describes how the integration "volume element" changes. Here, \(J[\phi] = |\det(\delta\xi/\delta\phi)|\). However, unlike the complex, path-dependent Jacobian encountered in Lecture 22 when dealing with multiplicative noise, for the additive noise case of concern in this lecture, the transformation from \(\xi\) to \(\phi\) is merely a "translation," and its Jacobian is a path-independent constant. Therefore, it can be absorbed into the overall normalization factor and temporarily ignored in calculations, greatly simplifying the problem.

2.3 Introducing the Response Field

The next step is to use the Fourier integral representation of the \(\delta\) function, which is the core mathematical technique of this entire formalism, also used in Lecture 22. The basic idea is that any \(\delta\) function can be represented as an integral over all frequencies (or modes) of plane waves. Extending this to function space, we get:

\[ \delta\bigl(\partial_t \phi - A[\phi] - \xi\bigr) \propto \int \mathcal{D}[\tilde{\phi}] \, \exp\Bigl[i \int_{x,t} \tilde{\phi}_\alpha\, (\partial_t \phi_\alpha - A_\alpha[\phi] - \xi_\alpha)\Bigr] \, . \]

(Note: To maintain consistency with course blackboard and literature conventions, the imaginary unit \(i\) will later be absorbed into the definition of \(\tilde{\phi}\) through a "Wick rotation" to obtain a real exponential, which is more natural when dealing with statistical weights in statistical physics.)

Substituting this expression back, the observable average becomes an extended path integral over both the physical field \(\phi\) and the newly introduced auxiliary field \(\tilde{\phi}\):

\[ \langle \Theta[\phi] \rangle_{\xi} = \Bigl\langle \int \mathcal{D}[\phi] \, \mathcal{D}[\tilde{\phi}] \, \Theta[\phi] \, \exp\Bigl[-\int_{x,t} \tilde{\phi}_\alpha\, (\partial_t \phi_\alpha - A_\alpha[\phi] - \xi_\alpha)\Bigr] \Bigr\rangle_{\xi} \, . \]

This newly introduced field \(\tilde{\phi}\) is called the response field (or conjugate field). At first glance, \(\tilde{\phi}\) seems to be just a "ghost" field introduced to represent the \(\delta\) function without physical substance. However, it has profound physical meaning.

This field directly measures the causal response of the system to external perturbations. It can be imagined as a "probe" that "measures" the degree to which the dynamical equation is violated at every point in the path integral. Ultimately, the correlation function \(\langle \phi(x,t) \tilde{\phi}(x',t') \rangle\) calculated through this formalism will be proven to be proportional to the average response produced by the system at \((x,t)\) after receiving an infinitesimal "kick" at \((x',t')\).

3. Janssen-De Dominicis Action

After introducing the response field, the path integral expression contains three functional integrations: over \(\phi\), over \(\tilde{\phi}\), and over the remaining random noise \(\xi\). The next and final step is to perform the average over the noise field \(\xi\). Since the noise is assumed to be Gaussian white noise, this average can be performed exactly.

The Janssen-De Dominicis (J-D) action is often presented as part of the broader MSRJD formalism. Its historical roots lie in quantum field theory: it developed from the early operator formalism of Martin, Siggia, and Rose and was later systematically applied to classical stochastic dynamics by Janssen and De Dominicis. Its physical essence is to transform an unwieldy stochastic differential equation (such as the Langevin equation) into a more tractable statistical field theory via path integrals.

The core technique of this transformation is the introduction of an auxiliary "response field" that enforces the system's dynamical constraint through a Fourier representation, thereby linearizing the inconvenient quadratic terms of the original action (such as the Onsager-Machlup functional). The resulting J-D action is a functional of both the physical and response fields; it elegantly encodes the system's deterministic drift and noise statistics and serves as the generating functional for all physical observables (such as correlation and response functions).

Its range of application is extremely broad: it is not only the theoretical foundation for deriving fundamental relations like the fluctuation-dissipation theorem, but also the standard starting point for perturbation theory, Feynman diagrams, and the renormalization group in the analysis of complex nonequilibrium systems. It plays a central role especially in the critical dynamics of phase transitions, systems with multiplicative noise, and stochastic processes in soft matter and biophysics.

3.1 Integrating Out the Noise

The term in the path integral that directly couples to the noise \(\xi\) is \(\exp[\int \tilde{\phi}_\alpha \xi_\alpha]\). To average this quantity over Gaussian noise, we need to use an important property of Gaussian integrals: for a zero-mean Gaussian variable \(X\), we have \(\langle e^X \rangle = e^{\frac{1}{2} \langle X^2 \rangle}\). Extending this to functionals, the specific calculation is:

\[ \Bigl\langle \exp\Bigl[\int_{x,t} \tilde{\phi}_\alpha \, \xi_\alpha\Bigr] \Bigr\rangle_{\xi} = \exp\Bigl[\tfrac12 \iint \tilde{\phi}_\alpha(x,t)\, \tilde{\phi}_\beta(x',t')\, \langle \xi_\alpha(x,t)\, \xi_\beta(x',t') \rangle\Bigr]. \]

For Gaussian white noise, the correlation function is \(\langle \xi_\alpha(x,t)\, \xi_\beta(x',t') \rangle = N_{\alpha\beta}\, \delta(x-x')\, \delta(t-t')\). Substituting this into the above equation and performing the integrals over \(x'\) and \(t'\), the \(\delta\) functions make the integration trivial, giving:

\[ \langle e^{\ldots} \rangle_{\xi} = \exp\Bigl[\tfrac12 \int_{x,t} \tilde{\phi}_\alpha \, N_{\alpha\beta} \, \tilde{\phi}_\beta \Bigr]. \]

The physical meaning of this result is: The original physical noise field \(\xi\) has been integrated out (averaged out), but its statistical properties (described by the noise correlation matrix \(N_{\alpha\beta}\)) are permanently "imprinted" in the system's effective action through a quadratic term in the response field.
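
The scalar version of the Gaussian identity used above, \(\langle e^X \rangle = e^{\langle X^2 \rangle / 2}\), is easy to verify with a quick Monte Carlo sample (a minimal sketch with an arbitrarily chosen variance):

```python
import numpy as np

# Monte Carlo check of <e^X> = exp(<X^2>/2) for zero-mean Gaussian X.
rng = np.random.default_rng(0)
sigma = 0.7
X = rng.normal(0.0, sigma, size=2_000_000)
lhs = np.mean(np.exp(X))
rhs = np.exp(0.5 * sigma**2)
print(lhs, rhs)   # agree to about three decimal places
```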

3.2 Final Path Integral and Action

Collecting terms, the observable average becomes

\[ \langle \Theta[\phi] \rangle = \int \mathcal{D}[\phi] \, \mathcal{D}[\tilde{\phi}] \, \Theta[\phi] \, e^{-S[\phi,\tilde{\phi}]} \]

with the Janssen-De Dominicis (J-D) action

\[ S[\phi,\tilde{\phi}] = \int_{x,t} \Bigl[\underbrace{\tilde{\phi}_\alpha\, (\dot{\phi}_\alpha - A_\alpha[\phi])}_{\text{dynamical constraint}}\; -\; \underbrace{\tfrac12 \tilde{\phi}_\alpha \, N_{\alpha\beta} \, \tilde{\phi}_\beta}_{\text{noise contribution}}\Bigr]. \]

All dynamical information, both the deterministic drift and the stochastic fluctuations, is now encoded in a single action functional. Computation of observables reduces to evaluating path integrals with this action, which serves as the generating functional for correlation and response functions.
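
As a consistency check, integrating the response field back out must reproduce the OM weight. For a single time slice this is an ordinary Gaussian-Fourier integral, which the sketch below verifies numerically (an illustration with an assumed noise strength; recall that the factor of \(i\) was absorbed into \(\tilde{\phi}\), so the integration runs along \(\tilde{\phi} = i y\) with real \(y\)):

```python
import numpy as np

# One time slice: restoring the absorbed factor of i (phi_tilde = i*y) and
# integrating out the response field reproduces the Gaussian OM weight,
#   int dy exp(-i*y*b - N*y^2/2) = sqrt(2*pi/N) * exp(-b^2/(2N)),
# where b = phi_dot - A is the residual of the Langevin equation.
N = 2.0                                  # assumed noise strength
y = np.linspace(-50.0, 50.0, 200_001)
dy = y[1] - y[0]
for b in [0.0, 1.0, 2.0]:
    integral = (np.sum(np.exp(-1j * y * b - 0.5 * N * y**2)) * dy).real
    om_weight = np.sqrt(2.0 * np.pi / N) * np.exp(-b**2 / (2.0 * N))
    print(f"b={b}: integral={integral:.6f}, OM weight={om_weight:.6f}")
```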

3.3 Anatomy of the J-D Action

The J-D action consists of two main pieces:

  1. Dynamical constraint: \(\int \tilde{\phi}_\alpha (\dot{\phi}_\alpha - A_\alpha[\phi])\). The response field acts as a Lagrange multiplier enforcing the dynamics "on average."
  2. Noise contribution: \(-\tfrac12 \int \tilde{\phi}_\alpha N_{\alpha\beta} \tilde{\phi}_\beta\), a quadratic form entirely fixed by noise statistics.

Compared with the OM functional:

  • OM: \((\dot{\phi}-A)^2\). Physically intuitive (the cost of a path's deviation) but quadratic, and therefore cumbersome for calculations.
  • J-D: \(\tilde{\phi}(\dot{\phi}-A)\). Linear in the dynamics and amenable to perturbation theory, at the price of introducing the auxiliary response field.

This is a common trade-off in field theory: introduce auxiliary fields to simplify the action's structure.

To clarify the roles of the fields, we summarize:

| Symbol | Name | Physical Interpretation | Source Equation |
|---|---|---|---|
| \(\phi(x,t)\) | physical field | Primary dynamical variable (e.g., order parameter, concentration field). | \(\partial_t \phi = \mathcal{L}[\phi] + \xi\) |
| \(\tilde{\phi}(x,t)\) | response field | Auxiliary field measuring causal response to external perturbations. | \(\partial_t \tilde{\phi} = -\mathcal{L}^\dagger[\tilde{\phi}]\) |
| \(\xi(x,t)\) | noise field | Random driving from environment (e.g., thermal fluctuations). | fluctuation-dissipation relation |
| \(h(x,t)\) | external field | Controllable probe coupling to \(\phi\) to measure response. | driving term acting on \(\phi\) |

4. Probing the System: Response and Correlation Functions

The power of the J-D formalism lies in computing measurable quantities. The action built above is the "engine"; here we drive it to extract the system's two core dynamical characteristics: response to external probes (response functions) and spontaneous fluctuations at equilibrium (correlation functions).

4.1 Apply a Perturbation: Physical Meaning of the Response Field

To give the "ghost" field \(\tilde{\phi}\) introduced in the mathematical derivation a firm physical meaning, we adopt a strategy commonly used by physicists: active probing. By introducing a weak, controllable external perturbation field \(h(x,t)\) into the system, we observe which quantity it couples to in the theoretical framework.

For a near-equilibrium system (e.g., Model A), the drift derives from a free energy \(F\): \(A[\phi] = -L\, \delta F/\delta \phi\). An external field \(h\) typically couples directly to the physical field \(\phi\), which is equivalent to modifying the system's free energy landscape:

\[ F \to F' = F - \int d^d x\, dt\, h(x,t)\, \phi(x,t) \, . \]

This change in free energy immediately propagates to the drift term:

\[ \frac{\delta F'}{\delta \phi} = \frac{\delta F}{\delta \phi} - h \quad \Rightarrow \quad A'[\phi] = -L\, \frac{\delta F'}{\delta \phi} = A[\phi] + L h \, . \]

Now, substituting this new drift term \(A'\) into the dynamical constraint part of the J-D action \(\int \tilde{\phi} (\dot{\phi} - A')\) yields an additional term related to the external field:

\[ S \to S' = S - \int_{x,t} L\, h(x,t)\, \tilde{\phi}(x,t) \, . \]

This result reveals a crucial connection: the controllable external probe \(h\) in the physical world is actually coupled to that seemingly abstract auxiliary field \(\tilde{\phi}\) in the J-D action. In other words, the external field \(h\) acts as the source for the response field \(\tilde{\phi}\). This provides the most direct evidence for the "response" nature of \(\tilde{\phi}\).

4.2 Linear Response Function \(\chi\)

The linear response function \(\chi\) (also called dynamic susceptibility in magnetic systems) is defined as the system's response to an infinitesimal external field, specifically as the functional derivative of the mean field with respect to \(h\) evaluated at vanishing \(h\):

\[ \chi(x,t; x',t') = \frac{\delta \langle \phi(x,t) \rangle}{\delta h(x',t')}\biggr|_{h\to 0} \, . \]

Using the path integral expression \(\langle \phi \rangle = Z^{-1}\!\int \mathcal{D}[\phi,\tilde{\phi}]\, \phi\, e^{-S'}\) and differentiating \(e^{-S'}\) with respect to \(h(x',t')\) pulls down the factor \(L\, \tilde{\phi}(x',t')\). Therefore, we obtain:

\[ \chi(x,t; x',t') = L\, \langle \phi(x,t)\, \tilde{\phi}(x',t') \rangle \, . \]

This equation is one of the central results of response theory and a direct manifestation of the powerful capabilities of the MSRJD framework. It rigorously proves the physical interpretation of the response field \(\tilde{\phi}\): the spatiotemporal correlation function between the physical field \(\phi\) and the response field \(\tilde{\phi}\) directly gives the system's linear response function. An originally abstract mathematical construct is now precisely equated with a measurable physical quantity.

4.3 Two-Point Correlation \(C\) and Causality

Unlike the response function, the two-point correlation function \(C\) describes the statistical correlation of the physical field itself at different spatiotemporal points, i.e., the statistical correlation between spontaneous, internal fluctuations of the system:

\[ C(x,t; x',t') = \langle \phi(x,t)\, \phi(x',t') \rangle \, . \]

It is essential to emphasize the fundamental physical distinction between \(C\) and \(\chi\):

  • Correlation function \(C\): Describes spontaneous fluctuations in equilibrium. It is a symmetric quantity (in the time difference), reflecting "correlation" rather than "causality."

  • Response function \(\chi\): Describes a nonequilibrium process, i.e., how an external perturbation causes the system to respond at another spatiotemporal point. It is an asymmetric, causal quantity.

The fundamental physical principle—causality—requires that the response function must satisfy \(\chi(x,t;x',t')=0\) if \(t < t'\), i.e., effects cannot precede causes. How is this principle automatically encoded in the J-D formalism? The answer lies in the structure of the action. The dynamical term \(\int \tilde{\phi} \dot{\phi}\) in the J-D action is key. In field theory calculations, the response function \(\langle \phi \tilde{\phi} \rangle\) is essentially the Green's function (or propagator) derived from this action. It is precisely this term containing the first-order time derivative that ensures the calculated Green's function is "retarded" in time, i.e., it naturally incorporates the Heaviside step function \(\Theta(t-t')\) representing causality. Therefore, causality is not manually imposed but naturally emerges from the formalism describing unidirectional temporal evolution encoded by the Langevin equation.
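
The retarded structure can be seen explicitly for a single relaxational mode. The sketch below (an illustration with an arbitrarily chosen relaxation rate) numerically inverts the frequency-space propagator \(G_0(\omega) = 1/(-i\omega + a)\) and confirms that the time-domain kernel vanishes for \(\tau < 0\) and equals \(e^{-a\tau}\) for \(\tau > 0\):

```python
import numpy as np

# Invert G0(w) = 1/(-i*w + a) (a = 1.5 is an arbitrary choice) and check the
# time-domain kernel is retarded: G0(tau) = Theta(tau) * exp(-a*tau).
a = 1.5
w = np.linspace(-400.0, 400.0, 800_001)
dw = w[1] - w[0]
results = {}
for tau in [-1.0, 0.5, 2.0]:
    G_tau = (np.sum(np.exp(-1j * w * tau) / (-1j * w + a)) * dw / (2 * np.pi)).real
    exact = np.exp(-a * tau) if tau > 0 else 0.0   # Heaviside step at tau = 0
    results[tau] = (G_tau, exact)
    print(f"tau={tau:+.1f}: numeric={G_tau:+.4f}, exact={exact:+.4f}")
```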

5. Fluctuation-Dissipation Theorem: Linking Response and Fluctuations

At this point, the J-D formalism has provided systematic tools for computing response functions \(\chi\) (via \(\langle \phi\tilde{\phi} \rangle\)) and correlation functions \(C\) (via \(\langle \phi\phi \rangle\)). For systems in thermal equilibrium, there exists a profound and universal relationship between these two seemingly unrelated quantities: the fluctuation-dissipation theorem.

5.1 Conditions for FDT

The prerequisite for deriving the FDT is that the system is in thermal equilibrium, i.e., satisfies the detailed balance condition. In the language of the J-D formalism, this means the dynamics has a specific structure, which is completely consistent with the conclusions of Lecture 31:

  1. Drift term \(A\) is conservative, i.e., can be derived from the gradient of a free-energy functional: \(A = -L\, \delta F/\delta \phi\).

  2. Noise strength \(N\) and dissipation coefficient \(L\) and temperature \(T\) satisfy the Einstein-Onsager relation: \(N = 2L k_B T\).

5.2 FDT as an Action Symmetry

The FDT relates response (via \(\langle \phi\tilde{\phi} \rangle\)) and correlation (via \(\langle \phi\phi \rangle\)). This strongly suggests that under the above equilibrium conditions, the J-D action may possess a hidden symmetry that connects \(\tilde{\phi}\) and \(\phi\).

Such a symmetry indeed exists and is closely related to the system's time-reversal invariance. Although the complete derivation is more complex, the core idea is that under equilibrium conditions, a clever nonlinear field transformation can rewrite the J-D action in a more symmetric form (sometimes discussed as BRST symmetry in supersymmetric field theory). Any continuous symmetry of the action leads to specific relationships among correlators through Noether's theorem (or its equivalent form in field theory, Ward identities). The fluctuation-dissipation theorem is precisely the Ward identity corresponding to this time-reversal-related hidden symmetry.

5.3 Final Form and Physical Meaning of FDT

For a stationary, homogeneous equilibrium system, the final form of the fluctuation-dissipation theorem is:

\[ \chi(x-x', t-t') = \frac{1}{k_B T}\, \Theta(t-t')\, \frac{\partial}{\partial t'} C(x-x', t-t') \,. \]

Due to the system's stationarity, \(C\) depends only on the time difference \(\tau = t-t'\), and \(\frac{\partial}{\partial t'} C(t-t') = -\frac{dC}{d\tau}(\tau)\). Therefore, the above equation can also be written as:

\[ \chi(\tau) = -\frac{1}{k_B T}\, \Theta(\tau)\, \frac{dC}{d\tau}(\tau)\,. \]

This equation is one of the most profound results in statistical physics, with extremely far-reaching physical meaning:

  • Left side \(\chi(\tau)\): Describes the system's dissipative behavior. It is a nonequilibrium response, telling you how the system will respond and relax at future time \(\tau > 0\) if you apply a perturbation at \(t=0\).

  • Right side \(C(\tau)\): Describes the system's fluctuation behavior. It is an equilibrium property, telling you that even without any external intervention, the system's physical quantities will spontaneously fluctuate, and these fluctuations have temporal correlations.

The FDT shows that these two behaviors are not independent but determined by the same microscopic dynamics. A system's dissipative properties (how it "forgets" a perturbation) are completely determined by its spontaneous fluctuation properties at equilibrium (how it "remembers" its own past fluctuations). To know how a system will react to a "poke," just quietly observe how it "jitters" at equilibrium.
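
For a single relaxational (Ornstein-Uhlenbeck) mode, where both sides of the FDT are known in closed form, the relation can be verified directly. The sketch below uses the exact equilibrium results \(C(\tau) = (T/r)\,e^{-Lr\tau}\) and \(\chi(\tau) = L\,e^{-Lr\tau}\) for \(\tau > 0\) (with \(k_B = 1\) and arbitrarily chosen parameter values):

```python
import numpy as np

# FDT check: chi(tau) = -(1/T) dC/dtau for tau > 0, single OU mode.
L, r, T = 1.0, 0.8, 0.5
tau = np.linspace(0.0, 5.0, 2001)
C = (T / r) * np.exp(-L * r * tau)      # equilibrium correlation function
chi = L * np.exp(-L * r * tau)          # exact linear response for tau > 0
fdt_rhs = -np.gradient(C, tau) / T      # -(1/T) dC/dtau, numerically
err = np.max(np.abs(chi - fdt_rhs))
print(err)                              # small discretization error
```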

6. Application: J-D Action and Feynman-Diagram Perturbation Theory

While direct numerical simulations (such as the previous double-well potential simulation) can verify the theory, the true power of the J-D formalism lies in providing a powerful platform for analytical calculations. When systems have nonlinear interactions, exact solutions become impossible, and perturbation theory becomes the core analytical tool.

This section will demonstrate how to use the J-D action as a starting point and systematically calculate the first-order correction to the system's correlation function using Feynman diagram language. This process is a bridge connecting stochastic dynamics and modern statistical field theory.

Feynman diagrams are intuitive graphical tools that depict the extremely complex mathematical expressions in perturbation theory as images composed of "lines" representing field or particle propagation and "vertices" representing their interactions, thus providing a clear recipe for calculating interactions in complex systems.

Feynman diagrams were first proposed in quantum electrodynamics (QED). We can use the most classic and simplest example to illustrate: the mutual repulsion of two electrons.

Imagine this physical process: Two electrons (both negatively charged) are approaching each other. Because like charges repel, they will push each other away and fly off in different directions.

So how do they "tell" each other to push away? In quantum field theory, this "pushing" force is transmitted by exchanging a "messenger particle." For electromagnetic force, this messenger particle is the photon.

The Feynman diagram is a "spacetime snapshot" of this process.

Feynman Diagram of Electron-Electron Repulsion

The horizontal axis represents time flowing from left to right, and the vertical axis represents spatial position. The blue solid lines in the diagram represent the electron paths, with two electrons flying in from the left (initial state). The red wavy line represents the photon transmitted between the two electrons. It is "virtual" because it only exists during the interaction moment, serving as the carrier of force. The green dots are "vertices," which are the most important parts of the diagram. They represent interaction events. At the left vertex, electron 1 emits a photon; at the right vertex, electron 2 absorbs this photon. After exchanging the photon, both electrons' paths are deflected, flying out from the right (final state), achieving mutual repulsion.

When physicists see this diagram, they can follow a strict set of "Feynman rules" to "translate" each part of the diagram into a specific mathematical integral. Calculating this integral gives the probability of this repulsion process occurring.

Therefore, Feynman diagrams turn a complex particle interaction process into a simple "line diagram," where every stroke corresponds to a part of a mathematical formula, making calculations as intuitive as reading a picture.

6.1 Model: Interacting \(\phi^4\) Theory (Model A)

Return to the Ginzburg-Landau (\(\phi^4\)) model of Lecture 25. The Langevin equation (Model A) is

\[ \frac{\partial \phi}{\partial t} = -L\, (r\phi - c\nabla^2\phi + u\phi^3) + \xi(\mathbf{x}, t) \, , \]

with the J-D action

\[ S[\phi,\tilde{\phi}] = \int d^d x\, dt\, \Bigl[ \tilde{\phi} ( \partial_t \phi + L(r\phi - c\nabla^2\phi) ) + L u \, \tilde{\phi}\, \phi^3 - \tfrac12 \tilde{\phi} (2LT) \tilde{\phi} \Bigr] \, , \]

using \(N=2LT\).
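
Before turning to perturbation theory, the Langevin equation above can be integrated directly. The sketch below is a minimal 1D Euler-Maruyama scheme on a periodic lattice with unit spacing (all parameter values are arbitrary illustrations; \(r < 0\) puts the system in the ordered phase, so domains with \(\phi \approx \pm 1\) typically develop):

```python
import numpy as np

# Minimal 1D Euler-Maruyama integration of the Model A Langevin equation.
rng = np.random.default_rng(1)
L, r, c, u, T = 1.0, -1.0, 1.0, 1.0, 0.1
dt, n = 1e-3, 256
phi = rng.normal(0.0, 0.1, n)                 # small random initial condition

def step(phi):
    lap = np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)    # lattice Laplacian
    drift = -L * (r * phi - c * lap + u * phi**3)
    noise = np.sqrt(2.0 * L * T * dt) * rng.normal(size=n)  # N = 2 L T
    return phi + drift * dt + noise

for _ in range(5000):
    phi = step(phi)
print(phi.min(), phi.max())   # final field configuration range
```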

Split \(S = S_0 + S_{\mathrm{int}}\) into a Gaussian (free) part and a small nonlinear interaction:

  • Free action (quadratic terms): $$ S_0 = \int d^d x\, dt\, \Bigl[ \tilde{\phi} ( \partial_t \phi + L r\, \phi - L c\nabla^2 \phi ) - L T\, \tilde{\phi}^2 \Bigr]. $$

  • Interaction action (higher-order term): $$ S_{\mathrm{int}} = \int d^d x\, dt\, (L u\, \tilde{\phi}\, \phi^3 ). $$

We aim to compute the true \(\langle \phi\phi \rangle\) under \(S_{\mathrm{int}}\). Expand \(e^{-S_{\mathrm{int}}}\) in a Taylor series:

\[ \langle \phi\phi \rangle = \frac{\int \mathcal{D}[\phi,\tilde{\phi}]\, \phi\phi\, e^{-S_0} (1 - S_{\mathrm{int}} + \tfrac12 S_{\mathrm{int}}^2 - \dots)}{\int \mathcal{D}[\phi,\tilde{\phi}]\, e^{-S_0} (1 - S_{\mathrm{int}} + \dots)} \, , \]

with each term represented by a Feynman diagram.

6.2 Feynman Rules: From Action to Diagrams

Feynman rules are the "dictionary" that translates the action \(S_0\) and \(S_{\mathrm{int}}\) into drawing elements and mathematical expressions.

1) Propagators: They are the free theory (i.e., with only \(S_0\)) two-point correlation functions, representing the basic "propagation" behavior of fields.

Response propagator \(G_0 = \langle \phi\tilde{\phi} \rangle_0\): Describes the bare response of the system to perturbations. Because it is causal (the field \(\phi\) responds only after the "kick" sourced through \(\tilde{\phi}\)), it is usually drawn as a directed line with an arrow. In frequency-momentum space, its mathematical form is:

\[ G_0(\omega, \mathbf{q}) = \frac{1}{-i\omega + L(r + c q^2)}. \]

Correlation propagator \(C_0 = \langle \phi\phi \rangle_0\): Describes the bare fluctuation correlations of the free field itself. It is usually represented as an undirected wavy line. Its mathematical form is:

\[ C_0(\omega, \mathbf{q}) = \frac{2LT}{\omega^2 + [L(r + c q^2)]^2}. \]
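
As a consistency check, these two bare propagators already satisfy the FDT in frequency space, \(C_0(\omega, \mathbf{q}) = (2T/\omega)\,\mathrm{Im}\,\chi_0(\omega, \mathbf{q})\) with \(\chi_0 = L\,G_0\). A quick numerical verification (parameter values chosen arbitrarily):

```python
import numpy as np

# Frequency-space FDT for the bare propagators: C0 = (2T/w) Im[L*G0].
L, r, c, T = 1.0, 0.5, 1.0, 0.3
q = 1.2
a = L * (r + c * q**2)                   # relaxation rate of mode q
w = np.linspace(0.1, 10.0, 200)
G0 = 1.0 / (-1j * w + a)                 # response propagator
C0 = 2.0 * L * T / (w**2 + a**2)         # correlation propagator
fdt = (2.0 * T / w) * np.imag(L * G0)
err = np.max(np.abs(C0 - fdt))
print(err)   # zero to machine precision
```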

2) Vertex: It represents the interaction between particles, with its form determined by \(S_{\mathrm{int}}\).

In our model, \(S_{\mathrm{int}} \propto u \tilde{\phi}\phi^3\). This means a vertex is an interaction point between one \(\tilde{\phi}\) field and three \(\phi\) fields. Therefore, it is drawn as a convergence point with one directed line pointing into it (the \(\tilde{\phi}\) line) and three wavy lines departing from it (the \(\phi\) lines). The "strength" of this vertex is the factor \(-Lu\) it contributes when \(e^{-S_{\mathrm{int}}}\) is expanded.

For clarity, the following Python code draws schematic rule illustrations (propagators and vertex):

import numpy as np
import matplotlib.pyplot as plt

def plot_feynman_rules():
    fig = plt.figure(figsize=(12, 8))
    fig.suptitle('Feynman Rules for Model A with J-D Action', fontsize=18)

    # 1. Response propagator G0
    ax1 = fig.add_subplot(2, 2, 1)
    ax1.set_title('Response Propagator $G_0 = \\langle \\phi \\tilde{\\phi} \\rangle_0$', fontsize=14)
    ax1.set_xlim(0, 1)
    ax1.set_ylim(0, 1)
    ax1.axis('off')
    ax1.arrow(0.1, 0.5, 0.8, 0, head_width=0.05, head_length=0.05, fc='k', ec='k', lw=2)
    ax1.text(0.5, 0.6, 'Directed line (retarded)', fontsize=12, ha='center')

    # 2. Correlation propagator C0
    ax2 = fig.add_subplot(2, 2, 2)
    ax2.set_title('Correlation Propagator $C_0 = \\langle \\phi \\phi \\rangle_0$', fontsize=14)
    ax2.set_xlim(0, 1)
    ax2.set_ylim(0, 1)
    ax2.axis('off')
    x = np.linspace(0.1, 0.9, 300)
    y = 0.5 + 0.1 * np.sin(8 * np.pi * (x - 0.1) / 0.8)
    ax2.plot(x, y, 'k-', lw=2)
    ax2.text(0.5, 0.6, 'Wavy line', fontsize=12, ha='center')

    # 3. Interaction vertex
    ax3 = fig.add_subplot(2, 2, 3)
    ax3.set_title('Vertex $-Lu\\, \\tilde{\\phi} \\phi^3$', fontsize=14)
    ax3.set_xlim(0, 1)
    ax3.set_ylim(0, 1)
    ax3.axis('off')
    center = (0.5, 0.5)
    ax3.plot(center[0], center[1], 'ko', markersize=10)
    ax3.arrow(0.9, 0.5, -0.38, 0, head_width=0.08, head_length=0.08, fc='k', ec='k', lw=2)
    angles = [np.pi * 5/6, np.pi, np.pi * 7/6]
    for angle in angles:
        x_end = center[0] + 0.4 * np.cos(angle)
        y_end = center[1] + 0.4 * np.sin(angle)
        x_wave = np.linspace(center[0], x_end, 50)
        y_wave = np.linspace(center[1], y_end, 50)
        offset = 0.03 * np.sin(np.linspace(0, 3*np.pi, 50))
        perp_vec = np.array([-(y_end-center[1]), x_end-center[0]])
        perp_vec /= np.linalg.norm(perp_vec)
        ax3.plot(x_wave + offset*perp_vec[0], y_wave + offset*perp_vec[1], 'k-', lw=2)
    ax3.text(0.5, 0.2, 'Coupling Strength $-Lu$', fontsize=14, ha='center', color='red')

    plt.tight_layout(rect=[0, 0, 1, 0.95])
    plt.show()

# Run the plotting function
plot_feynman_rules()


6.3 Lowest-Order Corrections: Hartree and Sunset Diagrams

Now we apply these rules to the perturbative corrections to the correlation function \(C=\langle\phi\phi\rangle\). At zeroth order, the correlation function is just the bare correlation propagator \(C_0\). At first order in \(u\), the only diagram is the Hartree (tadpole) correction, in which two of the vertex's \(\phi\) legs are contracted into a closed loop; it is frequency- and momentum-independent and merely shifts the parameter \(r\). The simplest correction with nontrivial structure contains two vertices connected by three internal lines: the so-called sunset diagram, shown below:

(Figure: the two-vertex sunset diagram correcting the two-point function.)

The physical meaning of this diagram is: a fluctuation (the external \(\phi\) line on the left) "splits" at the first vertex into three intermediate fluctuations, which propagate and then "merge" at the second vertex into the outgoing fluctuation (the external \(\phi\) line on the right). This process corrects the original, simple propagation behavior.

From Diagram to Mathematical Integral:

According to the Feynman rules, the mathematical expression corresponding to this diagram is (in frequency-momentum space):

\[ \delta C(\omega, \mathbf{q}) \propto (-L u)^2\, G_0(\omega, \mathbf{q})\, G_0(-\omega, -\mathbf{q}) \int \frac{d\omega_1\, d^d\mathbf{k}_1}{(2\pi)^{d+1}}\, \frac{d\omega_2\, d^d\mathbf{k}_2}{(2\pi)^{d+1}}\, C_0(\omega_1, \mathbf{k}_1)\, C_0(\omega_2, \mathbf{k}_2)\, C_0(\omega-\omega_1-\omega_2,\, \mathbf{q}-\mathbf{k}_1-\mathbf{k}_2) \, , \]

where frequency and momentum are conserved at each vertex, the combinatorial symmetry factor is absorbed into the proportionality, and \(G_0(-\omega,-\mathbf{q}) = G_0(\omega,\mathbf{q})^*\), so the two external legs contribute \(|G_0(\omega,\mathbf{q})|^2\).

This integral still has to be evaluated, analytically or numerically. The key point, however, is that Feynman diagrams provide a clear, intuitive "recipe" that decomposes a complex physical process into a combination of basic components (propagators and vertices).
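To make the "recipe" concrete, here is a rough Monte Carlo estimate of the two-loop sunset integral in \(d=1\), with the three internal lines carrying \((\omega_1,k_1)\), \((\omega_2,k_2)\), and \((\omega-\omega_1-\omega_2,\, q-k_1-k_2)\). The cutoff, sample count, and parameter values are illustrative, and the combinatorial symmetry factor is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
L, r, c, T, u = 1.0, 0.5, 1.0, 1.0, 0.1   # illustrative parameters

def G0(w, q):
    """Bare response propagator 1 / (-i w + L(r + c q^2))."""
    return 1.0 / (-1j*w + L*(r + c*q**2))

def C0(w, q):
    """Bare correlation propagator 2LT / (w^2 + [L(r + c q^2)]^2)."""
    return 2*L*T / (w**2 + (L*(r + c*q**2))**2)

def sunset_correction(w, q, cutoff=20.0, n=400_000):
    """Monte Carlo estimate of the sunset correction to C(w, q) in d = 1."""
    # Uniform sampling of (w1, k1, w2, k2) in a box; C0 decays fast enough
    # that the finite-cutoff error is small for these parameters.
    s = rng.uniform(-cutoff, cutoff, size=(n, 4))
    w1, k1, w2, k2 = s.T
    integrand = C0(w1, k1) * C0(w2, k2) * C0(w - w1 - w2, q - k1 - k2)
    volume = (2*cutoff)**4
    loop = integrand.mean() * volume / (2*np.pi)**4   # two (dw dk)/(2pi)^2 measures
    # External legs: G0(w, q) G0(-w, -q) = |G0(w, q)|^2 (real and positive)
    return (L*u)**2 * np.abs(G0(w, q))**2 * loop

dC = sunset_correction(w=0.5, q=0.3)
print(f"delta C(0.5, 0.3) ~ {dC:.3e}")
```

Since every factor of \(C_0\) is positive and the external legs combine into \(|G_0|^2\), the estimate is a small positive number, as expected for a fluctuation-induced enhancement at this order.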

This application demonstrates how the J-D formalism transforms from an abstract theoretical tool into a powerful, systematic computational framework. By decomposing the action into "free" and "interaction" parts, a set of Feynman rules can be derived that converts complex perturbative expansion calculations into intuitive diagrammatic problems. This makes it possible to study physical phenomena in nonlinear stochastic systems (such as scaling behavior in critical dynamics) and is one of the core technologies of modern nonequilibrium statistical physics.

Conclusion

This lecture introduced the powerful field-theoretic tool for studying stochastic dynamics—the Janssen-De Dominicis formalism. By encoding the Langevin equation constraint into the "action" of a path integral, this method provides a unified theoretical framework for any system with additive noise.

The core constructive step is the introduction of the response field \(\tilde{\phi}\). Although it first appears as a mere mathematical auxiliary, it turns out to carry direct physical meaning: it measures the causal response of the system to external perturbations, and its correlation with the physical field \(\phi\), \(\langle \phi \tilde{\phi} \rangle\), gives precisely the system's linear response function \(\chi\).

The culmination of this formalism is its provision of a clear and systematic derivation path for the fluctuation-dissipation theorem. The FDT reveals that in systems at thermal equilibrium, two seemingly completely different phenomena—spontaneous fluctuations caused by internal thermal motion (described by correlation function \(C\)) and the dissipative response of the system to external perturbations (described by response function \(\chi\))—are actually two sides of the same coin. The J-D formalism shows that this profound connection is rooted in the time-reversal symmetry (detailed balance) that the system satisfies at equilibrium.
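The "two sides of the same coin" statement can be illustrated numerically. A single Fourier mode of Model A obeys an Ornstein-Uhlenbeck equation, whose equilibrium correlation \(C(t) = (T/r)e^{-Lrt}\) determines the response via \(\chi(t) = -(1/T)\,dC/dt\) for \(t>0\). The sketch below (parameters, lag times, and ensemble size are illustrative) simulates the mode and checks the correlation against this equilibrium form:

```python
import numpy as np

rng = np.random.default_rng(1)
L_, r_, T_ = 1.0, 1.0, 1.0          # illustrative single-mode (q = 0) parameters
dt, n_traj = 0.01, 20_000

# Single Fourier mode of Model A: d phi = -L r phi dt + sqrt(2 L T) dW.
# Its equilibrium correlation is C(t) = (T/r) exp(-L r |t|), and the FDT
# gives the response chi(t) = -(1/T) dC/dt = L exp(-L r t) for t > 0.
phi = rng.normal(0.0, np.sqrt(T_/r_), size=n_traj)   # start in equilibrium
phi0 = phi.copy()

lags = [0, 50, 100, 200]             # lag times in units of dt
C_meas, C_theory = [], []
for step in range(max(lags) + 1):
    if step in lags:
        C_meas.append((phi0 * phi).mean())
        C_theory.append((T_/r_) * np.exp(-L_*r_*step*dt))
    # Euler-Maruyama update of all trajectories at once
    phi = phi - L_*r_*phi*dt + np.sqrt(2*L_*T_*dt) * rng.normal(size=n_traj)

C_meas, C_theory = np.array(C_meas), np.array(C_theory)
print(np.round(C_meas, 3), np.round(C_theory, 3))
```

Matching the measured \(C(t)\) to its equilibrium form is equivalent, via the time-domain FDT above, to verifying the fluctuation-response link for this mode.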

The fluctuation-dissipation theorem is one of the cornerstones of equilibrium statistical physics, but its validity is restricted to equilibrium. When a system is driven strongly away from equilibrium, detailed balance is broken and the simple FDT no longer holds. For example, when a colloidal particle is rapidly dragged through a fluid by a laser trap, what is the relationship between the system's response and its fluctuations?

This is one of the core questions of modern nonequilibrium statistical physics. The next lecture explores the progress on this question, entering the field of nonequilibrium work and fluctuation theorems. The Jarzynski equality and the Crooks fluctuation theorem, to be introduced there, are among the most significant advances in nonequilibrium statistical physics of recent decades. They are remarkable generalizations of the second law to nonequilibrium processes: even for highly irreversible processes that generate substantial dissipation, these theorems establish exact equalities between nonequilibrium quantities (such as the work done on the system) and equilibrium thermodynamic quantities (such as free energy differences). These fluctuation theorems can be seen as successors to the FDT in the nonequilibrium world, providing entirely new theoretical tools for understanding and manipulating microscopic systems far from equilibrium.