
Introduction

In Lecture 7, the course built a standard "toolbox" for analyzing two-dimensional dynamical systems. Through the introduction of the Poincaré-Bendixson Theorem, the analysis broke through the local limitations of Linear Stability Analysis (the trace \(\tau\) and determinant \(\Delta\) of the Jacobian matrix), providing rigorous topological criteria for the existence of Limit Cycles in the two-dimensional phase plane. This framework successfully explained the spontaneous oscillations in the Rho GTPase biochemical cycle and gave a first look at the dynamics of the three-strategy cyclic game Rock-Paper-Scissors (RPS).

However, complex systems in reality often contain far more than two or three components. When the number of strategies \(S\) in evolutionary games increases (e.g., \(S \gg 3\)), the system is no longer constrained by the topological restrictions of the two-dimensional plane, and the Poincaré-Bendixson theorem no longer applies. This lecture will achieve two key dimensional leaps on this foundation: from low-dimensional to high-dimensional strategy space, and from temporal evolution to spatiotemporal evolution.

This lecture will first focus on the high-dimensional generalization of Replicator Dynamics. For zero-sum games containing \(S\) strategies, their evolution follows the Anti-symmetric Lotka-Volterra Equation (ALVE). This content will reveal a profound interdisciplinary connection: the ALVE equation describing biological strategy competition is mathematically equivalent in structure to Boson Condensation in driven-dissipative systems. To solve global stability problems in high-dimensional phase space, the course will no longer rely on geometric intuition but instead introduce the powerful algebraic tool of Lyapunov Functions. In particular, using Relative Entropy (i.e., Kullback-Leibler divergence) as a Lyapunov function, we rigorously prove the thermodynamic direction of system evolution: under anti-symmetric interactions, the system must evolve to specific Condensates, causing some strategies to survive while others go extinct (depletion).

The perspective then shifts from "well-mixed" to "spatially extended." All previous dynamical models (such as Ginzburg-Landau, evolutionary games) assumed the system was spatially uniform (0-dimensional). To describe pattern formation, spatial dimensions must be introduced. Therefore, we trace the physical origins of Brownian Motion, starting from the microscopic Random Walk model, deriving the macroscopic Diffusion Equation through the continuum limit.

This derivation of the diffusion process is crucial—it not only connects microscopic randomness with macroscopic determinism but also lays the foundation for exploring Relaxational Dynamics in the next lecture (Lecture 9). In Lecture 9, the diffusion equation will be combined with reaction dynamics to construct the complete Ginzburg-Landau Model and Model A, formally beginning the exploration of interface dynamics, surface tension, and complex spatiotemporal pattern formation.

1. Evolutionary Game Theory and the Anti-symmetric Lotka-Volterra Equation

In Lecture 7, the course demonstrated the cyclic dynamics of three-strategy systems through the Rock-Paper-Scissors (RPS) model. However, real-world biological communities or quantum systems often contain a large number of interacting components (\(S \gg 3\)). To describe the self-organization behavior of such high-dimensional systems, this lecture first establishes a universal mathematical framework—Replicator Dynamics. This framework not only applies to describing strategy competition in biological populations, but its mathematical structure is also proven to have profound isomorphism with boson condensation phenomena in non-equilibrium quantum systems.

1.1 Theoretical Framework - Strategies, Populations and Microscopic Interaction Mechanisms

Consider a population composed of a large number of individuals, where \(S\) possible strategies (or species) exist, denoted as \(E_1, E_2, \dots, E_S\). The macroscopic state of the system is defined by the number of individuals \(N_i(t)\) adopting each strategy, where \(i = 1, \dots, S\).

Lecture Slide Screenshot

Evolution of strategy space: from simple Rock-Paper-Scissors (\(S=3\)), to the Rock-Paper-Scissors-Lizard-Spock variant popularized by The Big Bang Theory (\(S=5\)), to high-dimensional complex networks

The model is based on a core physical constraint: Population Conservation. Within the time scale under consideration, it is assumed that there is no birth or death of individuals (or births and deaths are in dynamic equilibrium), so that the total number of individuals \(N\) in the system remains constant:

\[ N = \sum_{i=1}^{S} N_i, \quad N_i(t) \geq 0 \]

This constraint means that the system's dynamics is entirely driven by Transitions of individuals between different strategies. Such transitions typically arise from Pairwise Interactions between individuals. When an individual adopting strategy \(E_j\) (Player \(j\)) encounters an individual adopting strategy \(E_i\) (Player \(i\)), if strategy \(E_i\) has an advantage in the current game payoff matrix, individual \(j\) may "imitate" individual \(i\), thereby abandoning original strategy \(E_j\) and adopting strategy \(E_i\).

This microscopic process can be analogized to autocatalytic processes in chemical reaction kinetics:

\[ E_j + E_i \xrightarrow{r_{ij}} E_i + E_i \]

This formula describes the process of Replication or Imitation.

Reactants (\(E_j + E_i\)): Represents an individual adopting strategy \(j\) (\(E_j\)) encountering an individual adopting strategy \(i\) (\(E_i\)).

Products (\(E_i + E_i\)): Represents that after the game or interaction occurs, since strategy \(i\) has an advantage (determined by rate \(r_{ij}\)), the individual adopting strategy \(j\) is "converted" to strategy \(i\) (i.e., imitated \(i\)). The original individual \(i\) remains unchanged. The result is that the system has 1 less \(E_j\) and 1 more \(E_i\).

Autocatalytic: It is called autocatalytic because \(E_i\) is both a product and a reactant (catalyst) in the reaction. The presence of \(E_i\) promotes the production of more \(E_i\) (through converting \(E_j\)), which is precisely the positive feedback mechanism for population growth.

This process mathematically corresponds to the population change jump:

\[ E_j \to E_i, \quad \text{resulting in population change:} \quad N_j \to N_j - 1, \quad N_i \to N_i + 1 \]

The rate \(G_{i \leftarrow j}\) at which this transition occurs depends on two factors:

1. Encounter probability: According to the Law of Mass Action, in a well-mixed system, the frequency of encounters between two types of individuals is proportional to the product of their population numbers \(N_i N_j\).

2. Transition probability: Determined by the intrinsic advantage of strategy \(i\) over strategy \(j\), encoded in the reaction rate constant \(r_{ij}\).

Therefore, the total rate of transition from strategy \(j\) to strategy \(i\) can be written as:

\[ G_{i \leftarrow j} (N_i, N_j) = r_{ij} N_i N_j \]

It is worth noting that in more general stochastic dynamics descriptions (such as master equations), there is usually also a spontaneous Mutation term, where the rate form is modified to \(r_{ij}(N_i + S_{ij})N_j\). However, in the deterministic replicator dynamics limit discussed in this lecture, the focus is on evolution driven purely by "selection pressure," so the spontaneous mutation term is set to \(S_{ij} = 0\).
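To make the microscopic picture concrete, the following minimal sketch simulates the imitation reaction \(E_j + E_i \to 2E_i\) with a Gillespie-type algorithm. The rate matrix r, population sizes, and time horizon are illustrative choices, not values from the lecture.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-way rate constants r[i, j] for E_j + E_i -> 2 E_i
# (cyclic RPS-like dominance, chosen only for illustration)
r = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

N = np.array([30, 30, 30])          # individual counts; the total is conserved
t, t_end = 0.0, 5.0
while t < t_end:
    rates = r * np.outer(N, N)      # G_{i<-j} = r_ij * N_i * N_j (mass action)
    total = rates.sum()
    if total == 0:                  # one strategy has taken over; nothing can fire
        break
    t += rng.exponential(1.0 / total)        # waiting time to the next reaction
    k = rng.choice(rates.size, p=(rates / total).ravel())
    i, j = np.unravel_index(k, rates.shape)  # the reaction E_j -> E_i fires
    N[j] -= 1
    N[i] += 1
print(N, N.sum())                   # counts fluctuate, but the total is still 90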

1.2 Derivation of the Anti-symmetric Lotka-Volterra Equation (ALVE)

Based on the above microscopic mechanism, we can construct the macroscopic differential equation describing the temporal evolution of the \(i\)-th strategy population \(N_i\). The net rate of change of \(N_i\) is determined by all possible "inflows" (other strategies converting to \(i\)) minus all possible "outflows" (\(i\) converting to other strategies):

\[ \dot{N}_i = \underbrace{\sum_{j \ne i} r_{ij} N_i N_j}_{\text{Gain from } j \to i} - \underbrace{\sum_{j \ne i} r_{ji} N_j N_i}_{\text{Loss from } i \to j} \]

By factoring out the common factor \(N_i N_j\), the equation can be simplified to the following form:

\[ \dot{N}_i = \sum_{j \ne i} (r_{ij} - r_{ji}) N_i N_j \]

At this point, a core physical quantity is introduced: the Interaction Matrix \(A\), whose elements \(a_{ij}\) are defined as the difference between two unidirectional transition rates:

\[ a_{ij} = r_{ij} - r_{ji} \]

The physical meaning of this definition is extremely profound. \(a_{ij}\) represents the Net Advantage of strategy \(i\) over strategy \(j\). This definition directly endows matrix \(A\) with Anti-symmetry (or Skew-symmetry):

\(a_{ij} = -a_{ji}\): If \(i\) defeats \(j\) and gains benefit (\(a_{ij} > 0\)), then \(j\) necessarily suffers an equal degree of loss (\(a_{ji} < 0\)). This is precisely the strict mathematical characteristic of Zero-sum Games—one party's gain strictly equals another party's loss.

\(a_{ii} = 0\): The net benefit of any strategy playing against itself is zero.

To make the equation scale-invariant and simplify analysis, the Relative Abundance or strategy frequency \(x_i\) is introduced:

\[ x_i = \frac{N_i(t)}{N}, \quad \text{satisfying constraint} \quad \sum_{i=1}^S x_i = 1 \]

Substituting \(N_i = N x_i\) into the rate equation and canceling the constant \(N\) (or absorbing it into a redefinition of time scale), we finally obtain the standard Anti-symmetric Lotka-Volterra Equation (ALVE):

\[ \dot{x}_i = x_i \sum_{j=1}^S a_{ij} x_j = x_i (A\mathbf{x})_i \]

Here, the vector term \((A\mathbf{x})_i\) has a clear biological meaning: it represents the Fitness of strategy \(i\) in the current population environment \(\mathbf{x}\).

If \((A\mathbf{x})_i > 0\), it means strategy \(i\) dominates in the average confrontation with the current mixed population, and its frequency \(x_i\) will grow exponentially.

If \((A\mathbf{x})_i < 0\), strategy \(i\) is at a disadvantage, and its frequency will decay.

This is precisely the mathematical expression of Frequency-dependent Selection: a strategy's advantage or disadvantage is not absolute but depends on the environment it is in (i.e., the distribution of other strategies).
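As a quick worked example, evaluate the fitness for the three-strategy RPS matrix of Lecture 7 at the barycenter \(\mathbf{x} = (1/3, 1/3, 1/3)\):

\[ A = \begin{pmatrix} 0 & 1 & -1 \\ -1 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix}, \qquad (A\mathbf{x})_i = \frac{1}{3} \sum_{j=1}^{3} a_{ij} = 0 \quad \text{for all } i \]

Every row of this matrix sums to zero, so all fitnesses vanish and the uniform mixture is a fixed point; away from it, whichever strategy beats the current majority acquires positive fitness and grows.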

Lecture Slide Screenshot

Schematic of complex strategy network (\(S=50\)). Nodes represent strategies, arrows represent non-zero net advantages \(a_{ij}\). The core question is: which strategies will ultimately win (condense)?

1.3 Physical Equivalence - Boson Condensation and Quantum State Selection

This section reveals a profound insight connecting biology and physics: the ALVE equation describing evolutionary games above is completely equivalent in mathematical structure to Bose-Einstein Condensation in Driven-Dissipative Systems.

In non-equilibrium quantum statistical physics:

  • Strategy \(E_i\) corresponds to the system's quantum states (such as Floquet States).

  • Individuals correspond to non-interacting Bosons occupying these energy levels.

  • Transitions \(G_{i \leftarrow j}\) correspond to transitions of bosons between different energy levels, induced jointly by external driving fields (Pump) and environmental Dissipation.

This equivalence originates from the statistical properties of bosons. The transition rate for bosons contains a statistical factor \((1+N_i)\), the Bosonic Enhancement effect: the more particles already occupying the target state, the higher the probability of transitioning into it. In the macroscopic limit (\(N \gg 1\)), \((1+N_i) \approx N_i\), which is mathematically consistent with the \(N_i\) factor arising from "imitating the successful" in evolutionary games.

The following table summarizes the rigorous mapping between these two domains:

| Feature Dimension | Evolutionary Game Theory | Driven-Dissipative Bosonic Systems |
| --- | --- | --- |
| Basic units | Strategies (\(E_i\)) | Quantum States (Floquet States) |
| Dynamical subjects | Interacting Agents | Non-interacting Bosons |
| Evolution mechanism | Strategy imitation and switching (Reactions/Switches) | Transitions between energy levels |
| Core equation | Anti-symmetric Lotka-Volterra Equation (ALVE) | Mean-field limit of the Master Equation |
| Core question | Selection of Strategies | Selection of Condensates |
| Transition rate | \(T_{j \to i} \propto N_j N_i\) (Imitation mechanism) | \(T_{j \to i} \propto N_j(1 + N_i) \approx N_j N_i\) (Bosonic enhancement) |

Lecture Slide Screenshot

This figure summarizes the physical isomorphism comparison between evolutionary game theory and driven-dissipative bosonic systems.

This isomorphism suggests a universal selection mechanism in nature: whether it's the survival of the fittest in biological strategies or the condensation of quantum states, they essentially use nonlinear positive feedback (\(\dot{x}_i \propto x_i\)) to spontaneously "screen" a few macroscopically ordered states from a large number of possible microscopic states.

1.4 Phase Space of the System and Conservation Law Verification

Before deeply analyzing the dynamics, we must confirm that the solutions of the ALVE equation are physically legitimate. Since the strategy frequencies are non-negative and must sum to 1, the system's state space is confined to a Simplex \(\Delta^{S-1}\):

\[ \Delta^{S-1} = \left\{ \mathbf{x} \in \mathbb{R}_{\geq 0}^S : \sum_{i=1}^S x_i = 1 \right\} \]

We need to verify whether ALVE dynamics automatically guarantees conservation of total probability, i.e., verify whether \(\sum_{i=1}^S \dot{x}_i\) is always zero.

Calculate the total rate of change:

\[ \sum_{i=1}^S \dot{x}_i = \sum_{i=1}^S x_i (A\mathbf{x})_i = \sum_{i=1}^S \sum_{j=1}^S x_i a_{ij} x_j = \mathbf{x}^T A \mathbf{x} \]

This is a Quadratic Form with respect to vector \(\mathbf{x}\). For any anti-symmetric matrix \(A\) (satisfying \(A^T = -A\)), this quadratic form is identically zero. The proof is as follows:

Swap the summation indices \(i\) and \(j\) (which does not change the summation result):

\[ \sum_{i,j} x_i a_{ij} x_j = \sum_{j,i} x_j a_{ji} x_i \]

Using the anti-symmetry condition \(a_{ji} = -a_{ij}\):

\[ \sum_{j,i} x_j (-a_{ij}) x_i = - \sum_{i,j} x_i a_{ij} x_j \]

Since a number equals its negative, that number must be zero. Therefore:

\[ \frac{d}{dt} \left( \sum_{i=1}^S x_i \right) = 0 \]

This rigorously proves that the system's total mass (or total probability) is conserved. Furthermore, since the equation has the form \(\dot{x}_i = x_i (\dots)\), when \(x_i \to 0\), \(\dot{x}_i \to 0\), meaning any trajectory starting with positive values can never cross the boundary \(x_i = 0\) into negative values. Therefore, the simplex \(\Delta^{S-1}\) is a Forward Invariant Set for this dynamical system, laying the geometric foundation for subsequent global stability analysis using Lyapunov functions.
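Both properties are easy to confirm numerically. The following minimal sketch (a random antisymmetric matrix and naive Euler steps; all parameters are illustrative) checks that the quadratic form vanishes and that trajectories stay on the simplex:

import numpy as np

rng = np.random.default_rng(1)
S = 6
B = rng.normal(size=(S, S))
A = B - B.T                  # an arbitrary antisymmetric matrix: A.T == -A

x = rng.random(S)
x /= x.sum()                 # a point on the simplex
print(x @ A @ x)             # the quadratic form vanishes (up to round-off)

# naive Euler integration of the ALVE; a small dt keeps components positive
for _ in range(1000):
    x = x + 1e-3 * x * (A @ x)
print(x.sum(), x.min())      # the sum stays 1; all components stay positive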

2. Condensation Phenomena and Global Stability Analysis - The Lyapunov Function Method

After establishing the high-dimensional replicator dynamics equation (ALVE), the next core task is to predict the system's long-term evolutionary behavior. In complex ecological competition or quantum state evolution, not all strategies (or quantum states) can survive. As time progresses, the system undergoes spontaneous "selection": the frequency \(x_i(t)\) of some strategies will decay exponentially to zero (Depletion/Extinction), while other strategies will survive and occupy macroscopic proportions (Condensation/Survival). The goal of this section is to find an algebraic criterion that depends only on the interaction matrix \(A\) to precisely predict the final set of survivors.

2.1 Mathematical Formulation of the Condensation Problem

Suppose the system eventually tends toward some steady state or dynamically balanced region. The strategy set \(\{1, \dots, S\}\) can be partitioned into two complementary subsets:

  • Condensates (\(I\)): The set of strategies that ultimately survive, satisfying \(x_i^* > 0\).

  • Depleted (\(E\) or \(I^c\)): The set of strategies that ultimately go extinct, satisfying \(x_k^* \to 0\).

According to the ALVE equation \(\dot{x}_i = x_i (A\mathbf{x})_i\), a fixed point \(\mathbf{x}^*\) must satisfy \(\dot{x}_i^* = 0\). This leads to different constraints for the two types of strategies:

  • For survivors \(i \in I\) (\(x_i^* > 0\)), fitness must be zero: \((A\mathbf{x}^*)_i = 0\). This means survivors achieve some kind of force balance among themselves, where no one can gain net benefit from other survivors (Nash equilibrium characteristic).

  • For losers \(k \in E\) (\(x_k^* = 0\)), their fitness \((A\mathbf{x}^*)_k\) has no direct zero constraint, but typically needs to be negative to ensure stability of extinction.

2.2 Condensate Vector and KKT Conditions

To mathematically determine the set \(I\), we introduce a theoretical auxiliary vector—the Condensate Vector \(\mathbf{c}\). \(\mathbf{c}\) is not only a fixed point of the system but also satisfies specific global stability conditions, which are highly similar to the Karush-Kuhn-Tucker (KKT) conditions in optimization theory.

A vector \(\mathbf{c} \in \Delta^{S-1}\) is defined as a condensate vector if and only if it satisfies the following two conditions:

1. Support Condition: For all non-zero components of \(\mathbf{c}\) (i.e., \(i \in I\)), the fitness is strictly zero.

\[ (A\mathbf{c})_i = 0, \quad \forall i \in I \quad (\text{where } c_i > 0) \]

2. Stability/Exclusion Condition: For all zero components of \(\mathbf{c}\) (i.e., \(k \notin I\)), the fitness must be strictly negative.

\[ (A\mathbf{c})_k < 0, \quad \forall k \notin I \quad (\text{where } c_k = 0) \]

Physical interpretation:

  • The first condition means that among the survivors, the game reaches a "draw": a dynamic equilibrium with zero net benefit.

  • The second condition means that any external strategy \(k\) attempting to invade this stable population encounters negative returns when interacting with the current population \(\mathbf{c}\), and is suppressed. This is the mathematical guarantee that the system rejects external invaders.

Research by Knebel et al. (as shown in the figure below) proves that for a given anti-symmetric matrix \(A\), there is exactly one index set \(I\) satisfying the above conditions. This uniqueness theorem is the cornerstone of the entire condensation theory: it guarantees deterministic evolutionary outcomes, i.e., regardless of initial conditions (as long as all \(x_i(0) > 0\)), the survivor set is predetermined.
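This uniqueness also invites a direct numerical test. The sketch below brute-forces the two conditions over all candidate supports; it is practical only for small \(S\), and the example matrix is the one used in the code practice of Sec. 5.3:

import numpy as np
from itertools import combinations
from scipy.linalg import null_space

def condensate_vector(A, tol=1e-9):
    # Brute-force search for the condensate vector of an antisymmetric A:
    # find a support I whose restricted matrix has a sign-definite kernel
    # vector, then check the exclusion condition (Ac)_k < 0 off the support.
    S = A.shape[0]
    for size in range(1, S + 1):
        for I in combinations(range(S), size):
            ns = null_space(A[np.ix_(I, I)])
            if ns.shape[1] != 1:
                continue                  # need a one-dimensional kernel
            v = ns[:, 0]
            if not (np.all(v > tol) or np.all(v < -tol)):
                continue                  # kernel vector must be sign-definite
            c = np.zeros(S)
            c[list(I)] = np.abs(v) / np.abs(v).sum()
            off = [k for k in range(S) if k not in I]
            if np.all((A @ c)[off] < -tol):
                return c                  # both conditions hold
    return None

# interaction matrix from the code practice in Sec. 5.3
A = np.array([[ 0,  2,  1,  0, -1],
              [-2,  0,  2,  1, -3],
              [-1, -2,  0, -1,  4],
              [ 0, -1,  1,  0, -4],
              [ 1,  3, -4,  4,  0]], dtype=float)
print(condensate_vector(A))   # support {1, 3, 5}: c = (2/3, 0, 1/6, 0, 1/6)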

Lecture Slide Screenshot

The upper part of the figure shows the definition and properties of the condensate vector. The left side shows a schematic of condensation and depletion; the right side lists the KKT conditions that condensate vector \(\mathbf{c}\) must satisfy, and the mathematical framework using relative entropy (KL divergence) as a Lyapunov function.

2.3 Lyapunov Function - Relative Entropy and Information Geometry

To rigorously prove that the system \(\mathbf{x}(t)\) must converge to the subspace \(I\) defined by \(\mathbf{c}\), we need to construct a Lyapunov Function. For this type of population dynamics equation, the most natural candidate is Relative Entropy from information theory, also known as Kullback-Leibler divergence.

Define the Lyapunov function \(D(\mathbf{c} \| \mathbf{x})\) as the relative entropy between condensate vector \(\mathbf{c}\) and the system's current state \(\mathbf{x}(t)\):

\[ D(\mathbf{c} \| \mathbf{x}) = \sum_{i \in I} c_i \ln \left( \frac{c_i}{x_i(t)} \right) \]

Note:

  • The summation runs only over \(i \in I\), i.e., the non-zero components of the condensate vector.

  • By Gibbs' inequality (a consequence of the convexity of \(x \ln x\)), \(D \geq 0\), with the minimum attained only when the distribution of \(\mathbf{x}\) on \(I\) coincides with \(\mathbf{c}\). This qualifies \(D\) as a "generalized distance" or "potential energy" function in phase space.

Derivation of time derivative (core proof):

Calculate the rate of change \(\dot{D}\) of \(D\) with respect to time:

\[ \frac{d}{dt} D(\mathbf{c} \| \mathbf{x}) = \sum_{i \in I} c_i \frac{d}{dt} [\ln c_i - \ln x_i] = - \sum_{i \in I} c_i \frac{\dot{x}_i}{x_i} \]

Substituting the ALVE equation \(\frac{\dot{x}_i}{x_i} = (A\mathbf{x})_i = \sum_{j=1}^S a_{ij} x_j\):

\[ \dot{D} = - \sum_{i \in I} c_i \left( \sum_{j=1}^S a_{ij} x_j \right) = - \sum_{j=1}^S x_j \left( \sum_{i \in I} c_i a_{ij} \right) \]

Using the anti-symmetry of the matrix \(a_{ij} = -a_{ji}\), the inner summation becomes:

\[ \sum_{i \in I} c_i a_{ij} = - \sum_{i \in I} a_{ji} c_i = - (A\mathbf{c})_j \]

Therefore, \(\dot{D}\) simplifies to:

\[ \dot{D} = \sum_{j=1}^S x_j (A\mathbf{c})_j \]

Decompose the summation index \(j\) into two parts: the part belonging to condensate set \(I\) and the part belonging to depleted set \(E\):

\[ \dot{D} = \sum_{j \in I} x_j \underbrace{(A\mathbf{c})_j}_{=0} + \sum_{k \in E} x_k \underbrace{(A\mathbf{c})_k}_{<0} \]

According to the definition of the condensate vector \(\mathbf{c}\) (support condition and exclusion condition):

  • For \(j \in I\), \((A\mathbf{c})_j = 0\), so the first term vanishes.

  • For \(k \in E\), \((A\mathbf{c})_k < 0\), and since \(x_k \geq 0\), each term is non-positive.

Conclusion:

\[ \frac{d}{dt} D(\mathbf{c} \| \mathbf{x}) = \sum_{k \in E} x_k (A\mathbf{c})_k \leq 0 \]

This result (as shown in the numerical simulation below) has profound physical significance and global dynamical implications:

Lecture Slide Screenshot

The lower part of the figure shows numerical simulation results of ALVE dynamics. The left shows a specific anti-symmetric interaction matrix \(A\). The right shows the evolution of strategy frequencies \(x_i(t)\) over time. It can be seen that despite oscillations in the system, unselected strategies (dashed lines) decay exponentially to zero (depletion), while surviving strategies (solid lines) continue to evolve in the subspace.

1. Generalization of the H-theorem: This result parallels Boltzmann's H-theorem: during the system's evolution, the information divergence relative to the "optimal strategy combination" \(\mathbf{c}\) monotonically decreases. The system is constantly "learning," approaching the optimal solution.

2. Vanishing of entropy production: Knebel and Frey point out that condensate selection is determined by the Vanishing of relative entropy production: the system tends toward a state where this entropy production is zero.

3. Rigorous proof of depletion: Since \((A\mathbf{c})_k\) is a strictly negative constant for every \(k \in E\), the only way for \(\dot{D}\) to tend to zero (the system stops sliding toward lower "potential energy") is for all depleted strategy frequencies \(x_k\) to tend to zero. This rigorously proves that non-condensed strategies must go extinct exponentially (verified numerically in the sketch after this list).

4. Oscillations within the subspace: After all \(x_k \to 0\), the system is confined to the subspace of \(I\), where the derivative of the Lyapunov function vanishes. Inside the condensate subspace the dynamics is therefore no longer dissipative and moves on level sets of \(D\). For anti-symmetric systems this typically means persistent Periodic Orbits (conservative closed orbits, like the Rock-Paper-Scissors cycles, rather than attracting limit cycles). Although the system has "condensed," it never comes to rest; the survivors keep up an eternal dynamic game.
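The monotone decay of \(D\) can be checked directly. The sketch below uses the 5-strategy matrix from the code practice in Sec. 5.3; its condensate vector works out to \(\mathbf{c} = (2/3, 0, 1/6, 0, 1/6)\), a sign-definite kernel vector of \(A\) restricted to the support \(\{1, 3, 5\}\), and the printout of \(A\mathbf{c}\) confirms both conditions:

import numpy as np
from scipy.integrate import solve_ivp

# interaction matrix from the code practice in Sec. 5.3
A = np.array([[ 0,  2,  1,  0, -1],
              [-2,  0,  2,  1, -3],
              [-1, -2,  0, -1,  4],
              [ 0, -1,  1,  0, -4],
              [ 1,  3, -4,  4,  0]], dtype=float)

c = np.array([2/3, 0, 1/6, 0, 1/6])  # condensate vector for this A
print(A @ c)                         # (0, -1.5, 0, -0.5, 0): zero on support, negative off it

rng = np.random.default_rng(42)
x0 = rng.random(5)
x0 /= x0.sum()
sol = solve_ivp(lambda t, x: x * (A @ x), (0, 15), x0,
                t_eval=np.linspace(0, 15, 300), rtol=1e-8, atol=1e-10)

I = c > 0                            # support of the condensate vector
D = np.array([np.sum(c[I] * np.log(c[I] / x[I])) for x in sol.y.T])
print(D[0], D[-1], np.all(np.diff(D) < 1e-6))  # D never increases (up to solver error)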

3. Topological Phase Transitions and Strategy Selection in Complex Networks

In the first two sections, we established a general framework based on the Anti-symmetric Lotka-Volterra Equation (ALVE) and used Lyapunov functions to prove that the system must evolve to specific condensate states. Although the existence of condensate vector \(\mathbf{c}\) is universal, its specific structure (i.e., which strategies survive) strongly depends on the topological properties of interaction matrix \(A\). When the interaction network between strategies has specific spatial structure (such as lattices or chains), ALVE dynamics will exhibit phenomena strikingly similar to Topological Insulators in condensed matter physics. This content demonstrates how dynamical systems theory crosses disciplinary boundaries, using the same mathematical language to describe biological evolution and quantum phase transitions.

3.1 Rock-Paper-Scissors Chains and Edge States

As a concrete physical model, consider arranging \(S\) strategies in a one-dimensional chain structure (RPS Chain). In this network, strategy \(i\) only has non-zero interactions with its spatial neighbors (such as \(i-1\) and \(i+1\)), forming local cyclic dominance relationships similar to Rock-Paper-Scissors. Research by Knebel, Geiger, and Frey (2020) shows that the evolution of such systems is not chaotic but exhibits highly ordered Spatial Localization.

Reference: Knebel, J., Geiger, P. M., & Frey, E. (2020). Topological phase transition in coupled rock-paper-scissors cycles. Physical Review Letters, 125(25), 258301.

When adjusting system control parameters (e.g., the relative strength ratio of left-dominance vs. right-dominance), the support set of condensate vector \(\mathbf{c}\) (i.e., the set of surviving strategies \(I\)) undergoes abrupt changes. These changes have two notable characteristics:

1. Polarization and Edge States:

In certain parameter ranges, the system's total mass (population fraction) is not uniformly distributed on the chain but completely concentrated at one end (left boundary or right boundary). This physically corresponds to Edge States: only strategies at the network's "edge" can survive, while strategies inside the "Bulk" all go extinct (deplete). This forms a perfect analogy with the "bulk insulating, surface conducting" property of topological insulators.

2. Robustness:

This edge condensation phenomenon has extremely strong resistance to parameter perturbations. As long as parameter changes do not cross a certain Critical Point, the system's condensation pattern (i.e., whether survivors are at the left or right end) remains strictly unchanged. This indicates the system is in a phase with Topological Protection, whose macroscopic properties are not determined by minor parameter details but by the system's global topological properties.

3.2 Winding Number and Topological Phase Transition

To quantitatively describe this phase transition, the Winding Number (\(\nu\)) is introduced as the system's topological invariant. Mathematically, this typically involves some kind of integral over the interaction matrix (or its Fourier-transformed Hamiltonian) in the Brillouin zone, similar to the Chern Number in the quantum Hall effect.

As control parameters vary continuously, the system's energy spectrum (in this context, the eigenvalue spectrum of the linearized operator) may undergo band gap closing and reopening. Accompanying this process, the winding number \(\nu\) undergoes discrete jumps (e.g., from \(0\) to \(1\)), marking the occurrence of Topological Phase Transition:

  • Trivial Phase (\(\nu=0\)): Corresponds to mass distribution in bulk states or disordered distribution, with the system being sensitive to perturbations.

  • Non-trivial Topological Phase (\(\nu \neq 0\)): Corresponds to mass precisely locked at boundaries, forming protected edge states.

Physical significance and biological implications: This discovery reveals that evolutionary systems may exploit topological properties to achieve functional stability. In noisy biological environments, if some critical functional gene or species is in a "topologically non-trivial" state, it can stably exist immune to environmental fluctuations. This provides a new physical perspective for understanding biological complexity: Natural selection not only screens for individuals with highest fitness but may also screen for the most topologically robust network structures.

4. From Microscopic Random Walk to Macroscopic Diffusion Equation

In the previous sections and lectures (replicator dynamics, the RPS model), all analyses rested on the "Well-mixed" assumption: the system was treated as a zero-dimensional point, ignoring spatial differences in the distribution of matter. However, real-world biophysical processes—from intracellular protein transport to population migration in ecosystems—occur in spatially extended media.

To describe Turing Patterns and other complex spatial self-organization structures, this lecture must break the limitation of "spatial uniformity" and introduce Spatial Extension. The most fundamental physical process connecting microscopic particle motion with macroscopic material transport is Diffusion. This section will show how to start from microscopic particles' random thermal motion and, through rigorous mathematical derivation, obtain the diffusion equation describing macroscopic concentration field evolution.

4.1 Historical Background and Physical Essence - Brownian Motion

The microscopic physical basis of diffusion phenomena is Brownian Motion. In 1827, botanist Robert Brown observed pollen particles suspended in water under a microscope and discovered they underwent ceaseless, irregular motion.

This phenomenon puzzled the scientific community for a long time, even triggering philosophical discussions about a "vital force." Charles Darwin also mentioned the anecdote in his memoirs, describing Brown's "little secret" about this discovery. It was not until 1905 that Einstein, in one of his annus mirabilis papers, revealed the physical nature of the puzzle: the random motion of suspended particles does not originate from any vital force but is caused by the fluctuating imbalance of thermal collisions from the surrounding solvent molecules (such as water molecules).

Lecture Slide Screenshot

Historical background of Brownian motion. The left side quotes Darwin's description of Brown's discovery; the right side shows Brownian motion trajectories of micron-scale particles filmed by Harvard University's Weitz Laboratory.

Einstein not only qualitatively explained the phenomenon but also established a quantitative relationship between the macroscopic diffusion coefficient \(D\) and microscopic physical quantities, the famous Einstein Relation:

\[ D = \frac{k_B T}{\gamma} \]

This formula reveals the physical origin of diffusion:

  • Driving force (molecular thermal motion): \(k_B T\) represents the thermal energy that powers diffusion. At the cell-biology scale (micrometers), \(k_B T \approx 4 \text{ pN} \cdot \text{nm}\), the core energy scale governing all random molecular motion inside cells.

  • Resistance (dissipation): \(\gamma\) is the Stokes Friction Coefficient, representing the hindrance of the fluid environment to particle motion.
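For a sense of scale, a standard worked example with textbook numbers: a bead of radius \(a = 0.5\ \mu\text{m}\) in water (\(\eta \approx 10^{-3}\ \text{Pa·s}\)) has Stokes friction \(\gamma = 6\pi\eta a \approx 9 \times 10^{-9}\ \text{kg/s}\), so

\[ D = \frac{k_B T}{6\pi\eta a} \approx \frac{4 \times 10^{-21}\ \text{J}}{9 \times 10^{-9}\ \text{kg/s}} \approx 0.4\ \mu\text{m}^2/\text{s} \]

meaning such a particle takes on the order of a second to diffuse across its own diameter.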

This theory not only laid the foundation for non-equilibrium statistical physics but was also one of the key pieces of evidence in human history confirming the existence of atoms.

4.2 Random Walk - Derivation from Discrete to Continuous

To rigorously connect microscopic randomness with macroscopic determinism mathematically, this lecture uses the 1D Random Walk model for derivation.

4.2.1 Discrete Master Equation

Consider a particle on a one-dimensional lattice, with physical parameters defined as:

  • \(\delta\): Lattice spacing (spatial step).

  • \(\tau\): Time step.

  • Jump rules: At each time step, the particle jumps left (\(x-\delta\)) with probability \(1/2\) and right (\(x+\delta\)) with probability \(1/2\). This is an unbiased, isotropic random walk.

Let \(P(x, t)\) be the probability of the particle being at position \(x\) at time \(t\). According to probability conservation, a particle at position \(x\) at time \(t + \tau\) must have come from left neighbor \(x-\delta\) (jumped right) or right neighbor \(x+\delta\) (jumped left) at the previous time \(t\). Therefore, we can write the discrete Master Equation:

\[ P(x, t+\tau) = \underbrace{\frac{1}{2} P(x-\delta, t)}_{\text{From left}} + \underbrace{\frac{1}{2} P(x+\delta, t)}_{\text{From right}} \]
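Before taking any limit, this discrete master equation can be iterated directly. A minimal sketch (lattice size and step count are arbitrary choices) confirms probability conservation and the linear growth of the variance:

import numpy as np

# iterate P(x, t + tau) = 0.5 * P(x - delta, t) + 0.5 * P(x + delta, t)
M = 201
P = np.zeros(M)
P[M // 2] = 1.0                      # the walker starts at the central site
for _ in range(400):
    P = 0.5 * np.roll(P, 1) + 0.5 * np.roll(P, -1)  # periodic wrap, harmless here
print(P.sum())                       # probability is conserved: 1.0
sites = np.arange(M) - M // 2
print(sites**2 @ P)                  # variance = number of steps (in lattice units): 400.0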

4.2.2 Continuum Limit and Taylor Expansion

To obtain the macroscopic equation, we need to take the Continuum Limit, i.e., let step size \(\delta \to 0\) and \(\tau \to 0\). At this point, the probability distribution \(P(x, t)\) becomes a smooth function, and we can perform Taylor Expansion on it.

Time term expansion (keeping to first order):

\[ P(x, t+\tau) \approx P(x, t) + \tau \frac{\partial P}{\partial t} \]

Spatial term expansion (keeping to second order): For left and right neighbor terms, expand around \(x\) respectively:

\[ P(x-\delta, t) \approx P(x, t) - \delta \frac{\partial P}{\partial x} + \frac{\delta^2}{2} \frac{\partial^2 P}{\partial x^2} \]
\[ P(x+\delta, t) \approx P(x, t) + \delta \frac{\partial P}{\partial x} + \frac{\delta^2}{2} \frac{\partial^2 P}{\partial x^2} \]

Substituting the above expansions into the discrete master equation:

\[ P + \tau \frac{\partial P}{\partial t} = \frac{1}{2} \left[ \left( P - \delta \frac{\partial P}{\partial x} + \frac{\delta^2}{2} \frac{\partial^2 P}{\partial x^2} \right) + \left( P + \delta \frac{\partial P}{\partial x} + \frac{\delta^2}{2} \frac{\partial^2 P}{\partial x^2} \right) \right] \]

Simplify the right side of the equation:

1. Zeroth-order term: \(\frac{1}{2}(P + P) = P\), cancels with the left side.

2. First-order term: \(\frac{1}{2}(-\delta \partial_x P + \delta \partial_x P) = 0\). Physical meaning: the cancellation of the first-order derivative terms reflects the Isotropy of the random walk, i.e., there is no directional Drift.

3. Second-order term: \(\frac{1}{2}\left(\frac{\delta^2}{2} \partial_{xx} P + \frac{\delta^2}{2} \partial_{xx} P\right) = \frac{\delta^2}{2} \frac{\partial^2 P}{\partial x^2}\).

Finally we obtain:

\[ \tau \frac{\partial P}{\partial t} = \frac{\delta^2}{2} \frac{\partial^2 P}{\partial x^2} \]

4.2.3 Macroscopic Diffusion Equation

Rearranging the above equation, separating coefficients:

\[ \frac{\partial P}{\partial t} = \left( \lim_{\delta,\tau \to 0} \frac{\delta^2}{2\tau} \right) \frac{\partial^2 P}{\partial x^2} \]

At this point, define the macroscopic Diffusion Coefficient \(D\):

\[ D \equiv \frac{\delta^2}{2\tau} \]

Replacing the microscopic probability density \(P(x, t)\) with the macroscopic material concentration \(C(x, t)\), we obtain the classical Diffusion Equation:

\[ \frac{\partial C}{\partial t} = D \nabla^2 C \]

Physical meaning: This derivation process is essentially the physical manifestation of the Central Limit Theorem. Although individual particle motion is random and unpredictable (variance grows linearly with time, \(\langle x^2 \rangle \sim 2Dt\)), the statistical average behavior of large numbers of particles (concentration field) follows precise deterministic laws. This lays a solid mathematical foundation for using partial differential equations (PDEs) to study Turing patterns and other macroscopic self-organization phenomena in subsequent chapters.
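This averaging can be tested with a direct ensemble simulation. In the minimal sketch below (step sizes \(\delta\), \(\tau\) and the ensemble size are illustrative), the measured mean squared displacement matches \(\langle x^2 \rangle = 2Dt\) with \(D = \delta^2/2\tau\):

import numpy as np

rng = np.random.default_rng(7)
delta, tau = 0.1, 0.01               # microscopic step sizes (illustrative)
D = delta**2 / (2 * tau)             # predicted diffusion coefficient: 0.5

n_walkers, n_steps = 5000, 1000
steps = rng.choice([-delta, delta], size=(n_walkers, n_steps))
x = np.cumsum(steps, axis=1)         # trajectories of all walkers

t = tau * np.arange(1, n_steps + 1)
msd = np.mean(x**2, axis=0)          # ensemble mean squared displacement
print(msd[-1], 2 * D * t[-1])        # <x^2> ~ 2 D t: both close to 10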

5. Analytical Solutions, Boundary Conditions and Numerical Simulation of the Diffusion Equation

In the previous section, this lecture derived the macroscopic diffusion equation through the continuum limit of microscopic random walks. To deeply understand the physical characteristics of the diffusion process, this section will explore its analytical solution methods mathematically, particularly the application of Fourier transforms. Additionally, considering that biophysical systems are usually bounded, this section will also analyze the decisive influence of different boundary conditions on system steady-state behavior, and demonstrate through Python code practice the two core dynamics covered in this lecture: condensation phenomena in evolutionary games and diffusion processes in space.

5.1 Fourier Transform Method for Solution

For the diffusion equation defined on the unbounded region \(x \in (-\infty, \infty)\), the most effective analytical solution tool is the Fourier Transform. This method transforms partial differential equations (PDEs) into easily solvable ordinary differential equations (ODEs).

First, define the transform from real-space concentration \(C(x, t)\) to reciprocal space (k-space) and its inverse:

\[ \tilde{C}(k, t) = \int_{-\infty}^{\infty} C(x, t) e^{ikx} \, dx \quad (\text{Forward transform}) \]
\[ C(x, t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \tilde{C}(k, t) e^{-ikx} \, dk \quad (\text{Inverse transform}) \]

Apply Fourier transform to both sides of the equation \(\partial_t C = D \partial_x^2 C\). Using the Fourier transform property of derivatives \(\mathcal{F}[\partial_x^2 C] = -k^2 \tilde{C}\), the original equation is decoupled into independent ODEs for each wavenumber \(k\):

\[ \frac{\partial \tilde{C}(k, t)}{\partial t} = -D k^2 \tilde{C}(k, t) \]

The solution of this ODE exhibits simple exponential decay:

\[ \tilde{C}(k, t) = \tilde{C}(k, 0) e^{-D k^2 t} \]

Physical meaning: The exponential factor \(e^{-D k^2 t}\) shows that modes with high wavenumber \(k\) (i.e., fine spatial structures) decay at the rate \(Dk^2\), proportional to the square of the wavenumber. Diffusion therefore preferentially smooths sharp gradients and small details in the concentration field, so spatial distributions become increasingly Smoothed over time.

Through inverse Fourier transform and using the convolution theorem, we obtain the general solution in real space. This solution is the convolution of the initial distribution \(C(x, 0)\) with the Heat Kernel (i.e., the Green's function of the diffusion equation):

\[ C(x, t) = \int_{-\infty}^{\infty} C(x', 0) \frac{1}{\sqrt{4\pi D t}} \exp\left( -\frac{(x-x')^2}{4Dt} \right) \, dx' \]

For point source initial condition \(C(x, 0) = \delta(x)\), the solution simplifies to a time-evolving Gaussian distribution. Its variance \(\sigma^2 = 2Dt\) grows linearly with time, meaning the average displacement distance of diffusing particles grows as \(\sqrt{t}\)—this is the signature characteristic of Brownian motion.
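The mode-wise decay also yields a one-line numerical solver. The sketch below (grid and parameters are illustrative; numpy's FFT uses the opposite sign convention from the transform above, which is irrelevant here since only \(k^2\) enters) propagates a point source spectrally and compares it against the heat kernel:

import numpy as np

L_box, N = 200.0, 1024
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
dx = x[1] - x[0]
D, t = 5.0, 4.0

C0 = np.zeros(N)
C0[N // 2] = 1.0 / dx                    # discrete stand-in for delta(x)

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)  # angular wavenumbers
Ct = np.fft.ifft(np.fft.fft(C0) * np.exp(-D * k**2 * t)).real

exact = np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
print(np.max(np.abs(Ct - exact)))        # tiny: the spectral solution matches the heat kernel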

5.2 Effect of Boundary Conditions

Real biophysical experiments often occur in finite containers or cells. Boundary Conditions (BCs) profoundly constrain the system's evolutionary trajectory and final steady state.

| Boundary Condition Type | Mathematical Expression | Physical Meaning | Steady-State Solution (\(t \to \infty\)) |
| --- | --- | --- | --- |
| No Flux (Neumann) | \(\partial_x C\,\vert_{x=0,L} = 0\) | Closed system; particles reflect at the walls; total mass is conserved. | Uniform distribution: \(C(x) = \text{const}\) |
| Dirichlet | \(C(0) = C_L,\ C(L) = C_R\) | Open system; boundaries connected to reservoirs at fixed concentration (sources/sinks). | Linear profile: \(C(x) = Ax + B\) (a constant net flux persists) |
| Periodic | \(C(0) = C(L),\ C'(0) = C'(L)\) | Circular geometry (e.g., bacterial circular DNA or a ring reactor). | Uniform distribution; relaxation proceeds through a discrete Fourier spectrum, as in the unbounded case |

5.3 Code Practice

To demonstrate the two core dynamical processes covered in this chapter, the following Python code practice simulates two scenarios:

1. Replicator Dynamics: Simulates ALVE dynamics with 5 strategies, reproducing condensation and depletion phenomena.

2. Diffusion: Simulates the diffusion of a one-dimensional Gaussian wave packet under no-flux boundary conditions.

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Set dark background theme
plt.style.use('dark_background')

# ================= PART 1: Replicator Dynamics (ALVE) =================
def run_replicator_simulation():
    # Antisymmetric interaction matrix (from lecture Slide 5)
    A = np.array([[ 0,  2,  1,  0, -1],
                  [-2,  0,  2,  1, -3],
                  [-1, -2,  0, -1,  4],
                  [ 0, -1,  1,  0, -4],
                  [ 1,  3, -4,  4,  0]])
    S = A.shape[0]

    def alve_dynamics(t, x):
        # ALVE equation: dx_i/dt = x_i * (Ax)_i
        # Note: In numerical computation, x may slightly deviate from the simplex due to errors, but this does not affect short-term demonstration
        fitness = A @ x
        return x * fitness

    # Initial condition: random distribution
    np.random.seed(42)
    x0 = np.random.rand(S)
    x0 /= np.sum(x0)

    # Solve ODE
    t_span = (0, 15)
    sol = solve_ivp(alve_dynamics, t_span, x0, t_eval=np.linspace(0, 15, 500))

    # Plotting
    fig, ax = plt.subplots(figsize=(10, 6))

    # Add a small epsilon to avoid log(0) error
    epsilon = 1e-10

    # Color cycle to distinguish different strategies
    colors = plt.cm.tab10(np.linspace(0, 1, S))

    for i in range(S):
        # Differentiate line styles based on final survival: solid for survivors, dashed for extinct
        is_survivor = sol.y[i, -1] > 1e-3
        style = '-' if is_survivor else '--'
        status = 'Condensate' if is_survivor else 'Depleted'
        label = f'Strategy {i+1} ({status})'

        ax.plot(sol.t, np.log(sol.y[i] + epsilon), 
                linestyle=style, label=label, linewidth=2, color=colors[i])

    ax.set_title('Replicator Dynamics: Condensation & Depletion', fontsize=16, color='white')
    ax.set_xlabel('Time', fontsize=14, color='white')
    ax.set_ylabel(r'$\ln(x_i)$', fontsize=14, color='white')
    ax.grid(True, alpha=0.2, linestyle='--')

    # Set legend text color
    legend = ax.legend(fontsize=10, loc='upper right')
    plt.setp(legend.get_texts(), color='white')

    plt.tight_layout()

    plt.savefig('replicator_dynamics.png') 
    plt.close() 

# ================= PART 2: 1D Diffusion Equation =================
def run_diffusion_simulation():
    L = 100.0
    Nx = 100
    D = 5.0
    T = 20.0

    dx = L / (Nx - 1)
    # Explicit difference stability condition: dt <= dx^2 / (2D)
    dt = 0.2 * dx**2 / D 

    x = np.linspace(0, L, Nx)

    # Initial condition: narrow Gaussian distribution at center
    u = np.exp(-((x - L/2)**2) / 20.0)

    # Store snapshots for plotting
    u_history = [u.copy()]
    t_snapshots = [0.0]

    steps = int(T / dt)
    plot_interval = steps // 10  # Record 10 time point snapshots

    for n in range(steps):
        u_new = u.copy()
        # Explicit finite difference scheme: u_new = u + alpha * (u_left - 2u + u_right)
        alpha = D * dt / dx**2
        u_new[1:-1] = u[1:-1] + alpha * (u[2:] - 2*u[1:-1] + u[:-2])

        # No flux boundary conditions (Neumann BCs): u[-1] = u[-2], u[0] = u[1]
        # This simulates particles bouncing off walls
        u_new[0] = u_new[1]
        u_new[-1] = u_new[-2]

        u = u_new

        if (n + 1) % plot_interval == 0:
            u_history.append(u.copy())
            t_snapshots.append((n + 1) * dt)

    # Plotting
    fig, ax = plt.subplots(figsize=(10, 6))

    # Use gradient colors to show temporal evolution: purple (early) to yellow (late)
    colors = plt.cm.plasma(np.linspace(0, 1, len(u_history)))

    for i, u_snapshot in enumerate(u_history):
        label = f't={t_snapshots[i]:.1f}' if i == 0 or i == len(u_history)-1 else None
        ax.plot(x, u_snapshot, color=colors[i], lw=2, label=label)

    ax.set_title(f'1D Diffusion (No Flux BC, D={D})', fontsize=16, color='white')
    ax.set_xlabel('Position x', fontsize=14, color='white')
    ax.set_ylabel('Concentration C(x)', fontsize=14, color='white')

    # Add colorbar to indicate time progression
    sm = plt.cm.ScalarMappable(cmap=plt.cm.plasma, norm=plt.Normalize(vmin=0, vmax=T))
    sm.set_array([])  # needed on older matplotlib for a colorbar without a mappable artist
    cbar = plt.colorbar(sm, ax=ax)
    cbar.set_label('Time', color='white', fontsize=12)
    cbar.ax.yaxis.set_tick_params(color='white')
    plt.setp(plt.getp(cbar.ax.axes, 'yticklabels'), color='white')

    ax.grid(True, alpha=0.2, linestyle='--')
    legend = ax.legend(loc='upper right', fontsize=10)
    plt.setp(legend.get_texts(), color='white')

    plt.tight_layout()
    plt.savefig('diffusion_equation.png')
    plt.close()

if __name__ == "__main__":
    run_replicator_simulation()
    run_diffusion_simulation()

Code Output

This figure shows the logarithmic frequencies of the 5 strategies over time. A clear separation is visible: strategies drawn with solid lines (such as strategies 1, 3, 5) oscillate but remain at finite levels overall, constituting the Condensate; strategies drawn with dashed lines (such as strategies 2, 4) show linearly decreasing logarithmic frequencies (i.e., exponentially decaying values) and quickly approach Depletion. This validates the predictions of the Lyapunov analysis.

Code Output

This figure shows the evolution of an initial Gaussian wave packet (dark purple curve) over time. As time progresses (color changes from purple to yellow), the wave packet's peak decreases and width increases, reflecting the Smoothing effect of diffusion. Note that at the boundaries \(x=0\) and \(x=100\), the curves always maintain horizontal tangent lines (\(\partial_x C = 0\)), verifying correct implementation of No-flux boundary conditions and ensuring total mass conservation within the system.

Summary

This lecture completed two key theoretical leaps: from "local" to "global," and from "time" to "space."

First, the course solved the global stability problem of high-dimensional replicator dynamics by introducing Lyapunov functions (especially relative entropy), and profoundly revealed the mathematical isomorphism between strategy selection in evolutionary games and boson condensation in non-equilibrium physics.

Second, based on analysis of microscopic Brownian motion, the course rigorously derived the macroscopic diffusion equation using the random walk model, thereby laying a solid physical foundation for describing material transport in spatially extended systems.

These two theoretical pillars—nonlinear reaction dynamics (ALVE/replicator) and spatial diffusion—will formally converge in the next lecture (Lecture 9). There, the course will explore Relaxational Dynamics in depth and introduce the famous Ginzburg-Landau Model and Model A. The analysis will focus on how, when the "reaction" and "diffusion" mechanisms compete in space, the system breaks spatial symmetry, giving rise to complex Interface Dynamics, Surface Tension, and fascinating spatiotemporal patterns. This marks the entry of self-organization theory into its most central chapter.