
Introduction

In Lecture 5, the analysis of two-dimensional dynamical systems (\(\dot{m} = f(m,c), \dot{c} = g(m,c)\)) was subject to a key physical constraint: mass conservation. This constraint manifests mathematically as \(g = -f\), leading to the conservation law \(m(t) + c(t) = n = \text{const}\). The dynamical consequences of this constraint are profound:

1. It forces the system's Jacobian matrix \(\mathcal{J} = \begin{pmatrix} f_m & f_c \\ -f_m & -f_c \end{pmatrix}\) to be necessarily singular (i.e., \(\det(\mathcal{J}) = 0\)).

2. This leads to the system necessarily having a zero eigenvalue (\(\sigma_1 = 0\)), making the system neutrally stable along the nullcline (i.e., the family of equilibrium points) direction.

3. All the system's dynamics is "compressed" onto the direction corresponding to the other eigenvalue \(\sigma_2 = f_m - f_c\), which is exactly parallel to the conservation line \(m+c=n\).

Therefore, although the system in Lecture 5 is formally two-dimensional, its dynamical essence is one-dimensional. Its evolution is strictly confined to a one-dimensional manifold (the conservation line). As a result, this system can only exhibit one-dimensional dynamical behavior (such as the saddle-node or transcritical bifurcations discussed in Lecture 3 and Lecture 4), and cannot produce any oscillatory behavior.

This lecture, by removing the mass conservation constraint, achieves the key transition from one-dimensional dynamics to "genuine" two-dimensional dynamics. The system's dynamics will be described by general equations \(\dot{u} = f(u, v)\) and \(\dot{v} = g(u, v)\), where the special relationship \(g=-f\) no longer exists between \(f\) and \(g\). The consequence is that the system is no longer bound to a single line but can evolve throughout the entire two-dimensional \((u, v)\) phase space.

The transition from conserving to non-conserving is a necessary step from simple dynamics (containing only fixed points and monotonic behavior) to complex dynamics (such as spirals, oscillations, and limit cycles). Two-dimensional space is the lowest dimension that allows trajectories to rotate and return to themselves—this is the geometric foundation of oscillatory phenomena. However, linear analysis will reveal that purely linear oscillation (center point) is a critically "fine-tuned" state (\(T=0\)) that is physically non-robust. Therefore, Non-linearity is crucial for producing stable, self-sustained oscillations.

This lecture will build a "toolbox" for analyzing and classifying two-dimensional system behavior. First, through Linear Stability Analysis (LSA), using the Jacobian matrix's Trace (T) and Determinant (\(\Delta\)), we will completely classify local behavior near fixed points (such as nodes, saddles, spirals). Second, we introduce Nullclines and Invariant Manifolds to construct the system's global phase portrait. Finally, we apply these tools to understand two of the most important and representative examples of nonlinear oscillation: the Hopf Bifurcation (the "birth" of oscillations) and the Van der Pol Oscillator (relaxation oscillations).

This lecture will intuitively demonstrate through examples the existence of Limit Cycles, which are isolated, stable periodic orbits—the mathematical embodiment of self-sustained oscillations. In the following Lecture 7, we will introduce the Poincaré-Bendixson Theorem, which provides a rigorous mathematical foundation for determining the existence of limit cycles in the two-dimensional plane.

1. Two-Dimensional Non-Conserving Systems and Linear Stability Analysis

The goal of this lecture is to establish a "toolbox" for analyzing and classifying "genuine" two-dimensional non-conserving systems. In Lecture 5, the constraint of mass conservation (\(g = -f\)) caused the system dynamics to be essentially one-dimensional. Now, by lifting this constraint, the system can evolve throughout the entire two-dimensional phase space, allowing more complex behaviors such as rotation and oscillation.

The first and most critical step in analysis is to find all equilibrium states (fixed points) of the system and establish a rigorous mathematical method to determine their local stability. This method is Linear Stability Analysis (LSA), which is a direct extension of the one-dimensional \(f'(u^*)\) criterion from Lecture 3 to two dimensions.

1.1 Fixed Points and Linearization

The dynamics of a general two-dimensional non-conserving system is described by a system of ordinary differential equations (ODEs):

\[ \begin{aligned} \dot{u} &= f(u, v) \\ \dot{v} &= g(u, v) \end{aligned} \]

where \(f\) and \(g\) are two independent (nonlinear) functions.

Fixed Points

The key features of a dynamical system are its fixed points \(\vec{u}^* = (u^*, v^*)\), at which the system stops evolving. By definition, fixed points are the intersections of two Nullclines (detailed in Section 2) and must simultaneously satisfy \(\dot{u} = 0\) and \(\dot{v} = 0\):

\[ f(u^*, v^*) = 0 \quad \text{and} \quad g(u^*, v^*) = 0 \]

This is the same concept as \(f(u^*)=0\) for one-dimensional systems, but solving changes from finding roots of a one-dimensional equation to finding solutions of a two-dimensional algebraic system.

Linearization

To understand the stability of a fixed point \(\vec{u}^*\)—whether the system returns to or moves away from the fixed point when slightly perturbed—we need to linearize around \(\vec{u}^*\).

First, introduce vector notation for more compact symbols:

\[ \vec{u} = \begin{pmatrix} u \\ v \end{pmatrix} \quad \text{and} \quad \vec{F}(\vec{u}) = \begin{pmatrix} f(u, v) \\ g(u, v) \end{pmatrix} \]

The system's dynamical equation is thus compactly written as \(\partial_t \vec{u} = \vec{F}(\vec{u})\).

Assume the current state \(\vec{u}(t)\) of the system is the fixed point \(\vec{u}^*\) plus a small time-varying perturbation \(\delta\vec{u}(t)\):

\[ \vec{u}(t) = \vec{u}^* + \delta\vec{u}(t) \]

Substituting this into the dynamical equation \(\partial_t \vec{u} = \vec{F}(\vec{u})\):

\[ \partial_t (\vec{u}^* + \delta\vec{u}) = \vec{F}(\vec{u}^* + \delta\vec{u}) \]

On the left side of the equation, since \(\vec{u}^*\) is a constant, its time derivative is zero, leaving only \(\partial_t \delta\vec{u}\).

On the right side, Taylor expand \(\vec{F}\) around \(\vec{u}^*\) and keep only linear terms:

\[ \vec{F}(\vec{u}^* + \delta\vec{u}) = \underbrace{\vec{F}(\vec{u}^*)}_{=0} + \left. J_F \right|_{\vec{u}^*} \delta\vec{u} + \mathcal{O}(\delta\vec{u}^2) \]

By the definition of a fixed point, \(\vec{F}(\vec{u}^*) = 0\). After ignoring higher-order terms \(\mathcal{O}(\delta\vec{u}^2)\), we obtain the linear dynamics equation for the perturbation:

\[ \partial_t \delta\vec{u} = J_F \delta\vec{u} \]

\(J_F\) is the Jacobian Matrix evaluated at the fixed point \(\vec{u}^*\). It consists of all partial derivatives of \(\vec{F}\) with respect to \(\vec{u}\):

\[ J_F = \begin{pmatrix} \partial_u f & \partial_v f \\ \partial_u g & \partial_v g \end{pmatrix}_{\vec{u}=\vec{u}^*} \]

Physical meaning: The Jacobian matrix \(J_F\) is the two-dimensional generalization of the one-dimensional derivative \(f'(u^*)\) from Lecture 3. In 1D, stability is determined by a single number (the slope); in 2D, stability is jointly determined by the local "flow field" (stretching, compression, or rotation) described by this \(2 \times 2\) matrix.
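As a concrete illustration, the Jacobian can also be computed symbolically. The sketch below (assuming sympy is available) uses the example system analyzed later in Section 2.2, \(f = u - u^2 v\) and \(g = u - v\):

import sympy as sp

u, v = sp.symbols('u v')
F = sp.Matrix([u - u**2 * v,    # f(u, v)
               u - v])          # g(u, v)
J = F.jacobian(sp.Matrix([u, v]))
print(J)                        # Matrix([[1 - 2*u*v, -u**2], [1, -1]])
print(J.subs({u: 1, v: 1}))     # Jacobian evaluated at (1, 1), a fixed point of that system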

1.2 The Eigenvalue Problem

The previous step reduced a nonlinear problem to a constant-coefficient linear differential equation system: \(\partial_t \delta\vec{u} = J_F \delta\vec{u}\).

To solve it, we use an exponential Ansatz, because for linear systems, exponential functions are "eigenfunctions" of derivatives:

\[ \delta\vec{u}(t) \sim e^{\sigma t} \vec{v} \]

Here, \(\vec{v}\) is a constant vector (the direction of perturbation), and \(\sigma\) is a (possibly complex) scalar (the growth rate of perturbation).

Substituting this trial solution into the linearized equation:

\[ \partial_t (e^{\sigma t} \vec{v}) = J_F (e^{\sigma t} \vec{v}) \]
\[ \sigma e^{\sigma t} \vec{v} = e^{\sigma t} (J_F \vec{v}) \]

After canceling \(e^{\sigma t}\), the differential equation problem is transformed into an algebraic Eigenvalue Problem:

\[ J_F \vec{v} = \sigma \vec{v} \]

\(\sigma\) is the Eigenvalue of the Jacobian matrix \(J_F\), and \(\vec{v}\) is the corresponding Eigenvector.

Physical meaning:

Eigenvector \(\vec{v}\): Represents the "principal axis" directions of the system near the fixed point. Perturbations can be decomposed into components along these principal axis directions.

Eigenvalue \(\sigma\): Represents the temporal evolution behavior of perturbation components \(e^{\sigma t} \vec{v}\). The sign and real/imaginary nature of \(\sigma\) determine the fate of the fixed point.

The stability criterion now becomes:

\(\text{Re}(\sigma) < 0\) (for all eigenvalues): Perturbations in all directions decay exponentially over time (\(\delta\vec{u}(t) \to 0\)). The fixed point is stable.

\(\text{Re}(\sigma) > 0\) (for at least one eigenvalue): At least one direction's perturbation grows exponentially over time. The fixed point is unstable.

\(\text{Re}(\sigma) = 0\) (for some eigenvalue): Linear stability analysis fails. The system is in a state of neutral stability or criticality, and stability is determined by nonlinear terms. This is precisely the point where bifurcation occurs.

1.3 Classifying Fixed Points via \((T, \Delta)\)

For a \(2 \times 2\) matrix \(J_F\), its eigenvalues \(\sigma\) are given by the characteristic equation \(\det(J_F - \sigma I) = 0\). This is always a quadratic equation in \(\sigma\):

\[ \sigma^2 - (\text{tr}(J_F))\sigma + \det(J_F) = 0 \]

The solutions (i.e., the two eigenvalues \(\sigma_1, \sigma_2\)) can be completely expressed in terms of two matrix invariants of \(J_F\):

Trace (T):

\[ T = \text{tr}(J_F) = \partial_u f + \partial_v g = \sigma_1 + \sigma_2 \]

Determinant (\(\Delta\)):

\[ \Delta = \det(J_F) = (\partial_u f)(\partial_v g) - (\partial_v f)(\partial_u g) = \sigma_1 \sigma_2 \]

The two eigenvalues \(\sigma_{\pm}\) can be expressed using only these two invariants, i.e., as solutions to the quadratic equation:

\[ \sigma_{\pm} = \frac{1}{2} \left(T \pm \sqrt{T^2 - 4\Delta} \right) \]

The significance of this formula is that by computing only the two numerical values \(T\) and \(\Delta\) evaluated at the fixed point, one can completely classify all possible types of fixed points in two-dimensional systems.

Lecture Blackboard Screenshot

The \((T, \Delta)\) phase diagram above is used to classify fixed points of two-dimensional systems. This diagram shows the T (trace) and \(\Delta\) (determinant) plane, as well as regions separated by the parabola \(T^2=4\Delta\). Corresponding phase portraits (saddle points, nodes, spirals, centers) are drawn in different regions.

Below is the detailed classification based on the signs of \(T\), \(\Delta\), and the discriminant \(T^2 - 4\Delta\):

Lecture Blackboard Screenshot

1. \(\Delta < 0\): Saddle Point

Mathematics: Since \(\Delta = \sigma_1 \sigma_2 < 0\), the two eigenvalues must be real and have opposite signs (\(\sigma_+ > 0, \sigma_- < 0\)).

Physical meaning: The system is unstable in one direction (corresponding to the eigenvector \(\vec{v}_+\) of \(\sigma_+\), called the unstable manifold, with flow out) and stable in another direction (corresponding to the eigenvector \(\vec{v}_-\) of \(\sigma_-\), called the stable manifold, with flow in). This is the classic "saddle point" or "mountain pass."

2. \(\Delta > 0\): Nodes or Spirals

Mathematics: \(\Delta = \sigma_1 \sigma_2 > 0\), so the real parts of the two eigenvalues have the same sign (either two real eigenvalues of the same sign, or a complex-conjugate pair). Stability is completely determined by the sign of \(T = \sigma_1 + \sigma_2\):

  • \(T < 0 \implies\) Stable

  • \(T > 0 \implies\) Unstable

2a. \(T^2 > 4\Delta\) (Real Eigenvalues): Nodes

Mathematics: The discriminant is positive, so \(\sigma_{\pm}\) are two real numbers.

Physical meaning: Trajectories flow directly into or out of the fixed point without rotation.

  • \(T < 0\): Stable Node. All trajectories flow inward.

  • \(T > 0\): Unstable Node. All trajectories flow outward.

2b. \(T^2 < 4\Delta\) (Complex Eigenvalues): Spirals / Foci

Mathematics: The discriminant is negative, so \(\sigma_{\pm}\) are a pair of complex conjugates: \(\sigma_{\pm} = \frac{T}{2} \pm i\omega\), where \(\omega = \frac{1}{2}\sqrt{4\Delta - T^2}\).

Physical meaning:

  • \(\text{Im}(\sigma_{\pm}) = \pm\omega\) (the imaginary part) causes Oscillation or rotation.

  • \(\text{Re}(\sigma) = T/2\) (real part) determines whether the amplitude grows or decays.

  • \(T < 0\): Stable Spiral. Trajectories spiral inward converging to the fixed point.

  • \(T > 0\): Unstable Spiral. Trajectories spiral outward away from the fixed point.

3. Boundary Cases: \(\Delta = 0\) or \(T = 0\)

\(\Delta > 0\) and \(T = 0\): Center

Mathematics: \(\sigma_{\pm} = \pm i\omega\). Pure imaginary eigenvalues.

Physical meaning: This is the case of the Harmonic Oscillator. The system neither flows in nor out, but oscillates permanently along closed orbits (ellipses).

Importance: This is a "fine-tuned" system. In the \((T, \Delta)\) phase diagram, it is only a line (\(T=0\)), not a region. In nature, any tiny perturbation (such as friction) would make \(T \neq 0\), causing the system to become a stable or unstable spiral, and oscillations would either disappear or explode. This provides motivation for the nonlinear oscillators in Section 3.

\(\Delta = 0\): Degenerate Case

Mathematics: \(\Delta = \sigma_1 \sigma_2 = 0\), so at least one eigenvalue is zero. In the \((T, \Delta)\) diagram this is the horizontal axis \(\Delta = 0\), where saddle-node-type bifurcations occur; the parabola \(T^2 = 4\Delta\) is a separate boundary on which the two eigenvalues coincide (degenerate nodes). The \(\Delta = 0\) case is exactly the situation encountered in Lecture 3 and Lecture 5, signaling that a Bifurcation is about to occur.
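The classification above can be turned into a small utility. Below is a minimal sketch (the function name and tolerance are arbitrary choices, not from the lecture) that labels a fixed point from the trace and determinant of its \(2 \times 2\) Jacobian:

import numpy as np

def classify_fixed_point(J, tol=1e-12):
    """Classify a 2x2 Jacobian via its trace T and determinant Delta."""
    T = np.trace(J)
    D = np.linalg.det(J)
    disc = T**2 - 4*D
    if D < -tol:
        return "saddle"
    if abs(D) <= tol:
        return "degenerate (zero eigenvalue): LSA inconclusive"
    if abs(T) <= tol:
        return "center at linear order: nonlinear terms decide"
    kind = "stable" if T < 0 else "unstable"
    shape = "node" if disc > 0 else "spiral"
    return f"{kind} {shape}"

# Example: the Jacobian [[-1, -1], [1, -1]] has T = -2, Delta = 2
print(classify_fixed_point(np.array([[-1.0, -1.0], [1.0, -1.0]])))  # -> "stable spiral"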

2. Global Phase Portrait Analysis - Nullclines and Invariant Manifolds

In Section 1, Linear Stability Analysis (LSA) provided a powerful "mathematical microscope" for determining the local stability of fixed points (e.g., whether it is a saddle or a spiral). However, LSA has fundamental limitations: it is only valid in an infinitesimally small region near the fixed point. It cannot tell us what happens when trajectories move far from fixed points, nor can it describe the system's global structure (e.g., whether multiple fixed points exist, how trajectories flow from one fixed point to another).

To reconstruct the complete Global Phase Portrait, this section will introduce two key geometric tools: Nullclines and Invariant Manifolds.

2.1 Nullclines

Nullclines are the most fundamental and useful tools when drawing phase portraits.

Definition:

Nullclines (or "Nulllines") are defined as curves in phase space where one branch of the dynamical equations has zero rate of change.

  • u-Nullcline (u-NC): Defined by \(\dot{u} = f(u, v) = 0\).

  • v-Nullcline (v-NC): Defined by \(\dot{v} = g(u, v) = 0\).

Significance of Nullclines:

Nullclines are not just a few lines—they are the "skeleton" of phase space, revealing the fundamental structure of the flow field:

Fixed points: Fixed points are where the system completely stops evolving, so they must simultaneously satisfy \(\dot{u} = 0\) and \(\dot{v} = 0\). Geometrically, fixed points must be located at the intersections of all nullclines.

Flow field direction: Nullclines provide strict constraints on the direction of the vector field \(\vec{F} = (\dot{u}, \dot{v})\):

  • On the u-NC (\(\dot{u} = 0\)), the vector field becomes \((0, \dot{v})\). This means the flow field (i.e., the tangent to trajectories) must be purely vertical (up or down).

  • On the v-NC (\(\dot{v} = 0\)), the vector field becomes \((\dot{u}, 0)\). This means the flow field must be purely horizontal (left or right).

Region division: Nullclines divide the entire \((u, v)\) phase space into different regions. Inside each region, the signs (positive or negative) of \(\dot{u}\) and \(\dot{v}\) are constant. By determining a \((+, -)\) sign combination in each region (e.g., \(\dot{u} > 0, \dot{v} < 0\) means flow to the "lower right"), one can sketch the flow direction throughout the entire phase space.

2.2 Example Analysis

The lecture used an example system to demonstrate how to use nullclines.

\[ \begin{aligned} \partial_t u &= u - u^2 v \\ \partial_t v &= u - v \end{aligned} \]

u-NCs (\(\dot{u} = 0\)):

\[ u(1 - uv) = 0 \Rightarrow \mathbf{u = 0} \text{ (v-axis)} \quad \text{or} \quad \mathbf{v = \frac{1}{u}} \text{ (hyperbola)} \]

v-NC (\(\dot{v} = 0\)):

\[ u - v = 0 \Rightarrow \mathbf{v = u} \text{ (diagonal)} \]

Drawing the phase portrait:

Draw these three nullclines (\(v = u\), \(u = 0\), \(v = \frac{1}{u}\)) on the \((u, v)\) plane.

Lecture Blackboard Screenshot

Nullcline analysis example. The red line is the v-nullcline (\(\dot{v}=0\)), and the white curves are u-nullclines (\(\dot{u}=0\)). Their intersections \((-1, -1)\), \((0, 0)\), and \((1, 1)\) are the system's fixed points. White arrows outline the global flow field direction in phase space.

Analyzing flow field direction:

1. v-NC (red line, \(v=u\)):

  • On the right side of this line (e.g., point (2,1)), \(u > v\), so \(\dot{v} = u - v > 0\). This means in all regions to the right of v-NC, the flow field must have an upward component.

  • Similarly, on the left side of v-NC, \(u < v\), \(\dot{v} < 0\), the flow field points down.

2. u-NCs (white lines, \(u=0\) and \(v=1/u\)):

  • In the region between the two lines (e.g., first quadrant \(v < 1/u\)), \(1 - uv > 0\). Therefore \(\dot{u} = u(1 - uv) > 0\), and the flow field points right.

  • In regions outside the two lines (e.g., first quadrant \(v > 1/u\)), \(1 - uv < 0\). Therefore \(\dot{u} < 0\), and the flow field points left.

3. Combined analysis: By combining these basic directions, one can sketch the flow lines throughout the entire phase space. For example, in the first quadrant, in the region lying above both \(v=u\) and \(v=1/u\), we have \(u < v\) (flow points down) and \(v > 1/u\) (flow points left), so the net flow direction in that region is toward the "lower left." A few probe-point checks of this kind are sketched in the code below.
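As mentioned above, a handful of probe points makes the sign pattern concrete. The following short check is a sketch; the probe points are arbitrary choices, one per region of the example system:

# Probe the signs of (du/dt, dv/dt) in different regions of the example system
probes = [(2.0, 1.0), (0.5, 0.8), (0.5, 3.0), (1.5, 0.2)]
for u, v in probes:
    u_dot = u - u**2 * v   # f(u, v)
    v_dot = u - v          # g(u, v)
    horiz = "right" if u_dot > 0 else "left"
    vert = "up" if v_dot > 0 else "down"
    print(f"(u, v) = ({u}, {v}):  du/dt = {u_dot:+.2f} ({horiz}),  dv/dt = {v_dot:+.2f} ({vert})")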

2.3 Invariant Manifolds

Nullclines tell us the direction of the flow field, while invariant manifolds are the actual "highways" or "watersheds" in phase space.

Definition: An Invariant Manifold \(\mathcal{M}\) is a subset of phase space (e.g., a curve) with the property that any trajectory starting from any point on \(\mathcal{M}\) will remain on \(\mathcal{M}\) forever for all future (and past) times.

Physical meaning: They are special "orbits" or "skeletons" in the flow field—once a trajectory enters, it never leaves. While technically any trajectory line is an invariant manifold, the term is usually reserved for those special manifolds that organize global dynamics, particularly those associated with saddle points.

Importance of saddle points: As analyzed in Section 1, a saddle point has a Stable Manifold (the direction where trajectories flow in, corresponding to \(\sigma < 0\)) and an Unstable Manifold (the direction where trajectories flow out, corresponding to \(\sigma > 0\)).

Separatrix: In the global phase portrait, the stable manifold of a saddle point usually plays the role of a Separatrix; in this example, the stable manifold of the saddle at \((0,0)\) is the \(v\)-axis \(u=0\). The separatrix divides phase space into regions with different fates, i.e., Basins of Attraction. Trajectories on one side of the separatrix may flow toward fixed point A, while trajectories on the other side may flow toward fixed point B (or infinity). Therefore, to understand a system's global behavior, the key is to find all fixed points (via nullclines), determine their stability (via LSA), and draw the stable and unstable manifolds of saddle points to partition the entire phase space.

2.4 Python Code Practice

The following Python code practice reproduces the phase portrait from the lecture example, including nullclines, vector field (streamlines), and example trajectories. The code has been modified to use a black background theme.

# File: plot_nullclines_and_phase_portrait.py
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# --- Set dark background style ---
plt.style.use('dark_background')

# Define the system of ODEs from the lecture 
# Using the equation u_dot = u - u^2*v (consistent with board drawing)
def system(t, Z):
    """Dynamical system for the nullcline example."""
    u, v = Z
    # Special-casing u ~ 0 is not strictly necessary (u - u^2*v -> 0 as u -> 0),
    # but it marks the u = 0 nullcline explicitly
    eps = 1e-9
    if abs(u) < eps:
        u_dot = 0.0 # On the u=0 nullcline
    else:
        u_dot = u - (u**2) * v
    v_dot = u - v
    return [u_dot, v_dot]

# Create a grid for the phase portrait
u_range = np.linspace(-2, 4, 20)
v_range = np.linspace(-1, 3, 20)
U, V = np.meshgrid(u_range, v_range)

# Calculate the vector field (dU, dV) at each grid point
dU, dV = np.zeros_like(U), np.zeros_like(V)
ni, nj = U.shape
for i in range(ni):
    for j in range(nj):
        dU[i,j], dV[i,j] = system(0, [U[i,j], V[i,j]])

# --- Create the plot ---
fig, ax = plt.subplots(figsize=(10, 8))
ax.set_title('Phase Portrait with Nullclines (Lecture Example)', color='white')
ax.set_xlabel('u', color='white')
ax.set_ylabel('v', color='white')

# 1. Plot the vector field using streamplot
# Streamplot shows the flow direction
ax.streamplot(U, V, dU, dV, color='gray', linewidth=0.7, density=1.5, broken_streamlines=False)

# 2. Plot the Nullclines
# v-NC (v_dot = 0): v = u
v_nc1 = u_range
ax.plot(u_range, v_nc1, 'cyan', linestyle='-', linewidth=2, label=r'v-NC: $\dot{v}=0$ ($v=u$)')

# u-NCs (u_dot = 0): u = 0 and v = 1/u
# u=0 (the y-axis)
ax.axvline(0, color='magenta', linestyle='-', linewidth=2, label=r'u-NC: $\dot{u}=0$ ($u=0$)')

# v = 1/u (plot in two parts)
u_nc2_pos = np.linspace(0.1, 4, 100) # Avoid u=0
v_nc2_pos = 1 / u_nc2_pos
ax.plot(u_nc2_pos, v_nc2_pos, 'magenta', linestyle='--', linewidth=2, label=r'u-NC: $\dot{u}=0$ ($v=1/u$)')

u_nc2_neg = np.linspace(-2, -0.1, 100) # Avoid u=0
v_nc2_neg = 1 / u_nc2_neg
ax.plot(u_nc2_neg, v_nc2_neg, 'magenta', linestyle='--', linewidth=2)

# 3. Find and plot fixed points (intersections of NCs)
# (0,0) is an intersection
ax.plot(0, 0, 'wo', markersize=8, markeredgecolor='black', label='Fixed Point (0, 0)')
# (1,1) is an intersection (v=u and v=1/u)
ax.plot(1, 1, 'wo', markersize=8, markeredgecolor='black', label='Fixed Point (1, 1)')
# (-1,-1) is an intersection
ax.plot(-1, -1, 'wo', markersize=8, markeredgecolor='black', label='Fixed Point (-1, -1)')

# 4. Plot example trajectories using solve_ivp
t_span = [0, 10]
sol1 = solve_ivp(system, t_span, [0.8, 0.5], dense_output=True, method='RK45')
ax.plot(sol1.y[0], sol1.y[1], 'lime', linewidth=2.5, label='Trajectory 1')

sol2 = solve_ivp(system, t_span, [1.5, 2.5], dense_output=True, method='RK45')
ax.plot(sol2.y[0], sol2.y[1], 'yellow', linewidth=2.5, label='Trajectory 2')

leg = ax.legend()
for text in leg.get_texts():
    text.set_color('white')

ax.set_ylim(-1, 3)
ax.set_xlim(-2, 4)
ax.grid(True, color='gray', linestyle='--', alpha=0.5)
ax.tick_params(colors='white')
for spine in ax.spines.values():
    spine.set_edgecolor('white')

plt.show()
Code Output

Nullclines: The magenta curves (\(\dot{u}=0\)) and cyan curve (\(\dot{v}=0\)) form the skeleton of the phase portrait.

Fixed points: Their intersections (white dots) are the system's three fixed points: \((-1, -1)\), \((0, 0)\), and \((1, 1)\).

Vector field: The gray streamlines (streamplot) show the evolution direction at each point in phase space. One can clearly see that streamlines are purely horizontal when crossing the cyan line \(v=u\) and purely vertical when crossing the magenta lines \(v=1/u\) or \(u=0\).

Trajectories: The lime and yellow curves are two example trajectories. They start from different initial points but both evolve along the gray streamlines.

Stability: From the streamlines and trajectories, \((1, 1)\) is a stable fixed point (a stable spiral: \(T = -2\), \(\Delta = 2\) from the Jacobian), with all nearby trajectories converging to it. \((0, 0)\) is a saddle point (\(\Delta = -1\)), with trajectories approaching it along one direction but leaving along another. \((-1, -1)\) sits at the corner of the plotted window; by the symmetry of the system under \((u, v) \to (-u, -v)\), its Jacobian is identical to that at \((1, 1)\), so it is also a stable spiral. A quick LSA check of all three points is sketched below.
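The following short LSA check is a sketch that backs up these classifications, using the hand-computed Jacobian entries \(\partial_u f = 1 - 2uv\), \(\partial_v f = -u^2\), \(\partial_u g = 1\), \(\partial_v g = -1\):

import numpy as np

def jacobian(u, v):
    """Jacobian of (u - u^2*v, u - v) evaluated at (u, v)."""
    return np.array([[1 - 2*u*v, -u**2],
                     [1.0,       -1.0]])

for fp in [(-1, -1), (0, 0), (1, 1)]:
    J = jacobian(*fp)
    T, D = np.trace(J), np.linalg.det(J)
    eigs = np.round(np.linalg.eigvals(J), 3)
    print(f"{fp}: T = {T:+.1f}, Delta = {D:+.1f}, eigenvalues = {eigs}")
# (0, 0):  Delta = -1 -> saddle
# (1, 1) and (-1, -1): T = -2, Delta = 2, eigenvalues -1 +/- 1j -> stable spirals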

3. Nonlinear Oscillators and Hopf Bifurcation

In the \((T, \Delta)\) phase diagram of Section 1, we found that pure oscillation (center) is an extremely special case requiring the system to be precisely on the "knife edge" of \(T=0\). This is a "fine-tuned" system. In any real physical or biological system, tiny perturbations (such as friction, noise) would cause \(T \neq 0\), causing the system to "drift" into the region of \(T<0\) (stable spiral, oscillations decay to death) or \(T>0\) (unstable spiral, oscillations diverge infinitely).

So how does nature (such as biological clocks, cardiac pacemakers) produce stable, self-sustained oscillations?

The answer is: through Non-linearity. Nonlinear terms can provide a self-regulating mechanism: when amplitude is too small, the system "injects energy" to make it grow (like \(T>0\)); when amplitude is too large, the system "applies damping" to make it decay (like \(T<0\)).

This section will study the mathematical model of how stable oscillations are "born," namely the Hopf Bifurcation.

3.1 Hopf Bifurcation - The Birth of Oscillations

Hopf bifurcation describes how system dynamics, as a control parameter (denoted \(\mu\)) varies, transitions from a stable fixed point (e.g., a stable spiral) to an unstable fixed point, while simultaneously "giving birth to" a stable Limit Cycle.

The simplest nonlinear system describing this transition, its Normal Form, can be written in Cartesian coordinates \((u,v)\) as:

\[ \begin{aligned} \partial_t u &= \mu u - \omega v + (\rho u - \lambda v)(u^2 + v^2) \\ \partial_t v &= \mu v + \omega u + (\rho v + \lambda u)(u^2 + v^2) \end{aligned} \]

This equation looks complex, but each part has a clear meaning:

Linear terms: \(\mu u - \omega v\) and \(\mu v + \omega u\)

  • These terms determine the linear stability near the fixed point \((0,0)\).

  • Performing LSA on this linear part, the Jacobian matrix is \(J = \begin{pmatrix} \mu & -\omega \\ \omega & \mu \end{pmatrix}\).

  • The eigenvalues are \(\sigma_{\pm} = \mu \pm i\omega\).

  • \(\mu = \text{Re}(\sigma)\): This is the key control parameter. It represents the real part of the eigenvalues. When \(\mu < 0\), the fixed point is a stable spiral. When \(\mu > 0\), the fixed point becomes an unstable spiral. Hopf bifurcation occurs at the moment \(\mu\) crosses 0.

  • \(\omega = \text{Im}(\sigma)\): This is the frequency of linear oscillation.

Nonlinear terms: \((\dots)(u^2 + v^2)\)

These are the simplest (cubic) nonlinear terms, carefully constructed to respect the system's rotational symmetry and used to "capture" the exponential divergence caused by \(\mu > 0\), thereby forming a stable ring.

3.2 Coordinate Transformation - From \((u,v)\) to \((c, \theta)\)

The repeatedly appearing \(u^2 + v^2\) term in the equations strongly suggests the system has rotational symmetry. Therefore, using polar coordinates (\(c\) for amplitude, \(\theta\) for phase) is the natural analytical approach:

\[ u = c \cos\theta \quad v = c \sin\theta \]

(where \(c = \sqrt{u^2 + v^2}\) is the amplitude and \(\theta = \operatorname{atan2}(v, u)\) is the phase).

Substituting this transformation into the original equations, after (tedious but straightforward) algebraic operations, the system (miraculously) decouples into independent dynamics for amplitude and phase:

\[ \begin{aligned} \partial_t c &= \mu c + \rho c^3 \\ \partial_t \theta &= \omega + \lambda c^2 \end{aligned} \]

Physical meaning:

Amplitude equation (\(\dot{c}\)): \(\partial_t c = \mu c + \rho c^3\). This one-dimensional equation completely determines how the oscillation amplitude \(c\) grows or decays over time.

Phase equation (\(\dot{\theta}\)): \(\partial_t \theta = \omega + \lambda c^2\). This equation describes the system's speed of rotation (angular velocity) in phase space.
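This decoupling can be verified directly. A minimal sketch (assuming sympy) checks the equivalent polynomial identities \(c\dot{c} = u\dot{u} + v\dot{v} = \mu c^2 + \rho c^4\) and \(c^2\dot{\theta} = u\dot{v} - v\dot{u} = \omega c^2 + \lambda c^4\):

import sympy as sp

u, v, mu, omega, rho, lam = sp.symbols('u v mu omega rho lambda', real=True)
c2 = u**2 + v**2
u_dot = mu*u - omega*v + (rho*u - lam*v)*c2
v_dot = mu*v + omega*u + (rho*v + lam*u)*c2

# c*c_dot = u*u_dot + v*v_dot should reduce to mu*c^2 + rho*c^4
print(sp.expand(u*u_dot + v*v_dot - (mu*c2 + rho*c2**2)))      # -> 0
# c^2*theta_dot = u*v_dot - v*u_dot should reduce to omega*c^2 + lambda*c^4
print(sp.expand(u*v_dot - v*u_dot - (omega*c2 + lam*c2**2)))   # -> 0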

3.3 Amplitude Dynamics: \(\dot{c} = \mu c + \rho c^3\)

The stability and existence of oscillations is now reduced to a one-dimensional dynamics problem, namely analyzing the amplitude equation \(\dot{c}\). This equation \(\dot{c} = \mu c + \rho c^3\) (or written as \(\dot{c} = c(\mu + \rho c^2)\)) is the standard normal form of the Pitchfork Bifurcation from Lecture 3.

This connection is extremely profound: The amplitude dynamics of Hopf bifurcation (birth of oscillations in 2D) is mathematically equivalent to a pitchfork bifurcation (state bifurcation in 1D).

This one-dimensional system can be derived from a potential function \(V(c)\): \(\dot{c} = -\partial_c V(c)\). The potential function is:

\[ V(c) = -\frac{1}{2}\mu c^2 - \frac{\rho}{4}c^4 \]

This is precisely the famous Landau Potential or "Mexican Hat" Potential. The amplitude \(c\) will spontaneously evolve to seek the minimum of this potential \(V(c)\).
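For intuition, the sketch below (parameter values are illustrative choices, not from the lecture) plots \(V(c)\) for the supercritical case \(\rho = -1\), before and after the bifurcation; the minimum moves from \(c = 0\) to \(c^* = \sqrt{-\mu/\rho}\):

import numpy as np
import matplotlib.pyplot as plt

plt.style.use('dark_background')

rho = -1.0
c = np.linspace(0, 1.2, 400)
fig, ax = plt.subplots(figsize=(7, 5))
for mu, color in [(-0.5, 'cyan'), (0.5, 'lime')]:
    V = -0.5 * mu * c**2 - 0.25 * rho * c**4   # Landau potential
    ax.plot(c, V, color, label=rf'$\mu = {mu}$')
ax.axhline(0, color='gray', linewidth=0.5)
ax.set_xlabel('amplitude c', color='white')
ax.set_ylabel('V(c)', color='white')
leg = ax.legend()
for text in leg.get_texts(): text.set_color('white')
plt.show()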

3.4 Classification - Supercritical and Subcritical

The type of Hopf bifurcation (i.e., whether the limit cycle is stable or unstable) depends on the sign of the nonlinear term \(\rho\).

1. Supercritical Hopf Bifurcation (\(\rho < 0\))

This is the most common and "smoothest" way for oscillations to be born.

\(\mu < 0\) (before bifurcation): \(\dot{c} = \mu c - |\rho| c^3\). The potential \(V(c)\) has only one minimum (stable fixed point) at \(c = 0\).

Physical meaning: Any small perturbation (\(c>0\)) will decay back to \(c=0\). In \((u,v)\) phase space, this means \((0,0)\) is a stable spiral.

\(\mu > 0\) (after bifurcation): \(\dot{c} = \mu c - |\rho| c^3\). The minimum at \(c = 0\) becomes a local maximum (unstable fixed point). The potential \(V(c)\) develops new symmetric minima at \(c^2 = -\mu/\rho\), namely:

\[ c^* = \sqrt{-\mu/\rho} \]

Physical meaning: The \(c=0\) (stationary) state becomes unstable. Any small perturbation causes the amplitude \(c\) to spontaneously grow until it "saturates" and stabilizes at the new nonzero amplitude \(c^*\). Since \(\dot{\theta} = \omega + \lambda (c^*)^2 \neq 0\), the system rotates with radius \(c = c^*\) at constant angular velocity \(\dot{\theta}\). This is exactly a stable Limit Cycle.

Lecture Blackboard Screenshot

Amplitude bifurcation diagram for supercritical Hopf bifurcation. The vertical axis is amplitude \(c\), the horizontal axis is control parameter \(\mu\). When \(\mu < 0\), the only stable fixed point is \(c=0\) (stationary). When \(\mu > 0\), \(c=0\) becomes unstable (dashed line), and a new stable branch \(c^* = \sqrt{-\mu/\rho}\) (solid line) is born, representing a stable limit cycle.

2. Subcritical Hopf Bifurcation (\(\rho > 0\))

This bifurcation is more "explosive" and discontinuous.

\(\mu > 0\): \(\dot{c} = \mu c + \rho c^3\). \(c = 0\) is unstable, and \(\dot{c}\) is positive for all \(c>0\), so amplitude grows without bound ("explosion").

Stabilization: To make the model physically reasonable (as in the cusp bifurcation of Lecture 4), higher-order stabilizing terms must be added (e.g., a \(\nu c^5\) term where \(\nu < 0\)) to "take over" the dynamics. A stabilized potential function is:

\[ V(c) = -\frac{1}{2}\mu c^2 - \frac{\rho}{4}c^4 + \frac{|\nu|}{6}c^6 \quad (\text{here } \rho > 0, |\nu| > 0) \]

Bistability: In certain parameter ranges (e.g., \(\mu\) slightly less than 0), this potential function can simultaneously have two stable minima: one at \(c = 0\) (non-oscillating state) and another at \(c = c^* > 0\) (oscillating state).

Hysteresis:

Path 1 (increasing \(\mu\)): When slowly increasing \(\mu\), the system stays at \(c = 0\) (stable stationary). When \(\mu\) increases to where the potential barrier at \(c = 0\) disappears (\(\mu = 0\)), the system suddenly jumps to the large-amplitude oscillating state at \(c = c^*\).

Path 2 (decreasing \(\mu\)): When slowly decreasing \(\mu\), the system stays at \(c = c^*\) (oscillating). It does not jump back at \(\mu = 0\), but remains "stuck" in the \(c = c^*\) potential well until this well itself disappears (at some \(\mu < 0\) value, saddle-node bifurcation occurs).

Result: The \(\mu\) value at which the system "turns on" oscillations differs from the \(\mu\) value at which it "turns off." This history-dependence is hysteresis, which is completely consistent with the cusp bifurcation behavior in Lecture 4.

Lecture Blackboard Screenshot

Amplitude bifurcation diagram for subcritical Hopf bifurcation (after stabilization). The S-shaped curve shows the bistable region. The system follows different paths when \(\mu\) increases (along lower branch) and decreases (along upper branch), forming a hysteresis loop.
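The hysteresis loop can be reproduced numerically by sweeping \(\mu\) quasi-statically in the stabilized amplitude equation \(\dot{c} = \mu c + \rho c^3 - |\nu| c^5\). The sketch below uses illustrative parameter values (\(\rho = 1\), \(|\nu| = 1\)) and simple Euler relaxation at each value of \(\mu\); none of these choices come from the lecture:

import numpy as np

rho, nu_abs = 1.0, 1.0          # rho > 0 (subcritical), |nu| provides stabilization
dt, steps = 0.01, 4000          # Euler relaxation at each value of mu

def settle(c, mu):
    """Relax the amplitude c at fixed mu under the stabilized equation."""
    for _ in range(steps):
        c += dt * (mu*c + rho*c**3 - nu_abs*c**5)
    return c

mus = np.linspace(-0.4, 0.2, 61)

c, branch_up = 1e-3, []                 # forward sweep: start near the rest state c = 0
for mu in mus:
    c = settle(c, mu)
    branch_up.append(c)

c, branch_down = branch_up[-1], []      # backward sweep: start on the oscillating branch
for mu in mus[::-1]:
    c = settle(c, mu)
    branch_down.append(c)
branch_down = branch_down[::-1]

for mu, cu, cd in list(zip(mus, branch_up, branch_down))[::10]:
    print(f"mu = {mu:+.2f}:  c (sweep up) = {cu:.3f},  c (sweep down) = {cd:.3f}")
# The two sweeps disagree between the saddle-node point (near mu = -1/4 for these
# parameters) and mu = 0: that mismatch is the hysteresis loop.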

3.5 Python Code Practice

The following code practice simulates supercritical Hopf bifurcation. It shows how the system transitions from a stable fixed point (\(\mu < 0\)) to a stable limit cycle (\(\mu > 0\)) when the control parameter \(\mu\) crosses 0.

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# --- Set dark background style ---
plt.style.use('dark_background')

# Define the Hopf bifurcation normal form equations 
def hopf_system(t, Z, mu, rho, omega, lambda_):
    """Hopf bifurcation normal form in Cartesian coordinates."""
    u, v = Z
    c_sq = u**2 + v**2
    u_dot = mu * u - omega * v + (rho * u - lambda_ * v) * c_sq
    v_dot = mu * v + omega * u + (rho * v + lambda_ * u) * c_sq
    return [u_dot, v_dot]

# --- Parameters for Simulation ---
omega = 1.0     # Linear frequency
rho = -1.0      # Coefficient for supercritical (rho < 0)
lambda_ = 0.0   # Non-linear frequency correction (set to 0 for simplicity)
t_span = [0, 40]  # Simulation time (increased for better viz)
y0_in = [0.1, 0]   # Initial condition inside the limit cycle
y0_out = [2.0, 0]  # Initial condition outside the limit cycle

# --- Case 1: mu = -0.1 (Stable Spiral) ---
mu_neg = -0.1
sol_neg = solve_ivp(hopf_system, t_span, y0_out, 
                    args=(mu_neg, rho, omega, lambda_), 
                    dense_output=True, method='RK45')

# --- Case 2: mu = +0.1 (Limit Cycle) ---
mu_pos = 0.1
# Analytical limit cycle radius: c = sqrt(-mu/rho)
c_star = np.sqrt(-mu_pos / rho)
# Simulate trajectory starting inside
sol_pos_in = solve_ivp(hopf_system, t_span, y0_in, 
                       args=(mu_pos, rho, omega, lambda_), 
                       dense_output=True, method='RK45')
# Simulate trajectory starting outside
sol_pos_out = solve_ivp(hopf_system, t_span, y0_out, 
                        args=(mu_pos, rho, omega, lambda_), 
                        dense_output=True, method='RK45')

# --- Plotting ---
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 7), subplot_kw={'aspect': 'equal'})
fig.suptitle(r'Supercritical Hopf Bifurcation ($\rho = -1$)', color='white')

# Plot 1: mu = -0.1 (Stable Spiral)
ax1.set_title(rf'Case 1: $\mu = {mu_neg}$ (Stable Spiral)', color='white')
ax1.plot(sol_neg.y[0], sol_neg.y[1], 'cyan')
ax1.plot(sol_neg.y[0][0], sol_neg.y[1][0], 'co', label='Start (y0=[2,0])') # Start point
ax1.plot(0, 0, 'ro', label='Stable Fixed Point') # End point
ax1.set_xlabel('u')
ax1.set_ylabel('v')
leg1 = ax1.legend()
for text in leg1.get_texts(): text.set_color('white')
ax1.grid(True, color='gray', linestyle='--', alpha=0.5)
ax1.tick_params(colors='white')
for spine in ax1.spines.values(): spine.set_edgecolor('white')

# Plot 2: mu = +0.1 (Stable Limit Cycle)
ax2.set_title(rf'Case 2: $\mu = {mu_pos}$ (Stable Limit Cycle)', color='white')
# Plot the trajectories
ax2.plot(sol_pos_in.y[0], sol_pos_in.y[1], 'cyan', label='Start inside ($y_0=[0.1, 0]$)')
ax2.plot(sol_pos_out.y[0], sol_pos_out.y[1], 'lime', label='Start outside ($y_0=[2.0, 0]$)')
# Plot the analytical limit cycle
theta = np.linspace(0, 2*np.pi, 100)
ax2.plot(c_star * np.cos(theta), c_star * np.sin(theta), 'r--', linewidth=2, label=rf'Limit Cycle (c={c_star:.3f})')
ax2.plot(0, 0, 'rx', label='Unstable Fixed Point')
ax2.set_xlabel('u')
ax2.set_ylabel('v')
leg2 = ax2.legend()
for text in leg2.get_texts(): text.set_color('white')
ax2.grid(True, color='gray', linestyle='--', alpha=0.5)
ax2.tick_params(colors='white')
for spine in ax2.spines.values(): spine.set_edgecolor('white')

plt.show()

Code Output

Left panel (\(\mu = -0.1 < 0\)): This is before bifurcation. The \((0,0)\) fixed point is a stable spiral. The cyan trajectory starts from the external point \([2,0]\) and spirals inward to the red stable fixed point.

Right panel (\(\mu = +0.1 > 0\)): This is after bifurcation. The \((0,0)\) fixed point (red cross) is now an unstable spiral. The system has given birth to a stable limit cycle (red dashed circle, radius \(c^* \approx 0.316\)).

  • The cyan trajectory starts from inside the ring at \([0.1, 0]\), spiraling outward, and converges to the limit cycle.

  • The lime trajectory starts from outside the ring at \([2.0, 0]\), spiraling inward, and also converges to the same limit cycle.

  • This limit cycle is an attractor—it represents a robust, self-sustained oscillation whose amplitude and phase are independent of initial conditions.

4. Van der Pol Oscillator and Relaxation Oscillations

In Section 3, the Hopf bifurcation showed how oscillations emerge "smoothly" as a fixed point loses stability, with the amplitude dynamics governed by a supercritical pitchfork bifurcation. This is a general mathematical paradigm (the Normal Form). This section will explore a classical model with important physical significance in engineering and biology: the Van der Pol Oscillator.

Its mechanism differs from Hopf bifurcation: instead of stabilizing amplitude through a potential function (like the Landau potential), it achieves energy self-regulation through a clever nonlinear damping term, thereby producing a robust limit cycle. More importantly, in the strong nonlinearity limit, it will introduce a completely new analytical method: Time Scale Separation.

Its (standardized) second-order differential equation form is:

\[ \ddot{x} + \gamma (x^2 - 1) \dot{x} + x = 0 \]

(Note: Here \(x\) is a variable, such as voltage in a circuit or membrane potential in a neuron. The lecture mentioned that this is equivalent to an oscillator with mass \(m=1\), spring constant \(k=1\), but with a very special damping term.)

4.1 Physical Meaning - Nonlinear Damping

The key to the system lies in the middle term \(\gamma(x^2 - 1)\dot{x}\)—this is a nonlinear damping term.

It is proportional to velocity \(\dot{x}\), so it is a damping force.

Its damping coefficient \(C(x) = \gamma(x^2 - 1)\) depends on position \(x\) (i.e., amplitude).

This dependence leads to a self-regulating mechanism:

When \(|x| < 1\) (small amplitude): \(x^2 - 1\) is negative. The damping coefficient \(C(x)\) is negative. The system exhibits negative damping (anti-damping). It does not dissipate energy but instead injects energy from outside (like a power source), causing amplitude to grow.

When \(|x| > 1\) (large amplitude): \(x^2 - 1\) is positive. The damping coefficient \(C(x)\) is positive. The system exhibits normal damping (like friction). It dissipates energy, causing amplitude to decay.

Physical meaning: This mechanism guarantees self-sustained oscillation. If amplitude is too small, the system "self-excites" to increase it; if amplitude is too large, the system "self-suppresses" to reduce it. Eventually, the system stabilizes on a limit cycle, where energy injected over one period exactly equals energy dissipated.

4.2 Time Scale Separation (\(\gamma \gg 1\))

When \(\gamma\) is large (strong nonlinearity), the system's behavior becomes very special. To analyze this case, the Liénard transformation is used. The original equation can be rewritten as:

\[ \frac{d}{dt} \left[ \dot{x} + \gamma \left(\frac{1}{3} x^3 - x \right) \right] = -x \]

Define a new variable \(y = \dot{x} + \gamma (\frac{x^3}{3} - x)\), and let \(F(x) = \frac{x^3}{3} - x\), the system becomes:

\[ \begin{aligned} \dot{y} &= -x \\ \dot{x} &= y - \gamma F(x) \end{aligned} \]

To analyze the \(\gamma \gg 1\) limit, rescale \(y\) as \(y \to \gamma y\) (Note: this is a common rescaling used to explicitly separate time scales), and the system becomes:

\[ \begin{aligned} \dot{x} &= \gamma (y - F(x)) \quad &(\text{FAST}) \\ \dot{y} &= -x / \gamma \quad &(\text{SLOW}) \end{aligned} \]

Physical meaning:

  • Fast dynamics: \(\dot{x}\) is proportional to \(\gamma\) (very large). This means \(x\) evolves very fast, unless \(y \approx F(x)\).

  • Slow dynamics: \(\dot{y}\) is proportional to \(1/\gamma\) (very small). This means \(y\) evolves very slowly.

The system thus has two distinctly different time scales.

4.3 Relaxation Oscillations

This fast-slow separation leads to a special, non-sinusoidal oscillation called Relaxation Oscillations.

  • x-Nullcline (slow manifold): The nullcline for the fast variable \(\dot{x}\) is given by \(\dot{x} = 0 \Rightarrow \gamma(y - F(x)) = 0\), namely:

    \[ y = F(x) = \frac{x^3}{3} - x \]

    This is an S-shaped cubic curve, called the slow manifold.

The system's dynamics can now be intuitively understood as a cycle of "Crawl" and "Snap":

  • "Snap" (fast): Suppose the system's initial state is not on the slow manifold \(y = F(x)\). Since \(\dot{x}\) is extremely large while \(\dot{y} \approx 0\), the system will almost horizontally, "instantaneously" Snap onto the slow manifold.

  • "Crawl" (slow): Once the system reaches near the slow manifold \(y = F(x)\), the fast dynamics \(\dot{x}\) becomes \(\approx 0\). The system is "stuck" on the slow manifold, and its evolution is dominated by the slow dynamics \(\dot{y} = -x/\gamma\)—the system begins to slowly Crawl.

The Cycle:

1. The system is on the right branch of \(y = F(x)\) (e.g., \(x \approx 2\)). At this point \(x > 0\), so \(\dot{y} < 0\). The system slowly crawls downward along the S-shaped curve.

2. The system crawls to the local minimum ("knee") of the S-shaped curve (\(x = 1\)).

3. Snap! At this point \(y\) tries to continue decreasing (\(\dot{y}<0\)), causing the system to leave the slow manifold. The fast dynamics \(\dot{x} = \gamma(y - F(x))\) immediately takes over. Since \(y < F(x)\), \(\dot{x}\) becomes a very large negative value, and the system snaps horizontally to the left until it "hits" the left branch of the S-shaped curve (at \(x \approx -2\)).

4. The system is on the left branch. At this point \(x < 0\), so \(\dot{y} > 0\). The system slowly crawls upward along the S-shaped curve.

5. The system crawls to the local maximum ("knee") of the S-shaped curve (\(x = -1\)).

6. Snap! \(y\) tries to continue increasing, \(\dot{x}\) becomes a very large positive value, and the system snaps horizontally to the right, returning to the right branch. The cycle repeats.

Lecture Blackboard Screenshot

Phase portrait of the Van der Pol oscillator in the strong nonlinearity limit (\(\gamma \gg 1\)). The S-shaped curve \(y=F(x)\) is the system's slow manifold (x-nullcline). Trajectories (arrows) manifest as slow "Crawl" on the slow manifold and instantaneous "Snap" between manifold branches, forming a relaxation oscillation cycle.
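The crawl/snap geometry can be reproduced in the rescaled Liénard variables of Section 4.2. The sketch below (the plotting choices and initial condition are arbitrary) overlays the slow manifold \(y = F(x)\) with a simulated trajectory:

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

plt.style.use('dark_background')

gamma = 10.0
F = lambda x: x**3 / 3 - x

def lienard(t, Z):
    """Van der Pol in the rescaled Lienard plane: x is fast, y is slow."""
    x, y = Z
    return [gamma * (y - F(x)), -x / gamma]

sol = solve_ivp(lienard, [0, 60], [0.5, 1.0], max_step=0.01)

x_grid = np.linspace(-2.5, 2.5, 400)
fig, ax = plt.subplots(figsize=(7, 6))
ax.plot(x_grid, F(x_grid), 'r--', linewidth=2, label=r'slow manifold $y = F(x)$')
ax.plot(sol.y[0], sol.y[1], 'lime', linewidth=1.5, label='trajectory (crawl + snap)')
ax.set_xlabel('x', color='white')
ax.set_ylabel('y', color='white')
leg = ax.legend()
for text in leg.get_texts(): text.set_color('white')
plt.show()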

4.4 Period Calculation

The period \(T\) of this oscillation is mainly determined by the slow "crawl" process; the time of the "snap" process (\(\sim 1/\gamma\)) is negligible. The total period \(T \approx 2 T_{\text{crawl}}\) (using symmetry, calculate the crawl time for half a period).

\(T_{\text{crawl}}\) is the time the system crawls on the slow manifold, e.g., from the "landing point" \(x_0 \approx 2\) to the "takeoff point" \(x_1 = 1\) (the minimum point of the S-shaped curve).

1. During the crawl phase, system dynamics is dominated by the slow equation \(\dot{y} = -x / \gamma\).

2. Therefore, the time element \(dt = \frac{dy}{\dot{y}} = \frac{dy}{-x/\gamma} = -\frac{\gamma}{x} dy\).

3. Meanwhile, the system is on the slow manifold \(y = F(x)\), so \(dy = F'(x) dx\).

4. \(F(x) = \frac{x^3}{3} - x\), so \(dy = (x^2 - 1) dx\).

5. Substitute \(dy\) into the expression for \(dt\):

\[ dt = -\frac{\gamma}{x} (x^2 - 1) dx = -\frac{\gamma(x^2 - 1)}{x} dx \]

6. Integrate \(T_{\text{crawl}}\) (from \(x_0=2\) to \(x_1=1\)):

\[ T_{\text{crawl}} = \int_{t_0}^{t_1} dt = \int_{x_0=2}^{x_1=1} -\frac{\gamma(x^2 - 1)}{x} dx = \gamma \int_{1}^{2} \left(x - \frac{1}{x}\right) dx \]

7. Solve the integral:

\[ T_{\text{crawl}} = \gamma \left[ \frac{1}{2}x^2 - \ln(x) \right]_1^2 = \gamma \left( (\frac{1}{2} \cdot 4 - \ln 2) - (\frac{1}{2} \cdot 1 - \ln 1) \right) \]
\[ T_{\text{crawl}} = \gamma \left( (2 - \ln 2) - (\frac{1}{2} - 0) \right) = \gamma \left(\frac{3}{2} - \ln 2 \right) \]

8. Total period \(T = 2 T_{\text{crawl}} = 2\gamma \left(\frac{3}{2} - \ln 2 \right) \approx 1.614 \gamma\).

Physical meaning: The oscillation period \(T\) is proportional to the nonlinearity parameter \(\gamma\). The larger \(\gamma\), the stronger the nonlinearity, the slower the system "crawls" on the slow manifold (because \(\dot{y} \sim 1/\gamma\)), and thus the longer the period.
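A quick numerical check of this estimate is sketched below. Keep in mind that this is only the leading-order result; for finite \(\gamma\) there are known corrections (of order \(\gamma^{-1/3}\)), so the simulated period comes out noticeably larger than \(2\gamma(3/2 - \ln 2)\):

import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, Z, gamma):
    u, v = Z
    return [v, gamma * (1 - u**2) * v - u]

gamma = 10.0
T_est = 2 * gamma * (1.5 - np.log(2))        # leading-order estimate, ~16.1 for gamma = 10

sol = solve_ivp(van_der_pol, [0, 200], [2.0, 0.0], args=(gamma,),
                dense_output=True, max_step=0.01)
t = np.linspace(100, 200, 20000)             # sample well past the initial transient
x = sol.sol(t)[0]
# Estimate the period from successive upward zero-crossings of x(t)
crossings = t[1:][(x[:-1] < 0) & (x[1:] >= 0)]
T_sim = np.mean(np.diff(crossings))
print(f"leading-order estimate = {T_est:.2f},  simulated period = {T_sim:.2f}")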

4.5 Python Code Practice

The following code practice simulates the Van der Pol oscillator. It converts the standard second-order equation into a standard first-order system (\(u=x, v=\dot{x}\)), and shows phase portraits for small \(\gamma\) (near-harmonic) and large \(\gamma\) (relaxation oscillation).

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# --- Set dark background style ---
plt.style.use('dark_background')

# Van der Pol equation as a 1st order system
# Let u = x, v = x_dot
# Then u_dot = v
# And v_dot = -u - gamma * (u^2 - 1) * v = gamma * (1 - u^2) * v - u
def van_der_pol(t, Z, gamma):
    """Van der Pol oscillator 1st order system."""
    u, v = Z
    u_dot = v
    v_dot = gamma * (1 - u**2) * v - u
    return [u_dot, v_dot]

# Time span
t_span = [0, 100]
# Initial condition
y0 = [2.0, 0.0] # Start at x=2, x_dot=0

# --- Case 1: gamma = 0.5 (Weakly non-linear, near-harmonic) ---
gamma_weak = 0.5
sol_weak = solve_ivp(van_der_pol, t_span, y0, args=(gamma_weak,), 
                     dense_output=True, method='RK45')

# --- Case 2: gamma = 10 (Strongly non-linear, relaxation oscillation) ---
gamma_strong = 10.0
sol_strong = solve_ivp(van_der_pol, t_span, y0, args=(gamma_strong,), 
                       dense_output=True, method='RK45')

# --- Plotting Phase Portraits ---
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 7))
fig.suptitle(r'Van der Pol Oscillator Phase Portraits ($x$ vs $\dot{x}$)', color='white')

# Plot 1: gamma = 0.5
ax1.set_title(rf'$\gamma = {gamma_weak}$ (Near-harmonic oscillation)', color='white')
ax1.plot(sol_weak.y[0], sol_weak.y[1], 'cyan')
ax1.set_xlabel('x (Position)', color='white')
ax1.set_ylabel(r'$\dot{x}$ (Velocity)', color='white')
ax1.axis('equal')
ax1.grid(True, color='gray', linestyle='--', alpha=0.5)
ax1.tick_params(colors='white')
for spine in ax1.spines.values(): spine.set_edgecolor('white')

# Plot 2: gamma = 10.0
ax2.set_title(rf'$\gamma = {gamma_strong}$ (Relaxation Oscillation)', color='white')
ax2.plot(sol_strong.y[0], sol_strong.y[1], 'lime')
ax2.set_xlabel('x (Position)', color='white')
ax2.set_ylabel(r'$\dot{x}$ (Velocity)', color='white')
ax2.grid(True, color='gray', linestyle='--', alpha=0.5)
ax2.tick_params(colors='white')
for spine in ax2.spines.values(): spine.set_edgecolor('white')

plt.show()

# --- Plotting Time Series for gamma = 10 ---
fig_ts, ax_ts = plt.subplots(figsize=(12, 5))
ax_ts.set_title(rf'$\gamma = {gamma_strong}$ Time Series (Relaxation)', color='white')
# Use the solution from t=50 onwards to show the stable cycle
t_eval = np.linspace(50, 100, 500)
sol_strong_cycle = sol_strong.sol(t_eval)
ax_ts.plot(t_eval, sol_strong_cycle[0], 'lime', label='x(t) (Position)')
ax_ts.set_xlabel('Time', color='white')
ax_ts.set_ylabel('x', color='white')
leg = ax_ts.legend()
for text in leg.get_texts(): text.set_color('white')
ax_ts.grid(True, color='gray', linestyle='--', alpha=0.5)
ax_ts.tick_params(colors='white')
for spine in ax_ts.spines.values(): spine.set_edgecolor('white')

plt.show()

Code Output

Left panel (phase portrait, \(\gamma = 0.5\)): In the weakly nonlinear case, the limit cycle (cyan orbit) shape is close to circular, very similar to Hopf bifurcation with \(\mu > 0\) (near-harmonic oscillation).

Right panel (phase portrait, \(\gamma = 10\)): In the strongly nonlinear case, the limit cycle (lime orbit) shape in the phase portrait changes dramatically, reproducing "relaxation oscillations." The trajectory shows fast vertical "snaps" (rapid \(\dot{x}\) changes) near \(x \approx \pm 2\) and slow "crawls" (\(\dot{x} \approx 0\)) at the top/bottom. (Note: This \((x, \dot{x})\) phase portrait differs from the Liénard \((x, y)\) phase portrait in the \(y\)-axis, but the qualitative behavior is consistent.)

Code Output

(Time series, \(\gamma = 10\)): This figure shows the waveform of \(x(t)\) over time. It is clearly not a sine wave: \(x(t)\) drifts slowly while the system "crawls" along one branch of the slow manifold, then jumps rapidly to the other branch during a "snap". This alternation of slow and fast segments is the typical signature of a relaxation oscillator.

Summary

Lecture 6 extended the analytical framework for dynamical systems. It transitioned from systems constrained by mass conservation (\(g=-f\)) in Lecture 5—whose dynamics is essentially one-dimensional due to \(\det(J)=0\)—to "genuine" non-conserving two-dimensional systems (\(g \neq -f\)). This transition makes \(\det(J) \neq 0\) possible, thereby allowing entirely new dynamical behaviors such as spirals (complex eigenvalues) and oscillations. The analytical tool chain established in this lecture, from local analysis (LSA) to global structure (nullclines), to specific nonlinear oscillations (Hopf and Van der Pol), constitutes the foundation of nonlinear dynamics.

The core of this lecture lies in revealing the necessity of nonlinearity for producing robust oscillations. Linear Stability Analysis (LSA) shows that purely linear oscillation (center, \(T=0\)) is a "fine-tuned," physically unstable critical state. Hopf bifurcation shows how nonlinear terms (like \(\rho c^3\)) can stably "give birth" to limit cycles through bifurcation (supercritical), or (subcritical) lead to bistability and hysteresis. The Van der Pol oscillator shows how nonlinear damping (\(\gamma(x^2 - 1)\dot{x}\)) can "design" a robust stable limit cycle through energy self-regulation (injection and dissipation). When nonlinearity (\(\gamma\)) is strong, this manifests as relaxation oscillations, characterized by fast ("snap") and slow ("crawl") time scale separation.

In this lecture, we "saw" the existence of limit cycles through analysis and simulation. But in non-conserving systems (without potential functions), how do we rigorously prove the existence, uniqueness, and stability of a limit cycle? Lecture 7 will directly address this question, introducing the powerful Poincaré-Bendixson Theorem. This theorem provides a rigorous mathematical criterion for determining the existence of limit cycles in the two-dimensional plane. After mastering this theorem, Lecture 7 will apply it to more complex biological and ecological models, including analysis of the RhoGTPase Oscillator (a key cell polarity model) and the classic Rock-Paper-Scissors game, revealing how periodic behavior emerges in these systems.