🧭 Chan’s Curiosity Log — October 28, 2025

6 minute read

Daily reflections on new papers, theories, and open questions.

🧩 Paper 1: Unveiling the Dimensionality of Networks of Networks

📄 arXiv:2510.20520

🔬 Background

This paper presents a unifying view of biological and artificial networks as composite systems assembled from distinct functional units.
A similar integrative logic underlies brain evolution, where conserved architectures (e.g., the basal ganglia) are reused to support increasingly complex cognitive functions.
Understanding the interplay between single-module structures and their combinations in generating emergent collective behavior remains a central challenge across disciplines.


📘 Key Definitions

Definition 1 – Spectral Dimension:
Defined through the scaling of the Laplacian eigenvalue density.
The spectral dimension governs asymptotic and thermodynamic behaviors on the network (e.g., return probability of random walks, relaxation times, and critical phenomena) and behaves like a genuine geometric dimension.
In the thermodynamic limit of infinite systems, the eigenvalue density typically scales as \(\rho(\lambda) \sim \lambda^{d_s / 2 - 1}, \quad \lambda \to 0,\) where \(d_s\) is the spectral dimension, i.e., the exponent governing how the density of low-lying Laplacian eigenvalues scales with the eigenvalue itself.
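As a quick numerical illustration of my own (not from the paper), the exponent can be read off from the cumulative count of low-lying eigenvalues, \(N(\lambda) \sim \lambda^{d_s/2}\), on a graph whose dimension is known; here a periodic 2D lattice, assuming numpy/networkx:

```python
# Sketch (mine): estimate d_s for a periodic 2D lattice from the low-lying
# Laplacian spectrum via the cumulative count N(lambda) ~ lambda^{d_s/2}.
# For this lattice we expect d_s ≈ 2.
import numpy as np
import networkx as nx

L = 40
G = nx.grid_2d_graph(L, L, periodic=True)            # translationally invariant lattice
lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray())
lam = np.sort(lam[lam > 1e-10])                      # drop the zero mode

low = lam[:50]                                       # stay in the small-lambda regime
cum = np.arange(1, len(low) + 1)                     # cumulative eigenvalue count
slope, _ = np.polyfit(np.log(low), np.log(cum), 1)   # N(lambda) ~ lambda^{d_s/2}
print(f"estimated d_s ≈ {2 * slope:.2f}")
```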


In finite-size systems, one can also define an alternative measure of dimensionality: \(D = \frac{\left(\sum_{i=1}^N \lambda_i\right)^2}{\sum_{i=1}^N \lambda_i^2}.\) This ratio provides a finite-size estimator of the network’s “effective dimension.”
Intuitively, it quantifies how spread out or concentrated the Laplacian spectrum is:

  • If all eigenvalues are similar (indicating a highly homogeneous, isotropic structure), \(D \approx N\), corresponding to a fully connected or uniform network.
  • If a few eigenvalues dominate the spectrum (as in strongly heterogeneous or hub-dominated structures), \(D\) becomes much smaller, reflecting a lower effective dimensionality.
  • \(D\) is invariant under a global rescaling of the eigenvalues, and thus encodes purely structural (not dynamical) information.

In practice, \(D\) behaves similarly to the participation ratio in random-matrix theory: it tells us how many modes effectively contribute to the Laplacian spectrum, or equivalently, how “extended” the collective modes are across the network.
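To make this concrete, here is a small numpy/networkx toy of my own (not from the paper; the helper name is mine) computing \(D\) for a complete graph, a ring, and a star on the same number of nodes. The homogeneous case saturates near \(N\), while the hub-dominated star collapses to \(O(1)\):

```python
# Toy illustration (mine): the finite-size estimator D = (sum lam)^2 / (sum lam^2)
# for three graphs on N = 200 nodes.
import numpy as np
import networkx as nx

def effective_dimension(G):                       # helper name is my own
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray())
    return lam.sum() ** 2 / (lam ** 2).sum()

N = 200
for name, G in [("complete", nx.complete_graph(N)),
                ("ring", nx.cycle_graph(N)),
                ("star", nx.star_graph(N - 1))]:
    print(f"{name:8s}  D ≈ {effective_dimension(G):6.1f}")
# complete: D ≈ N - 1; ring: D ≈ 2N/3; star: D ≈ 4, since one large
# eigenvalue (≈ N) carries most of the spectral weight.
```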

Interpretation:
While \(d_s\) captures the asymptotic scaling of the eigenvalue density in the continuum limit, \(D\) offers a finite-size diagnostic of the network’s internal connectivity balance.
Both are crucial for understanding diffusion, synchronization, and renormalization behavior in networked systems.


Definition 2 – Fiedler Dimension:
At finite but large \(N\) (mesoscopic scales), additional structural indicators appear.
The smallest non-zero Laplacian eigenvalue (the Fiedler eigenvalue) governs relaxation, diffusion, and synchronization robustness.
The Fiedler dimension describes how this eigenvalue approaches zero as \(N\) increases.
In homogeneous networks, Fiedler and spectral dimensions coincide.
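As a sanity check of my own (assuming the lattice-style convention \(\lambda_F \sim N^{-2/d_F}\), which may not be exactly the paper’s definition), one can estimate the Fiedler scaling on a homogeneous ring, where both dimensions should equal 1:

```python
# Sketch (mine): fit the scaling of the Fiedler eigenvalue with system size
# for 1D rings, assuming lambda_F ~ N^{-2/d_F}; expect d_F ≈ 1 = d_s.
import numpy as np
import networkx as nx

sizes, fiedler = [100, 200, 400, 800], []
for N in sizes:
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(nx.cycle_graph(N)).toarray())
    fiedler.append(np.sort(lam)[1])               # smallest non-zero eigenvalue

slope, _ = np.polyfit(np.log(sizes), np.log(fiedler), 1)
print(f"estimated d_F ≈ {-2 / slope:.2f}")
```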

Definition 3 – Bundled Networks:
A relevant class of inhomogeneous composite structures built by combining translationally invariant lattices, typically by attaching copies of one lattice (the fibres) to the sites of another (the base).
They exhibit anomalous diffusion and serve as analytically tractable examples of networks of networks.
While their spectral dimensions are known, their mesoscopic Fiedler scaling remains poorly characterized.
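The classic example here is the comb lattice: a 1D backbone with a 1D “tooth” attached at every backbone node. The infinite comb is usually quoted as having spectral dimension \(d_s = 3/2\). A small construction sketch of my own, just for numerical poking around:

```python
# Sketch (mine): build a finite comb (1D backbone, one 1D tooth per backbone
# node) as a toy bundled network, then compute its Fiedler eigenvalue and the
# effective dimension D from above.
import numpy as np
import networkx as nx

def comb_graph(backbone, tooth):
    G = nx.path_graph(backbone)                   # backbone nodes 0..backbone-1
    for b in range(backbone):
        prev = b
        for t in range(tooth):                    # hang a path of `tooth` nodes off b
            node = ("tooth", b, t)
            G.add_edge(prev, node)
            prev = node
    return G

G = comb_graph(backbone=30, tooth=30)
lam = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray()))
D = lam.sum() ** 2 / (lam ** 2).sum()
print(f"N = {G.number_of_nodes()}, Fiedler = {lam[1]:.2e}, D = {D:.1f}")
```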


💡 Key Idea

Using the Laplacian Renormalization Group (LRG), the authors show that in composite networks, the Fiedler dimension decouples from the spectral dimension.
They introduce a general RG framework to determine the emergent Fiedler dimension in bundled networks, deriving analytical expressions for both spectral and Fiedler dimensions in a broad class of modular systems.
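To make the LRG machinery slightly less abstract, here is a minimal sketch of my own based on my reading of the earlier LRG papers (the heat-kernel density operator \(\rho \propto e^{-\tau L}\) and its entropic susceptibility \(C = -dS/d\log\tau\)), not the derivation in this paper. On approximately scale-invariant networks, plateaus of \(C\) are reported to sit near \(d_s/2\):

```python
# Sketch (mine): heat-kernel spectral entropy S(tau) for rho = e^{-tau L} / Z
# and the entropic susceptibility C = -dS/dlog(tau) used in the LRG literature
# to pick out characteristic diffusion scales. Test case: periodic 2D grid.
import numpy as np
import networkx as nx

G = nx.grid_2d_graph(30, 30, periodic=True)
lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray())

taus = np.logspace(-1, 3, 200)
S = np.empty_like(taus)
for i, tau in enumerate(taus):
    p = np.exp(-tau * lam)
    p /= p.sum()                                  # eigenvalues of rho (sum to 1)
    S[i] = -np.sum(p * np.log(p + 1e-300))        # von Neumann entropy of rho
C = -np.gradient(S, np.log(taus))                 # entropic susceptibility

window = (taus > 0.5) & (taus < 10.0)             # diffusive regime for this lattice
print(f"plateau of C ≈ {np.median(C[window]):.2f}  (d_s/2 = 1 expected)")
```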


🌱 Why It’s Interesting

Some neural networks can be regarded as networks of networks.
This paper provides a way to use renormalization group techniques to study their emergent geometry and collective behavior — potentially offering new insights into how RG ideas apply to deep or modular architectures.


❓ Open Questions / Worth Exploring

  1. What is the physical or functional significance of the difference between Fiedler and spectral dimensions?
    Could this framework generalize to more complex or asymmetric networks?
  2. Could one classify universality classes based on these dimensions — perhaps analogous to how double descent transitions organize learning dynamics in my current research?

🧩 Paper 2: Prediction of Neural Activity in Connectome-Constrained Recurrent Networks

📄 Nature Neuroscience (2025)

🔬 Background

A major goal in theoretical neuroscience is to link the connectivity of large neural networks with their emergent dynamics.
Traditionally, the inverse problem seeks to infer connectivity from observed activity, but this is difficult due to parameter degeneracy.
Recently, comprehensive synaptic connectome datasets have enabled researchers to approach the forward problem — predicting neural dynamics given connectivity, despite uncertainty in biophysical parameters.


💡 Key Idea

The authors train a teacher and student network sharing the same synaptic matrix but differing in single-neuron parameters (e.g., nonlinear activation parameters reflecting biophysical heterogeneity).
Both perform the same task, and the authors measure two quantities (a toy sketch follows the list):

  • the similarity of teacher and student neural activity, and
  • the similarity of their single-neuron parameters.
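Here is a stripped-down cartoon of my own of that comparison: no task, no training, just simple rate dynamics with a shared connectivity matrix and two independent draws of single-neuron gains. None of the names or parameter choices come from the paper.

```python
# Cartoon (mine, not the paper's model): two rate networks share W but differ
# in single-neuron gains g; drive both with the same input and compare
# (i) activity similarity and (ii) parameter similarity.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 200, 500, 0.1
W = rng.normal(0.0, 1.5 / np.sqrt(N), (N, N))     # shared synaptic matrix
g_teacher = rng.uniform(0.5, 1.5, N)              # heterogeneous single-neuron gains
g_student = rng.uniform(0.5, 1.5, N)              # independent draw, same statistics
inputs = rng.normal(0.0, 0.5, (T, N))             # common external drive

def simulate(g):
    x, traj = np.zeros(N), []
    for t in range(T):                            # leaky rate dynamics
        x = (1 - dt) * x + dt * (W @ np.tanh(g * x) + inputs[t])
        traj.append(x.copy())
    return np.array(traj)

act_teacher, act_student = simulate(g_teacher), simulate(g_student)
activity_sim = np.corrcoef(act_teacher.ravel(), act_student.ravel())[0, 1]
parameter_sim = np.corrcoef(g_teacher, g_student)[0, 1]
print(f"activity similarity ≈ {activity_sim:.2f}, parameter similarity ≈ {parameter_sim:.2f}")
```

Unlike this toy, the paper’s networks are actually trained to perform a task, which is where the interesting degeneracy questions arise.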

📘 Conclusions

  • Multiple sets of single-neuron parameters can produce distinct activity patterns that solve the same task.
  • When connectivity constraints are combined with partial neural recordings, this degeneracy is reduced.
  • Even with accurate activity reconstruction, neuron-level parameters are not fully recovered — implying some parameters are “stiff” (strongly affect dynamics) while others are “sloppy” (weakly affect dynamics).

🌱 Why It’s Interesting

This introduces a novel teacher–student paradigm emphasizing connectivity constraints rather than weight values.
It suggests that identical connectivity can yield drastically different dynamics — a profound insight for understanding degeneracy and functional equivalence in learning systems.


❓ Open Questions / Worth Exploring

  1. If identical connectivities can produce very different dynamics, could the distribution of connectivity serve as a performance measure?
    What common features exist among well-generalizing networks?
  2. Could gating behaviors create multiple functional solutions even with random connectivity — and might a unifying theory describe this degeneracy?
  3. What is the minimal model capable of performing a task?
    Is degeneracy itself a functional property that supports flexibility in learning?

🧠 Reflection

Both papers touch on degeneracy and emergent structure — whether in modular composite networks or connectome-constrained neural systems.
They invite a deeper question: When do multiple microscopic realizations yield equivalent macroscopic behavior — and when do they diverge?
Perhaps the answer lies at the intersection of spectral geometry and learning dynamics, where RG, topology, and generalization all converge.


Tags:
#networkscience #neuroscience #learningdynamics #renormalizationgroup #spectraldimension #doubleDescent #ChanCuriosityLog