========================================================================
IS IT CREDIBLE?
------------------------------------------------------------------------
A Report on "Chaotic Balanced State in a Model of Cortical Circuits" by van Vreeswijk and Sompolinsky (1998)
Reviewer 2 | April 7, 2026
========================================================================

========================================================================
DISCLAIMER
========================================================================

This report was generated by Reviewer 2, an automated system that uses large language models to assess academic texts. It did not have any input from a human editor and any claims made in it should be verified by a qualified expert.

I am wiser than this person; for it is likely that neither of us knows anything fine and good, but he thinks he knows something when he does not know it, whereas I, just as I do not know, do not think I know, either. I seem, then, to be wiser than him in this small way, at least: that what I do not know, I do not think I know, either.
Plato, The Apology of Socrates, 21d

To err is human. All human knowledge is fallible and therefore uncertain. That to err is human means not only that we must constantly struggle against error, but also that, even when we have taken the greatest care, we cannot be completely certain that we have not made a mistake.
Karl Popper, 'Knowledge and the Shaping of Reality'

========================================================================
OVERVIEW
========================================================================

Citation: van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic Balanced State in a Model of Cortical Circuits. *Neural Computation*, Vol. 10, No. 6, pp. 1321–1371.
URL: https://ieeexplore.ieee.org/document/7275560

Abstract Summary: This article investigates the hypothesis that the temporal irregularity observed in cortical neurons in vivo is due to a balance of excitatory and inhibitory currents. The authors present an analytical solution of a network model with excitatory and inhibitory populations of simple binary units, revealing a new cooperative stationary state termed a 'balanced state'.

Key Methodology: Analytical solution of a mean-field theory for a network model with excitatory and inhibitory populations of simple binary units, numerical simulations, and comparison with experimental data.

Research Question: What are the conditions under which a network evolves to a state in which excitatory and inhibitory inputs are balanced, what are the characteristics and functional advantages of this state, and how does it relate to the temporal irregularity of cortical neurons?

========================================================================
EDITOR'S NOTE
========================================================================

Reviewer 2 raises several philosophical objections to the mathematical abstractions used in the article, particularly the synaptic scaling and the asymptotic limits. The authors could robustly defend these choices by emphasizing that the model is fundamentally a statistical physics approach to neural dynamics. The scaling of synapses by the inverse square root of the connectivity index is not a biological axiom but a necessary mathematical device to ensure a non-trivial thermodynamic limit where variance and mean scale together. Without this, the network would trivially saturate or fall silent. Similarly, taking the network size to infinity before the connectivity index is the standard, rigorous method for isolating the asynchronous state from finite-size noise.
The authors might push back on the reviewer's assertion that this structurally suppresses synchrony; rather, it allows them to prove that asynchronous chaos exists as a distinct, stable dynamical phase independent of finite-size correlations.

Regarding the biological plausibility of the external input requirements and the threshold distributions, the authors could frame these as deliberate abstractions that yield specific, testable predictions. The article already transparently discusses the discrepancy with lateral geniculate nucleus input, which shows intellectual honesty (p. 1363). This could be strengthened by explicitly stating that the model defines a theoretical upper bound on network performance. For the threshold distribution, rather than viewing the necessity of a bounded distribution as a weakness, it could be presented as a strong prediction of the model: to achieve the empirically observed unimodal skewed firing rates, the biological threshold distribution might need to be bounded.

The authors could address the critique of the infinite Lyapunov exponent and tracking speed limits by acknowledging they are artifacts of the discrete binary units, while arguing that they serve as mathematically tractable proxies for the high-gain, continuous dynamics of real neurons. The reviewer's critique of the tracking capability comparison has some merit, as comparing the balanced network to a weak-feedback network does guarantee a faster relaxation time by design. The authors could address this by clarifying that the comparison is meant to illustrate the qualitative difference in dynamic regimes rather than to serve as a competitive benchmark.

To address the critique regarding the lack of quantitative support for Poissonian statistics, the authors might consider calculating the coefficient of variation for the interspike intervals.
Even if the hard refractory period lowers the coefficient slightly below unity, providing this metric could empirically validate the claim of approximate Poissonian behavior and satisfy the reviewer's request for quantitative rigor.

Finally, the reviewer has provided a highly valuable service by identifying a series of specific algebraic and typographical errors in the equations. Correcting these would significantly improve the mathematical rigor of the article without altering any of the core conclusions. The errors in the Heaviside function arguments, the missing logarithmic factors in the asymptotic scaling, and the index typos are minor but essential fixes. By implementing these mathematical corrections and slightly softening the claims about linearization at low firing rates, the authors could satisfy the reviewer while preserving the integrity and ambition of the theoretical framework.

========================================================================
SUMMARY
========================================================================

Is It Credible?

van Vreeswijk and Sompolinsky present a foundational theoretical model of cortical circuits, proposing that the highly irregular, Poisson-like firing observed in vivo can emerge entirely from deterministic internal network dynamics. The authors claim that in a sparsely connected network with strong synaptic weights, a "balanced state" dynamically emerges where massive excitatory and inhibitory currents cancel each other out. This balance leaves the net input fluctuating around the neuronal threshold, generating an "asynchronous chaotic state" (p. 1321). Furthermore, the authors argue that this balanced state provides a functional advantage, allowing the network to track time-dependent external inputs on time scales much shorter than the integration time of a single neuron. The credibility of these claims rests heavily on the specific mathematical scaling assumptions used to construct the mean-field theory.
The authors scale the synaptic connections as proportional to the inverse square root of the connectivity index, and the external inputs as proportional to the square root of the connectivity index. While this is a standard technique in statistical physics to achieve a non-trivial thermodynamic limit, it mathematically forces the excitatory and inhibitory terms to cancel out if the neurons are to remain in a responsive, non-saturated regime. Thus, the balanced state is somewhat axiomatic to the model's construction rather than a purely spontaneous biological phenomenon. Furthermore, the emergence of the asynchronous state relies on a specific limit ordering---taking the network size to infinity before the connectivity index---which structurally drives the probability of shared inputs to zero, effectively pre-ordaining the absence of synchrony.

Translating these mathematical requirements to biological reality introduces several tensions. The model dictates that the external input to the network must be massive to maintain the balanced state. The authors transparently acknowledge that this conflicts with some experimental evidence suggesting that external inputs, such as those from the lateral geniculate nucleus to the visual cortex, constitute only a fraction of the net input (p. 1363). Additionally, the use of binary threshold units introduces artifacts. For instance, the claim that the network is chaotic is based on an infinitely large Lyapunov exponent, which the authors admit is a mathematical artifact of the discrete step-function dynamics rather than a feature of continuous biological neurons (p. 1351). Similarly, the impressive fast-tracking capability is demonstrated by comparing this high-gain network to a low-gain network, which is somewhat tautological, and the absolute speed limit of this tracking is dictated by the model's algorithmic update rate rather than physical biological constraints.
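The scaling argument can be illustrated with a short numerical sketch. Here each neuron receives K excitatory and K inhibitory binary inputs with weights scaled as $1/\sqrt{K}$; all parameter values are illustrative choices, not taken from the article, and the rates are deliberately tuned so that $J_E m_E = J_I m_I$:

```python
import numpy as np

# Sketch: with synapses scaled as 1/sqrt(K), each input component grows
# like sqrt(K) while its fluctuations stay O(1), so the net input remains
# finite only if excitation and inhibition cancel to leading order.
# All parameter values are hypothetical, not taken from the article.
rng = np.random.default_rng(0)
J_E, J_I = 1.0, 0.9      # coupling strengths (hypothetical)
m_E, m_I = 0.09, 0.10    # mean activities, tuned so J_E*m_E == J_I*m_I

for K in (100, 400, 1600, 6400):
    exc = rng.binomial(1, m_E, size=(20000, K)).sum(axis=1) * J_E / np.sqrt(K)
    inh = rng.binomial(1, m_I, size=(20000, K)).sum(axis=1) * J_I / np.sqrt(K)
    net = exc - inh
    print(f"K={K:5d}  mean(exc)={exc.mean():5.2f}  "
          f"mean(net)={net.mean():+.3f}  std(net)={net.std():.3f}")
```

The excitatory component alone grows as $\sqrt{K}$, while the net input stays mean-zero with O(1) fluctuations only because the means were tuned to cancel; perturbing $J_I$ slightly makes the net mean diverge as $\sqrt{K}$. Whether this cancellation requires fine-tuning or arises dynamically is exactly the question the mean-field theory addresses.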
The article also provides a detailed statistical characterization of the balanced state, notably predicting a skewed, long-tailed distribution of firing rates that matches experimental observations. However, this result is highly sensitive to the assumed distribution of neuronal thresholds. The authors show that a bounded threshold distribution yields the realistic skewed shape, but an unbounded Gaussian distribution causes the network to collapse into a frozen, bimodal state at low activity levels (p. 1346). Furthermore, the analytical framework relies on the Central Limit Theorem to assume Gaussian input fluctuations, an approximation that the authors note weakens in the biologically relevant low-firing-rate regime. Finally, the derivations contain several algebraic and typographical errors, particularly in the summation indices for variance calculations, the asymptotic expansions for low rates, and the equations defining the rate distributions, though these do not appear to invalidate the broader qualitative conclusions.

Ultimately, the article presents a highly credible and influential theoretical proof-of-concept. While the strict scaling parameters, the reliance on binary units, and the idealized external inputs represent extreme biological simplifications, they successfully isolate a mechanism by which strong recurrent feedback can dynamically stabilize a network into a state of asynchronous, irregular activity. The article clearly defines the boundaries and requirements of this state, even when those requirements highlight gaps between the mathematical theory and empirical observation.

## The Bottom Line

The article provides a foundational theoretical framework demonstrating how cortical networks can generate irregular, chaotic activity through a dynamic balance of strong recurrent excitation and inhibition.
However, the emergence of this asynchronous balanced state is heavily dependent on specific mathematical scaling assumptions, discrete neuron models, and limit orderings that partially pre-ordain the outcomes. While the direct biological mapping of its strict parameter requirements remains a subject of ongoing investigation, the work stands as a highly credible and influential proof-of-concept in computational neuroscience.

========================================================================
POTENTIAL ISSUES
========================================================================

Axiomatic nature of the balanced state: The article presents the "balanced state" as an emergent property, but it is heavily constrained by the model's fundamental scaling assumptions. By defining synaptic weights as $O(1/\sqrt{K})$ and external inputs as $O(\sqrt{K})$, the net input equations (e.g., $u_E = (E m_0 + m_E - J_E m_I)\sqrt{K} - \theta_E$) mathematically require the $O(\sqrt{K})$ terms to cancel for the neuron to remain in a responsive, non-saturated regime (p. 1329). While this $1/\sqrt{K}$ ansatz is standard methodology in theoretical physics to find a non-trivial thermodynamic limit, it means the balance is a mathematical precondition imposed on the model rather than a purely spontaneous biological phenomenon. The authors do acknowledge that this scaling differs from conventional mean-field theories (which use $1/K$), explicitly justifying it as a necessary mathematical step to study the asynchronous state in the large $K$ limit (p. 1359).

Contradiction regarding external input magnitude: The theoretical framework requires the external input to the network to be large, scaling with $\sqrt{K}$, to maintain the balanced state. The authors explicitly note that if the external input is only of order 1, the mechanism requires extreme fine-tuning of interaction strengths (p. 1360).
However, the authors also acknowledge that this requirement conflicts with biological evidence, citing Ferster, Chung, and Wheat (1996) and Stratford et al. (1996), which suggest that input from the lateral geniculate nucleus to layer 4 cortical cells is only a fraction of the net input (p. 1363). The authors explicitly state that "Further experimental clarification of this issue is called for," leaving a significant gap between the model's mathematical requirements and current empirical data (p. 1363).

Structural suppression of synchrony via limit ordering: The emergence of an "asynchronous chaotic state" is heavily influenced by the mathematical order of operations (p. 1321). The authors explicitly state: "Technically, we will first take the limit $N \to \infty$ and then the limit $K \to \infty$" (p. 1327). Taking the thermodynamic limit $N \to \infty$ while holding $K$ fixed drives the expected number of inputs shared by any two neurons (of order $K^2/N$) to zero. Since shared inputs are a primary structural driver of synchronization in recurrent networks, this specific limit ordering structurally eliminates a key source of synchrony. While this is a rigorous method in statistical mechanics to isolate asynchronous states, it makes the finding of asynchrony somewhat pre-ordained by the mathematical setup. The authors explicitly acknowledge that in reality, networks have a large fixed size and connectivity, making the distinction between full and sparse connectivity "problematic," but they adopt this limit ordering "Technically" to simplify the mean-field equations (p. 1327).

Definition of chaos in binary networks: The article characterizes the balanced state as "chaotic" due to an "infinitely large" Lyapunov exponent (p. 1351). This infinite divergence is a mathematical artifact of using binary threshold units (Heaviside step function), which possess infinite gain at the threshold (p. 1325).
The authors are transparent about this limitation, explicitly acknowledging that the standard definition of chaos is "technically inapplicable to a system with discrete degrees of freedom such as ours" (p. 1349) and that the divergence is "related to the discreteness of the degrees of freedom" (p. 1351). Nevertheless, applying the term "chaos" to what is essentially noise amplification in a high-gain digital switch may overstate the dynamical complexity when compared to continuous biological neurons.

Breakdown of Gaussian assumption at low rates: The mean-field theory relies on the Central Limit Theorem to assume that the total synaptic input to a cell obeys a Gaussian distribution (p. 1329). This requires the expected number of active inputs, $K m_k$, to be large. However, the authors specifically analyze the "low rate limit" where $m_k \ll 1$ (p. 1331). If the mean rate $m_k$ becomes extremely small, the number of active inputs drops, and the input distribution will become highly skewed (Poisson-like) rather than Gaussian. The authors acknowledge this boundary condition, stating the Gaussian assumption is a "good approximation" only "as long as $m_k \gg K^{-1}$" (p. 1332), but this indicates the analytical framework weakens precisely in the sparse-firing regime it attempts to model.

Failure of linearization in the low-rate regime: The article claims the network "linearizes the population responses to the external drive" (p. 1321), based on the leading-order equations (Eqs. 4.3 and 4.4, p. 1330). However, this linearity is an asymptote that fails in the biologically relevant low-firing-rate limit. The authors explicitly acknowledge this limitation: "When $m_0$ becomes sufficiently small (i.e., of order $1/\sqrt{K}$ or less), the strong nonlinearity in the single neuron dynamics reveals itself in a strong nonlinearity in the population response" (p. 1332). This suggests the "linearization" claim is not robust across all operating regimes.
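The Central Limit Theorem concern can be checked directly on the input count: the number of active inputs among $K$ is binomial with mean $K m_k$, and its skewness grows as $m_k$ shrinks. A minimal sketch (the value of K and the rates are illustrative choices, not values from the article):

```python
import numpy as np

# Skewness of the active-input count n ~ Binomial(K, m): near zero when
# K*m is large (Gaussian-like), but substantial when K*m is O(1)
# (Poisson-like). K and the rates m are illustrative, not from the article.
rng = np.random.default_rng(1)
K = 1000

for m in (0.2, 0.01, 0.002):           # K*m = 200, 10, 2
    n = rng.binomial(K, m, size=200000)
    mu, sd = n.mean(), n.std()
    skew = np.mean(((n - mu) / sd) ** 3)
    # Exact binomial skewness for comparison: (1 - 2m)/sqrt(K*m*(1-m))
    exact = (1 - 2 * m) / np.sqrt(K * m * (1 - m))
    print(f"K*m={K * m:6.1f}  sample skewness={skew:.3f}  exact={exact:.3f}")
```

At $K m = 200$ the skewness is near zero and the Gaussian closure is harmless; at $K m \approx 2$ it approaches 0.7, which is the $m_k \sim K^{-1}$ regime where, as noted above, the authors concede the approximation fails.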
Sensitivity to threshold distribution assumptions: The model's ability to produce a biologically realistic, unimodal, skewed firing rate distribution is highly sensitive to the assumed shape of the neuronal threshold distribution. The authors show that an unbounded distribution (like a Gaussian) causes the network to settle into a "frozen state" with a bimodal rate distribution at low activity (p. 1346). A bounded distribution (e.g., uniform) is required to produce the fluctuating, skewed distribution. The authors explicitly set out to test this and acknowledge that the state "depends not only on the width of the threshold distribution but also on the form of its tail" (p. 1344). This finding highlights a degree of fragility, as the model's core qualitative output depends heavily on an unmeasured, fine-grained assumption about parameter tails.

Fragility to inhibitory time constant variations: The stability analysis in Section 6.3 reveals that the balanced state requires the inhibitory time constant $\tau$ to be below critical thresholds ($\tau_L$ and $\tau_G$). If $\tau$ exceeds $\tau_G$, the system enters an "Unbalanced limit cycle" characterized by massive oscillations in the net input of order $\sqrt{K}$ (p. 1343). In a network where $K$ is large, such oscillations would be biologically pathological. While identifying these boundaries is a standard dynamical systems result, the fact that the entire balanced framework collapses into massive oscillations if the effective inhibitory time constant is slightly too slow imposes a strict and potentially fragile constraint on the model's biological applicability.

Unverified ergodicity assumption: To characterize the statistics of the balanced state, the authors separate the variance of synaptic inputs into a quenched spatial component ($\beta_k$) and a temporal component ($\alpha_k - \beta_k$) (pp. 1334--1335). This separation relies on defining $m_i^k$ as the "time-averaged activity rate of the $i$th cell" (Eq.
5.4), which implicitly assumes the system is ergodic (time averages equal ensemble averages). However, the authors themselves liken their model to asymmetric spin glasses (p. 1359), systems to which standard equilibrium results do not apply, so ergodicity cannot simply be presumed. While assuming ergodicity is standard practice for analyzing asynchronous chaotic states, a rigorous mathematical demonstration that the $1/\sqrt{K}$ scaling guarantees ergodicity is absent, leaving the separation of variances theoretically uncertain. The authors attempt to address the validity of their averaging methods by showing that a completely deterministic model yields the same mean-field equations as the stochastic one (p. 1368), though this does not constitute a formal mathematical proof of ergodicity for the $1/\sqrt{K}$ scaling.

Extreme biological simplifications: The model relies on highly abstracted components, including binary threshold units, instantaneous current pulses for synapses, and random, sparse connectivity. These simplifications ignore dendritic integration, continuous membrane potentials, synaptic plasticity, and structured cortical motifs. The authors explicitly acknowledge these limitations, noting the definition of a spike in a binary model is imperfect because a prolonged active state acts as a continuous signal "even though no new spike is emitted" (p. 1326). They also discuss the need to extend the theory to integrate-and-fire dynamics (p. 1362) and networks with "more interesting connectivity architecture" (p. 1363).

Idealized external inputs: The model generates temporal irregularity from purely internal dynamics by assuming a "large, temporally regular input from external sources" (p. 1321). Real cortical inputs are stochastic spike trains. The authors acknowledge that using regular input is a methodological choice to test if "intrinsic deterministic dynamics... is sufficient to generate strong variability" (p. 1323).
However, excluding noisy external inputs leaves it uncertain how dominant the internally generated chaos would be in a fully realistic biological environment.

Reliance on unpublished work for a central comparative claim: A key argument for the novelty of the $1/\sqrt{K}$ synaptic scaling is its purported superiority over the conventional $1/K$ (weak synapse) scaling. The authors claim that the weak-synapse model "breaks down in the presence of inhomogeneity in the local thresholds" and that the network state "becomes increasingly frozen" (p. 1361). However, the proof for this critical comparative assertion is not provided in the text; instead, it relies entirely on a citation to an unpublished manuscript (van Vreeswijk and Sompolinsky, cited on p. 1360). Relying on unpublished work to support a central argument undermines the self-containment and verifiable rigor of the article.

Structurally biased tracking comparison: The article argues that a functional advantage of the balanced state is its ability to track time-dependent inputs rapidly. This is demonstrated by comparing the balanced network (with $1/\sqrt{K}$ scaling) to an "unbalanced network" with $1/K$ scaling (pp. 1354--1356). However, this comparison is somewhat tautological. The $1/\sqrt{K}$ scaling inherently creates very strong feedback, which mathematically guarantees a much faster relaxation time compared to a network designed with weak $O(1/K)$ feedback. While this comparison serves a pedagogical purpose to highlight the consequences of the scaling choice, it compares a high-gain system to a low-gain system to conclude that the former is faster.

Tracking speed limit bound by algorithmic clock: In quantifying the network's ability to track external inputs, the authors derive a strict bound on the rate of change: $-m_k < \tau_k \frac{dm_k}{dt} < 1 - m_k$ (p. 1354).
This absolute speed limit is dictated by the algorithmic update rate $\tau_k$, which is an abstraction representing the mean interval between consecutive updates (p. 1326). In a biological network, tracking speed would be bounded by physical constraints such as axonal delays and synaptic rise times. While the derived limit is mathematically rigorous within the defined model, it is an artifact of the discrete update scheme rather than a biological constraint.

Incorrect summation indices in variance equation: Equation 3.12 on page 1329 contains a malformed sum for the variance of the input fluctuations. The equation is written as $\alpha_k(t) = \sum_{l,l'=1}^2 \sum_{j,j'=1}^{N_l} [(\delta(J_{kl}^{ij} \sigma_l^j(t)))^2]$. The summand only depends on indices $l$ and $j$, yet the sum is over $l, l'$ and $j, j'$. Summing over the extra indices $l'$ and $j'$ incorrectly multiplies the result by the total network size. Assuming independent connections cause cross-terms to vanish, the correct expression should be a single double-sum: $\sum_{l=1}^2 \sum_{j=1}^{N_l} [(\delta(J_{kl}^{ij} \sigma_l^j(t)))^2]$.

Incorrect denominator in variance derivation: On page 1329, the text below Equation 3.12 states intermediate steps in the variance calculation that use the wrong denominator. The text states $[(J_{kl}^{ij} \sigma_l^j(t))^2] = J_{kl}^2 m_l / N$ and $[(J_{kl}^{ij} \sigma_l^j(t))]^2 = J_{kl}^2 m_l^2 K / N^2$. Both expressions incorrectly use the total network size $N$ in the denominator instead of the presynaptic subpopulation size $N_l$. Using the correct connection probability $K/N_l$, the first expectation should be $(J_{kl}^2/N_l) m_l$.

Sign error in mean input term: Equation 5.1 on page 1333 defines the instantaneous activity with a negative sign on the mean input $u_k$: $\sigma_k^i(t) = \Theta(-u_k + \sqrt{\beta_k} x_i + \sqrt{\alpha_k - \beta_k} y_i(t))$. Because $u_k$ is defined as the mean input relative to threshold (Eq.
3.11), and the other terms are zero-mean fluctuations, the correct expression for the total input should be $u_k + \sqrt{\beta_k} x_i + \sqrt{\alpha_k - \beta_k} y_i(t)$. This sign error propagates to Equation 5.11, though the correct form is used later in Equation 5.17.

Incorrect subscript in temporal fluctuation equation: Equation 5.8 on page 1335 expresses the temporal fluctuations of the input with an incorrect subscript inside the sum: $u_k^i(t) - \langle u_k^i \rangle = \sum_{l=1}^2 \sum_{j=1}^{N_l} J_{kl}^{ij} (\sigma_k^j(t) - m_k^j)$. The sum is over the presynaptic populations $l$ and their neurons $j$. The state variable and mean rate inside the sum belong to the presynaptic neuron, so their population index must be $l$. The equation incorrectly uses the postsynaptic population index $k$. The correct expression is $(\sigma_l^j(t) - m_l^j)$.

Missing square in variance term: In Equation 7.5 on page 1346, the term representing the combined standard deviation of the quenched fluctuations is written incorrectly as $\sqrt{\Delta + \beta_k}$ within the argument of the $H$ function: $q_k = \int Dx H\left(\frac{-u_k - \sqrt{\Delta + \beta_k} x}{\sqrt{\alpha_k - \beta_k}}\right)^2$. Because $\Delta$ is defined as the standard deviation of the threshold distribution (Eq. 7.3), its variance is $\Delta^2$. The total quenched variance is the sum of the variance from connectivity ($\beta_k$) and the variance from the threshold distribution ($\Delta^2$). The correct term should therefore be $\sqrt{\Delta^2 + \beta_k}$.

Missing logarithmic factor in asymptotic expansion: Equation 7.10 on page 1346 states an incorrect asymptotic relationship for the mean input $u_k$ in the low activity limit: $u_k + \Delta/2 = O(\sqrt{m_k})$. A self-consistent evaluation of the integral in Equation 7.9 in the low $m_k$ limit requires the lower integration limit to be of order $O(\sqrt{|\log m_k|})$. This implies the correct relation is $u_k + \Delta/2 = -O(\sqrt{m_k |\log m_k|})$.
The printed equation is missing the $\sqrt{|\log m_k|}$ factor.

Incorrect logarithmic term placement in rate bound: Equation 7.14 on page 1348 gives the upper bound of the rate distribution with an incorrectly placed logarithmic term: $m_+ \propto \Delta \sqrt{\alpha_k / |\log(\alpha_k)|}$. Deriving $m_+$ from the relation $m_k \approx \frac{\sqrt{\alpha_k}}{\Delta A} m_+$ (where $A \approx \sqrt{|\log m_k|}$) yields $m_+ \propto \Delta \sqrt{\alpha_k |\log \alpha_k|}$. The equation incorrectly places the logarithmic term in the denominator instead of the numerator.

Sign error and missing Jacobian in rate distribution formula: Equation 7.15 on page 1348, which gives the general rate distribution, contains two algebraic errors: $\rho_k(m) = \sqrt{2\pi} P\left(-\sqrt{\alpha_k}(h(m) + \tilde{h}_k)\right) e^{h^2(m)/2}$. First, substituting $u_k = \sqrt{\alpha_k} \tilde{h}_k$ into $\theta = u_k - \sqrt{\alpha_k} h(m)$ yields $\theta = -\sqrt{\alpha_k}(h(m) - \tilde{h}_k)$. The equation incorrectly has a plus sign instead of a minus sign inside the parenthesis. Second, the Jacobian $|d\theta/dm|$ is $\sqrt{2\pi \alpha_k} e^{h(m)^2/2}$. The equation is missing the $\sqrt{\alpha_k}$ factor in the leading coefficient.

Imaginary quantity and sign errors in asymptotic formula: Equation 7.17 on page 1349 contains mathematically ill-defined terms and sign errors: $\rho_k(m) \propto P\left(-\sqrt{\alpha_k}\left(\sqrt{2\log(m)} - \tilde{h}_k\right)\right) \frac{1}{m\sqrt{\log(m)}}$. Because the rate $m$ is between 0 and 1, $\log(m)$ is strictly negative, making the square roots $\sqrt{\log(m)}$ imaginary. The expressions must use the absolute value, $|\log m|$. Additionally, based on the correct threshold substitution, the argument of $P(\cdot)$ should be $\sqrt{\alpha_k}(\sqrt{2|\log m|} + \tilde{h}_k)$. The printed equation incorrectly has a minus sign before the entire expression and a minus sign before $\tilde{h}_k$.
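The two corrections to Equation 7.15 identified above can be made explicit in a single change-of-variables computation. This sketch assumes, consistent with the quoted definitions, that a neuron with threshold $\theta$ fires at rate $m(\theta) = H\big((\theta - u_k)/\sqrt{\alpha_k}\big)$ and that $h(m)$ is defined by $\theta = u_k - \sqrt{\alpha_k}\, h(m)$, so that $m = H(-h)$ and $H'(x) = -e^{-x^2/2}/\sqrt{2\pi}$:

```latex
% Change of variables from the threshold density P(\theta) to the rate
% density \rho_k(m), under the stated assumptions:
\begin{aligned}
\rho_k(m) &= P\big(\theta(m)\big)\,\Big|\frac{d\theta}{dm}\Big|, \\
\frac{dm}{d\theta}
  &= \frac{1}{\sqrt{\alpha_k}}\,H'\!\Big(\frac{\theta-u_k}{\sqrt{\alpha_k}}\Big)
   = -\frac{1}{\sqrt{2\pi\alpha_k}}\, e^{-h^2(m)/2}
  \;\Longrightarrow\;
  \Big|\frac{d\theta}{dm}\Big| = \sqrt{2\pi\alpha_k}\, e^{h^2(m)/2}, \\
\rho_k(m)
  &= \sqrt{2\pi\alpha_k}\;
     P\!\big(-\sqrt{\alpha_k}\,(h(m)-\tilde{h}_k)\big)\, e^{h^2(m)/2},
  \qquad u_k = \sqrt{\alpha_k}\,\tilde{h}_k .
\end{aligned}
```

The final line exhibits both fixes at once: the minus sign inside $P(\cdot)$ and the $\sqrt{\alpha_k}$ factor in the prefactor.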
Incorrect coefficient in Lyapunov exponent derivation: Equation 8.9 on page 1351 gives the leading order expansion for the distance between trajectories, $D_k$, with an incorrect coefficient: $\tau_k \frac{dD_k}{dt} = \frac{2}{\pi} e^{-u_k^2/2\alpha_k} \frac{1}{\sqrt{\alpha_k}} \sqrt{\alpha_k - \gamma_k}$. Expanding the bivariate normal integral used to calculate the overlap $Q_k$ yields a leading correction term with a coefficient of $1/(\pi\sqrt{2})$. When accounting for the factor of 2 in the definition $D_k = 2(m_k - Q_k)$, the resulting coefficient in the differential equation should be $\frac{\sqrt{2}}{\pi}$, not $\frac{2}{\pi}$.

Double subtraction of threshold in autocorrelation derivation: In Equation A.14 on page 1367, the threshold $\theta_k$ is subtracted twice: $\Theta(u_k + \sqrt{\beta_k} x_1 + \sqrt{\alpha_k - \beta_k} x_2 - \theta_k)$. The mean input $u_k$ is already defined in Equation A.8 to include the subtraction of the threshold ($u_k = \dots - \theta_k$). The expression in A.14 therefore subtracts the threshold a second time. This error propagates to the final line of the equation, which incorrectly includes $\theta_k$ in the numerator of the $H$ function argument.

Presentation and clerical issues: There are several minor presentation and notation issues throughout the text. First, on page 1325, the text states the connection probability from population $l$ to a neuron in population $k$ is $K/N_k$; for the expected number of inputs to be $K$, this probability must be $K/N_l$. Second, on page 1330, the text states "to leading order in $k$" when discussing an asymptotic expansion, but this should refer to the large connectivity index $K$, not the population index $k$. Third, the notation for neuron indices drifts inconsistently, sometimes using subscripts for the population and superscripts for the neuron (e.g., $\sigma_k^i(t)$), and sometimes the reverse (e.g., $m_i^k$).
Fourth, the caption for Figure 8 states parameters are "as in Figure 3" ($J_E=2, J_I=1.8$), but the text states the critical time constants for this figure are $\tau_L=1.61$ and $\tau_G=1.50$; calculating $\tau_G$ with the Figure 3 parameters yields approximately 1.897, indicating a mismatch between the stated parameters and the text (p. 1343). Finally, on page 1366, the text discussing the distribution $p_l(n)$ incorrectly uses the index $k$ ($[n_k] = \dots = K m_k$) instead of the correct presynaptic index $l$.

========================================================================
FUTURE RESEARCH
========================================================================

Biological validation of synaptic scaling: Future research should focus on empirically testing the scaling relationship between synaptic strength, connectivity density, and external input magnitude in living cortical circuits. Advanced optogenetic and multi-electrode array techniques could be used to measure whether the ratio of feedforward to recurrent input aligns with the strong-coupling assumptions required by the balanced state model, resolving the tension regarding the magnitude of external inputs.

Continuous dynamics and realistic topologies: Future theoretical work could build on this foundation by rigorously analyzing the balanced state in networks of conductance-based, continuous-variable neurons (such as integrate-and-fire or Hodgkin-Huxley models) without relying on the infinite-gain artifacts of binary units. Incorporating biologically realistic, spatially structured connectivity motifs rather than purely random sparse graphs would also help determine if the asynchronous chaotic state persists without the strict mathematical limit orderings used in the original mean-field derivations.
========================================================================
COPYEDITING
========================================================================

The manuscript presents a highly influential and mathematically elegant theoretical framework demonstrating how a deterministic balance between strong excitation and inhibition can generate irregular, chaotic activity in neural networks. While the statistical physics approach provides a rigorous foundation for isolating the asynchronous state, there are several algebraic and typographical errors in the equations that require correction to ensure mathematical accuracy. Additionally, some of the broader claims regarding linearization, power-law distributions, and Poissonian statistics would benefit from slight softening or additional quantitative support to better align with the derived mathematical limits and biological realities.

- **p. 1321** "The activity levels of single cells are broadly distributed, and their distribution exhibits a skewed shape with a long power-law tail." The derived formula in Equation 7.12 is proportional to an inverse law with a logarithmic correction, not a strict power law. Consider revising to: "The activity levels of single cells are broadly distributed, and their distribution exhibits a skewed shape with a tail that follows an inverse law with a logarithmic correction."

- **p. 1321** "The internal synaptic inputs act as a strong negative feedback, which linearizes the population responses to the external drive despite the strong nonlinearity of the individual cells." This claim is presented as a general feature, but the text later acknowledges on page 1332 that this linearization breaks down in the biologically relevant regime of low firing rates.
Consider softening this claim to explicitly acknowledge the boundary conditions, perhaps by revising to: "The internal synaptic inputs act as a strong negative feedback, which linearizes the population responses to the external drive at higher rates, though this linearization breaks down at biologically relevant low firing rates."

- **p. 1321** "It is shown that the autocorrelations decay on a short time scale, yielding an approximate Poissonian temporal statistics." While the analytical form of the interspike interval distribution is provided later, there is no quantitative benchmark to support the claim of Poisson-like variability. Consider adding a brief calculation or mention of the coefficient of variation for the interspike intervals to quantitatively support this claim, even if the hard refractory period lowers the coefficient slightly below unity.

- **p. 1325** "The connection between the $i$th postsynaptic neuron of the $k$th population and the $j$th presynaptic neuron of the $l$th population, denoted $J_{kl}^{ij}/\sqrt{K}$ with probability $K/N_k$ and zero otherwise." To achieve the stated average of $K$ connections from the presynaptic population $l$, the probability must be defined relative to the presynaptic population size. Consider revising the probability term from $K/N_k$ to $K/N_l$.

- **p. 1329** "The population average $[J_{kl}^{ij}]$ is equivalent to a quenched average over the random connectivity and is therefore equal to $J_{kl}\sqrt{K}/N$, yielding equation 3.5." Following the correction to the connection probability, the denominator here should reflect the presynaptic population size rather than the total network size. Consider revising $J_{kl}\sqrt{K}/N$ to $J_{kl}\sqrt{K}/N_l$.

- **p. 1329** "$\alpha_k(t) = [(\delta u_k^i(t))^2] = \sum_{l,l'=1}^2 \sum_{j,j'=1}^{N_l} [(\delta(J_{kl}^{ij} \sigma_l^j(t)))^2]$" The summation indices are malformed because the equation includes double summations over $l, l'$ and $j, j'$, but the summand only contains the indices $l$ and $j$. Consider writing this as a single sum or rewriting the summand as a cross-product to mathematically justify the double summation.

- **p. 1333** "$\sigma_k^i(t) = \Theta\left(-u_k + \sqrt{\beta_k} x_i + \sqrt{\alpha_k - \beta_k} y_i(t)\right)$" The argument of the Heaviside function incorrectly includes a negative sign before $u_k$. Because $u_k$ is defined as the mean input relative to the threshold, the total input is $u_k$ plus fluctuations. Consider revising the first term inside the parenthesis to $+u_k$.

- **p. 1335** "$u_k^i(t) - \langle u_k^i \rangle = \sum_{l=1}^2 \sum_{j=1}^{N_l} J_{kl}^{ij} (\sigma_k^j(t) - m_k^j)$" The temporal fluctuation term for the presynaptic neurons incorrectly uses the postsynaptic index $k$ instead of the presynaptic index $l$. Consider revising the term in the parentheses to $(\sigma_l^j(t) - m_l^j)$.

- **p. 1346** "$q_k = \int Dx \left[ H\left( \frac{-u_k - \sqrt{\Delta + \beta_k} x}{\sqrt{\alpha_k - \beta_k}} \right) \right]^2$" The total quenched standard deviation is written as $\sqrt{\Delta + \beta_k}$, but because $\Delta$ is defined as the standard deviation, its variance is $\Delta^2$. Consider revising the numerator to use $\sqrt{\Delta^2 + \beta_k}$.

- **p. 1346** "$u_k + \Delta/2 = O(\sqrt{m_k})$" This asymptotic scaling drops a crucial logarithmic factor necessary for evaluating the integral. Consider revising to include the missing logarithmic factor, such as $u_k + \Delta/2 = O(\sqrt{m_k |\log m_k|})$.

- **p. 1348** "$m_+ \propto \Delta \sqrt{\alpha_k / |\log(\alpha_k)|} \gg m_k$" The upper bound $m_+$ incorrectly places the logarithmic factor in the denominator.
Consider revising the expression to place the logarithmic factor in the numerator, such as $m_+ \propto \Delta \sqrt{\alpha_k |\log \alpha_k|} \gg m_k$.

- **p. 1348** "$\rho_k(m) = \sqrt{2\pi} P\left( -\sqrt{\alpha_k} (h(m) + \tilde{h}_k) \right) e^{h^2(m)/2}$" There is a sign error in the argument of $P$, and the prefactor is missing the Jacobian term. Consider revising the argument to $(h(m) - \tilde{h}_k)$ and adding the missing Jacobian term $\sqrt{\alpha_k}$ to the prefactor.

- **p. 1349** "$\rho_k(m) \propto \frac{P\left( -\sqrt{\alpha_k} (\sqrt{2\log(m)} - \tilde{h}_k) \right)}{m\sqrt{\log(m)}}$" The term $\sqrt{2\log(m)}$ is imaginary for rates between 0 and 1. Consider revising this to use the absolute value, changing it to $\sqrt{2|\log m|}$.

- **p. 1353** "$\tau_k \frac{dm_k^\infty(t)}{dt} = -m_k^\infty(t) + H\left( \frac{-u_k^\infty(t) + \sum_l J_{kl} m_l^1(t)}{\sqrt{\alpha_k^\infty(t)}} \right)$" There is a sign error before the summation inside the $H$ function; the numerator should subtract the summation term. Consider changing the plus sign before the summation to a minus sign in both Equation 9.6 and the subsequent Equation 9.7.

- **p. 1367** "$F_k = \int Dx_1 \int Dx_2 \int Dx_3 \Theta(u_k + \sqrt{\beta_k} x_1 + \sqrt{\alpha_k - \beta_k} x_2 - \theta_k)$" The argument of the Heaviside function $\Theta$ subtracts $\theta_k$, but $u_k$ already includes this subtraction as defined in Equation A.8, making the extra $-\theta_k$ redundant and incorrect. Consider removing $-\theta_k$ from the arguments of the Heaviside functions in Equation A.14.
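The coefficient-of-variation benchmark suggested for the Poisson claim on p. 1321 can be sketched directly: if spikes outside a hard refractory period tau_ref arrive as a Poisson process with rate r, each interspike interval is tau_ref plus an exponential variate, so CV = 1/(1 + r*tau_ref), slightly below the Poisson value of 1. A Monte Carlo check with illustrative parameters (not drawn from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
r = 10.0        # firing rate outside the refractory period (illustrative)
tau_ref = 0.01  # hard refractory period (illustrative)

# Interspike intervals: a hard refractory period plus an exponential wait.
isi = tau_ref + rng.exponential(1.0 / r, size=200_000)

cv = isi.std() / isi.mean()
cv_exact = 1.0 / (1.0 + r * tau_ref)  # std = 1/r, mean = tau_ref + 1/r
print(cv, cv_exact)  # both close to 0.909, just below the Poisson value of 1
```

Reporting such a number alongside the autocorrelation result would give the "approximate Poissonian" claim the quantitative support the abstract currently lacks.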
========================================================================
PROOFREADING
========================================================================

- **Page 1353**: "**equilibrium rate for m0(t) = m0, which are given by**" -> "**equilibrium rates for m0(t) = m0, which are given by**" (Subject-verb agreement)

- **Page 1355**: "**the thresholds Tk chosen so that**" -> "**the thresholds Tk are chosen so that**" (Missing verb)

- **Page 1368**: "**Determinstic**" -> "**Deterministic**" (Spelling error)

========================================================================
(c) 2026 The Catalogue of Errors Ltd
Licensed under CC BY 4.0
https://creativecommons.org/licenses/by/4.0/
isitcredible.com
========================================================================