Reversing AI Model Collapse by Simulating Bounded Rationality

https://arxiv.org/abs/2512.01354

Comments

JIANGZHONGJIE, Dec 5, 2025, 4:59 PM
Hi HN,

I'm the author of this paper/project. I am a humanities researcher turned quant architect, working solo.

The Problem: I noticed that LLMs suffer from "Model Collapse" because they optimize for statistical smoothness: the output distribution narrows, the text becomes too uniform, and that makes it both less capable and easily detectable.

The Solution: Instead of cleaning data, I went back to Herbert Simon's theory of "Bounded Rationality". I built a pipeline (PMCSF) that injects mathematically modeled "cognitive noise" (e.g., sentence-length oscillation, hesitation markers) back into the generation process.
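As a rough illustration (not the redacted pipeline itself; the metric and example texts below are my own), "sentence-length oscillation" can be quantified as burstiness, the coefficient of variation of sentence lengths:

```python
import re
import statistics

def sentence_lengths(text):
    """Word counts per sentence (split on ., !, ? followed by whitespace)."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    """Coefficient of variation of sentence length: statistically smooth
    text scores near 0, text with human-like oscillation scores higher."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

smooth = ("The model works well. The data is clean. "
          "The results are good. The method is sound.")
bursty = ("It works. Honestly, after months of fighting the pipeline "
          "and rewriting the sampler twice, the results finally came "
          "together. Good enough? Maybe.")
```

A generator can then resample or rewrite until burstiness lands in a target band, which is one simple way to "inject" the oscillation.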

The Results:

Anti-Detection: It achieves a Jensen-Shannon divergence of 0.0614 against human text (vs. 0.44 for standard AI output).
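For anyone who wants to sanity-check that number: Jensen-Shannon divergence is a symmetric, bounded (0 to 1 in base 2) divergence between two distributions. Here's a self-contained version over toy sentence-length histograms (the paper's actual feature space may differ):

```python
import math
from collections import Counter

def jsd(p, q):
    """Jensen-Shannon divergence (base 2, so the result is in [0, 1])
    between two discrete distributions given as {outcome: prob} dicts."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a):
        return sum(a[k] * math.log2(a[k] / m[k])
                   for k in keys if a.get(k, 0.0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

def length_dist(lengths):
    """Empirical distribution over sentence lengths."""
    counts = Counter(lengths)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

human = length_dist([3, 19, 7, 2, 24, 11])   # oscillating lengths
model = length_dist([12, 13, 12, 14, 13, 12])  # smooth, clustered lengths
```

Identical distributions give 0.0; fully disjoint ones give 1.0, so 0.0614 means the two histograms overlap almost completely.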

Financial Alpha: In a blind backtest of the 2015 Crash, the cognitive signal (MDI) predicted the liquidity freeze, reducing drawdown by 47%.
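On the drawdown figure: "reducing drawdown by 47%" refers to the maximum peak-to-trough decline of the equity curve. A sketch of that metric with made-up curves (these numbers are illustrative, not the backtest data):

```python
def max_drawdown(equity):
    """Maximum peak-to-trough decline of an equity curve, as a fraction."""
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

# Toy curves: the baseline rides the crash down; the signal-driven
# strategy cuts exposure early. Numbers are invented for illustration.
baseline = [100, 110, 95, 70, 80, 105]
hedged_curve = [100, 108, 100, 89, 94, 103]
reduction = 1 - max_drawdown(hedged_curve) / max_drawdown(baseline)
```

The reported 47% is this `reduction` quantity computed on the actual 2015 backtest.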

Safety Note: Because this architecture can effectively generate undetectable disinformation and manipulate sentiment, I have redacted the core prompts and safety constraints in the open release. I believe we need to build the "Radar" (detection) before distributing the "Missile".

Happy to answer questions about the Neuro-Symbolic architecture or the backtest data!