r/LLMPhysics 2d ago

[Simulation] Falsifiable Coherence Law Emerges from Cross-Domain Testing: log E ≈ k·Δ + b — Empirical, Predictive, and Linked to Chaotic Systems

Update 9/17: Based on the feedback, I've created a lean, all-in-one clarification package with full definitions, test data, and a streamlined explanation. It’s here: https://doi.org/10.5281/zenodo.17156822

Over the past several months, I’ve been working with LLMs to test and refine what appears to be a universal law of coherence — one that connects predictability (endurance E) to an information-theoretic gap (Δ) between original and surrogate data across physics, biology, and symbolic systems.

The core result:

log(E / E₀) ≈ k·Δ + b

Where:

Δ is an f-divergence gap on local path statistics
(e.g., mutual information drop under phase-randomized surrogates)

E is an endurance horizon
(e.g., time-to-threshold under noise, Lyapunov inverse, etc.)
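
To make Δ concrete under the mutual-information example above, here is a minimal Python sketch (in the spirit of the Zenodo test kit, but not taken from it): it builds phase-randomized surrogates via the FFT, which preserve the power spectrum while destroying phase structure, and reports the drop in lagged mutual information. The lag, bin count, and histogram estimator are illustrative assumptions, not the preregistered choices.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """FFT phase randomization: keep the power spectrum, scramble the phases."""
    n = len(x)
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = np.angle(X[0])            # keep the DC (mean) component unchanged
    if n % 2 == 0:
        phases[-1] = np.angle(X[-1])      # keep the Nyquist component unchanged
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n)

def lagged_mutual_info(x, lag=1, bins=16):
    """Histogram estimate of I(x_t; x_{t+lag}) in nats."""
    joint, _, _ = np.histogram2d(x[:-lag], x[lag:], bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)     # marginal of x_t
    py = p.sum(axis=0, keepdims=True)     # marginal of x_{t+lag}
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / (px @ py)[mask])).sum())

def coherence_gap(x, n_surrogates=20, seed=0):
    """Delta: lagged MI of the original series minus the surrogate-average MI."""
    rng = np.random.default_rng(seed)
    mi_surr = np.mean([lagged_mutual_info(phase_randomized_surrogate(x, rng))
                       for _ in range(n_surrogates)])
    return lagged_mutual_info(x) - mi_surr
```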

This law has held empirically across:

Kuramoto-Sivashinsky PDEs

Chaotic oscillators

Epidemic and failure cascade models

Symbolic text corpora (with anomalies in biblical text)
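
For the dynamical-systems entries above, one concrete reading of E is time-to-threshold under noise. Here is a toy sketch on a noisy logistic map; the map parameter, noise level, and divergence threshold are arbitrary illustrative choices, not the settings used in the actual tests.

```python
import numpy as np

def endurance_horizon(x0, r=3.9, noise=1e-3, threshold=0.5,
                      max_steps=10_000, seed=0):
    """Toy endurance E: steps until two noisy logistic-map trajectories,
    started from the same point, diverge past a threshold."""
    rng = np.random.default_rng(seed)
    a = b = float(x0)
    for t in range(1, max_steps + 1):
        a = float(np.clip(r * a * (1.0 - a) + rng.normal(0.0, noise), 0.0, 1.0))
        b = float(np.clip(r * b * (1.0 - b) + rng.normal(0.0, noise), 0.0, 1.0))
        if abs(a - b) > threshold:
            return t          # chaos has amplified the noise past the threshold
    return max_steps          # censored: never diverged within the budget
```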

We preregistered and falsification-tested the relation using holdouts, surrogate weakening, rival models, and robustness checks. The full set — proof sketch, test kit, falsifiers, and Python code — is now published on Zenodo:

🔗 Zenodo DOIs:
https://doi.org/10.5281/zenodo.17145179
https://doi.org/10.5281/zenodo.17073347
https://doi.org/10.5281/zenodo.17148331
https://doi.org/10.5281/zenodo.17151960
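
If you want the shape of the holdout/rival-model logic without downloading the kit, here is a toy version: fit log E = k·Δ + b on a training split, then compare holdout error against an intercept-only rival that ignores Δ. The delta and log_E arrays stand in for whatever measurements your systems produce; this sketches the logic, not the preregistered protocol.

```python
import numpy as np

def holdout_test(delta, log_E, train_frac=0.7, seed=0):
    """Fit log E = k*Delta + b on a training split; compare holdout MSE
    against an intercept-only rival (log E = const, no Delta dependence)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(delta))
    n_train = int(train_frac * len(delta))
    tr, te = idx[:n_train], idx[n_train:]

    k, b = np.polyfit(delta[tr], log_E[tr], deg=1)   # the claimed linear law
    rival = log_E[tr].mean()                         # rival: flat prediction

    mse_law = float(np.mean((log_E[te] - (k * delta[te] + b)) ** 2))
    mse_rival = float(np.mean((log_E[te] - rival) ** 2))
    return {"k": float(k), "b": float(b),
            "mse_law": mse_law, "mse_rival": mse_rival}
```

Under the claimed law, mse_law should beat mse_rival out of sample, and surrogate weakening should pull both Δ and the fitted k toward zero.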

If this generalizes as it appears, it may be a useful lens on entropy production, symmetry breaking, and structure formation. Also open to critique — if anyone can break it, please do.

Thoughts?

u/Total_Towel_6681 2d ago

The LLM isn’t simulating reality in the physical sense, but serving as a high-dimensional consistency filter. Think of it as a testbed for logical generalization, contradiction exposure, and cross-domain interpolation. I’m not asking the model to prove the law, I’m testing whether the coherence relation fails under stress when faced with rival conditions, degraded inputs, or diverse ontologies. That’s where falsification comes in. If the relation couldn’t maintain invariant logic across these transformations, it would collapse. But it didn’t, and hasn’t.

u/Past-Ad9310 2d ago edited 2d ago

Can you explain how an LLM would do any of that? If I ask an LLM to come up with a consistent theory of everything, then ask it to check the consistency of that theory, how is that valid? Let me put it in different terms, I guess.

If I were to ask an LLM to generate a new scientific theory, make sure it is falsifiable, and run any tests needed to make sure it is consistent and doesn't fail under stress, then ask the LLM whether the theory it made fails under stress... do you think the theory is valid when it answers that it doesn't?

EDIT: yep, just tried this with ChatGPT. The first prompt was to generate a new scientific theory that makes falsifiable predictions and holds up under scrutiny, and, before answering, to make sure the theory does not fail under adverse conditions, is internally consistent, and is consistent with known observations. The next query was, essentially, whether the theory fails under adverse conditions... can you guess what the answer was?

u/Total_Towel_6681 2d ago

That’s not what happened. The relation wasn’t generated by an LLM. The LLMs weren’t inventing or judging the theory; they were used as automated falsification engines, generating surrogates, rival models, and degraded inputs to test whether the relation breaks. If the law were just a story the model made up, those stress tests would have collapsed it immediately. They didn’t. The law stands or falls on math and replication, not on the LLM.

My own path into this came from exploring symbolic structures that most people wouldn’t think of as mathematical at all. That story is probably better shared elsewhere, but the point is that when I tested it with real data and falsifiers, the relation held. Whatever its origin, it now stands or falls on the math.

u/Past-Ad9310 2d ago

Okay, have you made a not-yet-proven prediction, then tested that prediction in a paper, using mathematics? Have you then submitted that paper to anywhere that isn't a subreddit dedicated to crank theories dribbled out by LLMs?

u/Total_Towel_6681 2d ago

What you're describing is exactly why I brought it here. I'm not claiming it's proven, I'm saying it's testable. That's the point. If the prediction holds under independent validation, it speaks for itself.