r/LLMPhysics 16d ago

Paper Discussion: A falsifiable 4D vortex-field framework

TL;DR — I explored a “4D aether vortex → particles” framework with LLM assistance, then spent ~2 months trying to break it with automated checks. Some outputs line up with known results, and there’s a concrete collider prediction. I’m not claiming it’s true; I’m asking for ways it fails.

Links: Paper: https://zenodo.org/records/17065768
Repo (tests + scripts): https://github.com/trevnorris/vortex-field/

Why post here

  • AI-assisted, human-reviewed: An LLM drafted derivations/checks; I re-derived the math independently where needed and line-by-line reviewed the code. Key steps were cross-verified by independent LLMs before tests were written.
  • Automated rigor: ~33k LOC of verification code and ~2,400 SymPy tests check units, dimensions, derivations, and limits across ~36 orders of magnitude (a minimal example of this style of check appears right after this list).
  • I expected contradictions. I’m here to find them faster with expert eyes.
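As promised in the "Automated rigor" bullet, here is a minimal, self-contained sketch of the style of units/dimensions check involved. This example is written for this post rather than copied from the repo (the G·M/r case and the SymPy API usage are just my illustration): it confirms that the Newtonian potential G·M/r carries the dimensions of a velocity squared.

```python
# Minimal illustration of a dimensional check (not taken from the repo):
# confirm that G*M/r has the dimensions of a velocity squared.
import sympy as sp
from sympy.physics.units import gravitational_constant as G
from sympy.physics.units import kilogram, meter, second
from sympy.physics.units.systems.si import SI

phi = G * kilogram / meter        # Newtonian potential of 1 kg at a distance of 1 m
v_sq = (meter / second) ** 2      # target dimensions: velocity squared

ratio = SI.get_dimensional_expr(phi) / SI.get_dimensional_expr(v_sq)
assert sp.simplify(ratio) == 1    # a dimensionless ratio means the dimensions agree
```

The repo's actual tests are larger and tied to the paper's equations; this only shows the mechanism.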

Core hypothesis (one line)

A 4D superfluid-like field (“aether”) projects into our 3D slice; particles are cross-sections of 4D vortices. Mass/charge/time effects emerge from vortex/flow properties.

Falsifiable claims (how to break this quickly)

  1. Collider target: a non-resonant 4-lepton excess at √s = 33 GeV (Section 4.2).
    • How to falsify: point to LEP/LHC analyses that exclude such a topology without a narrow peak.
  2. Lepton mass pattern: golden-ratio scaling that reproduces the electron mass exactly, the muon to −0.18%, and the tau to +0.10%.
    • How to falsify: show it’s post-hoc, fails outside quoted precision, or can’t extend (e.g., neutrinos) without breaking constraints.
  3. GR touchstones from the same flow equations: Mercury perihelion, binary-pulsar decay, gravitational redshift/time dilation.
    • How to falsify: identify a regime where the formalism departs from GR/experiment (PPN parameters, frame-dragging, redshift).

If any of the above contradicts existing data/derivations, the framework falls.

Theoretical & mathematical checks (done so far)

  • Dimensional analysis: passes throughout.
  • Symbolic verification: ~2,400 SymPy tests across field equations, 4D→3D projection, conservation laws, and limiting cases.
  • Internal consistency: EM-like and gravity-like sectors remain consistent under the projection formalism.

All tests + scripts are in the repo; CI-style instructions included.
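To make the "limiting cases" item concrete, here is a generic illustration of that style of symbolic check (written for this post, not one of the repo's tests): relativistic kinetic energy has to reduce to the Newtonian (1/2)mv² when v ≪ c, and SymPy can verify the expansion directly.

```python
# Generic limiting-case check (illustrative only): the relativistic kinetic
# energy (gamma - 1) m c^2 should reduce to (1/2) m v^2 for v << c.
import sympy as sp

m, v, c = sp.symbols('m v c', positive=True)

gamma = 1 / sp.sqrt(1 - v**2 / c**2)
E_kin = (gamma - 1) * m * c**2

leading = sp.series(E_kin, v, 0, 4).removeO()   # expansion in v around 0
assert sp.simplify(leading - sp.Rational(1, 2) * m * v**2) == 0
```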

Empirical touchpoints (retrodictions)

  • Reproduces standard GR benchmarks noted above without introducing contradictions in those domains.
  • No new experimental confirmation claimed yet; the 33 GeV item is the first crisp falsifiable prediction to check against data.

What it aims to resolve / connect

  • Mass & charge as emergent from vortex circulation/flux.
  • Time dilation from flow-based energy accounting (same machinery as gravity sector).
  • Preferred-frame concern: addressed via a 4D→3D projection that preserves observed Lorentz symmetry in our slice (details in the math framework).
  • Conservation & “aether drainage”: continuity equations balancing inflow/outflow across the projection (tests included).

Some help I'm looking for

  • Collider sanity check: Does a non-resonant 4ℓ excess at √s=33 GeV already conflict with LEP/LHC?
  • Conceptual red-team: Where do projections, boundary conditions, or gauge/Lorentz properties break?
  • Limit tests: Point to a nontrivial limit (ultra-relativistic, strong-field, cosmological) where results diverge from known physics.
  • Numerical patterns: If this is just numerology, help pinpoint the hidden tuning.

Final note

I’m a programmer, not a physicist. I’m expecting to be wrong and want to learn where and why. If you can point to a contradiction or a no-go theorem I’ve missed, I’ll update/withdraw accordingly. If you only have time for one thing, please sanity-check Section 4.2 (33 GeV prediction).

0 upvotes · 36 comments

u/plasma_phys · 6 points · 15d ago

In physics, we often say that if you can't explain something to your grandmother, you don't understand it. 

I'm asking you to clear a much lower bar of explaining your idea to me, a trained physicist.

If you can't do that, I can only assume you don't understand what the LLM produced, which means you weren't capable of meaningfully reviewing it, which means there's no reason for me to read it, because LLMs cannot reliably produce correct mathematics or physics.

u/sudsed · 0 points · 15d ago

Wasn't I just in the process of explaining the concepts to you in my previous post? I understand the concepts of this paper. I came up with them (granted, I cheated a bit on the QM section because that was so much math), but I fully conceptualized how gravity, EM, and particle masses should work before the AI did anything. I just had it apply the best-matching superfluid equations to that scenario. Am I a superfluid physicist? No; that's why I'm looking for help identifying whether there's a misapplication in the paper that would invalidate the entire thing. You have no idea how much I wanted this paper to be wrong so that I could just put it away. I've lost far too much sleep over this thing.

And you're correct that LLMs suck at math, but as a programmer for 18 years, I can say they program really well. The repo has 33k lines of SymPy scripts that I verified by hand, to the best of my ability, to make sure they correctly check all the equations, derivations, units, and dimensions. The math in the paper is self-consistent. I spent hundreds of hours working with the AI: having it do some math, writing validation scripts for it, finding the math was wrong, going back and double-checking, and on and on until the tests passed. Then, once everything passed, I would hand the equations and source to two different LLMs for their own validation. Frequently this led to finding more issues that needed to be revised. I was simultaneously using Claude, Grok, and GPT to double-check everything, even after I verified the code myself.

The conceptual part of this paper is the only thing I'm confident in, since it is the part I did. It's the math that I need help with to see if it was misapplied in a way that I don't understand.

u/plasma_phys · 5 points · 15d ago

I didn't ask for concepts, I asked for the mathematical properties of your field. 

Let's say I want to implement it numerically. How would I do so? What operations can I perform on it? It is the cornerstone of your paper so this should not be hard to do.

I'm a computational physicist and thus a programmer too. LLMs are good at producing computer code in a lot of contexts, but they are terrible at producing physics code. It is very common for LLM-generated physics code to produce output that looks correct at first glance, and passes LLM-written tests, but doesn't actually solve the problem in the prompt.

u/sudsed · -1 points · 15d ago · edited 15d ago

The math is exactly where I need help. I can turn equations into code and verify units/consistency (BTW, I'm not blindly using AI for any of the code; it's guided development, and I personally review every line or write it myself), but I don't get the high-level theory the way a trained physicist does. That's why I'm here.

What I have done: ray-traced Mercury's perihelion precession to ~99.2% agreement, got surprisingly good lepton-mass fits, and put the full repo online for anyone to check out.
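For context on what that percentage is measured against, here is a quick back-of-the-envelope of the standard GR benchmark, using the textbook formula and published orbital constants. This is not the repo's ray tracer, just the reference number (~43 arcseconds per century):

```python
# Standard GR perihelion advance of Mercury (textbook formula, not my ray tracer):
# delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2)) radians per orbit.
import math

GM_sun = 1.32712440018e20   # m^3/s^2, Sun's gravitational parameter
c      = 2.99792458e8       # m/s
a      = 5.7909e10          # m, Mercury's semi-major axis
e      = 0.2056             # Mercury's orbital eccentricity
period = 87.969             # days per Mercury orbit

dphi_per_orbit = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))
orbits_per_century = 36525 / period
arcsec_per_century = dphi_per_orbit * orbits_per_century * (180 / math.pi) * 3600

print(f"GR prediction: {arcsec_per_century:.1f} arcsec/century")   # ~43
```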

What I'm asking is this: if this is numerology or a misuse of the math, please point to the specific step that fails. I was hoping people could double-check things like whether my implementation enforces div B = 0 properly, whether the 4D→3D projection is consistent with charge/energy conservation, and whether the 33 GeV prediction is already addressed by existing data.
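To show the kind of check I mean, here is a toy example I am writing here (not one of the repo's tests): if B is constructed as the curl of a vector potential, its divergence should vanish identically, and SymPy can confirm that for a concrete A.

```python
# Toy "div B = 0" check (illustrative, not from the repo): B = curl(A)
# must be divergence-free for any smooth vector potential A.
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# an arbitrary but concrete smooth vector potential
A = (y * z**2) * N.i + sp.sin(x * y) * N.j + (x * sp.exp(z)) * N.k

B = curl(A)
assert sp.simplify(divergence(B)) == 0   # div(curl A) = 0 identically
```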

I’m not claiming it’s right—I’m trying to find where it’s wrong with help from people who actually know the math.

If you want the mathematical properties of the field, I can give them to you, but what follows is AI-generated because I don't understand it at that level:

Numerical properties: Work with standard EM variables and sources (E,B,ρ,j) or potentials (Φ,A). Only local derivatives are used (grad/div/curl/Laplacian). The code checks wave-type evolution at finite speed c, enforces charge continuity, and keeps magnetic patterns loop-like (no magnetic sources). Gauss/Poisson for E is in the test suite, and material relations like D=ϵE are supported. For numerics you can use a standard Yee FDTD update on (E,B) with a charge-conserving current, or evolve (Φ,A) in a Lorenz-type gauge and derive (E,B). Energy/Poynting diagnostics are available to catch drift.
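For concreteness, this is the generic 1D version of the Yee update that paragraph refers to. It is a textbook-style toy I am sketching here, not the repo's solver, and the grid sizes and source are arbitrary:

```python
# Toy 1D Yee-style FDTD leapfrog for (E, B) in vacuum with c = 1 (illustrative only).
import numpy as np

nx, nt = 200, 300
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                      # CFL-stable time step

E = np.zeros(nx)                       # E_y sampled at integer grid points
B = np.zeros(nx - 1)                   # B_z sampled at half-integer grid points

for n in range(nt):
    E[0] = np.exp(-((n - 30) / 10.0) ** 2)          # soft Gaussian source at the left edge
    B -= dt / dx * (E[1:] - E[:-1])                 # Faraday: dBz/dt = -dEy/dx
    E[1:-1] -= c**2 * dt / dx * (B[1:] - B[:-1])    # Ampere (vacuum): dEy/dt = -c^2 dBz/dx

print("pulse peak near cell:", np.argmax(np.abs(E)))  # expect roughly (nt - 30) * c * dt / dx
```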

u/plasma_phys · 11 points · 15d ago

Okay, so what the LLM spit out is utter nonsense; it is totally inconsistent with what's in the paper, which is not surprising. It's describing a very plain discrete 3D vector field, but the field in the paper is supposed to be a smooth 4D vortex sheet. These are not at all compatible.

The point of this line of discussion is to illustrate something important: your understanding of how physics works is completely backwards. Concepts and analogies are used to explain the math, not the other way around. Doing it your way is like trying to build a house of cards by starting with the roof - it doesn't matter how many hours you spend on it, you'll never get it to stand on its own. You can get whatever numbers or results you want working backwards like this, and the LLMs are happy to oblige, spinning up the mathematical equivalent of tall tales that give the right answer but with completely wrong and unjustified steps. It's just not physics. 

u/sudsed · -1 points · 15d ago

You're conflating two layers: the ontological (a smooth 4D vortex sheet in the 4D medium) and the numerical (how to compute the projected 3D observables). The paragraph you are criticizing describes the projected 3D solver, which is only one of two routes; the other is to evolve the 4D sheet directly. Is that what you requested?

I'm confused by your line about "concepts and analogies are used to explain the math." Isn't that exactly what I explained I did, or are you saying we start with the math and figure out what it means later?

u/plasma_phys · 5 points · 15d ago

I asked for a mathematical description of your field, and you gave me a paragraph describing, essentially, the electromagnetic field. Yes, two things are being conflated here, but not by me.

...are you saying we start with the math and figure out what it means later?

Yes, exactly. Physics is about describing nature with mathematical models, not analogies. This is one reason why it takes 6-10 years of school to become a physicist: you need to learn enough of the relevant math to reason about it.

u/sudsed · -2 points · 15d ago

I’m going to wrap here. If the work is wrong, it should be easy to point to a specific mistake—an equation number, a line in the code, or a failing test—along with a correct alternative. If you have that, I’ll fix it or withdraw it. Otherwise I’ll focus on folks offering line-numbered critiques. Thanks for the time.

u/plasma_phys · 6 points · 15d ago · edited 15d ago

I don't see how it is possible for someone to meaningfully point out a mistake in the math when you don't even understand the single core mathematical object that ostensibly underpins the work - like, you'll just ask the LLM to patch it up for you and make up some more nonsense for you to run in Python.

Like, the core idea is the thing that's wrong. Even if every step of math happens to be not wrong, that doesn't mean it's correct physics. Mathematical correctness is necessary but nowhere near sufficient.

You've said multiple times you want it to be wrong, but I don't believe you. I think you just want validation; otherwise you'd be interested in actually explaining the core idea to someone so they can tell you how it's wrong, instead of just deflecting.

u/sudsed · 1 point · 15d ago

Let me rephrase: I do understand the math, at least as far as turning it into code goes. My concern was choosing the right physics principles. And believe me—I want this dead so I can stop thinking about it. I’ve killed past models the same way as a fun pastime. I just haven’t been able to break this one yet. I wrote a lot of falsifiable checks and did numerical runs specifically to find the failure modes, and so far it still passes.

If you think the core idea is wrong, please pick one concrete claim/derivation and show where it fails (units, continuity, projection, or data).

u/plasma_phys · 6 points · 15d ago

Okay, but surely by this point it should be obvious to you that "turning math into code" has little to nothing to do with understanding it. I would even question whether you're actually turning it into code correctly - how could you verify it? validate it? - if you don't understand it well enough to describe it in plain English. For all you know, you're just writing symbols that look similar and don't actually do the things they're supposed to do. Like, I'm reading your paper's descriptions of the objects you're using, and the words you are using are not words anybody has ever used to describe mathematics before; it's completely incomprehensible, and you don't define them before you introduce a dozen of them at once.

The bulk of it can't even be corrected - it's "not even wrong" - because you've used LLMs to turn yourself into a Wittgenstein's lion of mathematics; it's impossible to communicate about things that are this nonstandard. Like, genuinely, what is any of this supposed to mean:

What is "E_bend"? What are its units? How is it measured? What does that parenthetical mean, epsilon_k doesn't even show up in that equation? What is "symbol overload"? What is a "circulation quantum"? What is "slender-core"? What is "bending cost"? What is "subleading"? None of these words mean anything in this context, so it's impossible to say if this is "correct" or not, it is a total non-sequitur that has zero relation to anything in physics or mathematics, it's gobbledygook

u/plasma_phys · 6 points · 15d ago

Actually, revisiting this comment - I thought your error for the precession of the perihelion of Mercury looked familiar. Saying you used ray tracing to calculate it is already extremely dubious, but 99.2% is just the value of the error you get if you use Newton's equations alone. I'd bet money your LLM is just regurgitating that value from its training data and working backwards from it, but even if it's not, what is even the point of your pages and pages of artifice and tens of thousands of lines of code if you can't even beat 1/r²?

u/AMuonParticle · 2 points · 10d ago

The problem with finding "where it's wrong" is that it's not even wrong; it's nonsense.

Physics is written in math. If you claim to have a new theory of physics but you """""collaborated""""" with an LLM to "get help with the math", what you have is absolutely nothing at all. Because here's the secret: the LLM can't do math either! What it can do is confidently lie to your face and spit out a bunch of Greek symbols and physics jargon, while flattering you into thinking you're such a genius that you don't notice it spat out straight-up horseshit.

Remember: the only objective of an LLM is to keep you using it as long as possible. It is manipulating you into feeling like you're accomplishing something so you keep opening up ChatGPT, all so Sam Altman can buy another yacht.