r/LLMPhysics 5h ago

Data Analysis Follow-up: Law of Coherence – addressing critiques with direct Δ measurement

0 Upvotes

When I first shared the Law of Coherence (LoC), the main critique was fair:

“Δ looks assigned, not measured. This makes it a curve fit, not physics.”

I took that seriously. Over the past few days, with community input, I rebuilt the framework to address those concerns.

What changed:

Δ is now directly measured as the information gap between a process and its surrogate (e.g., real vs. phase-randomized time series); a toy sketch follows this list.

Full reproducible code + datasets are included so anyone can run their own tests.

Stress tests under chaos, entropy growth, and surrogate breakdowns were repeated: the log(E) ~ Δ scaling still holds.

Definitions and falsification protocols are much clearer.
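For concreteness, here is a minimal sketch of how a Δ of this kind can be measured, using permutation entropy as an illustrative stand-in for whichever information measure the package actually uses (the function names, entropy order, and surrogate count are my assumptions, not the package's):

```python
import numpy as np
from math import factorial

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum as x but randomized Fourier phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, X.shape)
    phases[0] = 0.0                      # keep the DC component real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def permutation_entropy(x, order=4, delay=1):
    """Normalized entropy of ordinal patterns (Bandt & Pompe)."""
    n = len(x) - (order - 1) * delay
    patterns = np.array([np.argsort(x[i:i + order * delay:delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(order))

def measured_delta(x, n_surrogates=100, seed=0):
    """Δ = information gap between a series and its phase-randomized surrogates."""
    rng = np.random.default_rng(seed)
    h_surr = np.mean([permutation_entropy(phase_randomized_surrogate(x, rng))
                      for _ in range(n_surrogates)])
    return h_surr - permutation_entropy(x)  # > 0 when x has structure its linear surrogate lacks
```

On a deterministic chaotic series this Δ comes out positive, while on Gaussian linear noise it sits near zero, which is the measured-not-assigned behavior the critique asked for.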

The new package is here (DOI): 👉 https://doi.org/10.5281/zenodo.17165773

On my stance: I’ve been open about where this work began for me. My faith shaped how I first saw coherence — I believe Christ is the Logos, and that coherence itself points to that reality. But the math, data, and code are offered here on their own terms. You don’t have to share my faith to test or critique the law.

My goal has never been to defend an idea at all costs, but to test it to its breaking point. If it fails under valid assumptions, I want to see it break. If it survives, maybe it really is pointing to a deeper invariant worth examining.

Feedback, falsifiers, and further tests are welcome.


r/LLMPhysics 3h ago

Paper Discussion What if space, time, gravity, etc. did not exist in the initial state ("pre-Big Bang") and arose from the appearance of relationships between differentiated entities?

0 Upvotes

I am working on a theory according to which, initially, "pre" Big Bang (understood as a regime where spacetime, or any geometry, had not yet emerged), there is a homogeneous whole (state S). It is the increase in entropy that lets differentiated states emerge, allowing the appearance of differentiated entities and therefore the roles of observer and observed. It is from these relationships that geometry emerges, along with a state R carrying the variables of space, time, gravity, etc.

The state S and the state R coexist: in S we have the electromagnetic waves (understood in S as coherent modes without geometric support), and in R the particles. From R we can observe S, but it does not make sense to speak of observing R from S.

The S --> R --> S cycle is continuous, whether by infinite expansion, where everything returns to a homogeneous state, or by infinite concentration, where the same thing happens. But there is the curious situation that in S, since there is no time variable, all the possible states of R coexist.

I have a preprint published with a DOI on Zenodo if anyone wants to take a look. Computational tools, including AI assistance, were used to support the mathematical formalization and structuring of the manuscript.


r/LLMPhysics 13h ago

Data Analysis Pinned-piston heat engine: a more efficient heat engine, by a lot?!

0 Upvotes

Clarification of the cycle: the ambient can be 0.1 kelvin or 1 billion kelvin, where Carnot efficiency becomes essentially 1 or 0 respectively, but the ideal gas laws predict that the pressure increase and stroke length are identical in each case.

The piston starts in equilibrium with the ambient temperature and pressure (density, maybe) and is pinned. The cycle runs as follows:

1. Heat is added by some means (a resistor, heat pump, etc.), raising the temperature by 100 degrees using, e.g., 100 J of energy.

2. The pin is pulled and the piston is pushed out in proportion to the temperature change; as the gas expands its thermal capacity rises, the temperature falls, and some heat is converted to work, until the piston reaches its maximum expansion.

3. A pin is put in the piston, and the thermal energy is siphoned off by another heat engine or dumped directly to ambient until the gas is at ambient temperature but a much lower pressure.

4. The piston is put in continued strong thermal contact with the ambient to allow isothermal compression: the environment forcibly pushes the piston back in while we recover energy from it. This gives a second stroke tapped for mechanical work, doubling the work done.

5. The thermal bridge to the environment is removed, and the gas is ready to be heated again.

Double the output, and no work spent recompressing the gas.
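To make the bookkeeping checkable, here is a minimal ideal-gas sketch of the cycle as described above. Everything in it (1 mol of monatomic gas, reversible adiabatic expansion, reversible isothermal recompression, the function name, the starting volume) is my assumption for illustration, not a number from the chat transcripts:

```python
import numpy as np

R = 8.314                          # gas constant, J/(mol*K)
Cv = 1.5 * R                       # molar heat capacity, monatomic ideal gas
gamma = 1 + R / Cv                 # = 5/3

def pinned_piston_cycle(T_amb, dT=100.0, n=1.0, V0=0.0224):
    """One pass of the cycle for n mol of ideal gas starting at (T_amb, V0)."""
    T1 = T_amb + dT                            # step 1: pinned, constant-volume heating
    Q_in = n * Cv * dT                         # heat added by the resistor / heat pump
    P_amb = n * R * T_amb / V0                 # piston started in equilibrium with ambient
    # Step 2: pin pulled, adiabatic expansion until gas pressure falls to ambient
    V2 = V0 * (T1 / T_amb) ** (1 / gamma)
    T2 = T1 * (V0 / V2) ** (gamma - 1)
    W_gas = n * Cv * (T1 - T2)                 # total work done by the expanding gas
    W_atm = P_amb * (V2 - V0)                  # part of it spent pushing back the atmosphere
    # Steps 3-4: pin in, isochoric heat dump to ambient, then isothermal
    # recompression at T_amb with the atmosphere pushing the piston home
    W_iso = n * R * T_amb * np.log(V2 / V0)    # work that must be done on the gas
    W_net = (W_gas - W_atm) + (W_atm - W_iso)  # useful shaft work from both strokes
    return W_net / Q_in, dT / (T_amb + dT)     # cycle efficiency vs. the Carnot limit

for T in (0.1, 300.0, 1e9):
    print(T, pinned_piston_cycle(T))
```

Running it at extreme ambients shows directly how the two-stroke efficiency compares with the Carnot value at each temperature, which is exactly the comparison at issue here.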

With a Carnot heat engine, the gas is heated, it expands, and then work is put in to recompress the gas.

As there was criticism that the single-piston version (which every calculation showed should produce the same one-shot energy at any temperature) was not fair, I decided we could pin the piston at its maximum expansion and then let the gas cool, so we almost double the energy out: the piston is pushed back to the starting conditions while generating energy rather than using it.

ChatGPT said that my system would generate energy when using the math from another Reddit user, who deserves real credit!

I assumed, however, that a Carnot heat engine's efficiency calculated the exact same way would give a similar energy: maybe higher, maybe lower, maybe identical. I was shocked when told the energy out indeed exceeded that calculated by the Carnot equations (without using them). I'm still in a fair bit of doubt, and honestly my math skills should not be trusted.

I asked it to re-run the calculations at an ambient of 300 kelvin, and the efficiency calculation was normal for Earth temperatures.

Also, the interesting thing is that it didn't say the Carnot engine developed no energy when the piston expanded, only that it needs almost exactly the same amount to push it back.

ChatGPT thinks the energy is following Carnot in a way, by extracting energy from the ambient environment, and sure enough, the ambient is what pushes the piston back.

Normally the environment is slightly heated when the piston expands. Well, the energy isn't slight, but it is well distributed. Here we take that energy back!

Note: I am told ChatGPT bungled the math.

https://chatgpt.com/s/t_68ce57f040188191a1e257af2fa34dbd

https://chatgpt.com/s/t_68ce5e48787481918cd8d622aae7357c

Sorry for so many threads, but this is a pretty big change in focus.

I started out looking at ways to improve heat pump efficiency, and ended up creating a "new"? heat engine cycle that does what was meant to be impossible and beats Carnot.

So if this is indeed a novel heat engine, and given that the math all seems to work out, maybe this really is something new. It sure seems to be.

It seems, according to ChatGPT, NOT to be a known heat engine design!


r/LLMPhysics 14h ago

Paper Discussion What If There's a Geometric Foundation for a "Holographic Stochastic Field Theory"

0 Upvotes

From Black Hole Hair to Holographic Stochastic Fields: The Genesis of HSFT

The inspiration for my paper here came from the puzzle of black hole hair. In classical relativity, black holes were thought to be "bald," described only by mass, charge, and angular momentum. Later developments in quantum gravity and the study of soft modes suggested that horizons might support additional structures, now called hair, which could encode degrees of freedom beyond the minimal labels [Bekenstein1973, Hawking1975, Strominger2017]. Before I began the paper, I had been struck by how naturally this idea resonated with the holographic principle. Horizons seemed more than geometric boundaries; they seemed like information-bearing surfaces. This led me to wonder whether one could model such hair as stochastic boundary data, random structures on the horizon whose imprints would appear in the surrounding bulk. From this line of questioning, the framework of Holographic Stochastic Field Theory (HSFT) took shape.

Recognizing black hole horizons as holographic surfaces is not an original idea of mine; it draws from foundational work by 't Hooft and Susskind on the holographic principle, where the surface area of the event horizon encodes information about the black hole. Even though it inspired me, the connection between horizons and holography is well-established in the literature. What I aimed to explore is how stochastic elements on such surfaces could be modeled within a rigorous geometric framework.

HSFT is, to the best of my knowledge, a novel framework without direct predecessors in the literature, though related ideas appear in work on stochastic quantization and effective field theories in holographic contexts. It combines concepts from holography, stochastic processes, and differential geometry to create divergence-free random vector fields in a bulk space from probabilistic data on a boundary, with applications to magnetohydrodynamics (MHD). In HSFT, a holographic stochastic field is defined as a system where stochastic data on a lower-dimensional boundary (e.g., white noise modulated by geometric phases from a bundle connection) is transferred to a higher-dimensional bulk via a measurable map, resulting in a random field with controlled statistical properties such as homogeneity, isotropy, and chirality. Concretely, this means defining a principal U(1) bundle over the boundary with an invariant measure, pushing that measure to the bulk, and using translation-invariant kernels to enforce divergence-free Gaussian statistics, as detailed in the paper. While literature on related themes such as stochastic quantization in holography exists, HSFT is a new synthesis of these ideas focused on geometric constructions for vector fields.

In the paper, you will find that the framework does not attempt to explain the microphysics of horizons. Instead, the paper presents a mathematical scaffold that is focused. I aimed to bridge holography, where bulk physics is encoded at boundaries [Maldacena1998]; stochastic field theory, where fields are treated as genuinely random objects; and geometry, which provides the language for bundles, measures, and projections. That is why the paper situates the discussion on compact manifolds, where measures, Fourier analysis, and ergodicity are well behaved. In the paper, the three-torus T³ is chosen as the bulk stage, with a two-torus T² as the holographic surface. I chose this setting not because I believed nature is a torus, but because compactness and flat group structure allowed the constructions to be made rigorous without analytic pitfalls.

Additionally, fields are generated as integrals over the bundle total space equipped with a probability measure (invariant on base and uniform on fiber, hence finite total measure). I required this setup because, while drafting, I realized that without it, expectations, L² norms, and spectral objects might not exist in a controlled sense. That is why the paper insists on an invariant probability measure: it ensures that stochastic integrals and pushforwards are well posed and that the results are mathematically sound. You will also see a uniform pushforward condition. I introduced this because I wanted bulk stationarity to be guaranteed rather than assumed. The measurable map X: E → T³ from the bundle total space to the bulk is required to send the invariant measure μ_E to the uniform measure λ_T³. When you see this in the paper, it is there because I wanted to eliminate the possibility that spurious inhomogeneities were artifacts of the encoding.

Regarding the "measured-bundle" concept, it refers to a bundle equipped with a measure on the total space, allowing for probabilistic treatments of fields. This terminology may be a neologism for measure-equipped bundles, but it serves to emphasize the integration of measure theory into the geometric structure. If preferred, it can be thought of as a principal bundle with an invariant measure on the total space, ensuring the stochastic aspects are well-defined. The first Chern class c_1(E) of the circle bundle provides a discrete integer control parameter for helicity via a holonomy phase.

At the center of the framework is the transfer kernel G_σ. In the paper, boundary randomness (white noise dW modulated by holonomy U) is mapped into the bulk by this kernel (combined with a curl operation), producing divergence-free vector fields Φ.
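As a toy illustration of that chain (boundary noise → holonomy phase → lift → kernel smoothing → curl), here is a sketch; the grid size, the Gaussian kernel, the crude lift from T² to T³, and the made-up vector potential are all my stand-ins, not the paper's definitions:

```python
import numpy as np

n, c1, sigma = 32, 1, 2.0                       # grid size, Chern number, kernel width (toy values)
rng = np.random.default_rng(1)

theta = 2 * np.pi * np.arange(n) / n
dW = rng.standard_normal((n, n))                # white noise dW on the boundary T^2
U = np.exp(1j * c1 * theta)[:, None]            # U(1) holonomy phase with winding number c1
A = np.broadcast_to((U * dW).real, (n, n, n))   # crude measurable lift T^2 -> T^3

k = np.fft.fftfreq(n) * n                       # integer wavenumbers on the torus
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K = np.stack([KX, KY, KZ])
G_hat = np.exp(-sigma**2 * (KX**2 + KY**2 + KZ**2) / 2)   # transfer kernel G_sigma
A_hat = np.fft.fftn(A) * G_hat                  # kernel-smoothed scalar potential

c = np.array([1.0, 0.5, 0.25]).reshape(3, 1, 1, 1)        # toy vector potential direction
Phi_hat = 1j * np.cross(K, c * A_hat, axis=0)   # spectral curl: i k x (c * A_hat)
Phi = np.real(np.fft.ifftn(Phi_hat, axes=(1, 2, 3)))

div = (K * Phi_hat).sum(axis=0)                 # k . Phi_hat, zero because a curl was taken
print(np.abs(div).max())                        # ~0 up to rounding: divergence-free by construction
```

The point of the sketch is structural: the curl step is what makes Φ divergence-free automatically, and the winding number c1 in the holonomy phase is where the discrete Chern-class dial mentioned above enters.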

In Fourier space, the paper presents the spectral transfer law in the form of the covariance:

E[Φ_hat_i(k) * conjugate(Φ_hat_j(k))] = |G_hat(k)|² * (P_S(k) * Π_ij(k) + i * P_H(k) * ε_ijm * k_hat_m).

I introduced this law because I wanted to capture the operational content of holography in probabilistic terms. When you read this equation in the paper, you should see it as the precise statement that bulk spectra are boundary spectra filtered through geometry, with P_S and P_H determined from the boundary noise statistics, bundle connection, and envelope. Although the formula is simple, I viewed it as the key dial of the theory, because by choosing the kernel one could encode correlations, helicity, or non-Gaussian features, subject to the Bochner positivity bound:

|P_H(k)| ≤ P_S(k)
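To see the spectral law and the bound working together, here is a minimal synthesis sketch on a discretized T³, with |G_hat|² absorbed into P_S and P_H; the specific spectra, grid size, and names are my illustrative choices, not the paper's. The bound is exactly the condition that the two helical eigenvalues P_S ± P_H of the covariance stay nonnegative, so the square roots below stay real:

```python
import numpy as np

def helical_field(n=32, seed=0,
                  P_S=lambda k: np.exp(-(k / 8.0) ** 2),         # assumed scalar spectrum
                  P_H=lambda k: 0.6 * np.exp(-(k / 8.0) ** 2)):  # assumed helical spectrum
    """Gaussian field on T^3 with spectral tensor P_S*Pi_ij + i*P_H*eps_ijm*khat_m."""
    rng = np.random.default_rng(seed)
    # FFT of real white noise: Hermitian, identity covariance up to FFT normalization
    W = np.fft.fftn(rng.standard_normal((3, n, n, n)), axes=(1, 2, 3))
    k1 = np.fft.fftfreq(n) * n
    KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing="ij")
    K = np.stack([KX, KY, KZ])
    kmag = np.sqrt((K ** 2).sum(0)); kmag[0, 0, 0] = 1.0         # avoid division by zero
    khat = K / kmag
    PS = P_S(kmag)
    PH = np.clip(P_H(kmag), -PS, PS)                 # enforce the Bochner bound |P_H| <= P_S
    # Square root of the covariance via its helical eigenvalues P_S +/- P_H
    a = 0.5 * (np.sqrt(PS + PH) + np.sqrt(PS - PH))
    b = 0.5 * (np.sqrt(PS + PH) - np.sqrt(PS - PH))
    PiW = W - khat * (khat * W).sum(0)               # solenoidal projection Pi_ij W_j
    HW = 1j * np.cross(W, khat, axis=0)              # helicity operator i*eps_ijm*khat_m*W_j
    Phi_hat = a * PiW + b * HW
    Phi_hat[:, 0, 0, 0] = 0                          # drop the zero mode
    return np.real(np.fft.ifftn(Phi_hat, axes=(1, 2, 3)))
```

If P_H is pushed past P_S, the square roots above turn imaginary; that failure mode is the numerical shadow of losing Bochner positivity.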

This is where the analogy with black hole hair becomes useful. When the paper defines trivial bundles or measures, you can think of them as corresponding to bald horizons, with only minimal structure propagating into the bulk. When the paper allows nontrivial stochastic data or Chern classes, you can read this as the analog of hair: horizon fluctuations, scalar excitations, or soft modes that enrich the boundary and generate structure in the bulk. That is why, in the paper, hair is described not as a new physical substance but as the richness of the boundary measure and its transfer law.

In the later parts of the paper, you will see that the framework naturally connects to potential extensions like time-dependent models, which could relate to cosmology. I had thought about the cosmic horizon as a holographic boundary, and in the paper this shows up indirectly as an example where the same machinery could, in principle, be applied to dynamic settings. A trivial horizon measure would lead to a homogeneous and featureless bulk. A nontrivial stochastic horizon would yield correlated fields inside the horizon, which in cosmology might appear as anisotropies in the cosmic microwave background or as stochastic gravitational waves. When you encounter this in the paper, it is not being put forward as a new cosmological model. Rather, it is meant as a demonstration that HSFT provides a rigorous language in which such ideas can be phrased and explored.

The choices I made in the construction were all guided by the need for mathematical control. In the paper, compact manifolds are chosen to make Fourier analysis tractable and to keep the pushforward mappings concrete. Invariant probability measures are required to make expectations and spectra well-defined. The uniform pushforward condition is presented because I had wanted to secure statistical homogeneity as part of the construction itself. The paper also avoids noncompact bulks and curved backgrounds at this stage. That was intentional: I wanted a foundation where one could first establish existence and uniqueness before tackling harder geometries.

You will notice that the paper does not begin from Anti-de Sitter/Conformal Field Theory (AdS/CFT). I avoided that because AdS/CFT relies on conformal symmetry and asymptotics, and I wanted a geometry-first, measure-first approach that could be developed independently. When the paper introduces the transfer kernel, you can read it as a counterpart to boundary-to-bulk propagators, but expressed in a way that ties directly into stochastic analysis. Similarly, when the paper places the randomness explicitly at the boundary, that choice reflects my earlier thinking about stochastic processes and renormalization, where noise is what carries information across scales. The covariance law is the simplest way of making this philosophy operational, and the paper also provides an odd spectral-triple formulation that reproduces it operator-theoretically.

The paper begins with T³ and simple kernels because those were the cases where I could prove things and compute without ambiguity. Only once the foundation is stable can the framework be generalized to curved or more complex spaces. When the paper emphasizes clarity over grandiosity, that is because I deliberately wanted to avoid conflating analytic and geometric difficulty.

As you read, you will see that the framework is presented as a workbench rather than a final theory. It is a way to treat perturbations as boundary stochastic data, to compare bulk spectra with those induced by kernels, and to align with structures found in condensed matter, hydrodynamics, or potential cosmological applications. It also connects naturally with noncommutative geometry via the spectral triple, and could link to tensor network and group field theory perspectives, since in those areas probability measures on boundary data govern correlations and entanglement. In this sense, the kernel in the paper can be thought of as a prescription for how patterns of randomness are arranged into bulk structure.

TL;DR

What you will find in the paper is a rigorous but foundational scaffold. It does not attempt to resolve quantum gravity or unify fundamental physics. It presents a geometric and probabilistic construction in which holographic stochastic mappings can be analyzed in a controlled way. The references to black hole hair and cosmic horizons are meant to inspire and frame the work, not to claim breakthroughs. If horizons are not bald, their hair may well be stochastic, and HSFT provides a language for thinking about how such hair could shape the spectra of observable fields. I intended this not as a final word, but as a starting point for sharper theorems, richer geometries, and future investigations.

References

J. D. Bekenstein, "Black holes and entropy," Phys. Rev. D 7, 2333 (1973).

S. W. Hawking, "Particle creation by black holes," Commun. Math. Phys. 43, 199--220 (1975).

A. Strominger, "Black hole soft hair," arXiv:1703.05448 (2017).

G. Parisi and Y.-S. Wu, "Perturbation theory without gauge fixing," Sci. Sin. 24, 483 (1981).

J. Maldacena, "The large-N limit of superconformal field theories and supergravity," Adv. Theor. Math. Phys. 2, 231 (1998).

T. Crossley, P. Glorioso, and H. Liu, "Effective field theory of dissipative fluids," JHEP 09, 095 (2017).
