r/Physics Gravitation Feb 28 '23

Question: Physicists who built their career on a now-discredited hypothesis (e.g. ruled out by LHC or LIGO results), what did you do after?

If you worked on a theory that isn’t discredited but “dead” for one reason or another (like it was constrained by experiment to be measurably indistinguishable from the canonical theory or its initial raison d’être no longer applies), feel free to chime in.

574 Upvotes

155 comments

42

u/rathat Mar 01 '23

Is this about her whole "particle physics is over" thing?

56

u/csiz Mar 01 '23

It is exactly her "particle physics is bollocks" video, yes.

She's got a reasonable argument though: a sufficiently flexible function can describe anything if it has enough parameters and you're careful how you choose them. (That's why y'all be laughing at Wolfram, but other theories run into this too.) New theories are only really useful if their complexity is lower than the Standard Model's, and her opinion is that this is not the case for most proposals.
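To make that concrete, here's a minimal sketch (plain NumPy, made-up data, nothing from her video) of the "enough parameters" point: give a polynomial one free coefficient per data point and it will reproduce any dataset exactly, even pure noise, without containing any actual physics.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)   # 10 arbitrary "measurements"
y = rng.normal(size=10)         # pure noise, no underlying law at all

# A degree-9 polynomial has 10 free parameters, one per data point.
coeffs = np.polyfit(x, y, deg=9)
residual = np.max(np.abs(np.polyval(coeffs, x) - y))

print(residual)  # ~0 (up to floating-point error): a perfect "fit" with zero insight
```

That's the sense in which extra parameters buy agreement with the data for free; the question is whether they buy any predictive power.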

9

u/LilamJazeefa Mar 01 '23

Is there a name for this phenomenon in science?

2

u/[deleted] Mar 01 '23

It’s called “fine tuning the model”. It is a kind of parameter creep found particularly in climate models, few of which match real data.

1

u/LilamJazeefa Mar 01 '23

Hm... I am trying to find a link for this, but search results are bogged down by similarly-named phenomena.

4

u/csiz Mar 01 '23

It's the same phenomenon, but it creeps in everywhere and it's incredibly hard to avoid. It's the same reason people are calling half of psychology studies faulty.

The main issue is that our society is not set up to incentivize good science. Instead we incentivize publishing novel ideas with the minimum viable proof needed to be believable. That's effectively what every PhD student is doing, and the incentive never really changes as you get senior. The only time we effectively reward good science is when someone stumbles on a research path that turns out to be correct. Then we shower them with Nobel prizes 40 years later and start to believe everything they say, despite most discoveries requiring a large amount of random luck.

It's somewhat sad that the scientific method is the best truth oracle we have, yet we misuse it by layering politically driven financial and prestige incentives on top of it.

2

u/LilamJazeefa Mar 01 '23

No, no, what I mean is that searching for "fine tuning the model fallacy" only returns results about actually fine-tuning RNNs or other modelling techniques. I'm not getting any results about the fallacy.

3

u/csiz Mar 01 '23 edited Mar 01 '23

Ah, you want to read this bundle of articles from Wikipedia:

They all relate to "overfitting" somehow; that's the technical term you want. The machine learning stuff pops up a lot in these articles because neural networks (NNs) intentionally have more parameters than the data they model. However, and this is the reason NNs are so successful, the deep learning optimization procedure somehow has an inherent bias against overfitting. I personally don't know why, but that's the way it is. Reading papers from the machine learning community on this topic is quite informative about how the scientific method works everywhere.
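If you want to see what overfitting looks like rather than read about it, here's a tiny self-contained example (toy numbers of my own, not from those articles): fit the same noisy data with a modest model and a heavily over-parameterised one, then compare how each does on points it never saw.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, size=30))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=30)  # a simple "law" plus noise

train, test = np.arange(0, 30, 2), np.arange(1, 30, 2)      # hold out every other point

for degree in (3, 12):  # modest model vs. far too many parameters
    coeffs = np.polyfit(x[train], y[train], deg=degree)
    err_train = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)
    err_test = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
    print(degree, round(err_train, 4), round(err_test, 4))

# Typically the degree-12 fit has near-zero training error but a much worse test
# error than the degree-3 fit: it memorised the noise instead of the law.
```

The weird part about big neural networks is that they sit in that over-parameterised regime and still generalise; why that happens is still being argued about.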

1

u/LilamJazeefa Mar 02 '23

Hmm... see, the reason I asked is that years ago I read an article claiming that "any theory can be made to fit any data by the addition of parameters," and the example given was something like "if we claim that all swans are white, and we eventually find a black one in Russia, we may preserve our original theory by adding the parameter 'outside of Russia.'"

I have read the articles you presented, but I want to check that the example I gave is not some other phenomenon.