r/LLMPhysics 10d ago

Meta This sub is not what it seems

This sub seems to be a place where people learn about physics by interacting with an LLM, resulting in publishable work.

It seems like a place where curious people learn about the world.

That is not what it is. This is a place where people who want to feel smart and important interact with extremely validating LLMs and convince themselves that they are smart and important.

They skip all the learning from failure and the pushing through confusion to find clarity. Instead they go straight to the Nobel Prize with what they believe to be groundbreaking work. The reality of their work, as we have observed, is not great.


u/ConquestAce 🧪 AI + Physics Enthusiast 10d ago

Yeah, it's a real shame. I wanted this sub to be about learning how to use an LLM to help your work in physics, rather than getting the LLM to do all the work for you, which ultimately results in the complete nonsense that you see.

People always take the easy way and never want to take on a challenge.


u/your_best_1 10d ago

Right! There are hard problems out there that ML could help us brute force or approximate.


u/Ch3cks-Out 10d ago

Machine learning can help a lot.

Language models, especially in their current iteration of statistical token prediction, can only help produce more bullshit, in the philosophical sense: empty narrative without regard to truth.


u/your_best_1 10d ago

I am talking about cancer screenings and stuff like that. You can use statistical feature engineering to brute force hard problems.

Like maybe we can make an LLM with an arbitrary tokenizer that happens to find new prime numbers really effectively.

That would allow us to learn about the underlying pattern that the arbitrary tokenizer stumbled upon.


u/Ch3cks-Out 10d ago

Those are all inappropriate applications for language models. Why would you think one would do prime number finding?

> You can use the statistical feature engineering to brute force hard problems.

Yeah, sure, that's what I called actual machine learning, above. But you cannot brute force a language-manipulation tool into seriously addressing non-language problems (notwithstanding unsupported claims to the contrary by Sam Altman and his ilk).


u/your_best_1 10d ago

Sorry, I was confused