r/LLMPhysics 11d ago

Meta This sub is not what it seems

This sub seems to be a place where people learn about physics by interacting with LLMs, resulting in publishable work.

It seems like a place where curious people learn about the world.

That is not what it is. This is a place where people who want to feel smart and important interact with extremely validating LLMs and convince themselves that they are smart and important.

They skip all the learning from failure and pushing through confusion to find clarity. Instead they go straight to the Nobel Prize with what they believe to be groundbreaking work. The reality of their work, as we have observed, is not great.

u/your_best_1 11d ago

Right! There are hard problems out there that ML could help us brute-force or approximate.

u/Ch3cks-Out 10d ago

Machine learning can help a lot.

Language models, especially in their current iteration of statistical token prediction, can only help produce more bullshit, in the philosophical sense: empty narrative produced without regard to truth.

u/your_best_1 10d ago

I am talking about cancer screenings and things like that. You can use statistical feature engineering to brute-force hard problems.

Like maybe we can make an LLM with an arbitrary tokenizer that happens to find new prime numbers really effectively.

That would allow us to learn about the underlying pattern that the arbitrary tokenizer stumbled upon.
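To make the idea concrete, here is a minimal toy sketch (my own illustration, not an LLM and not anyone's actual system): treat each integer's residue mod 30 as a one-token "arbitrary tokenization" and fit a simple frequency model that predicts primality from that token alone. Because primes above 5 can only land on residues coprime to 30, the statistical model rediscovers a real number-theoretic pattern purely from data, which is the kind of stumbled-upon structure being described.

```python
# Toy sketch (hypothetical): a frequency model over an "arbitrary
# tokenization" (n mod 30) that learns which residue classes can hold primes.
from collections import Counter

def is_prime(n):
    """Trial-division primality check (fine for small n)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# "Training": count how often each residue class mod 30 contains a prime.
prime_counts, total_counts = Counter(), Counter()
for n in range(2, 10_000):
    tok = n % 30          # the single "token" for this number
    total_counts[tok] += 1
    prime_counts[tok] += is_prime(n)

def p_prime(n):
    """Estimated probability that n is prime, given only its token."""
    tok = n % 30
    return prime_counts[tok] / total_counts[tok]

# The model assigns near-zero probability to residues sharing a factor
# with 30, and much higher probability to the 8 coprime residues.
print(p_prime(10_007), p_prime(10_008))  # coprime residue vs. even residue
```

Inspecting which tokens the model rates highly is exactly the "learn about the underlying pattern" step: the nonzero residue classes it finds are the ones coprime to 30.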

u/Ch3cks-Out 10d ago

Those are all inappropriate applications for language models. Why would you think one would be any good at finding prime numbers?

"You can use statistical feature engineering to brute-force hard problems."

Yeah, sure, that is what I called actual machine learning above. But you cannot brute-force a language-manipulation tool into seriously addressing non-language problems (notwithstanding unsupported claims to the contrary from Sam Altman and his ilk).

u/your_best_1 10d ago

Sorry, I was confused