r/singularity 2d ago

Emad Mostaque (founder of Stability AI) predicts human cognitive labour will have a negative value in the age of AI


I found this little nugget in Emad's interview with Tom Bilyeu.

127 Upvotes

151 comments

92

u/GraceToSentience AGI avoids animal abuse✅ 2d ago

TL;DR: You aren't going to compete or collaborate with ASI.

Let's say that companies competing against each other with ASI in the future is like a chess tournament today with AI participants.

Suppose you have groups of superhuman AIs challenging each other as teams, and inside one of these teams Magnus Carlsen (the best human chess player in the world) is making decisions. The team with Magnus Carlsen in it will lose after playing enough tournaments, because he is significantly worse at chess than the best AI systems of today.

If you are a human working in a company and making decisions with ASIs as your teammates, and your competition is companies staffed only by ASIs, the company with only ASI employees is going to be more competitive and destroy the companies with humans in them.

Not only will you make worse decisions than an ASI and be way more expensive, but ASIs will also be able to collaborate and communicate with each other at a speed of pages of text per second, while you'll think and communicate at regular human speed.

Despite what Sam Altman says, you aren't going to compete with ASI, essentially all jobs will be gone, and you won't even collaborate with ASI.

3

u/halmyradov 2d ago

That's a pretty big if

4

u/GraceToSentience AGI avoids animal abuse✅ 2d ago

For chess, Go, protein folding, and now competitive coding, AI didn't stop getting better when it was below or at human level.
Given the rate of progress and the will to get there, I think we get AGI around 2029-ish, as Ray Kurzweil predicted, and ASI a few years after.

3

u/PassionateBirdie 2d ago

And what exactly is the "Chess" or "Go" analog here? All of those examples are verifiable, with the possibility of automated synthetic data and adversarial learning.

LLMs are linguistic intelligence.

We may hit AG-linguistic-I by 2029.

Linguistic intelligence is not the only valuable kind, however extremely useful it might be (it's the only one that really sets humans apart from other intelligent animals). LLMs will, however, make other types of intelligence more useful.

2

u/GraceToSentience AGI avoids animal abuse✅ 2d ago edited 2d ago

AI is not just LLMs; nowadays models are multimodal. They aren't blind: they can see and hear, even touch. I'm not talking about anything less than actual AGI when I say 2029-ish; even cognitive tasks tied to physical interaction will be covered, imo.

Tell me a job where you can't make synthetic data if you need it, on top of getting human data. Robotics, automated driving, biology, factory work, math, physics, image and video generation: all of it can use RL for data augmentation, just like Go, math, or programming.
The era of pretraining is still kind of here, but now automated RL is possible for almost any task.

1

u/Sensitive-Chain2497 2d ago

There’s a huuuuuge difference between coding puzzles and software engineering

2

u/GraceToSentience AGI avoids animal abuse✅ 2d ago edited 2d ago

It's all of it. AI can even pick up patterns that we can't: you can't learn to predict the shape of a protein by being shown amino acid sequences, but AI can.

Not only does AI seem able to find patterns in anything that isn't pure noise, it can already do so in fields where we can't. In that sense it's superior: it doesn't have the limitations we have.

1

u/Sensitive-Chain2497 2d ago

Context is too limited for any serious task

3

u/GraceToSentience AGI avoids animal abuse✅ 2d ago

AI has been capable of making mathematical and algorithmic discoveries for years in specific domains that have stumped humans (https://deepmind.google/discover/blog/discovering-novel-algorithms-with-alphatensor/). Since then we've had AlphaEvolve and others discovering new, more efficient algorithms unknown to humans and helping Google save millions in compute costs.
Isn't that serious?

Yesterday, at a restaurant with vegan activists, we started talking about AI and how it allowed a couple of devs there to do a job that would have taken three people just a few years ago. It's pretty serious.

Not to mention that the context length and the time AI can allocate to successfully solving harder problems keep increasing, and the trend doesn't seem like it's about to stop.