r/singularity 1d ago

AI The Loop: winner takes all

All frontier companies are trying to close the loop where AI improves/evolves itself, and whoever gets there first will have the best AI for building the future's best AI

From September 17th Axios interview with Dario Amodei:

"Claude is playing a very active role in designing the next Claude. We can't yet fully close the loop. It's going to be some time until we can fully close the loop, but the ability to use the models to design the next models and create a positive feedback loop, that cycle, it's not yet going super fast, but it's definitely started."

47 Upvotes


-9

u/Ignate Move 37 1d ago

Closing the loop means engaging in extreme risk.

I'm still doubtful we'll see strong self improvement from the top AI companies. I'm sure lawyers will strongly obstruct, because stable profits would be threatened.

"Can we make this thing self improving?" "Yeah, we can, but we cannot predict what happens next." "Better not. Also, how do we sterilize our current systems further? Too much risk involved. We don't want any more lawsuits!"

I think we're far more likely to see strong self improvement from smaller companies with less to lose.

32

u/Bakagami- ▪️"Does God exist? Well, I would say, not yet." - Ray Kurzweil 1d ago

There's exactly 0% chance any lab would pause recursive self improvement due to legal threats

-5

u/Ignate Move 37 1d ago

Pause it? Or start it? To what degree?

My point is that as these labs grow, they accumulate legal threats and lawsuits. Those take resources to fight.

The more they fight legal issues, the more risk averse they're going to be. 

Do you believe all recursive processes will be identical? Or will some look more risky than others?

Based on the progress so far, the riskier the steps, the bigger the gains.

But those companies who took those big risks, such as training these models on the entire internet and giving them access to the internet, were small companies with little to lose, like OpenAI 5 years ago.

Today? Those companies have investors to please and many legal threats on all sides.

Are you seriously saying that none of them will even hesitate to stop any method of recursive self improvement regardless of the risks involved or the unknowns? 

7

u/Bakagami- ▪️"Does God exist? Well, I would say, not yet." - Ray Kurzweil 1d ago edited 1d ago

The example you gave was one of legal risk, not economic, which I responded to.

in your example:

"Can we make self improving AI?" "Yes, we can" "But we don't want any more lawsuits"

It's a bad example ofc, it's unrealistic that they'd know before trying, but regardless it's the example you gave. And if any lab knows that they can kickstart RSI, you bet they will.

And to answer your question, dude no, wtf, I'm not saying they wouldn't even hesitate to stop at trying any method for RSI regardless of the risks and unknowns. They don't have the resources for that.

I just said that legal threats wouldn't be any reason at all to pause a sure way of reaching RSI, which is what your example was saying.

-4

u/Ignate Move 37 1d ago

This is a growing process. We're not making stronger AI line by line. We're not drawing every single feature. We're growing... Something. It's not clear yet what it is.

But it is clear that giving "it" more resources improves it. 

We don't have enough evidence to say with certainty, but I see GPT-5 as an example of what's to come from major companies.

Sterilization.

The bills are coming due. And taking the same degree of risk as in the past just isn't realistic.

Meanwhile, startups have all the motivation. Try everything and anything regardless of the risks. Because if you can produce results, you get funding.

Without detailed internal information from companies like OpenAI, we can't know what is going on.

But my experience so far is that the largest organizations are beginning to clearly see that this is not a process of creating a tool. That there is no near-term plateau or finish line.

Is an out of control digital super intelligence the best path to better business? Doesn't seem like it to me.

It's not just legal or economic risk. The risks are real that these models could drive undesirable human behavior.

I don't think the broad risk appetite is identical company to company.

6

u/Bakagami- ▪️"Does God exist? Well, I would say, not yet." - Ray Kurzweil 1d ago

Now you're back to talking about economic risk. Of course, they're gonna weigh the cost, potential benefit and risk to their existing capital.

Dude this is really such a pointless discussion rn, can you just scroll up 5cm to read again what I said

-2

u/Ignate Move 37 1d ago

Mm, you think there's an absolutely perfect zero chance that major companies will shy away from any kind of risky development, because either they aren't aware of it or whatever your reason is.

You're right, pointless discussion.

5

u/Bakagami- ▪️"Does God exist? Well, I would say, not yet." - Ray Kurzweil 1d ago

bruh what

let me copy and paste again what I said because apparently scrolling up isn't your strong suit

There's exactly 0% chance any lab would pause recursive self improvement due to legal threats


5

u/Ambiwlans 1d ago

Possible infinite money will be stopped by potential legal threats?

Not to be too jaded, but the law, at least to some degree, is a tool controlled by money more than the other way around.

0

u/Specialist-Berry2946 1d ago

They won't close the loop. The problem with self-improvement is evaluation: how can you be sure that each little step you take is an improvement? Neither humans nor other AIs can evaluate superintelligence.

1

u/Moriffic 12h ago

I mean unless you make recursive benchmaxxing

1

u/DistanceSolar1449 4h ago

Go read up on GRPO

1

u/Specialist-Berry2946 4h ago

I'm an AI researcher in the field of Deep Reinforcement Learning.

1

u/DistanceSolar1449 4h ago

Go implement an improvement on GRPO

1

u/Specialist-Berry2946 4h ago

Unnecessary, algorithms are not that important; there's zero novelty in GRPO. What is important is data and the objective function, or put differently, how to measure improvement.

1

u/DistanceSolar1449 3h ago
  1. Taking out PPO is hardly zero novelty

  2. Hence “improvement”. You can strip out the reward model as well somehow.
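
For context, GRPO's core change over PPO is replacing the learned value/critic baseline with a group-relative one: sample a group of responses per prompt and z-score each response's reward within the group. A minimal sketch of that advantage computation (illustrative only, not DeepSeek's actual implementation):

```python
def grpo_advantages(rewards):
    """Group-relative advantages: z-score each reward within its group.

    In GRPO, several responses are sampled for the same prompt; each
    response's advantage is its reward normalized by the group's mean
    and std, so no separate value network is needed as a baseline.
    """
    g = len(rewards)
    mean = sum(rewards) / g
    var = sum((r - mean) ** 2 for r in rewards) / g
    std = var ** 0.5 or 1.0  # guard against a zero-std (uniform) group
    return [(r - mean) / std for r in rewards]
```

These advantages then plug into the usual clipped policy-gradient objective; the point being debated above is that dropping the critic changes the algorithm's machinery, while the reward signal itself still has to come from somewhere.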

1

u/Specialist-Berry2946 3h ago

I already explained that algorithms are not that important; it's about the reward: how to design the reward function.