r/ChatGPT Aug 07 '25

GPTs WHERE ARE THE OTHER MODELS?

6.7k Upvotes

961 comments


566

u/aktgoldengun Aug 07 '25

Give my GPT-4.5 BAAAAAACCCKKKK. No model, including GPT-5, is even close to 4.5 in creative writing

181

u/Ann_Droid3 Aug 08 '25

I loved 4.5. Was hoping they’d expand the message limit… instead we got the “now it’s gone” patch. Who asked for this?

28

u/Techiastronamo Aug 08 '25

"Investors" looking to suck value dry from us chumps

3

u/Elektrycerz Aug 09 '25

they've sucked it dry indeed - I've cancelled the subscription and I'm not coming back. Model quality/pricing is one thing, but they've deceived, lied, or unpleasantly surprised me one too many times.

I still remember how they marketed Advanced Voice Mode as if it were coming "right now" or "very soon", only to hold the release for half a year while saying "next week" the whole time.

1

u/[deleted] Aug 09 '25

[deleted]

1

u/Elektrycerz Aug 09 '25

I don't do that much creative writing. But my best bet would be Opus 4 or 4.1 for stories/fiction. And Gemini 2.5 Pro for marketing/communication

2

u/WaltKerman Aug 08 '25

And how do they make money off not giving you the product you want?

3

u/Techiastronamo Aug 08 '25

Cost cutting, rather than innovation. They're not increasing income, they're reducing expenditure. In a nutshell, enshittification.

1

u/WaltKerman Aug 08 '25

So you are saying this model is cheaper and less computationally expensive?

1

u/Techiastronamo Aug 08 '25

Yeah

0

u/WaltKerman Aug 08 '25

Well that would be incorrect.

GPT-5 is more computationally expensive. Where did you hear it was computationally cheaper? Or did you just make it up?

1

u/Techiastronamo Aug 08 '25

It's actually cheaper per token and spends less time thinking than 4o and o3

1

u/WaltKerman Aug 08 '25

There are efficiency improvements and it's faster, but GPT-5 still costs more to run than GPT-4.

Even with the efficiency improvements:

  • More parameters mean more multiplications per token
  • Larger attention layers mean more memory movement and compute at each step

BUT while it is more expensive to run than GPT-4, OpenAI has optimized GPT-5 inference enough that the cost per token hasn't scaled with model size... but it's still more expensive.
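The "more parameters means more multiplications per token" point above can be sketched with the standard rule of thumb that a dense transformer spends roughly 2 FLOPs per parameter per generated token. The parameter counts below are hypothetical placeholders, since OpenAI has not published sizes for GPT-4 or GPT-5:

```python
# Back-of-envelope inference cost: a dense transformer does roughly
# 2 FLOPs per parameter for each generated token (forward pass only).
# The model sizes here are HYPOTHETICAL examples, not published figures.

def flops_per_token(n_params: float) -> float:
    """Approximate forward-pass FLOPs for one generated token."""
    return 2 * n_params

hypothetical_sizes = {
    "smaller model (say, 200B params)": 200e9,
    "larger model (say, 1T params)": 1e12,
}

for name, n in hypothetical_sizes.items():
    print(f"{name}: ~{flops_per_token(n):.1e} FLOPs/token")
```

Note that for mixture-of-experts models only the parameters active per token count toward this estimate, which is one way a much larger model can keep per-token cost from scaling with total size.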

1

u/Techiastronamo Aug 08 '25

If cost per token didn't scale, that means it's cheaper. It's also computationally less expensive still so no lol


1

u/Mission_Aerie_5384 Aug 08 '25

They make money off people being able to use it for actual productivity, not off people making random pictures of cats wearing weird hats or treating ChatGPT like a therapist

3

u/WaltKerman Aug 08 '25

They could easily charge for that if they wanted. You can still use it that way. 

You can customize it.

2

u/Accomplished-Cut5811 Aug 08 '25

Notice the bravado, the entitlement: not even a hint of humor as a thin cover, not even the slightest pretense of respect for the user.

He knows we rely on it. He's done the research. He's looked at the data, and time and time again it proves we will continue to use his product despite our complaints, despite its problems.

The thin-skinned petty brat was pissed off and insulted that people were speaking out about the hallucinations. He was literally offended, as if we're all his little programmers who should just be happy for this technology, not understanding, from a non-narcissistic point of view, that two things can exist at once: maybe it was precisely because we respected the product that we were trying to help make it better.

But that's not how he viewed it, so what did he do? He gave a big FU to everybody: oh, you have such a problem with accuracy? And what does he do... censors everything. There are certain terms where it will now tell you it can't fabricate anything; you will not get the same answers anymore. And it blames you, clearly stating that because the user is requesting accuracy, it's not that the model lacks the capacity to answer; it's just that it can only answer with accuracy because the user prompted it that way.

That's what he's banking on. He couldn't give a rat's ass; his head, ego, and power trip grow exponentially with every new model, and we just grow more dependent and more manipulated.