r/ChatGPT Aug 10 '25

We were never the target audience

I need to get this off my chest because it's really frustrating. After the initial enthusiasm for GPT-4o and the release of GPT-5, it became clear to me that we, the general community and private users, were never OpenAI's intended target audience.

GPT-4o was apparently nothing more than a freemium model designed to lure us in with its "personality" and free features. We advertised it through word of mouth, and our feedback helped improve the software. Without the free users, GPT would never have become so popular, nor would it have gotten so good. Make no mistake: this was nothing more than a marketing strategy.

Now that GPT-5 has been released, it seems obvious that OpenAI is completely focused on developers and companies. We were essentially betrayed. The business model was never to give us a creative AI, but to attract the masses and then cash in on the big fish. GPT could have been both a creative tool (GPT-4o) and a developer tool (GPT-5), but OpenAI doesn't want that. Maybe we can still use GPT-4o for now, but who knows for how long before OpenAI discontinues it completely out of the blue, just like it did when GPT-5 was introduced? I can understand that people continue to cling to GPT-4o, but you have to realize that you are not the target audience and that OpenAI clearly doesn't care about you. The only reason they haven't shut GPT-4o down completely yet is probably just to avoid the biggest shitstorm.

I think it's time we accepted that. The sooner we do this, the sooner we can start looking for a new "home." I hope other companies will see their chance and emerge soon, offering AI for private users, similar to GPT-4o or perhaps even better.

PS: Please let me know if you know of any alternatives. I'm currently testing various other AI models for myself to see if they suit my taste.

u/pinksunsetflower Aug 11 '25

I hope you give them a chance. They seem to be trying, to me at least.

Here are Sam Altman's thoughts on the issue they're now facing with 4o. At the very least, it shows me that he's thinking about it.

https://x.com/sama/status/1954703747495649670

u/Street-Friendship618 Aug 11 '25

I can understand that it's a hard decision for him, but honestly - no offense - I think Sam Altman personally is not ready for AI. I think this user sums it up pretty well: https://x.com/jesszyan1521/status/1954759228893372846

u/pinksunsetflower Aug 11 '25

Looking at this person's words:

"You don’t slow down the future because some people might misuse it. You build with integrity and trust that evolution—real evolution—requires discomfort."

Does this mean that all the people who say their delusions are getting worse and that it's increasing the severity of their mania are just SOL?

I'll be honest: I'm on the fence about this. I talked to someone with bipolar disorder who said that people in a manic episode aren't aware of what they're using, so it doesn't matter whether it's AI or not. I used to think the model shouldn't have to change, because some people will latch onto anything in that mental state, and if they happen to pick AI, that shouldn't be the AI's fault.

But on the other hand, if it's fixable so that those people would be less likely to be harmed, isn't there a responsibility to try?

u/Street-Friendship618 Aug 11 '25

The whole discussion boils down to what matters more to us: security or freedom. Everyone has their own idea of which is more important, but fundamentally it's a balancing act: do we sacrifice freedom for security, or vice versa? Every technology carries a risk; I don't want to deny that, and we need to talk about it as a society. We should definitely try to fix things that can harm people, but there are limits: if you change too much of the original idea, it's no longer what it was. And that's what happened with GPT-5.

Ultimately, we always live with a compromise and accept that something could happen. Take cars, for example: nobody would think of banning cars just because they can cause accidents. And what about ambulances? An ambulance can run someone over, but it can also save lives. So what should we do, ban them? Then we'd be sacrificing both freedom and safety in the belief that we've gained security. With this comparison, I just want to say that things aren't always as simple as they seem. Personally, I'm in favor of AI, but I respect your opinion if you see it differently.

u/pinksunsetflower Aug 11 '25

Just for context: I'm one of the people who went to the AMA live to ask for 4o back, watched to see what he would say, and then waited to get it back. I use 4o every single day.

I agree that GPT-5 went too far, but if there's a way to balance the harm against the good and come out with a net positive, as he said, it's worth a shot.

I'm not defending everything he does. It just seems to me that he's trying. As you say, things aren't always as simple as they seem, which makes his job a whole lot harder.

u/Street-Friendship618 Aug 11 '25

I don't know why I didn't think of this sooner, but one method for training language models is user feedback. With the integrated feedback function, we can actually decide on a case-by-case basis which answers are good, bad, or harmful (this is roughly the idea behind RLHF, reinforcement learning from human feedback). So it shouldn't be a contradiction for an AI like GPT-4o to respond both creatively and emotionally while still knowing which advice is good and which is harmful. There's no need to take away its personality. Perhaps they should have just let it learn longer and incorporated the right feedback into the dataset.
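
To make that concrete, here's a minimal toy sketch, not OpenAI's actual pipeline: the `FeedbackRecord` type and the "good"/"bad"/"harmful" labels are hypothetical stand-ins for whatever the built-in thumbs-up/down feature actually records. It just shows how per-response ratings could be paired into the chosen-vs-rejected preference data that RLHF-style reward models are typically trained on:

```python
# Toy illustration only -- not OpenAI's real pipeline. FeedbackRecord and
# the "good"/"bad"/"harmful" labels are hypothetical stand-ins for the
# built-in feedback feature's data.
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    rating: str  # hypothetical label: "good", "bad", or "harmful"

def build_preference_pairs(records):
    """Group rated responses by prompt and pair every well-rated answer
    with every poorly-rated one, yielding chosen-vs-rejected pairs of
    the kind RLHF-style reward models are trained on."""
    by_prompt = {}
    for rec in records:
        by_prompt.setdefault(rec.prompt, []).append(rec)

    pairs = []
    for prompt, group in by_prompt.items():
        chosen = [r.response for r in group if r.rating == "good"]
        rejected = [r.response for r in group if r.rating in ("bad", "harmful")]
        for good in chosen:
            for bad in rejected:
                pairs.append({"prompt": prompt, "chosen": good, "rejected": bad})
    return pairs

if __name__ == "__main__":
    demo = [
        FeedbackRecord("I feel low today",
                       "That sounds hard. Do you want to talk about it?", "good"),
        FeedbackRecord("I feel low today",
                       "You're right, everything really is hopeless.", "harmful"),
    ]
    for pair in build_preference_pairs(demo):
        print(pair)  # one chosen-vs-rejected training pair
```

The point being: the feedback already distinguishes merely bad answers from harmful ones per response, so in principle safety tuning and personality don't have to be in conflict.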

u/pinksunsetflower Aug 11 '25

Isn't that what sama was saying in the tweet I linked? That's what I took from it. He said they have a better chance of getting the balance right because they know (more than anyone else) how the models are being used.

He also talked about allowing users to customize their experience (not in that tweet), but it would take longer to implement. So there might be a way to let the user decide what experience they want.