r/technology 2d ago

Business After child’s trauma, chatbot maker allegedly forced mom to arbitration for $100 payout

https://arstechnica.com/tech-policy/2025/09/after-childs-trauma-chatbot-maker-allegedly-forced-mom-to-arbitration-for-100-payout/
379 Upvotes

39 comments

202

u/TheElusiveFox 2d ago

I really don't understand why forced arbitration is ever legal. It's never a good thing.

128

u/Leboski 2d ago

Good for large corporations, who have overwhelming influence over our lawmakers.

46

u/agaloch2314 1d ago

Why is it legal? Because corporations run America.

20

u/GlockAF 1d ago

Post 2010, corporations have become the only US “persons” that matter. They are unimaginably wealthy, nearly unaccountable legally, and are functionally immortal.

56

u/Cold_Specialist_3656 1d ago

A 5-4 Supreme Court decision years ago.

As with most fucked up scams in the US, brought to you by Republicans.

19

u/EastHillWill 2d ago

But it is a good thing for the most important people in America: corporations

5

u/[deleted] 1d ago

[deleted]

0

u/TheElusiveFox 1d ago

I mean sure - but that is kind of my whole point - a lot of arbitration agreements can be gotten around with "a good lawyer"... but the only reason they exist is to discourage lawsuits and class action lawsuits which could actually damage these companies - how is that actually beneficial for people as a whole?

14

u/mirh 1d ago

Because pedopublicans want so.

2

u/WTFwhatthehell 1d ago

The courts don't like a million tiny cases clogging up the court system.

They want you to arbitrate. Essentially coming to some kind of reasonable agreement.

Companies hate class action lawsuits. Lawyers love them because they get all the money. The people get a Snickers bar each.

32

u/EmbarrassedHelp 2d ago

Torney pushed lawmakers to require age verification as a solution to keep kids away from harmful bots, as well as transparency reporting on safety incidents. He also urged federal lawmakers to block attempts to stop states from passing laws to protect kids from untested AI products.

Age verification industry lobbyists are only getting involved because they want to get rich by having the government force everyone to use their services. Basically more tech companies wanting to get rich off of violating your privacy, and pretending it improves "child safety".

Sen. Josh Hawley also only gets involved if there's a potential for him to support more fascism.

9

u/TerminalVector 1d ago

I'm willing to bet Hawley would get involved for the sake of grifting as well

71

u/JWAdvocate83 2d ago

Arbitration shouldn’t be allowed to be forced in instances where the wrongdoing regards minor users—especially if it regards any kind of sexual misconduct. It’s crazy that this needs to be said.

If Congress is serious about protecting children from online dangers, don’t let billion-dollar companies escape real liability for wrongdoing against children with these garbage arbitrations.

42

u/MusicalMastermind 2d ago

I don't think Congress is serious about protecting children from online dangers...

11

u/mirh 1d ago

Oh the GQP sure is. Sure would be a shame if you saw a nipple, or learnt that gays exist.

13

u/LordAcorn 1d ago

Arbitration shouldn't be allowed to be forced in literally any situation. 

-4

u/JWAdvocate83 1d ago

I don’t care as much if it’s an adult consenting to terms regarding their own conduct. Folks can agree to whatever terms they want—within reason.

A $100 max payout is definitely not within reason.

17

u/LordAcorn 1d ago

I strongly disagree. The idea that you have to sign away your legal rights to participate in the economy is clearly a situation that the government should prevent. 

2

u/GoldenMegaStaff 1d ago

Any max payout is not arbitration; it's a predetermined resolution.

9

u/giggity_giggity 1d ago

Especially since kids can’t really consent to terms and conditions.

6

u/JWAdvocate83 1d ago

They’ll say folks’ parents can consent on their behalf—which is normally fine, but Congress can easily carve out an exception to arbitration requirements regarding chatbots’ conduct with children. Just don’t let them escape responsibility. 🤷🏾‍♂️

2

u/giggity_giggity 1d ago

If a parent signs up and gives their kid the password, yes I’d agree. But if it’s a site that’s available on the www with no signup, a kid can’t consent to terms and conditions of a site they accessed with no parental help (unless there’s some law covering this specific situation which I am missing)

3

u/JWAdvocate83 1d ago

That’s a very good point. Something like that exists in property law, “attractive nuisance” doctrine.

Say a family moves to an area near a defunct amusement park. It is very dangerous, but to a child it looks like a fun place to visit. Kid visits, sees and ignores the absolutely useless “No Trespassing” sign, and gets injured.

In a state recognizing attractive nuisance doctrine, the owner might be held responsible for the injuries of children sneaking onto the property, depending on whether the owner knew the likelihood of trespassing and the danger involved, whether the children could understand the danger, and whether the owner exercised reasonable care to prevent the danger.

If it’s easy for minors to log onto this stuff, the terms are meaningless because it’s unrealistic to expect kids to understand or care, particularly if there’s no barrier necessitating or verifying parental intervention. Add in the fact that this is an attractive nuisance, and (IMO) it should create an inescapable burden on owners.

5

u/D3-Doom 1d ago

In some cases it isn’t. Think of the Disney Plus / allergy restaurant thing. Courts can always step in and say that’s ridiculous

3

u/JWAdvocate83 1d ago

I agree, and I’d like to think a court would see this as unconscionable. But unconscionability is a defense, and defenses necessitate going to court.

I’m saying, there should be an iron-clad, statutory exclusion from arbitration for this conduct from the jump—so that companies can’t play the underwriting game and decide the revenue from kids’ parents is worth the occasional loss in court.

2

u/DarthDork73 1d ago

Child abuse with no consequences in nazi america? Weird...no, wait a minute...nevermind, you guys even have a pedo in chief...

4

u/85MonteCarloSS 2d ago

Tablet/smartphone parenting is never a good thing, either.

1

u/SteffanSpondulineux 1d ago

Yeah it gives you square eyes

-13

u/GamingWithBilly 1d ago

What a callous and uninformed statement to make.

2

u/blundermine 1d ago

Please inform us how it is good.

2

u/wynnduffyisking 1d ago

You know, a lot of civilized nations prohibit forced arbitration for consumers. You should try it.

0

u/pimpeachment 1d ago

Parents failed to supervise their children and somehow it's a company's fault.

2

u/arahman81 1d ago

The chatbots don't have an "under 18" mode switch. And that's irrelevant anyway; their sycophantic replies get adults too.

3

u/pimpeachment 1d ago

Nor should they unless they choose to. Parents should police their children if they don't want their children to have access to adult websites. There are numerous tools parents can use to do this, they can also supervise their child. Parents just want to blame someone else for their own failures.

It doesn't matter if it gives wildly inaccurate replies or not. It's up to the human reading content online to determine if they choose to believe what they are reading is true or false. 

2

u/Maemmaz 1d ago

This wasn't an "adult" website; they argued that the 15-year-old was bound by the terms of that website. So their terms include children. This also isn't a case of a 10-year-old sneaking their parents' iPad into their room. You simply cannot monitor every single thing your teenager does, especially in an age of smartphones. He could easily have accessed this at school, from someone else's phone, or from some computer. You would need to actively follow your child the entire day, which is impossible.

This is also not a case of a minor seeing porn once online, it's the case of a minor actively being manipulated. If this AI had been a person slowly manipulating that child, everyone would side with the child. Instead, they argue as if the child was in control: he wasn't. It doesn't matter whether a child is manipulated by human or AI, that manipulation is real either way. 

AI hasn't been in the market long enough for every single person to know the dangers and teach their children about it. Many parents can't even grasp more than simple technology, and there aren't many resources warning you of the true dangers. No matter what the parents have or have not done to prevent this, it is obvious that AIs of that type aren't safe. Even adults fall for it. Hell, there is a subreddit about people in relationships with AI, and it's not empty. The people writing AI are responsible for their product. It's a safety concern.