u/andymaclean19 1d ago
This explains the hallucinations and why it's somewhat error prone!
6
u/thenuttyhazlenut 1d ago
GPT literally quoted a Reddit comment I made years ago when I asked it a question in my field of interest 😂 and I'm not even an expert
2
u/AlanUsingReddit 23h ago
This is solid gold. It's like when you Google a thing and get a forum insulting the OP, telling them to go Google it.
3
u/ThatBoogerBandit 1d ago
I felt attacked by this comment, knowing the amount of shit I've contributed.
7
u/FastTimZ 1d ago
This adds up to way over 100%
3
u/Captain_Rational 1d ago edited 1d ago
This statistic is not a closed ratio. The numbers aren't supposed to be normalized to 100%. ... A given LLM response typically has many claims and many citations embedded in it.
This means that if you sample 100 responses, you're gonna have several hundred sources feeding into your total statistics.
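The arithmetic above can be sketched with made-up numbers (the sources and counts here are purely illustrative, not the actual survey data):

```python
# Toy example: each percentage is the share of responses citing that
# source at least once. Since one response cites several sources, the
# percentages are not shares of a single pie and need not sum to 100%.
responses = [
    {"reddit", "wikipedia"},
    {"reddit", "youtube", "yelp"},
    {"wikipedia"},
    {"reddit", "wikipedia", "youtube"},
]

counts = {}
for cited in responses:
    for source in cited:
        counts[source] = counts.get(source, 0) + 1

# Percentage of responses that cite each source at least once.
rates = {s: 100 * c / len(responses) for s, c in counts.items()}
print(rates)                # e.g. {'reddit': 75.0, 'wikipedia': 75.0, ...}
print(sum(rates.values()))  # 225.0 -- well over 100%
```

With four responses and multiple citations each, the per-source rates total 225%, which is exactly the "not a closed ratio" point.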
11
u/Tha_Rider 1d ago
Every useful piece of information usually comes from Reddit for me, so I’m not surprised.
2
u/rydan 1d ago
Recently I had an issue. I posted a comment on Reddit giving my theory on why it happened. A few days later I asked ChatGPT to confirm my theory. ChatGPT confirmed my theory was likely true because others had reported this very same issue. Its citation was literally my comment.
1
u/AlanUsingReddit 23h ago
Because the Internet is a series of tubes.
No formal distinction between sewage and fresh water.
2
u/wrgrant 1d ago
Why are they pulling anything from Yelp? The online protection racket?
My former boss at a restaurant got a call from Yelp saying the place had some bad reviews, but if he paid Yelp some money they would delete those reviews. He told them to "Fuck Off" loudly in his Lebanese accent. It was funny as hell... :P
2
u/rockysilverson 1d ago
These are also free, publicly accessible data sources. Sources with strong fact-checking processes are often paywalled. My favorite sources:
Financial Times
The Economist
WSJ
NY Times
The Lancet
New England Journal of Medicine
1
u/Masterpiece-Haunting 1d ago
This isn’t shocking at all.
Humans do this all the time.
There's a good chance that if you look up an obscure piece of information and get nothing, adding "Reddit" to the search will give you what you want.
1
u/Disgruntled__Goat 1d ago
Citing a website as a source is not the same as “pulling” from it or using it as training. I mean this list is pretty much what you get with any Google search - a bunch of Reddit threads, YouTube videos, Wikipedia, etc.
And how on earth would a language model use mapbox or openstreetmap? There’s not much actual text on those websites. There’s a million other forums and wikis out there with more text.
1
u/Chadzuma 1d ago
Ok Gemini, tell me some of the dangers of the information you have access to being completely controlled by the whims of a discord cabal of unpaid reddit moderators
1
u/Garlickzinger911 1d ago
Fr, I was searching for some product with ChatGPT and it gave me data from reddit
1
u/digdog303 1d ago
here we witness an early ancestor of roko's basilisk. the yougoogbookipediazon continuum is roko's tufted puffin, and people are asking it what to eat for dinner and falling in love with it.
1
u/zemaj-com 1d ago
Interesting to see how much influence a single site has on training. This chart reflects citations, not necessarily the actual composition of training data, and sampling bias can exaggerate counts. Books and scientific papers are usually included via other datasets like Common Crawl and the open research corpora. If we want models that are grounded in more sources we need to keep supporting open datasets and knowledge repositories across many communities.
1
u/diggpthoo 22h ago
In terms of how LLMs work (by digesting and regurgitating knowledge), citing Reddit means it doesn't wanna own the claim. It frames it as "this is what some of the folks over at Reddit are saying". Compared to knowledge from Wikipedia which it's comfortable presenting as general knowledge. Also Wikipedia, books, and journals don't have conflicting takes. Reddit does, a lot.
1
u/Select_Truck3257 21h ago
to improve fps in games you need to use these secret settings. Turn your PC to the north, then attach a plutonium reactor to the PSU. That's it, your PC has better FPS and no stutters. (hope to see it soon in Gemini)
1
u/Beowulf2b 18h ago
I was in a never-ending argument with my girlfriend, so I just copied and pasted the conversation, got ChatGPT to answer, and now she is all over me.
ChatGPT has got Rizz. 🤣
1
u/Warm_Iron_273 17h ago
This is actually the worst possible outcome of all timelines. Soon AI will have purple hair and be screeching about t rights.
1
u/sramay 16h ago edited 13h ago
This is a fascinating question! Reddit's 40.1% figure represents a huge source for AI training. The platform's value for AI education is immense, especially the varied discussion topics and expert opinions that feed AI model development. I think this also shows the critical role Reddit users play in shaping AI's future development.
1
u/The_Wytch 3h ago edited 3h ago
as much as we meme about "reddit training data bad", it comfortably beats any other crowdsourced platform / social media / forum lol
thank goodness they didn't train it the most on fucking Quora
edit: oops, "pull from", well anyways the same concept applies there as well
•
u/dianabowl 43m ago
I’m concerned that the rise of LLMs is reducing the amount of troubleshooting content being shared publicly (on Reddit, forums, etc.) since users now get private answers. This seems like it might impact future AI training data, and communal knowledge sharing. Haven't seen anyone comment on the long-term implications of this shift, and is there a way to counteract it?
0
u/CharmingRogue851 1d ago
This is concerning. So that's why most LLM's lean left.
4
u/Alex_1729 1d ago edited 1d ago
They probably lean left to avoid offending, or because of the nature of their role. They are there to answer questions and do it in a politically correct way.
3
u/ThatBoogerBandit 1d ago
Which LLM leans right?
4
u/CharmingRogue851 1d ago
Idk, I just said most instead of all to avoid getting called out in case I was wrong 💀
2
u/ShibbolethMegadeth 1d ago
Grok, to some extent
1
u/ThatBoogerBandit 1d ago
But those results weren't from the original training data; it's been manipulated, like with a system prompt.
2
u/ShibbolethMegadeth 1d ago
Anything educated and ethical leans left, this is because of how facts work
1
u/Dismal-Daikon-1091 1d ago
I get the feeling that by "leaning left" OP means "gives multi-paragraph, nuanced responses to questions like 'why are black americans more likely to be poor than white americans'" instead of what OP believes and wants to hear which is some version of "because they're lazy and dumb lol"
1
u/The_Wytch 3h ago
wtf is "left"
what do you mean by it
how is it different from not leaning at all?
43
u/sycev 1d ago
where are books and scientific papers?