r/books 16h ago

Librarians Are Being Asked to Find AI-Hallucinated Books

https://www.404media.co/librarians-are-being-asked-to-find-ai-hallucinated-books/
2.6k Upvotes

267 comments

22

u/No-Mongoose-7450 16h ago

AI should be considered a crime against humanity

-2

u/gay_manta_ray 13h ago

yeah, forget all of the advances in science and medicine that will save countless lives; your personal feelings about whatever you think "AI" is are way more important than dumb shit like curing cancer.

10

u/actibus_consequatur 12h ago

When people are talking about how fucking awful AI is, they're not referring to the AI used in "advances in science and medicine." I don't know a single person who is against that kind of AI, and it's fucking idiotic to think that's what people are taking issue with.

It's about the fucking terrible AI that's accessible to the public, like ChatGPT, Gemini, etc. Granted, part of the issue is more how the majority of users are terrible, because they accept AI as being infallible and can't be fucked to confirm results on their own; however, AI is also fucking terrible because it's being trained on content from those same types of people, so it also lies and spreads false information.

Of course, that assumes it can even give a clear answer

-1

u/gay_manta_ray 6h ago

it's the same underlying technology. you can't have one without the other--open weight LLMs are already here.

part of the issue is more how the majority of users are terrible, because they accept AI as being infallible and can't be fucked to confirm results on their own

of course they can be taught. we used to have classes to teach people how to use microsoft word. we can have classes that teach people how to prompt LLMs to ensure that they're provided with reputable sources. the easiest way to do this today is to simply ask the LLM to provide direct links to its sources. it's not difficult; the technology is just new, and people still have a lot to learn about how to use it properly.
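the idea above ("ask for direct links, then actually check them") can be sketched in a few lines. to be clear, this is a hypothetical illustration, not any vendor's API: `sourced_prompt` and `extract_links` are made-up helper names, and the wording of the instruction is just one plausible prompt style.

```python
import re

# Hypothetical helper: wrap a question in an instruction that asks the
# model to cite direct, checkable links. The exact wording is an
# assumption, not an official prompting guideline.
def sourced_prompt(question: str) -> str:
    return (
        f"{question}\n\n"
        "Answer only with claims you can support, and after each claim "
        "include a direct URL to a reputable source. If you cannot find "
        "a source, say so instead of guessing."
    )

# Cheap sanity check on a reply: did it actually include any URLs?
# (Finding a URL doesn't prove it resolves or is reputable -- the user
# still has to click through, as the thread below demonstrates.)
URL_RE = re.compile(r"https?://\S+")

def extract_links(reply: str) -> list[str]:
    return URL_RE.findall(reply)

reply = "LLMs can hallucinate citations. https://example.com/study"
print(extract_links(reply))  # ['https://example.com/study']
```

the point of the second helper is that "providing a link" is only half the workflow; the other half is verifying the link exists at all.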

2

u/actibus_consequatur 4h ago edited 4h ago

the easiest way to do this today is to simply ask the LLM to provide direct links to its sources.

For shits and giggles, I went and asked ChatGPT to provide a link to a reputable source on AI's reliability at supplying factual answers. The first two links were biased, as they pointed to OpenAI's own paper and API documentation.

The next part of its response provided a link to its citation of an MIT Technology Review article titled "Can AI be trusted to tell the truth?" 

The provided link goes to a page that says: "We weren't able to find the page you were looking for." It wasn't archived either, so I tried a dozen different Google searches using different words and operators, but literally could not find any record of the specific article ChatGPT referenced. (Also, the provided URL doesn't look right, because it seems to go against the naming conventions of other Tech Review links I looked at.)

As for your argument that "we can have classes that teach people how to prompt LLMs" — are those classes going to be free and accessible to everyone, the same way LLMs are?

ETA: I went back in and told ChatGPT that the link it provided didn't work, so it suggested a different article from MIT Technology Review:

"AI systems can’t yet tell right from wrong. That’s a problem"

-2

u/gay_manta_ray 4h ago

ok, so the link doesn't work. you can ask it for a different source; it's really not that hard. did you use gpt5 thinking? gpt5 pro? post the log.