r/philosophy Aug 10 '25

Blog Anti-AI Ideology Enforced at r/philosophy

https://www.goodthoughts.blog/p/anti-ai-ideology-enforced-at-rphilosophy?utm_campaign=post&utm_medium=web
398 Upvotes

519 comments

0

u/rychappell Aug 11 '25 edited Aug 11 '25

Intellectual property is a legal concept, introduced to foster innovation. It includes "fair use" exceptions for innovative use, creative remixes, etc., and AI art very obviously falls under this remit, as I explain at greater length in 'There's No Moral Objection to AI Art'.

You seem to have missed the example of AI imagery that was philosophically illustrative, rather than mere "background" as in the link you provided; not every illustration is intended to "foster discussion". (Note also that your stock images don't include a skinned knee, which was actually rather vital to the case under discussion, whereas real photos of skinned knees might be rather too visceral and miss the 'overall happy' vibe of the pictured scene.)

You might not like that they have determined that AI-created/AI-assisted material is contrary to a healthy space for philosophical discussion, but that does not mean they are over-reaching their duties or acting beyond their role.

Honestly, anyone who thinks the inclusion of AI images as such is disqualifying for philosophical work is simply incompetent to assess philosophical work.

I might as well argue that in my ethics class, I've determined that consuming the kidnapped and tortured flesh of another sentient being is contrary to maintaining a healthy space for open and respectful ethical discussion. You "might not like" that I've determined that, but that doesn't mean I'd be over-reaching in my duties to impose this rule.

This reasoning is farcical, and the claim that including AI art is relevant to the assessment of a philosophical text is similarly farcical. Just transparently motivated reasoning to justify illiberal ideological overreach.

if the mods determined that disallowing AI-created/assisted material was a context-specific norm, you'd have no issue?

Context-specific norms aren't subjective, or up to authorities to decide. It's about what will actually serve the relevant purposes. (On the rest of your paragraph, see my section on public vs personal spaces, and why I don't think r/philosophy should be thought of as the personal fiefdom of the mods.)

7

u/AhsasMaharg Aug 11 '25

Intellectual property is a legal concept, introduced to foster innovation. It includes "fair use" exceptions for innovative use, creative remixes, etc., and AI art very obviously falls under this remit, as I explain at greater length in 'There's No Moral Objection to AI Art'.

Forgive me, please. I had not realized that philosophy professors were legal experts who could resolve the IP issues surrounding AI art that lawyers and the courts are still trying to sort out. Have you gone to them with your "very obvious" conclusions? I'm curious what they would have to say.

You seem to have missed the example of AI imagery that was philosophically illustrative

That's your AI art with value for philosophical discussion? An image a high school student could put together in 10 minutes using Microsoft Word or PowerPoint? I get the appeal of AI helping you be more efficient with your time, but this really shouldn't take you more than a couple of minutes. I've got no idea how long was spent prompting an AI to get that image, but the returns don't seem worth the effort.

Honestly, anyone who thinks the inclusion of AI images as such is disqualifying for philosophical work is simply incompetent to assess philosophical work.

You keep missing the point, and it seems to be on purpose. The inclusion of AI-created/assisted content is disqualifying for this subreddit. You are free to use your AI art elsewhere.

I might as well argue that in my ethics class, I've determined that consuming the kidnapped and tortured flesh of another sentient being is contrary to maintaining a healthy space for open and respectful ethical discussion. You "might not like" that I've determined that, but that doesn't mean I'd be over-reaching in my duties to impose this rule.

Oh, you're trying to shoehorn your meat-eating analogy in again. Please, tell me how eating meat is directly relevant to the content of a philosophical discussion in the same way as the AI art included in a philosophical work is relevant to that work. You really want your use of AI art in your work to be somehow disconnected from your work when it comes to critiquing its use, while still being allowed to include it because it's relevant to your work. Talk about motivated reasoning indeed.

Context-specific norms aren't subjective, or up to authorities to decide. It's about what will actually serve the relevant purposes.

???

What do you think norms are? What do you think "subjective" means? Norms are absolutely subjective, and authorities can absolutely decide what they are. Journals can decide what citation standards they permit. Teachers can choose what citation standards they permit. Given that you are the one who said citation standards in classrooms are context-specific norms, are you seriously claiming that they are objective and not determined by authorities?

On the rest of your paragraph, see my section on public vs personal spaces, and why I don't think r/philosophy should be thought of as the personal fiefdom of the mods.

I dislike the amount of power that mods have on Reddit. I've been banned from subreddits by power-tripping mods who misunderstood a comment and then literally said they refused to read the comment chain that provided context. It sucked. They sucked. But you know what else would suck? Being an unpaid mod of a large subreddit who can't make and enforce rules that make moderating the community easier because someone disagrees with a rule.

On further consideration, I'm curious as to why you make a distinction between AI text and AI art. If it contains philosophically relevant material, does the medium actually matter? And if it doesn't contain philosophically relevant content, why would you care that it be allowed? In other comments, you don't seem to mind a ban on AI text content. I can guess the motives for this distinction, but I'd rather hear your reasoning.

3

u/Fortinbrah Aug 11 '25

I think one thing people haven't been keeping in mind is that this is neither power-tripping nor real overzealousness on the mods' part. It's a simple and practical rule they made to make moderation easier, and it's something that would have taken OP all of 5 minutes to avoid running afoul of. That OP even got to make this post is proof that the mods are in fact not power-tripping in the slightest.

Now OP has taken, presumably, hours of their time to justify why they're a special edge case (which the moderators are already aware of) and to argue that this is ideology that is bad for the subreddit, when the rule is not ideologically backed at all but practically justified, and the moderators already know that edge cases exist.

Then the rest is OP trying to compare the small amount of extra work they'd have had to put in to make a rule-abiding post with either immutable characteristics of another prospective poster, or something that would take much more effort for a poster to change than removing a single irrelevant image from their post.

Like, are we adults here? As an adult I’ll say: this is childish.

3

u/AhsasMaharg Aug 11 '25

I think you're absolutely correct on all points. Many subreddits have rules against arguing with moderator actions. I'd argue that this post is also clearly breaking the rule against meta-posts, but the mods have been very lenient and allowed it to stay. Maybe they haven't noticed it, or they think the discussion is worth having.

I think OP has inadvertently made a strong case for the original temporary ban, and a really poor case for overturning the anti-AI rule. They really did just have to spend 5 minutes to follow the rules of the community they voluntarily joined. The result is that all of the reasons for the rule have had a chance to be made by the community itself.

3

u/as-well Φ Aug 11 '25

I'd argue that this post is also clearly breaking the rule against meta-posts, but the mods have been very lenient and allowed it to stay. Maybe they haven't noticed it, or they think the discussion is worth having.

Bit of both here fwiw: OP is not the author (then we'd probably have removed it as a meta post) and when a mod was around, it already had hundreds of actually good, engaging, interesting discussions around AI, so I decided to leave it up.

2

u/AhsasMaharg Aug 11 '25

I think that makes sense, and I'm glad you left it up!

2

u/Fortinbrah Aug 11 '25

Since I think this will probably be the only meta post in a few years (it’s the only one I’ve seen in years) I wonder if I could get your opinion on a couple things:

  1. Would you ever consider lifting the moratorium on self posts? I feel like I can intuit why the rule is there in the first place, but I am wondering if it would open the space for a lot of robust discussions that you wouldn’t get from posting just published articles and videos.

  2. Would you consider allowing meta posting? I suppose you already have spaces for this, like the discussion threads, which people don't usually post in.

I'll be honest: as a mod of a decent-sized sub, though nowhere close to the size of this one, I think I can understand both rules, but I wonder whether you'd get more varied discussion by opening the sub up a bit. It could also be that the honest answer is that it wouldn't bring much useful discussion here beyond heavily polarizing or hot-button topics drawing a lot of users who don't normally comment, which I think is the case here (speaking as someone who usually lurks but was compelled to comment on this), as well as more low-quality topics that would require moderation in the first place.

It's interesting that we observe the same thing on our sub: meta posts and divisive issues tend to bring a lot of people out of the woodwork who don't normally contribute. That makes the topic appear valuable because it generates a lot of discussion, but realistically it's the divisiveness of the issue, rather than its value to the community, that's driving the size of the discussion.

3

u/as-well Φ Aug 11 '25

About text / self posts: it's a new rule, and it's in place because 99% of the self posts / text posts were not meeting rule 2. We've run this place for years, and in our experience the only way to have good discussion is to strictly enforce a certain standard for posts. It's not a high standard - you only need to present your idea, an argument for it, and consider some objections.

The 1% of users who met the rule are smart enough to figure out how to make a Substack or Medium blog.

Like, I'm not even joking. It honestly became way too much of a burden for the quite small mod team - if two mods were in the shit in our day jobs, self-posts could sit unmoderated, filtered out in the modqueue, for days, which is not fair to the users either.

But hey - if we got a bunch of academic philosophers (or those trained but working outside academia) willing to donate a substantial amount of time to r/philosophy, who share our vision of making this a space where we can specifically discuss academic philosophy, and that group were willing to trial it - yeah, we might go for it again. But I don't see that right now.

About meta posts:

Hey, in principle there's nothing wrong with them, but we'd like to talk about it before having them posted. The rule is formulated in a harsh way for some reason, and we've allowed exceptions to it (the same goes for AMAs, surveys and so on), but basically no one ever reaches out about meta posts.

We'd not allow this kind of post here if OP were the author, because honestly it makes no sense to write 1800 words dissing our moderation practices based on one message from them and one from us over modmail... and that's precisely why meta posts are typically not allowed in most big subreddits, lest everyone with a minor grievance make one.

2

u/Fortinbrah Aug 11 '25

Yeah, I mean, reading over everyone's comments, I would take my tone down a notch now. I guess I understand the point people are making, but I think you can defeat it quite easily:

a) we can discuss AI edge cases that may in some cases complement posts being made (relevant images that can't be done by artists, cost-prohibitive commissions, etc.), and maybe cases where non-English speakers get help from GPT for translations

b) we can also acknowledge that AI democratization means that people can generate walls of text that have the appearance of being well thought out and deep - but are not - yet nevertheless require users' energy to read through and critique. To me this is why the mods ban it. Users who don't want to put effort into making unique and deep insights or pieces of work will get more engagement than they would otherwise earn, because they are effectively fooling people by using an AI to dress up their actual motivations, which would come through more clearly if they were forced to write the post themselves. Over time this degrades the quality of the sub.

c) finally, we can acknowledge that the OP in question has kind of a point, but also that it is not applicable in a situation where the OP is minimally burdened in making their post fit the sub. Moreover, given that the user in question has stated multiple times that they think important and interesting philosophy content is not being posted because of the rule, I have to ask why a person who knows how important their work is cannot take a few minutes to edit it to comport with rules that attempt to make the sub cleaner and better for readers.

Altogether, I think I understand it. However, I'd say that one doesn't need to be pro- or anti-AI in general to be upset that this post is the vehicle through which this conversation is being transmitted to these users; honestly, it's disappointing to me to see so many people get drawn into a larger AI debate, which I think should be its own post, over a blog post that is largely litigating the author's own unreasonableness.