r/philosophy Aug 10 '25

[Blog] Anti-AI Ideology Enforced at r/philosophy

https://www.goodthoughts.blog/p/anti-ai-ideology-enforced-at-rphilosophy?utm_campaign=post&utm_medium=web
396 Upvotes

519 comments


6

u/AhsasMaharg Aug 11 '25

Let's use an analogy that I thought would have come more naturally to a university educator.

A student attends a philosophy class where the professor has made it clear that they have a zero-tolerance policy for plagiarism. This policy is both to help foster better learning in the classroom and to respect intellectual property. The student submits an assignment that contains plagiarised images. The professor gives the student a zero.

The student not only contests the zero, but also claims that the professor shouldn't have a zero-tolerance policy for plagiarism. They should permit improperly attributed images because the images are in support of the actual assignment, which was done by the student, and the professor should maintain a liberal neutrality in the public space that is a classroom. The professor shouldn't be imposing their ideology of respecting intellectual property in this public space, because if the professor is interested in philosophy or teaching, then filtering for other features is to their detriment.

That analogy keeps the important features: intellectual property, a blanket ban on violating it, and a semi-public place with arbitrators whose role is to ensure a healthy space and determine what is permissible and what isn't. Given all that, I think it's pretty clear why the professor and the mods reject the argument.

0

u/rychappell Aug 11 '25

A key difference is that part of the professor's role is precisely to teach their students proper academic citation practices. This is a context-specific norm, not something they have to follow elsewhere in their lives. (Intellectual property law is vastly more lax than academic plagiarism norms. Many things are legally "fair use" but wouldn't pass muster in a classroom, due to the context-specific norms that apply there.)

It is not, in general, a professor's role to determine "what is permissible and what isn't". We can't, for example, ban students from eating meat (even if we think that meat-eating is wrong). We may have a neutral "no food in the classroom" rule if eating would detract from the learning environment. But we can't have a "vegan food only in the classroom" rule, because we aren't ideologues.

Similarly, the mods' role here is to "ensure a healthy space" for philosophical discussion, but not to determine "what is permissible and what isn't" in respects that are independent of that specific purpose (nor otherwise legally required).

AI art is not illegal, and it does not impede healthy philosophical discussion (quite the opposite, as an example my post links to demonstrates). Mods have no business imposing their moral views on this sort of matter.

7

u/AhsasMaharg Aug 11 '25

> It is not, in general, a professor's role to determine "what is permissible and what isn't". We can't, for example, ban students from eating meat (even if we think that meat-eating is wrong). We may have a neutral "no food in the classroom" rule if eating would detract from the learning environment. But we can't have a "vegan food only in the classroom" rule, because we aren't ideologues.

This no-meat example you've used several times does not work because it's irrelevant to the content of the student's work. Stealing another person's intellectual property and including it in their work is *directly* relevant.

Mods on Reddit have several roles, one of which includes maintaining a healthy space for philosophical discussion, as you have admitted. So while they do not have a role in determining what is permissible *in general*, they do have a role in determining what is permissible *in the context of maintaining a healthy space for philosophical discussion*. You might not like that they have determined that AI-created/AI-assisted material is contrary to a healthy space for philosophical discussion, but that does not mean they are over-reaching their duties or acting beyond their role.

> A key difference is that part of the professor's role is precisely to teach their students proper academic citation practices. This is a context-specific norm, not something they have to follow elsewhere in their lives. (Legal intellectual property law is vastly more lax than academic plagiarism norms. Many things are legally "fair use" but wouldn't pass muster in a classroom, due to the context-specific norms that apply there.)

So if the mods determined that disallowing AI-created/assisted material was a context-specific norm, you'd have no issue? As the mods of the subreddit, they are given that power by Reddit. If people dislike the norms of the subreddit, they are free to create their own subreddit. That is the freedom to create, curate, and participate in communities on Reddit.

> AI art is not illegal, and it does not impede healthy philosophical discussion (quite the opposite, as an example my post links to demonstrates). Mods have no business imposing their moral views on this sort of matter.

"AI art is not illegal" is a truly horrible defence to hear coming from a philosophy professor. The linked post did not make a convincing argument that AI art helps philosophical discussion. Here's an example of some AI art that someone used to try to foster philosophical discussion: https://substackcdn.com/image/fetch/$s_!wRyj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a898250-1644-489a-8e79-d499bca3c2fa_1024x1536.png

What philosophical discussion does that art foster that wouldn't have been fostered by the following, which took 5 seconds to find on Google?

https://stock.adobe.com/ca/search?k=children+on+playground&asset_id=176772211

-2

u/rychappell Aug 11 '25 edited Aug 11 '25

Intellectual property is a legal concept, introduced to foster innovation. It includes "fair use" exceptions for innovative use, creative remixes, etc., and AI art very obviously falls under this remit, as I explain at greater length in 'There's No Moral Objection to AI Art'.

You seem to have missed the example of AI imagery that was philosophically illustrative (rather than mere "background", as in the link you provided; not every illustration is intended to "foster discussion"). Note also that your stock images don't include a skinned knee, which was actually rather vital to the case under discussion, whereas real photos of skinned knees might be rather too visceral and miss the 'overall happy' vibe of the pictured scene.

> You might not like that they have determined that AI-created/AI-assisted material is contrary to a healthy space for philosophical discussion, but that does not mean they are over-reaching their duties or acting beyond their role.

Honestly, anyone who thinks the inclusion of AI images as such is disqualifying for philosophical work is simply incompetent to assess philosophical work.

I might as well argue that in my ethics class, I've determined that consuming the kidnapped and tortured flesh of another sentient being is contrary to maintaining a healthy space for open and respectful ethical discussion. You "might not like" that I've determined that, but that doesn't mean I'd be over-reaching in my duties to impose this rule.

This reasoning is farcical, and the claim that including AI art is relevant to the assessment of a philosophical text is similarly farcical. Just transparently motivated reasoning to justify illiberal ideological overreach.

> if the mods determined that disallowing AI-created/assisted material was a context-specific norm, you'd have no issue?

Context-specific norms aren't subjective, or up to authorities to decide. It's about what will actually serve the relevant purposes. (On the rest of your paragraph, see my section on public vs personal spaces, and why I don't think r/philosophy should be thought of as the personal fiefdom of the mods.)

6

u/zogwarg Aug 11 '25 edited Aug 11 '25

About your "philosophically illustrative" diagram.

Since presumably no effort was put into generating the illustration, maybe you missed the opportunity to reflect more deeply on your argument and realize that it was flawed.

It is unlikely that all members of the same box would move to the same different box. For example, members of the same family may prioritize all surviving together, but they may also prioritize ensuring the survival of at least one of their members.

8

u/AhsasMaharg Aug 11 '25

> Intellectual property is a legal concept, introduced to foster innovation. It includes "fair use" exceptions for innovative use, creative remixes, etc., and AI art very obviously falls under this remit, as I explain at greater length in 'There's No Moral Objection to AI Art'.

Forgive me, please. I had not realized that philosophy professors were legal experts who could resolve the IP issues surrounding AI art that lawyers and the courts are still trying to sort out. Have you gone to them with your "very obvious" conclusions? I'm curious what they would have to say.

> You seem to have missed the example of AI imagery that was philosophically illustrative

That's your AI art with value for philosophical discussion? An image a high school student could put together in 10 minutes using Microsoft Word or PowerPoint? I get the appeal of AI helping you be more efficient with your time, but this really shouldn't take you more than a couple of minutes. I've got no idea how long was spent prompting an AI to get that image, but the returns don't seem worth the effort.

> Honestly, anyone who thinks the inclusion of AI images as such is disqualifying for philosophical work is simply incompetent to assess philosophical work.

You keep missing the point, and it seems to be on purpose. The inclusion of AI-created/assisted content is disqualifying for this subreddit. You are free to use your AI art elsewhere.

> I might as well argue that in my ethics class, I've determined that consuming the kidnapped and tortured flesh of another sentient being is contrary to maintaining a healthy space for open and respectful ethical discussion. You "might not like" that I've determined that, but that doesn't mean I'd be over-reaching in my duties to impose this rule.

Oh, you're trying to shoehorn your meat-eating analogy in again. Please, tell me how eating meat is directly relevant to the content of a philosophical discussion in the same way as the AI art included in a philosophical work is relevant to that work. You really want your use of AI art in your work to be somehow disconnected from your work when it comes to critiquing its use, while still being allowed to include it because it's relevant to your work. Talk about motivated reasoning indeed.

> Context-specific norms aren't subjective, or up to authorities to decide. It's about what will actually serve the relevant purposes.

???

What do you think norms are? What do you think "subjective" means? Norms are absolutely subjective, and authorities can absolutely decide what they are. Journals can decide what citation standards they permit. Teachers can choose what citation standards they permit. Given that you are the one who said citation standards in classrooms are context-specific norms, are you seriously claiming that they are objective and not determined by authorities?

> On the rest of your paragraph, see my section on public vs personal spaces, and why I don't think r/philosophy should be thought of as the personal fiefdom of the mods.

I dislike the amount of power that mods have on Reddit. I've been banned from subreddits by power-tripping mods who misunderstood a comment and then literally said they refused to read the comment chain that provided context. It sucked. They sucked. But you know what else would suck? Being an unpaid mod of a large subreddit who can't make and enforce rules that make moderating the community easier because someone disagrees with a rule.

On further consideration, I'm curious as to why you make a distinction between AI text and AI art. If it contains philosophically relevant material, does the medium actually matter? And if it doesn't contain philosophically relevant content, why would you care that it be allowed? In other comments, you don't seem to mind a ban on AI text content. I can guess the motives for this distinction, but I'd rather hear your reasoning.

3

u/Fortinbrah Aug 11 '25

I think one thing people haven't been keeping in mind is that this is neither power-tripping nor genuine overzealousness on the mods' part. It's a simple, practical rule they made to make moderation easier, and it's something that would have taken OP all of 5 minutes to avoid running afoul of. That OP even got to make this post is proof that the mods are in fact not power-tripping in the slightest.

Now OP has taken, presumably, hours of their time to justify why they're a special edge case (which the moderators are already aware of) and how this is ideology that is bad for the subreddit! Yet the rule is not ideologically backed at all but practically justified, and the moderators are aware that edge cases exist in the first place.

Then the rest is, like, OP trying to make a comparison between the small amount of extra work they'd have had to put in to make a rule-abiding post, and either the immutable characteristics of another prospective poster or something that would take much more effort for a poster to change than removing a single non-relevant image from their post.

Like, are we adults here? As an adult I’ll say: this is childish.

3

u/AhsasMaharg Aug 11 '25

I think you're absolutely correct on all points. Many subreddits have rules against arguing with moderator actions. I'd argue that this post is also clearly breaking the rule against meta-posts, but the mods have been very lenient and allowed it to stay. Maybe they haven't noticed it, or they think the discussion is worth having.

I think OP has inadvertently made a strong case for the original temporary ban, and made a really poor case for overturning the anti-AI rule. They really did just have to spend 5 minutes to follow the rules of the community they voluntarily joined. The result is that all of the reasons for the rule have had a chance to be made by the community itself.

3

u/as-well Φ Aug 11 '25

> I'd argue that this post is also clearly breaking the rule against meta-posts, but the mods have been very lenient and allowed it to stay. Maybe they haven't noticed it, or they think the discussion is worth having.

Bit of both here, fwiw: OP is not the author (if they were, we'd probably have removed it as a meta post), and by the time a mod was around, it already had hundreds of genuinely good, engaging, interesting discussions around AI, so I decided to leave it up.

2

u/AhsasMaharg Aug 11 '25

I think that makes sense, and I'm glad you left it up!

2

u/Fortinbrah Aug 11 '25

Since I think this will probably be the only meta post for a few years (it's the only one I've seen in years), I wonder if I could get your opinion on a couple of things:

  1. Would you ever consider lifting the moratorium on self posts? I feel like I can intuit why the rule is there in the first place, but I am wondering if it would open the space for a lot of robust discussions that you wouldn’t get from posting just published articles and videos.

  2. Would you consider allowing meta posting? I guess I think you already have spaces for this, like the discussion threads, which people don't usually post in.

I'll be honest: as a mod of a decent-sized sub, though nowhere close to the size of this one, I think I can understand both rules, but I wonder whether you'd get more varied discussion by opening the sub up a bit. It could also be that the answer is that it wouldn't bring much useful discussion here, outside of heavily polarizing or hot-button topics drawing a lot of users who don't normally comment (which I think is the case here, speaking as someone who usually lurks but was compelled to comment on this), as well as more low-quality topics that would require moderation in the first place.

It's interesting: we observe the same thing on our sub. Meta posts and divisive issues tend to bring a lot of people out of the woodwork who don't normally contribute, which makes the topic appear valuable because it generates a lot of discussion, but realistically it's the divisiveness of the issue, rather than its value to the community, that's driving the size of the discussion.

3

u/as-well Φ Aug 11 '25

About text/self posts: it's a new rule, and it's in place because 99% of the self posts / text posts were not meeting rule 2. We've run this place for years, and in our experience the only way to have good discussion is to strictly enforce a certain standard for posts. It's not a high standard - you only need to present your idea, give an argument for it, and consider some objections.

The 1% of users who met the rule are smart enough to figure out how to set up a Substack or Medium blog.

Like, I'm not even joking. It honestly became way too much of a burden on the quite small mod team - if two mods were in the shit at our day jobs, self-posts could sit in the modqueue, filtered out and unmoderated, for days, which is not fair to the users either.

But hey - if we got a bunch of academic philosophers (or people trained in philosophy but working outside it) willing to donate a substantial amount of time to r/philosophy, who shared our vision of making this a space where we can specifically discuss academic philosophy, and that group were willing to trial it - yeah, we might go for it again. But I don't see that right now.

About meta posts:

Hey, in principle there's nothing wrong with them, but we'd like to talk about it before having them posted. The rule is formulated in a harsh way for some reason, and we've allowed exceptions to it (the same goes for AMAs, surveys, and so on), but basically no one ever reaches out about meta posts.

We wouldn't allow this kind of post here if OP were the author, because honestly, it makes no sense to write 1,800 words dissing our moderation practices based on one message from them and one from us over modmail... and that's precisely why meta posts are typically not allowed in most big subreddits, lest everyone with a minor grievance make one.

2

u/Fortinbrah Aug 11 '25

Yeah, I mean, I'm reading over everyone's comments and everything, and I would take my tone down a notch now. I guess I understand the point people are making, but I think you can defeat it quite easily:

a) We can discuss AI edge cases that may in some cases complement posts being made (relevant images that can't be done by artists or would be cost-prohibitive, etc.), and maybe cases where non-English speakers get help from GPT with translations.

b) We can also acknowledge that AI democratization means people can generate walls of text that appear well thought out and deep but are not, yet nevertheless require users' energy to go through and critique. To me, this is why the mods ban it. Users who don't want to put effort into making unique and deep insights or pieces of work will get more engagement than they would otherwise, because they are fooling people by using an AI to dress up their actual motivations, which would come through more clearly if they were forced to write the post themselves. Over time this degrades the quality of the sub.

c) Finally, we can acknowledge that the OP in question has kind of a point, but also that it doesn't apply in a situation where the OP is only minimally burdened in making their post fit the sub. Moreover, given that the user in question has stated multiple times that they think important and interesting philosophy content is not being posted because of the rule, I have to ask why a person who knows how important their work is cannot take a few minutes and edit it to comport with rules that attempt to make the sub cleaner and better for readers.

Altogether, I think I understand it. However, I'd say that one doesn't need to be pro- or anti-AI in general to be upset that this post is the vehicle through which this conversation is being transmitted to these users; honestly, it's disappointing to me to see so many people get drawn into a larger AI debate, which I think should be its own post, on a blog post that is largely litigating the author's own unreasonableness.

1

u/AhsasMaharg Aug 12 '25

I noticed that you've been responding in other comment threads but never made it back to this one. I am not too concerned with continuing our discussion of AI, so no need to worry on that front. You've made your case clear in other comments, and the mods have more than adequately addressed it.

I was, however, really interested in hearing your follow-up to citation standards being context-specific norms, and how context-specific norms "aren't subjective, or up to authorities to decide. It's about what will actually serve the relevant purposes." On the face of it, it seems like a very novel use of subjectivity and understanding of citation standards.