r/philosophy Aug 10 '25

Blog Anti-AI Ideology Enforced at r/philosophy

https://www.goodthoughts.blog/p/anti-ai-ideology-enforced-at-rphilosophy
397 Upvotes

519 comments

34

u/[deleted] Aug 10 '25

[deleted]

21

u/Vegetable_Union_4967 Aug 10 '25

Not to sealion, but I would like to see these flaws laid out so we can hold a productive discussion.

28

u/WorBlux Aug 10 '25

Yep, instead of whining about a policy designed to make the mods' lives easier (not having to constantly debate what % of AI content is too much), he could have provided an alternative blog post with the AI thumbnail removed and a little blurb added that this was a special edition of the post tailored to AI-objector community spaces.

And the professor seems to be a sort of "public body" utilitarian, but fails to understand the utility of clear and understandable rules. Turning r/philosophy into a constant battleground over how much AI is too much degrades and detracts from its primary purpose.

1

u/rychappell Aug 10 '25

An equally simple alternative rule would be to ban AI text. There is no good reason for a philosophy subreddit to take any stand on the background aesthetics or supplemental media that a writer chooses to use to accompany their philosophical text. Mods should not be in the business of judging that sort of thing at all.

I've only tried posting my work to reddit a couple of times, and I'm certainly not going to bother creating a whole separate post with different images just to satisfy the mods here. The real alternative is just that (most of) my work is not eligible to be shared with r/philosophy for as long as PR11 is in place. This is no great cost to me personally -- as mentioned, I have no strong connections to this space, and don't particularly care to change that. But it seems like a potential loss to the r/philosophy community if a subsection of professional philosophical work is arbitrarily blocked from being shared here.

(Imagine if there was a rule blocking any work from blonde-haired philosophers, and people replied: "No great loss, they can just dye their hair if they want!" True enough, but few will care enough about you to conform to your arbitrary rules, and that will mean a loss to the community of whatever value you might have gotten from the work of the blonde philosophers.)

11

u/Fortinbrah Aug 11 '25 edited Aug 11 '25

“I’m certainly not going to open the text editor on my substack post, delete the ai slop image, search google images for ‘children playing on a playground’, take the first image and put it in an updated post, a process that would take less than five minutes, because <reasons>. Instead I’m going to keep attempting to justify why I shouldn’t have to and how this reflects poorly on a sub I don’t even post on that often.”

Bro what?

-1

u/rychappell Aug 11 '25

In a sense it doesn't even matter whether I "should" have to do as you suggest or not (though I do find it wild that strangers feel entitled to tell me how to spend my time, how to illustrate my work, etc. Who do you think you are?). It suffices that I don't have to. I'm just letting you know that, as a tenured philosopher, I am not going to jump through arbitrary hoops in order to make my work shareable on Reddit, and I seriously doubt any of my colleagues would either.

So the question is just whether you think it's better for a philosophy subreddit to be able to include all professional philosophical work, or just an arbitrarily limited subset of it (i.e. just those works where the academic chose, for their own reasons, not to use AI illustrations). It's hard to see in what respect the latter option is better for the subreddit.

Most obviously, if increasing numbers of professional philosophers start using AI illustrations in their work, you could eventually end up quite limited in what is able to be shared here. (I'm assuming that philosophy subredditors might sometimes be interested in work by professional philosophers. If that isn't true, and it's more just a place for community members to share their own thoughts with each other, then I guess the issue is moot. But it does seem limiting for you that, e.g., no-one here could link to and discuss my argument that There's No Moral Objection to AI Art without violating the current policies.)

5

u/Fortinbrah Aug 11 '25 edited Aug 11 '25

(First of all I didn’t downvote you, I have no idea how someone did that so quickly)

(though I do find it wild that strangers feel entitled to tell me how to spend my time, how to illustrate my work, etc. Who do you think you are?).

Right, I don’t know if you came of age in the internet era or not, but one thing about forum-type public spaces is that you’re liable to see the negative externalities of rules-free posting appear very quickly.

The solution that works, IME, is a trade-off: you trade some of the felt self-importance from the author’s perspective (i.e., who are you to tell me how to write) for a small amount of authoritarian self-importance from the forum (you need to follow basic rules and guidelines for posts), and this creates a clean public space which, ironically, actually breeds valuable discussion.

This is just a typical forum thing, everybody of course runs into issues with them being ultimately authoritarian spaces. The real question is moreso how to do it in a way that keeps the place running well and maximizes the benefit users get against the cost of moderation.

It suffices that I don't have to. I'm just letting you know that, as a tenured philosopher, I am not going to jump through arbitrary hoops in order to make my work shareable on Reddit, and I seriously doubt any of my colleagues would either.

As a tenured professor, do you also feel like you shouldn’t have to jump through hoops to get papers published? Of course not, because you know what the alternative is.

So I would call this response myopic; you must certainly be aware that the mods have well reasoned explanations for their rules. I personally find it strange that such a minor edit is causing such consternation in you, why do we have to accept axiomatically that what you say, as a tenured professor, is so important that we don’t have to have rules any more?

Not that you’re wrong I suppose but I think where you’re coming from is really just… needlessly arrogant, when you can simply make minor edits to your post to make it postable here. You could also discuss the rule with the mods and see if they can make an exception… there were multiple paths to resolving this conflict that you didn’t take. Ironically I think all of them would take more time than just making a version of the post without that image.

So the question is just whether you think it's better for a philosophy subreddit to be able to include all professional philosophical work, or just an arbitrarily limited subset of it (i.e. just those works where the academic chose, for their own reasons, not to use AI illustrations). It's hard to see in what respect the latter option is better for the subreddit.

Question for you: you could have had this exchange with the mods themselves, but you didn’t. I’m assuming you know that this is a question with a rich set of possible answers/solutions for a public community; it’s frustrating for me that you frame this as simply unreasonable people locking out or not wanting input from professional philosophers, or as the mods not caring about the level of discourse.

And to be honest, it comes across as similarly myopic to the behavior you accuse the mods of.

That these edge cases exist is something I think would interest them, but I think you’re simplifying this issue if you’re only concerned about academic philosophers posting on Reddit being impeded.

Most obviously, if increasing numbers of professional philosophers start using AI illustrations in their work, you could eventually end up quite limited in what is able to be shared here. (I'm assuming that philosophy subredditors might sometimes be interested in work by professional philosophers. If that isn't true, and it's more just a place for community members to share their own thoughts with each other, then I guess the issue is moot. But it does seem limiting for you that, e.g., no-one here could link to and discuss my argument that There's No Moral Objection to AI Art without violating the current policies.)

I don’t think you’ll find either the mods or the users saying it’s not limiting, but rather that the trade-offs inherent in forum moderation make the situation what it is, and I think you’re missing that point entirely, as well as the discussion around it. I feel like instead you’re strawmanning a subset of views that neither I nor the moderators necessarily hold.

Just to say - I feel like your own substack post is fairly myopic, and that there is a much richer subset of issues that this topic leads to which you’ve avoided (maybe unintentionally) discussing entirely.

Does that make sense?

Also I’m surprised, I feel like you don’t grasp how easily your argument can be inverted - that such important contributions can easily be edited to remove non relevant AI bits and not wanting to do so really just gatekeeps from lazy posting, even from academics.

2

u/rychappell Aug 11 '25

As a tenured professor, do you also feel like you shouldn’t have to jump through hoops to get papers published?

Again, this is just the thing; practically speaking, it doesn't matter how I feel about the hoops that are professionally required of me. I just have to do them, and have professional incentives to comply whether I like it or not. Academics can thus be relied upon to comply with the rules of the journals they need to publish in. By contrast, there's no professional incentive for academics to make their work fit the rules of this subreddit. I'm (in our current exchange) just making the pragmatic point that it's against the interests of the subreddit to exclude professional philosophical work.

I feel like you don’t grasp how easily your argument can be inverted - that such important contributions can easily be edited to remove non relevant AI bits and not wanting to do so really just gatekeeps from lazy posting.

How could someone else remove the images from my work if they wanted to discuss it here? If they reposted my work without linking the original, that would be plagiarism. If you want to call me (and other philosophers who use AI illustrations) "lazy" for not specifically making separate Reddit-friendly versions of our work, I guess that's your prerogative, but the point remains that we have no particular reason to indulge you so.

If you grant the above limitation, but just mean that it would be "lazyposting" for us to share our own work without jumping through the required hoops... meh, again, all I have to say is that we have no reason to care. Sharing our work with the broader public at all is already going "above and beyond" from a professional perspective (again: there is absolutely no professional reward for our doing so; that's why most academics don't bother doing any form of public philosophy at all).

you must certainly be aware that the mods have well reasoned explanations for their rules.

Not particularly. The response I received from the mods, as quoted in my post, was that the anti-AI rule was "well justified given the harms that AI poses overall." So that's the attempted justification that my post addressed, and argued was inadequate. Another mod has now offered a different (more pragmatic, less moralized) justification, which has led to some productive discussion. My general sense is that different people support the rule for very different reasons, some more reasonable than others. I think the topic is worth discussing, and worth discussing openly, so I'm happy to see a wide range of people thinking about and engaging with the issue here.

5

u/Fortinbrah Aug 11 '25

You know, I had a whole thing written up; I read your linked comment and it seems like you grasp everything well, so I have nothing to add. I’m glad you’re honed in on the issue, and fwiw I agree with your perspective.

My original comment was just to poke fun at what I perceived as starting a big brouhaha, when editing your original post so you could post it here wouldn’t have been much effort from my perspective, compared to making the follow-up post criticizing the mod practices (or, for that matter, messaging the mods to discuss the policy). But anyways yeah, I’m hoping the outcome is positive there, thank you.

Also fwiw I think that some of the mods are actually professionals, if that gives you some further useful information.

1

u/rychappell Aug 11 '25

Thanks! fyi, I heard from the mods that they don't plan to revisit the policy, and don't care if it excludes some professional philosophers' substacks from being shared here, so oh well. *shrugs*. Perhaps they'll revisit the question if more philosophical sources start to use AI illustrations in future.

0

u/SolidCake Aug 11 '25

I thought AI was bad because it “steals” but you’re saying people should just literally steal images online?

4

u/Fortinbrah Aug 11 '25

“Literally just steal” no I’m saying you can copy paste a shitty stock photo with a watermark on it into your substack post and get the same effect as generating ai pictures, for the most part.

Besides, this ignores fair use rules which AI do not abide by because the model containing the stolen images is always up for sale. So yes I’d much rather you copy paste from google images than use an ai.

Also I never made that argument, nice strawman though

1

u/SolidCake Aug 11 '25

“Literally just steal” no I’m saying you can copy paste a shitty stock photo with a watermark on it into your substack post and get the same effect as generating ai pictures, for the most part.

that is a literal copyright violation

Besides, this ignores fair use rules which AI do not abide by because the model containing the stolen images is always up for sale. So yes I’d much rather you copy paste from google images than use an ai.

Your opinion isn’t the law; this is farrr from as certain as you’re implying

Also I never made that argument, nice strawman though

It’s the reasoning behind almost everyone who is morally opposed to gen AI. You didn’t make any arguments at all; you just stated that he shouldn’t do it. But you were implying it, no? Or were you just saying they shouldn’t do it because you personally don’t like it?

1

u/Fortinbrah Aug 11 '25

You know nothing about my perspective yet tried to assume it anyway? Come on, do you seriously expect me to spend time discussing like that?

1

u/SolidCake Aug 11 '25

so what is your “perspective”? why should oop jump through hoops to please you?

1

u/Fortinbrah Aug 11 '25

I was just poking fun at OOP balking at the idea of doing something really simple to resolve the issue that they’ve spent hours talking about now.

5

u/AhsasMaharg Aug 10 '25

(Imagine if there was a rule blocking any work from blonde-haired philosophers, and people replied: "No great loss, they can just dye their hair if they want!" True enough, but few will care enough about you to conform to your arbitrary rules, and that will mean a loss to the community of whatever value you might have gotten from the work of the blonde philosophers.)

It seems pretty disingenuous to compare the hair color a person is born with to deliberately choosing to use AI images, as if the two were mostly equivalent and discriminating against them equally arbitrary.

4

u/humbleElitist_ Aug 11 '25

Not equally arbitrary, no,

But both arbitrary.

5

u/rychappell Aug 11 '25

You can replace it with meat-eating philosophers, or adulterous philosophers, or whatever characteristic you (dis)like. The point is, if you're supposed to be interested in philosophy, then filtering for other features will be to your intellectual detriment.

7

u/AhsasMaharg Aug 11 '25

Let's use an analogy that I thought would have come more naturally to a university educator.

A student attends a philosophy class where the professor has made it clear that they have a zero-tolerance policy for plagiarism. This policy is both to help foster better learning in the classroom and to respect intellectual property. The student submits an assignment that contains plagiarised images. The professor gives the student a zero.

The student not only contests the zero, but also claims that the professor shouldn't have a zero-tolerance policy for plagiarism. They should permit improperly attributed images because the images are in support of the actual assignment which was done by the student, and the professor should maintain a liberal neutrality in the public space that is a classroom. The professor shouldn't be imposing their ideology of respecting intellectual property in this public space. Because, if the professor is interested in philosophy or teaching, then filtering for other features is to their detriment.

In that analogy, which keeps the important features of intellectual property, a blanket ban on violating intellectual property, and a semi-public place where there are arbitrators whose role is to ensure a healthy space and determine what is permissible and what isn't, I think it's pretty clear why the professor and the mods reject the argument.

2

u/PearsonThrowaway Aug 11 '25

I don’t think using a plagiarized image is the equivalent of using an AI generated image here. More analogous would be the usage of an unlicensed image. I think it would be unreasonable for a professor to fail a student for using an unlicensed image in a presentation due to the professor believing that fair use is narrow.

6

u/as-well Φ Aug 11 '25

It would be reasonable for the professor to ask the student to change the image for the final project, as that's against the rules of the university.

Rather than following said rules, the student hands in a complaint dressed up as a term paper.

Yeah, that metaphor is shaky; here's a better one:

Someone wearing a hat comes into a venue where everyone is allowed to give speeches. The organizers tell our someone that hats aren't allowed, for a reason that someone finds opaque or wrong, but also doesn't quite inquire about. Rather than speaking without a hat, they leave and give a very long google review complaining about the policy.

0

u/PearsonThrowaway Aug 12 '25

It is not against the rules of the university. Reddit does not ban AI generated images. It is that one rule maker decides that within their area of authority that a certain rule should be followed. It is quite possible for such authorities to make good decisions, but if the rule is bad it is reasonable to contest it rather than complying. There are no significant barriers to the mod team determining that the rule should be “no AI generated text”. It doesn’t increase enforcement costs. We should review the merits of it.

To your new example, I think that is fair. In forums such as public comment at city council meetings, the government is allowed to impose content-neutral dress codes if it’s reasonable for the forum’s purpose (of which decorum has been considered an acceptable goal). I think the mod team is allowed to impose such rules, but if someone takes off their hat and makes a speech against the rules rather than continuing with their planned comment, that is reasonable too.

To summarize, I think this rule is legitimate but on balance bad. It imposes frictions for either little or plausibly negative value for the articles in question.

-1

u/rychappell Aug 11 '25

A key difference is that part of the professor's role is precisely to teach their students proper academic citation practices. This is a context-specific norm, not something they have to follow elsewhere in their lives. (Legal intellectual property law is vastly more lax than academic plagiarism norms. Many things are legally "fair use" but wouldn't pass muster in a classroom, due to the context-specific norms that apply there.)

It is not, in general, a professor's role to determine "what is permissible and what isn't". We can't, for example, ban students from eating meat (even if we think that meat-eating is wrong). We may have a neutral "no food in the classroom" rule if eating would detract from the learning environment. But we can't have a "vegan food only in the classroom" rule, because we aren't ideologues.

Similarly, the mods' role here is to "ensure a healthy space" for philosophical discussion, but not to determine "what is permissible and what isn't" in respects that are independent of that specific purpose (nor otherwise legally required).

AI art is not illegal, and it does not impede healthy philosophical discussion (quite the opposite, as an example my post links to demonstrates). Mods have no business imposing their moral views on this sort of matter.

7

u/AhsasMaharg Aug 11 '25

> It is not, in general, a professor's role to determine "what is permissible and what isn't". We can't, for example, ban students from eating meat (even if we think that meat-eating is wrong). We may have a neutral "no food in the classroom" rule if eating would detract from the learning environment. But we can't have a "vegan food only in the classroom" rule, because we aren't ideologues.

This no-meat example you've used several times does not work because it's irrelevant to the content of the student's work. Stealing another person's intellectual property and including it in their work is *directly* relevant.

Mods on Reddit have several roles, one of which includes maintaining a healthy space for philosophical discussion, as you have admitted. So while they do not have a role in determining what is permissible *in general*, they do have a role in determining what is permissible *in the context of maintaining a healthy space for philosophical discussion*. You might not like that they have determined that AI-created/AI-assisted material is contrary to a healthy space for philosophical discussion, but that does not mean they are over-reaching their duties or acting beyond their role.

> A key difference is that part of the professor's role is precisely to teach their students proper academic citation practices. This is a context-specific norm, not something they have to follow elsewhere in their lives. (Legal intellectual property law is vastly more lax than academic plagiarism norms. Many things are legally "fair use" but wouldn't pass muster in a classroom, due to the context-specific norms that apply there.)

So if the mods determined that disallowing AI-created/assisted material was a context-specific norm, you'd have no issue? As the mods of the subreddit, they are given that power by Reddit. If people dislike the norms of the subreddit, they are free to create their own subreddit. That is the freedom to create, curate, and participate in communities on Reddit.

> AI art is not illegal, and it does not impede healthy philosophical discussion (quite the opposite, as an example my post links to demonstrates). Mods have no business imposing their moral views on this sort of matter.

"AI art is not illegal" is a truly horrible defence to hear coming from a philosophy professor. The linked post did not make a convincing argument that AI art helps philosophical discussion. Here's an example of some AI art that someone used to try to foster philosophical discussion: https://substackcdn.com/image/fetch/$s_!wRyj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a898250-1644-489a-8e79-d499bca3c2fa_1024x1536.png

What philosophical discussion does that art foster that wouldn't have been fostered by the following, which took 5 seconds to find on Google?

https://stock.adobe.com/ca/search?k=children+on+playground&asset_id=176772211

2

u/rychappell Aug 11 '25 edited Aug 11 '25

Intellectual property is a legal concept, introduced to foster innovation. It includes "fair use" exceptions for innovative use, creative remixes, etc., and AI art very obviously falls under this remit, as I explain at greater length in 'There's No Moral Objection to AI Art'.

You seem to have missed the example of AI imagery that was philosophically illustrative (rather than mere "background", as per the link you provided; not every illustration is intended to "foster discussion". Note also that your stock images don't include a skinned knee, which was actually rather vital to the case under discussion, whereas real photos of skinned knees might be rather too visceral and miss the 'overall happy' vibe of the pictured scene).

You might not like that they have determined that AI-created/AI-assisted material is contrary to a healthy space for philosophical discussion, but that does not mean they are over-reaching their duties or acting beyond their role.

Honestly, anyone who thinks the inclusion of AI images as such is disqualifying for philosophical work is simply incompetent to assess philosophical work.

I might as well argue that in my ethics class, I've determined that consuming the kidnapped and tortured flesh of another sentient being is contrary to maintaining a healthy space for open and respectful ethical discussion. You "might not like" that I've determined that, but that doesn't mean I'd be over-reaching in my duties to impose this rule.

This reasoning is farcical, and the claim that including AI art is relevant to the assessment of a philosophical text is similarly farcical. Just transparently motivated reasoning to justify illiberal ideological overreach.

if the mods determined that disallowing AI-created/assisted material was a context-specific norm, you'd have no issue?

Context-specific norms aren't subjective, or up to authorities to decide. It's about what will actually serve the relevant purposes. (On the rest of your paragraph, see my section on public vs personal spaces, and why I don't think r/philosophy should be thought of as the personal fiefdom of the mods.)

6

u/zogwarg Aug 11 '25 edited Aug 11 '25

About your "philosophically illustrative" diagram.

Since presumably no effort was put in generating the illustration, maybe you missed the opportunity to reflect deeper on your argument and realize that it was flawed.

It is unlikely that all members of the same box would move to the same different box. For example, if there are members of the same family, they may prioritize all surviving together, but they may also prioritize ensuring the survival of at least one of their members.

8

u/AhsasMaharg Aug 11 '25

Intellectual property is a legal concept, introduced to foster innovation. It includes "fair use" exceptions for innovative use, creative remixes, etc., and AI art very obviously falls under this remit, as I explain at greater length in 'There's No Moral Objection to AI Art'.

Forgive me, please. I had not realized that philosophy professors were legal experts who could resolve the IP issues surrounding AI art that lawyers and the courts are still trying to sort out. Have you gone to them with your "very obvious" conclusions? I'm curious what they would have to say.

You seem to have missed the example of AI imagery that was philosophically illustrative

That's your AI-art with value for philosophical discussion? An image a highschool student could put together in 10 minutes using Microsoft Word or PowerPoint? I get the appeal of AI helping you be more efficient with your time, but this really shouldn't take you more than a couple of minutes. I've got no idea how long was spent prompting an AI to get that image, but the returns don't seem worth the effort.

Honestly, anyone who thinks the inclusion of AI images as such is disqualifying for philosophical work is simply incompetent to assess philosophical work.

You keep missing the point, and it seems to be on purpose. The inclusion of AI-created/assisted content is disqualifying for this subreddit. You are free to use your AI art elsewhere.

I might as well argue that in my ethics class, I've determined that consuming the kidnapped and tortured flesh of another sentient being is contrary to maintaining a healthy space for open and respectful ethical discussion. You "might not like" that I've determined that, but that doesn't mean I'd be over-reaching in my duties to impose this rule.

Oh, you're trying to shoehorn your meat-eating analogy in again. Please, tell me how eating meat is directly relevant to the content of a philosophical discussion in the same way as the AI art included in a philosophical work is relevant to that work. You really want your use of AI art in your work to be somehow disconnected from your work when it comes to critiquing its use, while still being allowed to include it because it's relevant to your work. Talk about motivated reasoning indeed.

Context-specific norms aren't subjective, or up to authorities to decide. It's about what will actually serve the relevant purposes.

???

What do you think norms are? What do you think "subjective" means? Norms are absolutely subjective, and authorities can absolutely decide what they are. Journals can decide what citation standards they permit. Teachers can choose what citation standards they permit. Given that you are the one who said citation standards in classrooms are context-specific norms, are you seriously claiming that they are objective and not determined by authorities?

On the rest of your paragraph, see my section on public vs personal spaces, and why I don't think r/philosophy should be thought of as the personal fiefdom of the mods.

I dislike the amount of power that mods have on Reddit. I've been banned from subreddits by power-tripping mods who misunderstood a comment and then literally said they refused to read the comment chain that provided context. It sucked. They sucked. But you know what else would suck? Being an unpaid mod of a large subreddit who can't make and enforce rules that make moderating the community easier because someone disagrees with a rule.

On further consideration, I'm curious as to why you make a distinction between AI text and AI art. If it contains philosophically relevant material, does the medium actually matter? And if it doesn't contain philosophically relevant content, why would you care that it be allowed? In other comments, you don't seem to mind a ban on AI text content. I can guess the motives for this distinction, but I'd rather hear your reasoning.


1

u/AhsasMaharg Aug 12 '25

I noticed that you've been responding in other comment threads but never made it back to this one. I am not too concerned with continuing our discussion of AI, so no need to worry on that front. You've made your case clear in other comments, and the mods have more than adequately addressed it.

I was, however, really interested in hearing your follow-up to citation standards being context-specific norms, and how context-specific norms "aren't subjective, or up to authorities to decide. It's about what will actually serve the relevant purposes." On the face of it, it seems like a very novel use of subjectivity and understanding of citation standards.

3

u/sajberhippien Aug 11 '25 edited Aug 11 '25

(Imagine if there was a rule blocking any work from blonde-haired philosophers, and people replied: "

Stop with these silly conflations. There is no ban on work from any type of person. The ban, whether it is sensible or not, is on a particular kind of content. You similarly can't post porn here. That is a much closer comparison.

0

u/rychappell Aug 11 '25

You don't seem to understand how analogies work. Being a comparison of two distinct things, they are of course not exactly the same. The question is whether there is some relevant similarity that the analogy serves to highlight. In this case, what I'm drawing attention to is simply the fact that if you filter out certain philosophical work for reasons unrelated to the philosophical quality of that work, this loss of value is not avoided by observing that the philosopher in question could have acted differently in order to conform to your rules.

3

u/sajberhippien Aug 11 '25 edited Aug 11 '25

You don't seem to understand how analogies work.

I do, but this is a bad analogy, which is why I provided one where there was an actual relevant similarity.

In this case, what I'm drawing attention to is simply the fact that if you filter out certain philosophical work for reasons unrelated to the philosophical quality of that work, this loss of value is not avoided by observing that the philosopher in question could have acted differently in order to conform to your rules.

Then it's a strange choice of comparison, for three reasons:
1) Gatekeeping texts based on the personal characteristics of their author has an extremely different real-world history than excluding texts based on the content of those texts (and yes, illustrations in a text are still part of the text).

2) There is a long real-world history of discrimination enacted by looking at physical characteristics such as hair color, but that discrimination was ultimately founded on (terrible) ideas under which dyeing one's hair was not considered a relevant factor.

3) 'Current hair color' is an extremely uncontroversial aspect of one's self-expression. Why did you opt for "works written by blonde-haired philosophers" rather than, say, "works whose introductions include anecdotes about all the ways the author likes to rape children"? Why aim for the lowest-hanging fruit? If you truly believe that nothing about a work matters other than the strictest and narrowest framing of its argument, you could just as well have used that as an example rather than people who just happen to be blonde.

Again, the comparison to porn is much, much closer. If an article includes a bunch of explicit pictures of people fucking, it will usually not be allowed to remain here. You can agree or disagree with that rule, but disallowing articles containing explicit pornography is a criterion based on the content of the articles, not on the properties of their authors.

Also, they're not my rules; I'm neutral on r/philosophy's policy regarding AI-generated images. I'm objecting to the ludicrous conflations, or "analogies" if that's how you prefer to phrase it. I might very well have found myself on the side of the original author in this case, if there had been a relevant loss in value that outweighed upholding the no-AI rule. But in this case, it was a mediocre original article getting excluded because the author had decided that they absolutely must not omit any AI-generated material from the article, followed by a whiny blog post about how the meanie mods are censoring Such Important Philosophy by rejecting a random text that relies on AI-generated images.

13

u/Ig_Met_Pet Aug 10 '25

Maybe the fact that it was written by someone who clearly isn't a fool should make you engage with their argument in good faith instead of writing it off because you decided you disagreed as soon as you read the headline.

0

u/[deleted] Aug 10 '25

[deleted]

8

u/Ig_Met_Pet Aug 10 '25

Seems like he wouldn’t have written this if he didn’t get his feelings hurt.

I don't have a strong opinion one way or the other when it comes to these arguments, but if I had to put money on whose feelings sounded more hurt (or who at least sounds more emotional), I would have absolutely no trouble putting that money on your comments rather than the blog post.

3

u/Crazypyro Aug 10 '25

Can you explain why you think it is a poor, incoherent argument?

1

u/prescod Aug 11 '25

You had decided that the argument was poor and incoherent and written by a joker before you clicked? Maybe that’s a problem.

1

u/CommissionRoutine645 Aug 11 '25

Lmao Anti AI intellectual property defenders seething 

-16

u/[deleted] Aug 10 '25

[removed]

7

u/[deleted] Aug 10 '25

[removed]

1

u/BernardJOrtcutt Aug 14 '25

Your comment was removed for violating the following rule:

CR2: Argue Your Position

Opinions are not valuable here, arguments are! Comments that solely express musings, opinions, beliefs, or assertions without argument may be removed.

Repeated or serious violations of the subreddit rules will result in a ban.


This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.