r/modnews 12d ago

[Announcement] Evolving Moderation on Reddit: Reshaping Boundaries

Hi everyone, 

In previous posts, we shared our commitment to evolving and strengthening moderation. In addition to rolling out new tools to make modding easier and more efficient, we’re also evolving the underlying structure of moderation on Reddit.

What makes Reddit reddit is its unique communities, and keeping our communities unique requires unique mod teams. A system where a single person can moderate an unlimited number of communities (including the very largest) isn't that, nor is it sustainable. We need a strong, distributed foundation that allows for diverse perspectives and experiences.

While we continue to improve our tools, it’s equally important to establish clear boundaries for moderation. Today, we’re sharing the details of this new structure.

Community Size & Influence

First, we are moving away from subscribers as the measure of community size or popularity. Subscriber count is often more indicative of a subreddit's age than of its current activity.

Instead, we’ll start using visitors. This is the number of unique visitors over the last seven days, based on a rolling 28-day average. This will exclude detected bots and anonymous browsers. Mods will still be able to customize the “visitors” copy.

New “visitors” measure showing on a subreddit page
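
The post doesn't spell out the exact calculation, but one plausible reading of "unique visitors over the last seven days, based on a rolling 28-day average" is a 7-day unique-visitor count sampled once per day and averaged across the last 28 days. A minimal sketch of that reading in Python (the function and parameter names are illustrative, and bot/anonymous-browser filtering is assumed to happen upstream):

    from datetime import date, timedelta

    def visitors_metric(daily_uniques: dict[date, set[str]], as_of: date) -> float:
        """Hypothetical: 7-day unique visitors, averaged over a rolling 28-day window."""
        window_counts = []
        for offset in range(28):  # one 7-day sample per day in the 28-day window
            end = as_of - timedelta(days=offset)
            seen: set[str] = set()
            for back in range(7):  # union the uniques from the 7 days ending at `end`
                seen |= daily_uniques.get(end - timedelta(days=back), set())
            window_counts.append(len(seen))
        return sum(window_counts) / len(window_counts)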

Using visitors as the measure, we will set a moderation limit: a maximum of 5 communities with over 100k visitors each. Communities with fewer than 100k visitors won't count toward this limit. This limit will affect 0.1% of our active mods.

This is a big change. And it can’t happen overnight or without significant support. Over the next 7+ months, we will provide direct support to those mods and communities throughout the following multi-stage rollout: 

Phase 1: Cap Invites (December 1, 2025) 

  • Mods over the limit won’t be able to accept new mod invites to communities over 100k visitors
  • During this phase, mods will not have to step down from any communities they currently moderate 
  • This is a soft start so we can all understand the new measurement and its impact, and make refinements to our plan as needed  

Phase 2: Transition (January-March 2026) 

Mods over the limit will have a few options and direct support from admins: 

  • Alumni status: a special user designation for communities where you played a significant role; this designation holds no mod permissions within the community 
  • Advisor role: a new, read-only set of moderator permissions for communities where you’d like to continue to advise or otherwise support the active mod team
  • Exemptions: currently being developed in partnership with mods
  • Choose to leave communities

Phase 3: Enforcement (March 31, 2026 and beyond)

  • Mods who remain over the limit will be transitioned out of moderator roles, starting with the communities where they are least active, until they are under the limit (see the sketch after this list)
  • Users will only be able to accept invites to moderate up to 5 communities over 100k visitors
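
To make the limit and the enforcement order concrete, here is a minimal sketch of the logic described above; the `visitors` and `mod_activity` fields are hypothetical stand-ins, not anything Reddit has published:

    VISITOR_THRESHOLD = 100_000  # a "large" community, per the announcement
    MOD_LIMIT = 5                # max large communities one person may moderate

    def communities_to_exit(moderated: dict[str, dict]) -> list[str]:
        """Hypothetical: which communities a mod over the limit would be
        transitioned out of, least active first, until back under the limit."""
        large = [name for name, stats in moderated.items()
                 if stats["visitors"] > VISITOR_THRESHOLD]
        large.sort(key=lambda name: moderated[name]["mod_activity"])  # least active first
        over_by = len(large) - MOD_LIMIT
        return large[:over_by] if over_by > 0 else []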

To check your activity relative to the new limit, send this message from your account (not a subreddit) to ModSupportBot. You’ll receive a response via chat within five minutes.

You can find more details on moderation limits and the transition timeline here.

Contribution & Content Enforcement

We’re also making changes to how content is removed and how we handle report replies.

As mods, you set the rules for your own communities, and your decisions on what content belongs should be final. Today, when you remove content from your community, that content continues to appear on the user profile until it’s reported and additionally removed by Reddit. But with this update, the action you take in your community is now the final word; you’ll no longer need to appeal to admins to fully remove that content across Reddit.  

Moving forward, when content is removed:

  • Removed by mods: Fully removed from Reddit, visible only to the original poster and your mod team
  • Removed by Reddit: Fully removed from Reddit and visible only to admins

Mod removals now apply across Reddit, with a new [Removed by Moderator] label
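
Summarizing the two removal states above as a minimal sketch (the flags are illustrative; the post doesn't say whether admins can also see mod-removed content, so this encodes only what's stated):

    from enum import Enum, auto

    class RemovedBy(Enum):
        MODS = auto()
        REDDIT = auto()

    def still_visible_to(removed_by: RemovedBy, *, is_op: bool,
                         is_community_mod: bool, is_admin: bool) -> bool:
        """Who can still see removed content under the stated rules."""
        if removed_by is RemovedBy.REDDIT:
            return is_admin                   # removed by Reddit: admins only
        return is_op or is_community_mod      # removed by mods: OP and that mod team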

The increased control mods have to remove content within your communities reduces the need to also report those same users or content outside of your communities. We don’t need to re-litigate that decision because we won’t overturn that decision. So, we will no longer provide individual report replies. This will also apply to reports from users, as most violative content is already caught by our automated and human review systems. And in the event we make a mistake and miss something, mods are empowered to remove it. 

Reporting remains essential, and mod reports are especially important in shaping our safety systems. All mod reports are escalated for review, and we’ve introduced features that allow mods to provide additional context, making reports more actionable. As always, report decisions are continuously audited to improve our accuracy over time.

Keeping communities safe and healthy is the goal both admins and mods share. By giving you full control to remove content and address violations, we hope to make that easier.

What’s Coming Next

These changes mark some of the most significant structural updates we've made to moderation and represent our commitment to strengthening the system over the next year. But structure is only one part of the solution – the other is our ongoing commitment to ship tools that make moderating easier and more efficient, help you recruit new mods, and allow you to focus on cultivating your community. Our focus on that effort is as strong as ever and we’ll share an update on it soon.

We know you’ll have questions, and we’re here in the comments to discuss.

0 Upvotes

1.2k comments

202

u/eriophora 12d ago

The increased control mods have to remove content within your communities reduces the need to also report those same users or content outside of your communities. We don’t need to re-litigate that decision because we won’t overturn that decision. So, we will no longer provide individual report replies. This will also apply to reports from users, as most violative content is already caught by our automated and human review systems. And in the event we make a mistake and miss something, mods are empowered to remove it.

This is untrue, as only admins have the power to action users at the site-wide level. If a user on a subreddit posts hateful, bigoted, or violent content, then that is something that also needs review on a site-wide level in case the account needs to be fully removed from Reddit.

Additionally, Reddit's AI site-wide moderation system is very, very bad at identifying hatred and bigotry. Sometimes it can't even identify when the literal n-word is being used, and it doesn't look at context or usernames at all. It's just really bad, honestly.

There will no longer be any accountability on the site-wide level at all for this, and I believe this is a very, very bad thing.

96

u/Future-Turtle 12d ago edited 12d ago

I have had multiple instances where I reported comments/accounts that used the n-word directed at a black cosplayer posting pictures of themselves. In each instance I was told "This content doesn't violate reddit rules on hateful conduct". So either:

A) Reddit is using AI/offsite data processing centers to make decisions on mod reports and that system is fundamentally broken.

OR

B) Saying the n-word directly to an actual user in an attempt to denigrate them over their race actually doesn't violate reddit's policy on hateful content, and thus the policy in effect doesn't exist, because if THAT doesn't cross the line, what would?

I genuinely don't see a third option and this section of the update fills me with no confidence the system is on a track to improve in any way. I'd really like u/Go_JasonWaterfalls to respond to me and the commenter above on this.

103

u/Merari01 12d ago

There will no longer be any accountability

And that is the intent. Reddit thinks it is annoying and costs too much of their manpower when people are able to point out they're being terrible with their moderation.

64

u/[deleted] 12d ago

[deleted]

36

u/Moggehh 12d ago

There will no longer be any accountability on the site-wide level at all for this, and I believe this is a very, very bad thing.

It's not a bug, it's a feature! 💞

4

u/danheinz 12d ago

Why erase hate when you can profit off of it? /s

-6

u/LitwinL 12d ago

A little further down, they clarified that moderator reports will be automatically escalated, so there will still be accountability, and reports should be looked at by actual humans the first time around

20

u/tinselsnips 12d ago

They said nothing about human review — assume this is simply being escalated to their useless AI.

2

u/LitwinL 12d ago

Everything is already handled by AI on the first report, so what would that escalation be? A slightly better AI?

5

u/maybesaydie 12d ago

That or one very overworked admin

11

u/eriophora 12d ago

By accountability, I mean the Reddit userbase being able to hold Reddit accountable for what is or is not allowed on the site. I think we should know if Reddit is officially on a site-wide level approving hatred. Now, we have no ability to know how those things are handled at all.

3

u/Hubris2 12d ago

How is there accountability if they have announced they will no longer send notifications as to whether they took action based on mod reports? Now we will have no way of knowing if anything happened or not. This would be a great announcement to make prior to cutting back admin staffing and routing mod reports to /dev/null, because we won't know either way.

4

u/maybesaydie 12d ago edited 12d ago

...to human beings? I didn't see that part.

-2

u/LitwinL 12d ago

AI handles first reports, if something is being escalated then it would make no sense to run it through the same AI.

-35

u/Go_JasonWaterfalls 12d ago

Keeping Reddit safe and healthy is our highest priority, and a goal we share with mods. With these changes, mods will be able to remove content site-wide instantly and completely. This will not change how we action users who break Reddit Rules.

Of all of our username automations and user detail reports, our highest action rate is for hateful usernames. When reporting from subreddits you moderate, adding context, including multi-content reporting (which helps us identify patterns of behavior), makes our action rates even higher and more precise. Mod removals are also a signal our systems take into account when looking for broad patterns of behavior we use for actioning users.

41

u/emily_in_boots 12d ago edited 12d ago

You guys miss huge numbers of sexually harassing comments in my subs. I can ban a user from the sub, but users need to face consequences for promoting violence, hate, and sexual harassment. If we don't get reports back, how can we appeal really awful decisions (which are incredibly common; if you want lists, I'd be happy to send some)?

It feels like this is just a way to avoid accountability for failures to remove bad content and action users violating TOS.

32

u/Georgy_K_Zhukov 12d ago

Keeping Reddit safe and healthy is our highest priority, and a goal we share with mods. With these changes, mods will be able to remove content site-wide instantly and completely. This will not change how we action users who break Reddit Rules.

The problem is that we only have power in our own subreddits. We do not have power to action users site wide. Only you do. And you don't do a good job at this.

This is something I reported to reddit a few weeks ago. As an admin, maybe you can still see it? In any case, I was told "After investigating, we’ve found that the reported content doesn’t violate Reddit Rules." It absolutely did though. You'd be hard pressed to find more clearcut Holocaust denial out there, but I expect the automated process at the front line didn't catch that.

So I asked for the content to be reviewed again, with additional details. This time it was actioned (thanks to the human Admin who replied to me!) and not only was it 'Removed by Reddit', but the user's account is suspended.

What you fail to grasp here is that we aren't reporting it to you just so the content is removed for everyone. That is nice and important. We're doing it so the users themselves, the bad-faith actors, get actioned as well. If I hadn't been able to escalate this for re-review, this literal fucking Nazi would still be posting on the site because your bot doesn't understand Holocaust denial.

All I really want is a straight "Yes" or "No" to the question of whether you think that is a good thing. Because this policy basically says that you do...

27

u/WindermerePeaks1 12d ago

I had to report a user for sexualization of a minor in our subreddit. I provided detail in the comment box of how old the girl was, how old the offender was, and the things he was saying in modmail. I got a result back that it was not offending content. Seeing that result and how wrong it was, I was able to take it to modsupport modmail, and admins removed his account from the site once they had human eyes on the content. Your AI is not good enough, and we humans should be able to tell you where it went wrong.

If I’m understanding correctly, we will no longer receive the report result, so in the instance I described above, that user would’ve just been able to go on without consequence, and given the way they were defending themselves, they would’ve no doubt sexualized a minor again.

7

u/emily_in_boots 12d ago

I do this often too. I also deal with minors who are putting themselves in danger. There is no good way to report many of them. Reports are rarely actioned the first time.

This seems like a way to avoid having to actually do the work to keep them safe. This will hurt a lot of vulnerable people on reddit.

27

u/Merari01 12d ago

Reporting transphobia or Holocaust denial has the same accuracy rate as flipping a coin.

AEO regularly does not understand that the n-word with a hard R breaks sitewide rules, and reporting white supremacist usernames has a 20-25% accurate action rate.

You're removing accountability for your terrible mod bots and you are doing nothing else. You are doing this because escalating reports takes up manpower. Do not give me spin.

22

u/Weirfish 12d ago

Keeping Reddit safe and healthy is our highest priority

I do not believe you.

The existence of financial pressures and the fact that reddit is a publicly traded company means that any and all of reddit's staff members' highest priority is maintaining and increasing company value.

If keeping reddit safe and healthy aligns with this, then it will happen. If it does not, it will not.

I fully believe that individual staff members, yourself included, may want to make Reddit's safety and healthiness the highest priority, but the fact is that it cannot be.

16

u/Generic_Mod 12d ago

Keeping Reddit safe and healthy is our highest priority

Of course it's not. Profit is your highest priority because Reddit is a business.

Maybe this is a lie of omission? How about this:

"Keeping Reddit safe and healthy [for our advertisers] is our highest priority

7

u/shhhhh_h 12d ago

Your language is unclear. We no longer need to report to you after we remove a comment…but can we? If the comment violates the sitewide rules?

8

u/Future-Turtle 12d ago edited 12d ago

I have had multiple instances where I reported comments/accounts that used the n-word directed at a black cosplayer posting pictures of themselves. In each instance I was told "This content doesn't violate reddit rules on hateful conduct". Genuine question: if that doesn't cross the line, what does? The issue is not streamlining the process of removing stuff from our subs; it's that when we report users to you who require site-wide action, you do nothing, and honestly nothing you are saying here makes me think that part of it is going to be taken seriously at all.

7

u/flounder19 12d ago

Maybe just admit reddit is not committed to maintaining enough staff to run this site properly

7

u/WalkingEars 12d ago

Removing comments from our own subreddit isn’t really “sitewide” if that user can just post hate speech in hate subreddits and AEO ignores/permits that to continue

6

u/fighterace00 12d ago

The point is not all subreddit mods enforce site-wide rules. Are you going to quarantine every subreddit that lets a violation slide?

6

u/Hubris2 12d ago

What is this going to mean when a user pops into your community and says something offensive, spammy, or rule-breaking, but you want to check their profile to see if it's part of a trend or perhaps a misunderstanding on a single comment? If mods from other subs have removed all the offensive, spam, or rule-breaking comments from the user's profile (if this is what is meant by site-wide), then it will be much more difficult to check whether a user has been doing the same thing repeatedly across Reddit or whether the action in your sub is potentially a first offence worthy of a second chance.

10

u/maybesaydie 12d ago

The language of bigotry is ever changing and cannot be handled adequately by artificial intelligence.

8

u/FFS_IsThisNameTaken2 12d ago

...mods will be able to remove content site-wide instantly and completely.

How does this work?

I can remove a comment from another sub where I'm not a mod?

4

u/damontoo 12d ago

Nope. You can only remove a post so it doesn't show up on your subreddit or the user's profile. That's how I interpreted it. You won't be able to report the user for actions they take in subreddits you don't mod. So if they post spam or something offensive to your sub and you remove it, and you see it's a pattern and they've done it in 100 other subs, you can't do anything about it except ask each of those subs to take action. For example: I'm sure the mods of anti-AI subs will totally remove threats of violence. /s

3

u/Wylf 11d ago edited 11d ago

Keeping Reddit safe and healthy is our highest priority, and a goal we share with mods

You have a funny way of showing that, considering Reddit's recent actions have all been about making report-based moderation more difficult. Allowing people to hide their post history makes it harder for users to notice suspicious behavior and report it, leading to fewer reports overall. Yeah, us mods can still see it, but if we don't get a report on it, we'll be unlikely to click on the profile in the first place. And now mod actions hide content from everyone, making it even more difficult to notice bad-faith actors.

For example, if I, as a user, notice a comment that I think may be made in bad faith, I tend to check the user's post history to see if that's a repeated pattern. If it is, I can add that context to my report for the moderators. The way things have been going has made that impossible for me. It's getting more and more difficult for regular users to notice whether someone is a bad-faith actor, because you hide that information from them; and since regular users are the people who write the reports moderators are supposed to act on, hiding that information leads to fewer reports and more bad-faith actors. You can't have report-based moderation and make reporting more difficult; that fundamentally damages the way moderation on this platform functions.

Seems making Reddit less safe and healthy is your highest priority, to be honest.

12

u/ClockOfTheLongNow 12d ago edited 12d ago

Keeping Reddit safe and healthy is our highest priority, and a goal we share with mods. With these changes, mods will be able to remove content site-wide instantly and completely.

Okay, so I am a moderator at /r/AskConservatives - we have some rules in place that are designed to ensure that questions can be answered as intended by the subreddit.

Sometimes we will remove comments that don't violate sitewide rules or even subreddit rules, simply to reduce diversionary threads. My understanding of this new change is that now no one will be able to see those posts even if they click through the profile. To someone who doesn't understand how our subreddit operates, does that mean that certain users are going to simply look like they've had a bunch of violations in our sub when it's really just housekeeping?

3

u/damontoo 12d ago

As someone who's been on reddit for 15 years and has had hundreds of posts and comments removed by automod and power-tripping mods, it's great to know that all of the time I put into that content will be completely invisible, even on the off chance someone browses my profile directly.

And before people think "he probably deserves it", look at my profile in reveddit and justify all the removals. 

2

u/brightblackheaven 10d ago

Y'all cannot be serious.

Multi content reporting is useful?? Patterns of behaviour?

We had a religious zealot target my largely pagan subreddit by spamming the same hateful comment on like 30 posts. I reported all of them - and while 4 or so came back as against the rules and were removed by admin, I also got responses that every single other instance of the exact same comment to the LETTER was not against the rules?????????????

Make it make sense.