As AI-generated content floods online forums, a new Cornell Tech study reveals how Reddit communities are taking regulation into their own hands: the number of subreddits with AI-related rules more than doubled in just 16 months, reshaping platform governance from the ground up.

Researchers at Cornell Tech have released a new report detailing how Reddit communities are evolving their policies to address a surge in AI-generated content.
One of the most striking discoveries is the rapid increase in subreddits with rules governing AI use. According to the research, posted to the arXiv preprint* server, the number of subreddits with AI rules more than doubled over 16 months, from July 2023 to November 2024.
The team collected metadata and rules from 300,000 public communities at two points in time, July 2023 and November 2024. The researchers will present a paper with their findings at the Association for Computing Machinery's CHI Conference on Human Factors in Computing Systems.
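The article does not describe the collection pipeline in detail, but Reddit does expose each community's rules through a public JSON endpoint. The Python sketch below shows one way such rule text could be fetched and screened for AI-related language; the endpoint usage, keyword pattern, and function names are illustrative assumptions, not the authors' method.

```python
import re
import requests

# Crude, illustrative keyword pattern for spotting AI-related rules.
# The study's actual classification scheme may differ.
AI_PATTERN = re.compile(
    r"\b(ai|a\.i\.|artificial intelligence|chatgpt|gpt|ai-generated|midjourney)\b",
    re.IGNORECASE,
)

def fetch_rules(subreddit: str) -> list[dict]:
    """Fetch the public rule list for a subreddit via Reddit's JSON endpoint."""
    url = f"https://www.reddit.com/r/{subreddit}/about/rules.json"
    resp = requests.get(url, headers={"User-Agent": "ai-rules-sketch/0.1"}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("rules", [])

def has_ai_rule(rules: list[dict]) -> bool:
    """Heuristically flag whether any rule mentions AI-generated content."""
    for rule in rules:
        text = f"{rule.get('short_name', '')} {rule.get('description', '')}"
        if AI_PATTERN.search(text):
            return True
    return False

if __name__ == "__main__":
    rules = fetch_rules("art")  # example community; any public subreddit works
    print("AI-related rule found:", has_ai_rule(rules))
```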
"This is important because it demonstrates that AI concern is spreading in these communities. It raises the question of whether or not the communities have the tools they need to effectively and equitably enforce these policies," said Travis Lloyd, a doctoral student at Cornell Tech.
The study found that AI rules are most common in subreddits focused on art and celebrity topics. These communities often share visual content, and their rules frequently address concerns about the quality and authenticity of AI-generated images, audio, and video. Larger subreddits were also significantly more likely to have these rules, reflecting growing concerns about AI among communities with larger user bases.
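As a hedged illustration of the size finding (not the paper's actual analysis), one could bucket communities by subscriber count and compare the share carrying an AI rule. The column names and size cutoffs below are assumptions about how such a dataset might be organized, not the released dataset's schema.

```python
import pandas as pd

def ai_rule_prevalence_by_size(df: pd.DataFrame) -> pd.Series:
    """Share of communities with an AI rule in each subscriber-count bucket.

    Assumes hypothetical columns 'subscribers' (int) and 'has_ai_rule' (bool).
    """
    bins = [0, 1_000, 10_000, 100_000, 1_000_000, float("inf")]
    labels = ["<1K", "1K-10K", "10K-100K", "100K-1M", ">1M"]
    size_bucket = pd.cut(df["subscribers"], bins=bins, labels=labels)
    # Mean of a boolean column gives the fraction of communities with an AI rule.
    return df.groupby(size_bucket, observed=True)["has_ai_rule"].mean()
```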
"This paper uses community rules to provide a first view of how our online communities are contending with the potential widespread disruption that is brought by generative AI," said co-author Mor Naaman, professor at Cornell Tech. "Looking at actions of moderators and rule changes gave us a unique way to reflect on how different subreddits are impacted and are resisting, or not, the use of AI in their communities."
As generative AI evolves, the researchers urge platform designers to prioritize the concerns about quality and authenticity that communities voiced in the data. The study also highlights the importance of "context-sensitive" platform design choices that account for how different types of communities take varied approaches to regulating AI use.
For example, the research suggests that larger communities may be more inclined to use formal, explicit rules to maintain content quality and govern AI use. In contrast, closer-knit, more personal communities may rely on informal methods, such as social norms and expectations.
"The most successful platforms will be those that empower communities to develop and enforce their own context-sensitive norms about AI use. The most important thing is that platforms do not take a top-down approach that forces a single AI policy on all communities," Lloyd said. "Communities need to be able to choose for themselves whether they want to allow the new technology, and platform designers should explore new moderation tools that can help communities detect the use of AI."
By making their dataset public, the researchers aim to enable future studies that can further explore online community self-governance and the impact of AI on online interactions.
For additional information, see the related Cornell Chronicle story.

*Important notice: arXiv publishes preliminary scientific reports that are not peer-reviewed and, therefore, should not be regarded as definitive, used to guide development decisions, or treated as established information in the field of artificial intelligence research.
Journal reference:
- Preliminary scientific report. Lloyd, T., Gosciak, J., Nguyen, T., & Naaman, M. (2024). AI Rules? Characterizing Reddit Community Policies Towards AI-Generated Content. arXiv. https://arxiv.org/abs/2410.11698