The Rules of
Engagement
The policies and principles on how to engage with Dis.course.
I. Our safety imperatives
II. Measures to achieve them
III. Reporting, suspension, and bans
IV. Creating an environment that deters bad behavior
V. Appendix
I. Our safety imperatives
The Rules of Engagement are designed to:
1. Keep disturbing and explicit content off the site
2. Keep away bots and bad actors
3. Prevent disinformation
4. Maintain civility, positivity
Why?
Dis.course is meant to be a positive, fun, productive place. No one wants their experience marred by running into graphic or grisly content. And we don’t want bots or disinformation distorting the discourse – they don’t represent the people’s perspective or reality.
We have a comprehensive strategy to address these issues. Our goal is to do better than what's out there.
II. Measures to achieve them
1. Keeping disturbing and explicit content off the site
AI trained to recognize and detect banned content.
It can identify offending content and will immediately remove it.
User reporting
If anything slips through the cracks, users can report this content and it will be moved to the front of our reporting queue so we can get it off the site ASAP.
See appendix section 2 for full list of banned content.
A note: if you feel that banned imagery is important to making your point, you’ll have to find another way. Users don’t need to see dead bodies or intimate moments – you can use your words and insight to bring these concepts to life.
2. Keeping away bots and bad actors
Bots create noise, and only serve the interests of the people who programmed them – not our public discourse.
We don’t want you dealing with them in your inbox, and we don’t want them diluting the perspectives of our users by spamming the site. To keep the bots (and bad actors) at bay, we have specific guidelines around:
Account creation
- All accounts are associated with a single (non-VoIP) phone number
- Data checks in place to prevent account duplication and spoofing
- Must have an American IP address
Posting and activity frequency
We also have standards around the frequency of posting and activity to avoid spamming. Tiered user access based on platform interaction limits bad behavior from new accounts while encouraging real participation: established accounts have access to more features than newer accounts, and duplicate posts/messages face tighter limits than unique ones.
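The tiered-access idea above can be sketched roughly as follows. This is a toy model: the tier names, age thresholds, hourly limits, and the duplicate-post penalty are all illustrative assumptions, not our actual production values.

```python
from dataclasses import dataclass

# Illustrative tiers and limits; the real thresholds are internal and may differ.
TIERS = {
    "new":         {"min_age_days": 0,  "posts_per_hour": 5},
    "established": {"min_age_days": 30, "posts_per_hour": 30},
}

@dataclass
class Account:
    age_days: int
    posts_this_hour: int = 0
    recent_posts: tuple = ()  # recent post bodies, kept for duplicate detection

def tier_for(account: Account) -> str:
    """Established accounts unlock higher limits than new ones."""
    if account.age_days >= TIERS["established"]["min_age_days"]:
        return "established"
    return "new"

def may_post(account: Account, body: str) -> bool:
    """Allow the post only if the account is under its tier's hourly limit.

    Duplicate posts face a much tighter limit than unique ones.
    """
    limit = TIERS[tier_for(account)]["posts_per_hour"]
    if body in account.recent_posts:
        limit = max(1, limit // 5)  # assumed penalty factor for duplicates
    return account.posts_this_hour < limit
```

A new account hits its cap after a handful of posts per hour, while a repeated message trips the tighter duplicate limit even for an established account.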
3. Preventing disinformation
We conceptualize "disinformation" as statements or content that claim scientific/conceptual authority without evidence or in the face of established conflicting evidence, in an active attempt to mislead.
Detecting and removing disinformation from the site
Artificial intelligence
We’re investing in “public interest algorithms” that help identify and publicize fake news posts.
Our inspo: computer scientist William Yang Wang used surface-level linguistic patterns to create a public database of 13K statements labeled for accuracy that can be analyzed to identify problematic stories and users. In his words, “when combining meta-data with text, significant improvements can be achieved for fine-grained fake news detection.”
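As a toy illustration of the "surface-level linguistic patterns plus metadata" idea, a scorer might combine simple text cues with author history. Everything here is a made-up assumption for illustration – the patterns, weights, and threshold are not Wang's model and not our production algorithm.

```python
import re

# Invented surface-level cues with invented weights, for illustration only.
SUSPECT_PATTERNS = {
    r"!{2,}": 1.0,                      # runs of exclamation marks
    r"\b[A-Z]{4,}\b": 0.5,              # shouting in all caps
    r"(?i:you won'?t believe)": 2.0,    # clickbait phrasing
}

def suspicion_score(text: str, author_flagged_count: int) -> float:
    """Combine text cues with metadata (the author's prior confirmed incidents)."""
    score = 0.0
    for pattern, weight in SUSPECT_PATTERNS.items():
        score += weight * len(re.findall(pattern, text))
    # Metadata term: authors with prior confirmed incidents score higher.
    score += 1.5 * author_flagged_count
    return score

def needs_review(text: str, author_flagged_count: int, threshold: float = 3.0) -> bool:
    """Flag content for human review once its score crosses the threshold."""
    return suspicion_score(text, author_flagged_count) >= threshold
```

Note that a score like this only prioritizes content for human review – it doesn't decide truth on its own.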
User reporting
Our users will be able to report incidents they believe constitute disinformation. Confirmed incidents will result in the content being removed from the site. Offending users will be informed of why their content violates our policies.
Alerting other users to disinformation
If a user has had 2+ confirmed incidents of disinformation, a caution flag will appear on their profile and posts.
Standing up to disinformation more broadly
Users are encouraged to “pin rebuttals” on content that they think is misleading – sharing why they think the argument is misleading or in bad faith, or in conflict with established science/research.
Users must include sources when citing statistics/research and repeated failure to do so may result in an offense.
4. Maintaining civility
The internet can be a cesspool, and we don’t want that for Dis.course. To that end, users will be prevented from entering what we call “banned language” into the app. It’s a long list – we’re pretty creative. We’re not here to limit your free speech, just to limit the bad vibes.
Banned language includes:
- Threats to other users
- Discriminatory language (race, gender, sexual orientation, physical ability, religion, national origin, military status, etc.)
- Violent, sexual, or criminal threats
- Excessively profane or vulgar language
III. Reporting, suspension, and bans
Reporting
Users will be able to report other users, posts, or comments that violate our policies around banned content and disinformation. Our handling and processing of user reports is an ongoing, evolving effort. Users can currently report content for the reasons below:
- Nudity or sexual activity
- Discriminatory or hateful language/symbolism
- Violence, self-harm, or death
- Bullying or harassment
- Selling products or services
- IP violations
- Spamming or scamming
- Disinformation
- Fraud or impersonation
Conditions for suspensions and bans
Each time a report is filed, we review if that incident violates the Rules of Engagement. If it does, it gets registered as an offense. If a user commits 3 offenses, they will be suspended for one week. If they commit 2 more offenses after that, they will be banned.
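The escalation ladder above can be sketched as a small state machine. This is a toy model – the real system also handles suspension timing, appeals, and the immediate bans for banned content, all of which are omitted here.

```python
from dataclasses import dataclass

SUSPEND_AT = 3  # the third offense triggers a one-week suspension
BAN_AT = 5      # two more offenses after that (five total) triggers a ban

@dataclass
class ModerationRecord:
    offenses: int = 0
    status: str = "active"  # "active" -> "suspended" -> "banned"

    def register_offense(self) -> str:
        """Escalate per the ladder above: 3 offenses -> suspended, 5 -> banned."""
        self.offenses += 1
        if self.offenses >= BAN_AT:
            self.status = "banned"
        elif self.offenses == SUSPEND_AT:
            self.status = "suspended"
        elif self.status == "suspended":
            self.status = "active"  # suspension served; offenses keep accruing
        return self.status
```

Walking one account through five confirmed offenses yields active, active, suspended, active, banned – matching the 3-then-2 ladder described above.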
Users who post content that violates the guidelines in appendix section 2 (“Banned content and language”) will be immediately banned.
We want to be as fair as possible while protecting the integrity of Dis.course. If an offense is committed, the user will be informed of the reasoning behind it. If the user disagrees with an offense, they can appeal it.
When registering an account with Dis.course, users will have to agree to The Pact – stating that they are aware of and accept the conditions for suspension and banning per the above.
IV. Creating an environment that deters bad behavior
Our Rules of Engagement are designed to not only maintain a positive and safe environment, but to deter users from trashing it in the first place! We make it difficult to create a new account, so you've only got one shot on the platform, and we assume most people won't want to waste it (though of course, bad actors will always exist).
And to help with moderation, each hub within Dis.course will have its own set of “peacemakers.” Peacemakers (PMs) have moderator privileges in our hubs, and are incredibly dedicated to the subject areas they oversee. Each hub will have its own additional set of content guidelines established by its peacemakers, who will be on the lookout for offenders.
We don’t want to tell people what to do. We just want to maintain an environment that leaves all users feeling good when they log off. It’s possible to have a social media platform that truly does good, and these rules are in place so Dis.course can be that platform.
Beyond the rules, here’s what we encourage...
✨ Help us make politics colorful, accessible, empowering, and fun. American politics is stuffy and boring. It can and should be more than that! It should be by and for the people – it should reflect who we are: our creativity, humor, and insight.
🤓 Be curious and humble in what you don’t know! You don’t have to be an expert or even know anything about politics to engage with Dis.course. Just come with an open mind, and click around a bit – we think you’ll like what you find.
💖 Be (as) nice (as you can)! We all know it feels good to be nice. Have this be the place you get those good feelings.
🧩 Make connections. Dis.course isn’t just meant to be a place for education and discussion – we want it to be a place for action too! Build coalitions with like minded peers, commiserate about the issues facing us today, and come together to advance our political interests.
☀️ Believe that progress is possible. Surveys show: Americans haven’t been feeling great about our politics for a while now. But we can’t let that make us apathetic; we have the power to make our voices heard and make change if we care enough to try.
V. Appendix
1. Evolving with the platform
There's always more that can be done to protect our users and the integrity of Dis.course. We're committed to continue investing in those measures as technology advances.
We'll also be soliciting feedback from our users on an ongoing basis. One of our only pre-built hubs, Growing Together, is a round table for users to weigh in on what's working on the platform and what isn't. We'll pop in often, and will actively look to implement suggestions or updates that have major support, and are aligned with our objectives.
2. Banned content and language
Videos or images of dead bodies
- Including visible organs, corpses in any state (charred, decomposed, etc.)
- Excluding archaeological discoveries users want to share
Videos or images depicting violence or death, including but not limited to:
- Maiming a body (dismembering, throat-slitting, etc.)
- Self-harm (cutting, etc.)
- Cannibalism
- Killing another person, animal, or oneself (beheadings, shootings, stabbings, any other means of killing)
Sexually explicit videos or images:
Video or images of real, naked adults, including but not limited to:
- Visible genitalia, anus, and/or visible nipples
- Fully naked close-ups
Video or images of sexual activity, including but not limited to:
- Explicit sexual activity, intercourse, or stimulation (oral, vaginal, anal, inanimate, and erogenous zones including nipples, breasts, etc.)
- Realistic/implied depictions of the above
Extended audio of sexual content
Videos or images depicting sexual objects
- Sex toys, byproducts of sexual activity, indicators of arousal (erections, discharge, etc.)
Fetish content that involves
- Physical violence (per the definitions above)
- Human byproducts (feces, urine, spit, menstruation, vomit, discharge)
Just... don't. 🙏