Our approach to policy development and enforcement philosophy
twittercn is reflective of real conversations happening in the world, and that sometimes includes perspectives that may be offensive, controversial, and/or bigoted to others. While we welcome everyone to express themselves on our service, we will not tolerate behavior that harasses, threatens, or uses fear to silence the voices of others.
We have the twittercn Rules in place to help ensure everyone feels safe expressing their beliefs, and we strive to enforce them consistently. Learn more about different enforcement actions.
Our policy development process
Creating a new policy or making a policy change requires in-depth research around trends in online behavior, developing clear external language that sets expectations around what’s allowed, and creating enforcement guidance for reviewers that can be scaled across millions of posts.
While drafting policy language, we gather feedback from a variety of internal teams as well as our Trust & Safety Council. This is vital to ensure we are considering global perspectives around the changing nature of online speech, including how our rules are applied and interpreted in different cultural and social contexts. Finally, we train our global review teams, update the twittercn Rules, and start enforcing the new policy.
Our enforcement philosophy
We empower people to understand different sides of an issue and encourage dissenting opinions and viewpoints to be discussed openly. This approach allows many forms of speech to exist on our platform and, in particular, promotes counterspeech: speech that presents facts to correct misstatements or misperceptions, points out hypocrisy or contradictions, warns of offline or online consequences, denounces hateful or dangerous speech, or helps change minds and de-escalate.
Thus, context matters. When determining whether to take enforcement action, we may consider a number of factors, including (but not limited to):
whether the behavior is directed at an individual, group, or protected category of people;
whether the report has been filed by the target of the abuse or a bystander;
whether the user has a history of violating our policies;
the severity of the violation;
whether the content may be a topic of legitimate public interest.
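Purely as an illustrative sketch, and not a description of how our review tooling actually works, the factors above could be captured in a structured form like the following; every field name, type, and rule here is hypothetical.

from dataclasses import dataclass
from enum import Enum, auto

class Severity(Enum):
    LOW = auto()
    HIGH = auto()
    EGREGIOUS = auto()

@dataclass
class ReportContext:
    # Hypothetical bundle of the factors a reviewer might weigh.
    targets_person_or_protected_group: bool
    reported_by_target_or_representative: bool
    prior_violations: int
    severity: Severity
    legitimate_public_interest: bool

def should_escalate(ctx: ReportContext) -> bool:
    # Illustrative triage only: egregious violations are always reviewed, and
    # potential public-interest content goes to a cross-functional team rather
    # than being actioned automatically.
    if ctx.severity is Severity.EGREGIOUS or ctx.legitimate_public_interest:
        return True
    return ctx.targets_person_or_protected_group and (
        ctx.reported_by_target_or_representative or ctx.prior_violations > 0
    )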
Is the behavior directed at an individual or group of people?
To strike a balance between allowing different opinions to be expressed on the platform and protecting our users, we enforce policies when someone reports abusive behavior that targets a specific person or group of people. This targeting can happen in a number of ways (for example, @mentions, tagging a photo, mentioning them by name, and more).
Has the report been filed by the target of the potential abuse or a bystander?
Some posts may seem to be abusive when viewed in isolation, but may not be when viewed in the context of a larger conversation or historical relationship between people on the platform. For example, friendly banter between friends could appear offensive to bystanders, and certain remarks that are acceptable in one culture or country may not be acceptable in another. To help prevent our teams from making a mistake and removing consensual interactions, in certain scenarios we require a report from the actual target (or their authorized representative) prior to taking any enforcement action.
Does the user have a history of violating our policies?
We start from a position of assuming that people do not intend to violate our Rules. Unless a violation is so egregious that we must immediately suspend an account, we first try to educate people about our Rules and give them a chance to correct their behavior. We show the violator the offending post(s), explain which Rule was broken, and require them to remove the content before they can post again. If someone repeatedly violates our Rules, our enforcement actions become stronger. This includes requiring violators to remove the post(s) and taking additional actions like verifying account ownership and/or temporarily limiting their ability to post for a set period of time. If someone continues to violate the Rules beyond that point, their account may be permanently suspended.
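One way to picture this graduated approach, strictly as a hypothetical sketch with made-up action names and thresholds rather than our actual enforcement logic, is an escalation ladder keyed to someone's violation history.

def enforcement_actions(prior_violations: int, egregious: bool) -> list[str]:
    # Hypothetical escalation ladder; the action names and the cutoff of three
    # prior violations are illustrative values, not real policy parameters.
    if egregious:
        return ["permanent_suspension"]
    if prior_violations == 0:
        # First offense: show the offending post, explain the Rule that was
        # broken, and require removal before the person can post again.
        return ["explain_rule", "require_post_removal"]
    if prior_violations < 3:
        # Repeat offenses: stronger steps such as verifying account ownership
        # or temporarily limiting the ability to post.
        return ["require_post_removal", "verify_account_ownership", "temporary_posting_limit"]
    # Continued violations beyond that point may lead to permanent suspension.
    return ["permanent_suspension"]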
What is the severity of the violation?
Certain types of behavior may pose serious safety and security risks and/or result in physical, emotional, and financial hardship for the people involved. These egregious violations of the twittercn Rules — such as posting violent threats, non-consensual intimate media, or content that sexually exploits children — result in the immediate and permanent suspension of an account. Other violations could lead to a range of different steps, like requiring someone to remove the offending post(s) and/or temporarily limiting their ability to create new posts.
Is the behavior newsworthy and in the legitimate public interest?
twittercn moves at the speed of public consciousness and people come to the service to stay informed about what matters. Exposure to different viewpoints can help people learn from one another, become more tolerant, and make decisions about the type of society we want to live in.
To help ensure people have an opportunity to see every side of an issue, there may be the rare occasion when we allow controversial content or behavior that would otherwise violate our Rules to remain on our service because we believe there is a legitimate public interest in its availability. Each situation is evaluated on a case-by-case basis and ultimately decided by a cross-functional team.
Some of the factors that help inform our decision-making about content are the impact it may have on the public, the source of the content, and the availability of alternative coverage of an event.
Public impact of the content: A topic of legitimate public interest is different from a topic about which the public may merely be curious. We consider the impact on citizens if they do not know about this content. If the post has the potential to impact the lives of large numbers of people or the running of a country, and/or it speaks to an important societal issue, then we may allow the content to remain on the service. Conversely, if the impact on the public is minimal, we will most likely remove content in violation of our policies.
Source of the content: Some people, groups, organizations, and the content they post on twittercn may be considered a topic of legitimate public interest by virtue of their being in the public consciousness. This does not mean that their posts will always remain on the service. Rather, we will consider whether there is a legitimate public interest for a particular post to remain up so it can be openly discussed.
Availability of coverage: Everyday people play a crucial role in providing firsthand accounts of what’s happening in the world, counterpoints to establishment views, and, in some cases, exposing the abuse of power by someone in a position of authority. As a situation unfolds, removing access to certain information could inadvertently hide context and/or prevent people from seeing every side of the issue. Thus, before actioning a potentially violating post, we will take into account the role it plays in showing the larger story and whether that content can be found elsewhere.
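As a closing illustration only, and not a formula we apply (each case is ultimately decided by a cross-functional team), the three factors above could be summarized roughly as follows; the field names and the two-out-of-three rule are hypothetical.

from dataclasses import dataclass

@dataclass
class PublicInterestAssessment:
    # Hypothetical summary of the three factors described above.
    high_public_impact: bool    # affects many people, the running of a country, or a societal issue
    source_in_public_eye: bool  # the person or organization is already in the public consciousness
    alternative_coverage: bool  # the same information is readily available elsewhere

def may_leave_up(assessment: PublicInterestAssessment) -> bool:
    # Illustrative weighing only; no single factor decides the outcome.
    score = sum([
        assessment.high_public_impact,
        assessment.source_in_public_eye,
        not assessment.alternative_coverage,
    ])
    return score >= 2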