Using Content Warnings to Protect People From Unwanted Content on Facebook

Product Design Lead at Facebook (2015-2017)

From 2015 to 2017, I led product design for content warnings on Facebook, helping people deal with unexpected, sensitive content they encounter on the platform. I collaborated with an interdisciplinary team, weighing the different considerations and capabilities of Facebook’s Policy and Machine Learning teams and of our international research team, to implement an experience that would best serve our community’s needs. I also presented the strategy and design directly to Mark Zuckerberg to receive his feedback and guidance. Today, the content warning is available on all platforms (web, iOS, Android) to 100% of Facebook’s global market, seen most often in News Feed. It resulted in higher user sentiment and decreased reporting of negative content on Facebook.

Understanding the Problem

When people go on Facebook and unexpectedly encounter sensitive content, it is a jarring experience.

Sometimes, seeing this graphic content without any warning triggers past trauma. Other times, it results in pure embarrassment if they’re browsing Facebook in public. In the inevitable situations where Facebook can’t proactively remove potentially offensive or suggestive content, how can we offer a remedial experience?

Employing the jobs-to-be-done approach to UX design and research, I sought to understand what the community was already doing to deal with the issues they encountered. Working with my team’s data scientist, we noted that the high volume of reported content indicated that people felt emboldened to take action on such content. About 2.9% of content viewed on Facebook is sexually suggestive but non-policy-violating, while only about 0.07% of the sexual/nude content viewed on Facebook actually violates its policy. In other words, the vast majority of the suggestive content people encounter is not eligible for removal.

If not all of this content can be removed, what more can we do?

The Crux

When people share content on Facebook, it is an expression of their values, identity, and personhood online. Moderating content on Facebook is not only about serving the needs of the viewer, but also about respecting the freedom of expression of the poster.


Approaching a Strategy

Given the complexity of our different users’ needs, what considerations would we have to acknowledge if we were to implement warning screens on Facebook? We asked ourselves:

  1. How do we balance the pain points of the person seeing unwanted content against the freedom of expression we grant to the person who posted that content in the first place?
  2. How do we give people control over how “strict” the content warning should be? And how do we give them a way to respond when they think Facebook has incorrectly covered up innocuous content?
  3. How would we design and implement an adaptable, modular product experience that could apply to the various “story” formats on Facebook (written posts, photos, videos, and other formats), each driven by a different product team with its own direction? (A rough sketch of what this modularity could mean follows this list.)
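
To make the third question more concrete, here is a minimal sketch, written in TypeScript with entirely hypothetical names, of what a format-agnostic warning module could look like: the warning travels with the story as data, and one shared function decides whether to blur the media and show the overlay. It only illustrates the modular idea; it is not Facebook’s actual implementation.

```typescript
// Hypothetical sketch of a format-agnostic warning module; names are
// illustrative and do not reflect Facebook's actual code.

type StoryFormat = "text" | "photo" | "video" | "link";

interface WarningConfig {
  reason: string;                     // why the content is concealed
  canUncover: boolean;                // viewer may reveal the media
  allowFalsePositiveReport: boolean;  // viewer can flag a mistake
}

interface Story {
  id: string;
  format: StoryFormat;
  warning?: WarningConfig;            // present only when a warning applies
}

// One shared presenter decides how any story is displayed, so each format
// team reuses the same warning module instead of building its own.
function presentStory(story: Story): { blurred: boolean; overlay?: WarningConfig } {
  if (!story.warning) {
    return { blurred: false };
  }
  // Blur the media and surface the overlay: the reason it is covered,
  // an uncover action, and an optional false-positive report.
  return { blurred: true, overlay: story.warning };
}
```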


Defining Our Principles

To guide the team in making decisions around these complex considerations, I collaborated with my team’s content strategist to brainstorm and define our product principles:

01 Clarity
Communicate to users that this content is concealed, and why.

02 Control
Give users the affordance to cover and uncover the photo or video.

03 Voice
If the content warning is connected to a user setting, give users the option to flag the warning as a false positive or to edit their settings.

Initial Explorations

What could content warnings look like?

Both the aforementioned considerations and the product principles provided constraints for the next phase: broadly exploring how this content warning could take shape. During this initial phase, it is important to explore widely and divergently in order to expose any limitations of our technical capacity, highlight any philosophical questions we’d need to address, and uncover any “edge cases” we may not have considered previously.

One stream of exploration I delved into was understanding how much of the content the warning should cover.

Since one of the primary considerations was balancing relief for the viewer with the content creator’s freedom of expression, I focused my design explorations on that tension.

Presenting Our Point of View to Mark Zuckerberg

After many continued explorations and feedback sessions with peers, including product designers, researchers, and engineers from my Safety and Security team as well as from the News Feed team, I refined the concept to the one shown in the video below: a content warning that blurs only the related visual content (photo or video), tells the user why it’s covered, explains what they can do if they continue to see unwanted content, and offers a way to let Facebook know if it accidentally covered innocuous content.
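
As a way to pin down that behavior, here is a small, purely illustrative state sketch in TypeScript (hypothetical names, not Facebook’s code) of the viewer-facing flow: the media starts covered with an explanation, the viewer can choose to uncover it, and they can tell Facebook when the warning covered something innocuous.

```typescript
// Purely illustrative sketch of the viewer-facing flow; not Facebook's code.

type OverlayState = "covered" | "revealed";

interface OverlayModel {
  state: OverlayState;
  reason: string;                  // e.g. "This photo may contain sensitive content."
  falsePositiveReported: boolean;  // viewer flagged the warning as a mistake
}

// The viewer chooses to see the blurred photo or video.
function uncover(model: OverlayModel): OverlayModel {
  return { ...model, state: "revealed" };
}

// The viewer tells Facebook the warning covered innocuous content.
function reportFalsePositive(model: OverlayModel): OverlayModel {
  return { ...model, falsePositiveReported: true };
}
```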

Given the sensitivity of what my team was conceiving and its potentially grave implications for the community and for Facebook, this concept went through ample review at the executive level. With my core team in attendance, I presented the design concept directly to Mark Zuckerberg to get his feedback on the design execution, the technical execution, and the unanswered questions we needed a gut check from him on.

Early video demo of a prototype concept

Mark encouraged our team to continue iterating on the messaging and presentation of the content warning overlay, as well as its settings, so that it did not convey a judgment about someone’s personal morals based on the content they don’t want to see.


Iterating After Mark Zuckerberg’s Feedback

After the review with Mark, I continued to iterate on the content warnings and the settings. Given the expanding scope of the original project, I drove the team to split the work into two distinct workstreams. Even though they were closely related, they needed to be distinct so that we could stay focused in our problem-solving and investigation.

For content warnings, I iterated on a modular, flexible warning that could support the right messaging for our community, working closely with the content strategist and with Policy to gather their feedback and requirements.


For Settings, I continued to explore what community-moderated content preference settings might look like on Facebook. Gathering the known requirements from Policy and Engineering, we explored and then focused in on three possible directions that we needed to validate and understand better.

How do people feel about their “community” helping moderate content that they see on Facebook? What do people even consider to be flagrant content? We had a lot of hypotheses but not enough clear evidence, especially at a global scale. Thus, we set out on a research trip.

User Research

We went to India, Germany, Indonesia, and Mexico to better understand how people think about their community’s role in determining how content moderation might work on Facebook.

There were numerous questions and hypotheses we sought to better understand during this trip. The key themes were:

  • What is Facebook’s role in handling content described as sexually suggestive, hate speech, violence/gore, et cetera? This required us to understand how participants categorize and describe such content.
  • How much do they trust their community’s ability to help moderate this content on Facebook? This required a better understanding of how they define “community”: is it shared interests? Demographics? Location? And if it were defined by location, how might that work at a global scale?
  • Which interaction design patterns best match their expectations when dealing with such content?


Due to the complexity of these questions, we answered them through a variety of methods: qualitative (interviews, categorical card-sorting, focus group discussions) and quantitative (usability testing).

For the individual research sessions, we talked with each participant for about two hours. In total, we spoke directly with 200 people across four continents.


Key Takeaways

After two intensive weeks of international research, we came away with these key takeaways:

  • Community is subjective…
    Across the diversity of race, religious affiliation, social class, sexual orientation, gender identity, geographic location, interests, countries, and languages of the people we talked to, there were ample nuances and no definitive consensus on a definition of “community”. Simply put, a community is a group of people, but what that group means shifts with context and circumstance.
  • … so is what people consider flagrant on Facebook.
    People not only define their “community” subjectively; they also differ in what they consider to be flagrant on Facebook. This affirmed our primary aim of giving people control over what they see on Facebook and validated the content warning feature.
  • General UX/UI and content feedback
    After testing three variations of the content warning settings, we found that the words used to describe the content warning matter just as much as the UX/UI presenting it. Following that direction, here are the iterations I came up with afterward.


The Final Push

In the weeks after the research trip, I worked with my team to execute on our launch plan for the MVP. Taking into consideration 1) the deadline, 2) legal and technical feasibility, and 3) the features necessary to solve our core user problem, we worked through all of the implementation details, from the UI/UX design to policy and legal requirements to close coordination with our marketing and communications partners.

In April 2017, the content warning launched on all platforms (web, iOS, Android) to 100% of Facebook’s global market, seen most often in News Feed.