Using Content Warnings to Protect People From Unwanted Content on Facebook

Product Design Lead at Facebook (2015-2017)

Overview

From 2015 to 2017, I led product design for content warnings on Facebook, helping people deal with unexpected, sensitive content they encounter. I strategized and collaborated with an interdisciplinary team, from Facebook’s Policy and machine-learning teams to our international research team, to weigh their different considerations and capacities and implement an experience that would best solve our community’s needs. I also presented the strategy and design directly to Mark Zuckerberg to receive his feedback and guidance. Today, the content warning is accessible on all platforms (web, iOS, Android) to 100% of Facebook’s global market, seen most often in News Feed.

    Role
  • Product design and strategy
  • Project management leadership
  • International research scope (qualitative and quantitative)
Launched on all platforms (iOS, Android, web) in Spring 2017
Understanding the Problem

When people unexpectedly encounter sensitive content on Facebook, it is a jarring and bothersome experience.

Sometimes, seeing this graphic content without any warning triggers past trauma. Other times, it results in pure embarrassment if they’re browsing Facebook in public. In the inevitable cases where Facebook can’t proactively remove potentially offensive or suggestive content, how can we offer a remedial experience?

Existing Behaviors and Models

Employing the jobs-to-be-done approach to UX design and research, I sought to understand what the community was already doing to deal with the issues they encountered. Working with my team’s data scientist, we noted that the high volume of reported content indicated that people were emboldened to take action on such content. Although about 2.9% of the content viewed on Facebook is sexually suggestive but non-policy-violating, only about 0.07% of the sexual/nude content viewed on Facebook actually violates its policy.

If not all of this content can be removed, what more can we do?

After conducting a competitive analysis of analogous social networking platforms, I determined that content moderation on Facebook must be approached with a unique sensitivity. When people share content on Facebook, it is an expression of their values, identity, and personhood online. Moderating content on Facebook is not only about aiding the needs of the viewer, but also about respecting the freedom of expression of the poster.

Approaching a Strategy

Given the complexity of our different users’ needs, what considerations would we have to acknowledge if we implemented warning screens on Facebook? We asked ourselves:

  1. How do we balance the pain points of the person seeing unwanted content against the freedom of expression we grant to the person who posted that content in the first place?
  2. How do we give people control over how “strict” the content warning should be? And how do we give them control when they think that Facebook has incorrectly covered up innocuous content?
  3. How would we design and implement an adaptable, modular product experience that could apply to the various “story” formats on Facebook (written posts, photos, videos, and other miscellaneous formats), each driven by a separate product team with its own product direction? (A rough sketch of this modularity follows the list.)
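
To make that third question concrete, here is a minimal, hypothetical sketch in TypeScript of how one shared warning layer could wrap heterogeneous story formats. The names and shapes are invented purely for illustration; nothing here reflects Facebook’s actual code.

    // Hypothetical sketch only; names and types are invented for illustration.
    type Attachment =
      | { kind: "photo"; url: string }
      | { kind: "video"; url: string; durationSec: number }
      | { kind: "text"; body: string };

    interface Warning {
      reason: string;      // e.g. "This photo may contain graphic content"
      concealed: boolean;  // whether the attachment is currently covered
    }

    interface Story {
      id: string;
      attachment: Attachment;
      warning?: Warning;   // absent for the vast majority of stories
    }

    // Each format team keeps rendering its own attachment; the warning layer is shared.
    function renderStory(
      story: Story,
      renderAttachment: (a: Attachment) => string
    ): string {
      if (story.warning?.concealed) {
        return `[Covered] ${story.warning.reason} (tap to uncover)`;
      }
      return renderAttachment(story.attachment);
    }

In a framing like this, each story format only supplies its own renderer, while the cover/uncover decision and its messaging live in one shared place, which is what an adaptable, modular warning would need.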

Brainstorming product principles for our content warning.
Defining Our Principles

To guide the team in making decisions around these complex considerations, I collaborated with my team’s content strategist to brainstorm and design product principles (sketched in rough code form after the list):

  1. Communicate to users that this content is concealed, and why.
  2. Give users an affordance to cover and uncover the photo/video.
  3. If the content warning is connected to a user setting, give users the option to flag the content warning as a false positive or to edit their settings.
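
As a rough illustration only (hypothetical names, not Facebook’s implementation), these three principles translate into a small amount of overlay state and a handful of viewer actions:

    // Hypothetical sketch: the three principles as overlay state and actions.
    interface ContentWarningState {
      concealed: boolean;      // principle 2: the viewer can cover and uncover it
      reason: string;          // principle 1: say that it is concealed, and why
      linkedSetting?: string;  // principle 3: present only when tied to a user setting
    }

    type ContentWarningAction =
      | { type: "toggleReveal" }                  // cover or uncover the photo/video
      | { type: "reportFalsePositive" }           // "this was covered by mistake"
      | { type: "openSetting"; setting: string }; // jump to the related preference

    function update(
      state: ContentWarningState,
      action: ContentWarningAction
    ): ContentWarningState {
      switch (action.type) {
        case "toggleReveal":
          return { ...state, concealed: !state.concealed };
        case "reportFalsePositive":
        case "openSetting":
          // These kick off flows elsewhere; the overlay state itself is unchanged.
          return state;
      }
    }

Modeling the overlay as plain state and actions like this keeps it independent of any particular story format, which mattered given how many product teams would host it.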

Initial Explorations

Both the considerations and the product principles above provided constraints for the next phase: broadly exploring how this content warning could take shape. During this initial phase, it was important to explore widely and divergently in order to expose any limitations of our technical capacity, highlight any philosophical questions we’d need to address, and uncover any “edge cases” we had not been thinking of previously.

Exploring a breadth of possibilities for the content warnings.

One stream of explorations I delved into was understanding how much content the content warning should cover.

Since one of the primary considerations was balancing relief for the viewer with the freedom of expression of the content creator, I focused my design explorations on that tension.

Variation of how much coverage the content warning might have.

Presenting Our Point of View to Mark Zuckerberg

After many further explorations and feedback sessions with my peers, including product designers, researchers, and engineers from my Safety and Security team as well as from the News Feed team, I refined the concept shown in the video below: a content warning that blurs only the related visual content (photo or video), lets the user know why it is covered and what they can do if they continue to see unwanted content, and gives them a way to let Facebook know if it accidentally covered innocuous content.

Given the sensitivity of what my team was conceiving and its potentially grave implications for the community and for Facebook, this concept went through extensive review at the executive level. With my core team in attendance, I presented the design concept directly to Mark Zuckerberg to receive his feedback on the design execution, the technical execution, and the unanswered questions on which we needed a gut check from him.

Early video demo of a prototype concept

Mark encouraged our team to continue iterating on the messaging and presentation of the content warning overlay, as well as its settings, so that it did not convey a judgment about someone’s personal morals regarding the content they don’t want to see.

Iterating After Mark’s Feedback

After the review with Mark, I continued to iterate on the content warnings and the settings. Given the growing scope of the original project, I drove the team to split the effort into two distinct workstreams. Even though they were closely related, they needed to be separate so that we could stay focused in our problem-solving and investigation.

For content warnings, I iterated on a modular, flexible warning that could support the right messaging for our community, working closely with the content strategist and with Policy to gather their feedback and needs.

For settings, I continued to explore what community-moderated content preference settings might look like on Facebook. Gathering the known requirements from Policy and our engineering team, we explored and then focused in on three possible directions that we needed to validate and understand better.

How do people feel about their “community” helping moderate content that they see on Facebook? What do people even consider to be flagrant content? We had a lot of hypotheses but not enough clear evidence, especially at a global scale. Thus, we set out on a research trip.

Validating Around the World

We went to India, Germany, Indonesia, and Mexico to better understand how people think about their community’s role in determining how content moderation might work on Facebook.

There were numerous questions and hypotheses we sought to better understand during this trip. The key themes were:

  • What is Facebook’s role in handling content described as sexually suggestive, hate speech, violence/gore, et cetera? This required understanding how participants categorize and describe such content.
  • How much do they trust their community’s ability to help moderate this content on Facebook? This required a better understanding of how they define “community”: is it shared interests? Demographics? Location? If it were location-based, how might that work at a global scale?
  • Which interaction design patterns best match their expectations when dealing with such content?

Due to the complexity of these questions, we answered them through a variety of methods: qualitative (interviews, categorical card sorting, focus group discussions) and quantitative (usability testing).

Working with my UX researcher from the observation room in Jakarta, Indonesia.

What We Learned

Our learnings clustered around four themes:

  • Control
  • Flagrancy of Bad Content
  • Perceptions Around Community
  • UX

Continuing Iterations to Narrow In

To drive the vision for our MVP, we used a few considerations to constrain our decision-making process: 1) the deadline, 2) the legal and technical possibilities, and 3) what would best solve the problem we were most keen on solving.

After continuing to present to and receive feedback from Facebook’s executive team, I drove the team to ship the MVP:

Launched on all platforms (iOS, Android, web) in Spring 2017