Press "Enter" to skip to content

Pop-up fact-checking moves online: Lessons from our user experience testing

We initially wanted to build pop-up fact-checking for a TV screen. But for nearly a year, people have told us in surveys and in coffee shops that they like live fact-checking but need more information than a TV screen can show.

That testing has been a key part of our development of Squash, our groundbreaking live fact-checking product. We started by interviewing a handful of users of our FactStream app. We wanted to know how they found out about the app, how they find fact-checks about things they hear on TV, and what they would need in order to trust live fact-checking. As we saw in our “Red Couch Experiments” in 2018, they were excited about the concept but wanted more than a TV screen allowed.

We supplemented those interviews with conversations in coffee shops – “guerrilla research” in user experience (UX) terms. And again, the people we spoke with were excited about the concept but wanted more information than a 1740×90 pixel display could accommodate.

The most common request was the ability to access the full published fact-check. Some wanted to know if more than one fact-checker had vetted the claim, and if so, did they all reach the same conclusion? Some just wanted to be able to pause the video. 

Since those things weren’t possible with a conventional TV display, we pivoted and began to imagine what live fact-checking would look like on the web. 

Bringing Pop-Up Fact-Checking to the Web

In an online whiteboard session, our Duke Tech and Check Cooperative team discussed many possibilities for bringing live fact-checking online. Our UX team (students Javan Jiang and Dora Pekec, and I) then designed a new interface for live fact-checking and tested it in a series of simple, open-ended preference surveys.

In total, 100 people responded to these surveys. That is in addition to the eight interviews described above and a large experiment with 1,500 participants that we ran late last year on whether users want ratings in on-screen displays (they do).

A common theme emerged in the new research: Make live fact-checking as non-disruptive to the viewing experience as possible. More specifically, we found three things that users want and need from the live fact-checking experience.

  • Users prefer a fact-checking display beneath the video. In our initial survey, users could choose between a display beside the video and one beneath it. About three-quarters of respondents said a display beneath the video was less disruptive to their viewing, with several telling us that the placement felt familiar from existing video platforms such as YouTube.
  • Users need “persistent onboarding” to make use of the content they get from live fact-checking. A user guide or FAQ is not enough. Squash can’t yet provide real-time fact-checking; it is a system that matches claims made during a televised event to claims that have already been checked. So users need to be reminded that they are seeing a “related fact-check,” not necessarily a perfect match to the claim they just heard. “Persistent onboarding” means building subtle reminders into the display itself. For example, when a user hovers over the label “Related Fact Check,” a small box could explain that this is not a real-time fact-check but a previously published fact-check about a similar claim (a rough sketch of that interaction follows this list). This was one of the features users liked most because it kept them from having to find that information themselves.
  • Users want all the available information on the initial screen. Our first test let users expand the display to see more about the fact-check, such as who published it and an explanation of which statement triggered the system to display it. But users said that having to toggle the display to see this information was disruptive.
[Image caption: Users told us they wanted more on-screen explanations, sometimes called “persistent onboarding.”]
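
To make that idea concrete, here is a rough TypeScript sketch of how such a hover reminder could work on a web page. It is illustrative only: the class names, wording and markup are our assumptions, not Squash’s actual code.

    // Illustrative sketch of a "persistent onboarding" hover hint.
    // The class names and wording are assumptions, not Squash's actual markup.
    const EXPLAINER_TEXT =
      "This is not a real-time fact-check. It is a previously published " +
      "fact-check about a similar claim.";

    function attachRelatedFactCheckHint(label: HTMLElement): void {
      // Create the small explanation box once and keep it next to the label.
      const hint = document.createElement("div");
      hint.className = "related-factcheck-hint";
      hint.textContent = EXPLAINER_TEXT;
      hint.hidden = true;
      label.insertAdjacentElement("afterend", hint);

      // Show the reminder on hover so the onboarding stays available
      // without pulling the viewer away from the video.
      label.addEventListener("mouseenter", () => { hint.hidden = false; });
      label.addEventListener("mouseleave", () => { hint.hidden = true; });
    }

    // Attach the hint to the on-screen "Related Fact Check" label, if present.
    const label = document.querySelector<HTMLElement>(".related-factcheck-label");
    if (label) {
      attachRelatedFactCheckHint(label);
    }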

More to Learn

Though we’ve learned a lot, some big questions remain. We still don’t know what live fact-checking looks like under less-than-ideal conditions. For example, how would users react when a spoken claim is true, but the closest related fact-check is about a similar claim that was false?

And we need to figure out timing, particularly for multi-speaker events such as debates. When is the right time to display a fact-check after a politician has spoken? And what if the screen is now showing another politician?

And how can we appeal to audiences that are skeptical of fact-checking? One respondent specifically said he’d want to be able to turn off the display because “none of the fact-checkers are credible.” What strategies or content would help make such audiences more receptive to live fact-checking? 

As we wrestle with those questions, moving live fact-checking to the web still opens up new possibilities, such as the ability to pause content (we call that “DVR mode”), read fact-checks, and return to the event. We are hopeful this shift in platform will ultimately bring automated fact-checking to larger audiences.
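
“DVR mode” could be as simple as pausing the video when a viewer opens a published fact-check and resuming playback when they return. Here is a rough TypeScript sketch of that interaction, assuming a standard HTML5 video element and a hypothetical side panel that loads the fact-check; it is not Squash’s actual implementation.

    // Illustrative "DVR mode" sketch: pause the event while the viewer reads
    // a fact-check, then resume where they left off. The panel ID and helper
    // names are assumptions for illustration.
    function openFactCheck(video: HTMLVideoElement, factCheckUrl: string): void {
      video.pause(); // freeze the event so the viewer misses nothing

      const panel = document.querySelector<HTMLIFrameElement>("#factcheck-panel");
      if (panel) {
        panel.src = factCheckUrl; // load the full published fact-check
        panel.hidden = false;
      }
    }

    function returnToEvent(video: HTMLVideoElement): void {
      const panel = document.querySelector<HTMLIFrameElement>("#factcheck-panel");
      if (panel) {
        panel.hidden = true;
      }
      void video.play(); // resume from the paused position
    }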