
Tech & Check Cooperative

The lessons of Squash, our groundbreaking automated fact-checking platform

Squash fulfilled our dream of instant checks on speeches and debates. But to work at scale, we need more fact-checks.

By stepht@duke.edu - June 28, 2021

Squash began as a crazy dream.

Soon after I started PolitiFact in 2007, readers began suggesting a cool but far-fetched idea. They wanted to see our fact checks pop up on live TV.

That kind of automated fact-checking wasn’t possible with the technology available back then, but I liked the idea so much that I hacked together a PowerPoint of how it might look. It showed a guy watching a campaign ad when PolitiFact’s Truth-O-Meter suddenly popped up to indicate the ad was false.

Bill Adair’s original depiction of pop-up fact-checking.

It took 12 years, but our team in the Duke University Reporters’ Lab managed to make the dream come true. Today, Squash (our code name for the project, chosen because it is a nutritious vegetable and a good metaphor for stopping falsehoods) has been a remarkable success. It displays fact checks seconds after politicians utter a claim and it largely does what those readers wanted in 2007.

But Squash also makes lots of mistakes. It converts politicians’ speech to the wrong text (often with funny results) and it frequently stays idle because there simply aren’t enough claims that have been checked by the nation’s fact-checking organizations. It isn’t quite ready for prime time.

As we wrap up four years on the project, I wanted to share some of our lessons to help developers and journalists who want to continue our work. There is great potential in automated fact-checking and I’m hopeful that others will build on our success.

When I first came to Duke in 2013 and began exploring the idea, it went nowhere. That’s partly because the technology wasn’t ready and partly because I was focused on the old way that campaign ads were delivered — through conventional TV. That made it difficult to isolate ads the way we needed to.

But the technology changed. Political speeches and ads migrated to the web and my Duke team partnered with Google, Jigsaw and Schema.org to create ClaimReview, a tagging system for fact-check articles. Suddenly we had the key elements that made instant fact-checking possible: accessible video and a big database of fact checks.

I wasn’t smart enough to realize that, but my colleague Mark Stencel, the co-director of the Reporters’ Lab, was. He came into my office one day and said ClaimReview was a game changer. “You realize what you’ve done, right? You’ve created the magic ingredient for your dream of live fact-checking.” Um … yes! That had been my master plan all along!

Fact-checkers use the ClaimReview tagging system to indicate the person and claim being checked, which not only helps Google highlight the articles in search results but also creates a big database of checks that Squash can tap.

It would be difficult to overstate the technical challenge we were facing. No one had attempted this kind of work beyond doing a demo, so there was no template to follow. Fortunately we had a smart technical team and some generous support from the Knight Foundation, Craig Newmark and Facebook.

Christopher Guess, our wicked-smart lead technologist, had to invent new ways to do just about everything, combining open-source tools with software that he built himself. He designed a system to ingest live TV and process the audio for instant fact-checking. It worked so fast that we had to slow down the video.

To reduce the massive amount of computer processing, a team of students led by Duke computer science professor Jun Yang came up with a creative way to filter out sentences that did not contain factual claims. They used ClaimBuster, an algorithm developed at the University of Texas at Arlington, to act like a colander that kept only good factual claims and let the others drain away.

Squash works by converting audio to text and then matching the claim against a database of fact-checks.

Today, this is how Squash works: It “listens” to a speech or debate, sending audio clips to Google Cloud that are converted to text. That text is then run through ClaimBuster, which identifies sentences the algorithm believes are good claims to check. They are compared against the database of published fact checks to look for matches. When one is found, a summary of that fact check pops up on the screen.
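
The steps above can be sketched in Python. Everything here is a hypothetical stand-in: `claim_score` mimics ClaimBuster's check-worthiness classifier with a toy heuristic, and `match_fact_check` replaces the real search backend with naive word overlap.

```python
from dataclasses import dataclass

@dataclass
class FactCheck:
    claim: str
    verdict: str
    url: str

def claim_score(sentence: str) -> float:
    # Toy stand-in for ClaimBuster, which uses a trained classifier.
    # This heuristic just rewards sentences containing numbers or
    # statistics-flavored words.
    tokens = sentence.lower().split()
    signal = sum(t.isdigit() or t in {"most", "highest", "million", "percent"}
                 for t in tokens)
    return min(1.0, signal / 3)

def match_fact_check(sentence, database, min_overlap=2):
    # Naive word-overlap matcher standing in for the real search engine.
    words = set(sentence.lower().split())
    best, best_score = None, 0
    for fc in database:
        overlap = len(words & set(fc.claim.lower().split()))
        if overlap > best_score:
            best, best_score = fc, overlap
    return best if best_score >= min_overlap else None

def squash_pipeline(transcript_sentences, database, threshold=0.3):
    """For each transcribed sentence, keep likely claims and look for a match."""
    results = []
    for sentence in transcript_sentences:
        if claim_score(sentence) < threshold:
            continue  # the "colander": drop sentences that aren't factual claims
        match = match_fact_check(sentence, database)
        if match:
            results.append((sentence, match))
    return results
```

In the real system, each stage is a network hop (Google Cloud for transcription, ClaimBuster as a service, a search index for matching), which is why Guess had to engineer for speed.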

The first few times you see a related fact check appear on the screen, it’s amazing. I got chills. I felt I was getting a glimpse of the future. The dream of those PolitiFact readers from 2007 had come true.

But …

Look a little closer and you will quickly realize that Squash isn’t perfect. If you watch in our web mode, which shows Squash’s AI “brain” at work, you will see plenty of mistakes as it converts voice to text. Some are real doozies.

Last summer during the Democratic convention, former Iowa Gov. Tom Vilsack said this: “The powerful storm that swept through Iowa last week has taken a terrible toll on our farmers …”

But Squash (it was really Google Cloud) translated it as “Armpit sweat through the last week is taking a terrible toll on our farmers.”

Squash’s matching algorithm also makes too many mistakes finding the right fact check. Sometimes it is right on the money. It often correctly matched then-President Donald Trump’s statements on China, the economy and the border wall.

But other times it comes up with bizarre matches. Guess and our project manager Erica Ryan, who spends hours analyzing the results of our tests, believe this often happens because Squash mistakenly thinks an individual word or number is important. (Our all-time favorite was in our first test, when it matched a sentence by President Trump about men walking on the moon with a Washington Post fact-check about the bureaucracy for getting a road permit. The match occurred because both included the word “years.”)
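
A toy bag-of-words matcher shows how a single shared, low-information word can produce a bizarre match. This is an illustration of the failure mode, not Squash's actual scorer:

```python
def overlap_score(a: str, b: str) -> int:
    """Count words shared by two claims. With a low match threshold,
    one common word like "years" is enough to link unrelated claims."""
    return len(set(a.lower().split()) & set(b.lower().split()))

statement = "we put men on the moon fifty years ago"
road_permit_check = "it takes ten years to get a permit to build a road"
moon_check = "nasa landed men on the moon in 1969"

# The road-permit check gets a nonzero score purely from "years";
# a threshold of 1 would surface it as a "match."
spurious = overlap_score(statement, road_permit_check)
relevant = overlap_score(statement, moon_check)
```

Real systems down-weight ubiquitous words with tf-idf or embeddings, but even those can over-value a shared number or name, which is consistent with the behavior Ryan and Guess observed.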

Squash works by detecting politicians’ claims and matching them with related fact-checks. (Screengrab from Democratic debate)

To reduce the problem, Guess built a human editing tool called Gardener that enables us to weed out the bad matches. That helps a lot because the editor can choose the best fact check or reject them all.

The most frustrating problem is that a lot of the time, Squash just sits there, idle, even when politicians are spewing sentences packed with factual claims. Squash is working properly, Guess assures us; it just isn’t finding any fact checks that are even close. This happened in our latest test, a news conference by President Joe Biden, when Squash could muster only two matches in more than an hour.

That problem is a simple one: There simply are not enough published fact checks to power Squash (or any other automated app).

We need more fact checks – As I noted in the previous section, this is a major shortcoming that will hinder anyone who wants to draw from the existing corpus of fact checks. Despite the steady growth of fact-checking in the United States and around the world, and despite the boom that occurred in the Trump years, there simply are not enough fact checks of enough politicians to provide enough matches for Squash and similar apps.

We had our greatest success during debates and party conventions, events when Squash could draw from a relatively large database of checks on the candidates from PolitiFact, FactCheck.org and The Washington Post. But we could not use Squash on state and local events because there simply were not enough fact-checks for possible matches.

Ryan and Guess believe we need dozens of fact checks on a single candidate, across a broad range of topics, to have enough to make Squash work.

More armpit sweat is needed to improve voice to text – We all know the limitations of Siri, which still translates a lot of things wrong despite years of tweaks and improvements by Apple. That’s a reminder that improving voice-to-text technology remains a difficult challenge. It’s especially hard in political events when audio can be inconsistent and when candidates sometimes shout at each other. (Identifying speakers in debates is yet another problem.)

As we currently envision Squash and this type of automated fact-checking, we are reliant on voice-to-text translations, but given the difficulty of automated “hearing,” we’ll have to accept a certain error level for the foreseeable future.

Matching algorithms can be improved – This is one area where we’re optimistic. Most of our tests relied on off-the-shelf search engines to do the matching, until Guess began experimenting with a new approach that relies on subject tags (which unfortunately are not included in ClaimReview) to help the algorithm make smarter choices and avoid irrelevant ones.

The idea is that if Squash knows the claim is about guns, it would find the best matches from published fact checks that have been tagged under the same subject. Guess found this approach promising but did not get a chance to try it at scale.
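
A sketch of that tag-aware matching, under the assumption that each fact check has been labeled with subject tags (which, as noted, ClaimReview itself does not provide):

```python
def tag_filtered_match(claim_text, claim_tag, database, score_fn):
    """Restrict candidates to fact checks sharing the claim's subject tag,
    then score only within that subset. The tag field and scoring function
    are assumptions, not Squash's actual schema."""
    candidates = [fc for fc in database if claim_tag in fc["tags"]]
    if not candidates:
        return None  # no fact checks on this subject: stay idle rather than guess
    return max(candidates, key=lambda fc: score_fn(claim_text, fc["claim"]))
```

The design benefit is that a spurious word overlap with a fact check on a different subject can never surface, because those checks are filtered out before scoring begins.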

Until the matching improves, we’ve found humans are still needed to monitor and manage anything that gets displayed — as we did with our Gardener tool.

Ugh, UX – The simplest part of my vision, the Truth-O-Meter popping up on the screen, ended up being one of our most complex challenges. Yes, Guess was able to make the meter or the Washington Post Pinocchios pop up, but what were they referring to? This question of user experience was tricky in several ways.

First, we were not providing an instant fact check of the statement that was just said. We were popping up a summary of a related fact check that was previously published. Because politicians repeat the same talking points, the statements were generally similar and in some cases, even identical. But we couldn’t guarantee that, so we labeled the pop-up “Related fact-check.”

Second, the fact check appeared during a live, fast-moving event. So we realized it could be unclear to viewers which previous statement the pop-up referred to. This was especially tricky in a debate when candidates traded competing factual claims. The pop-up could be helpful with either of them. But the visual design that seemed so simple for my PowerPoint a decade earlier didn’t work in real life. Was that “False” Truth-O-Meter for the immigration statement Biden said? Or the one that Trump said?

Another UX problem: To give people time to read all the text (the related fact checks sometimes had lengthy statements), Guess had them linger on the screen for 15 seconds. And our designer Justin Reese made them attractive and readable. But by the end of that time the candidates might have said two more factual claims, further confusing viewers that saw the “False” meter.

So UX wasn’t just a problem, it was a tangle of many problems involving limited space on the screen (What should we display and where? Will readers understand the concept that the previous fact check is only related to what was just said?), time (How long should we display it in relation to when the politician spoke?) and user interaction (Should our web version allow users to pause the speech or debate to read a related fact check?). It’s an enormously complicated challenge.

* * *

Looking back at my PowerPoint vision of how automated fact-checking would work, we came pretty close. We succeeded in using technology to detect political speech and make relevant fact checks automatically pop up on a video screen. That’s a remarkable achievement, a testament to groundbreaking work by Guess and an incredible team.

But there are plenty of barriers that make it difficult for us to realize the dream and will challenge anyone who tries to tackle this in the future. I hope others can build on our successes, learn from our mistakes, and develop better versions in years to come.

Squash report card: Improvements during State of the Union … and how humans will make our AI smarter

We've had some encouraging improvements in the AI powering our experimental fact-checking technology. But to make Squash smarter, we're calling in a human.

By stepht@duke.edu - February 23, 2020

Squash, the experimental pop-up fact-checking product of the Reporters’ Lab, is getting better.

Our live test during the State of the Union address on Feb. 4 showed significant improvement over our inaugural test last year. Squash popped up 14 relevant fact-checks on the screen, up from just six last year.

That improvement matches a general trend we’ve seen in our testing. We’ve had a higher rate of relevant matches when we use Squash on videos of debates and speeches.

But we still have a long way to go. This month’s State of the Union speech also had 20 non-relevant matches, which means Squash displayed fact-checks that weren’t related to what the president said. If you’d been watching at that moment, you probably would have thought, “What is Squash thinking?”

We’re now going to try two ways to make Squash smarter: a new subject tagging system that will be based on a wonderfully addictive game developed by our lead technologist Chris Guess; and a new interface that will bring humans into the live decision-making. Squash will recommend fact-checks to display, but an editor will make the final judgment.

Some background in case you’re new to our project: Squash, part of the Lab’s Tech & Check Cooperative, is a revolutionary new product that displays fact-checks on a video screen during a debate or political speech. Squash “hears” what politicians say, converts their speech to text and then searches a database of previously published fact-checks for one that’s related. When Squash finds one, it displays a summary on the screen.

For our latest tests, we’ve been using Elasticsearch, a tool for building search engines that we’ve made smarter with two filters: ClaimBuster, an algorithm that identifies factual claims, and a large set of common synonyms. ClaimBuster helps Squash avoid wasting time and effort on sentences that aren’t factual claims, and the synonyms help it make better matches.
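
The query side of that setup might look like the following sketch. The index field name (`claim_reviewed`) is hypothetical, and in a real Elasticsearch deployment the synonym set would typically live in a custom analyzer on the index rather than in each query:

```python
def build_match_query(claim_text, top_n=3):
    """Build an Elasticsearch-style query body for matching a transcribed claim
    against indexed fact-checks. Field and index names are assumptions."""
    return {
        "query": {
            "match": {
                "claim_reviewed": {
                    "query": claim_text,
                    "fuzziness": "AUTO",  # tolerate speech-to-text misspellings
                }
            }
        },
        "size": top_n,  # return a few candidates for a human editor to vet
    }
```

Requesting several candidates instead of one fits the direction described below, where an editor picks the best match rather than trusting the top hit.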

Guess, assisted by project manager Erica Ryan and student developers Jack Proudfoot and Sanha Lim, will soon be testing a new way of matching that uses natural language processing based on the subject of the fact-check. We believe that we’ll get more relevant matches if the matching is based on subjects rather than just the words in the politicians’ claims.

But to make that possible, we have to put subject tags on thousands of fact-checks in our ClaimReview database. So Guess has created a game called Caucus that displays a fact-check on your phone and then asks you to assign subject tags to it. The game is oddly addictive. Every time you submit one, you want to do another…and another. Guess has a leaderboard so we can keep track of who is tagging the most fact-checks. We’re testing the game with our students and staff, but hope to make it public soon.

We’ve also decided that Squash needs a little human help. Guess, working with our student developer Matt O’Boyle, is building an interface for human editors to control which matches actually pop up on users’ screens.

The new interface would let them review the fact-check that Squash recommends and decide whether to let it pop up on the screen, which should help us filter out most of the unrelated matches.

That should eliminate the slightly embarrassing problem when Squash makes a match that is comically bad. (My favorite: one from last year’s State of the Union when Squash matched the president’s line about men walking on the moon with a fact-check on how long it takes to get a permit to build a road.)

Assuming the new interface works relatively well, we’ll try to do a public demo of Squash this summer. 

Slowly but steadily, we are making progress. Watch for more improvements soon.

Beyond the Red Couch: Bringing UX Testing to Squash

As automated fact-checking gains ground, it's time to learn how to make pop-up content crystal clear on video screens.

By stepht@duke.edu - October 28, 2019

Fact-checkers have a problem.

They want to use technology to hold politicians accountable by getting fact-checks in front of the public as quickly as possible. But they don’t yet know the best ways to make their content understood. At the Duke Reporters’ Lab, that’s where Jessica Mahone comes in.

Jessica Mahone is designing tests to help Duke Reporters’ Lab researchers figure out how to clearly share fact-checks live during broadcasts. Photo by Andrew Donohue

The Lab is developing Squash, a tool built to bring live fact-checking of politicians to TV. Mahone, a social scientist, was brought on board to design experiments and conduct user experience (UX) tests for Squash. 

UX design is the discipline focused on making new products easy to use. A clear UX design means that a product is intuitive and new users get it without a steep learning curve. 

“If people can’t understand your product or find it hard to use, then you are doomed from the start. With Squash, this means that we want people to comprehend the information and be able to quickly determine whether a claim is true or not,” Mahone said.

For Squash, fact-check content that pops up on screens needs to be instantly understood since it will only be visible for a few seconds. So what’s the best way?

Bill Adair, the director of the Duke Tech & Check Cooperative, organized some preliminary testing last year that he dubbed the red couch experiments. The tests revealed more research was needed to understand the best way to inform viewers. 

“I originally thought that all it would take is a Truth-O-Meter popping up on screen,” Adair said. “Turns out it’s much more complicated than that.”

Sixteen people watched videos of Barack Obama and Donald Trump delivering State of the Union speeches while fact-checks of some of what they said appeared on the screen. Ratings were true, false or something in between. Blink, a company specializing in UX testing, found that participants loved the concept of real-time fact-checking and would welcome it on TV broadcasts. But the design of the pop-up fact-checks often confused them.

It’s not just the quality of content that counts. Viewers must understand what they see very quickly. Squash may one day share fact-checks during live events, including State of the Union addresses.

Some viewers didn’t understand the fact-check ratings such as true or false when they were displayed. Others assumed the presidents’ statements must be true if no fact-check was shown. That’s a problem because Squash doesn’t fact-check all claims in speeches. It displays previously published fact-checks only for the claims that match Squash’s finicky search algorithm. 

The red couch experiments were “a very basic test of the concept,” Mahone said. “What they found mainly is that there was a need to do more diving in and digging into some questions about how people respond to this. Because it’s actually quite complex.”

Mahone has developed a new round of tests scheduled to begin this week. These tests will use Amazon Mechanical Turk, an online platform that relies on people who sign up to be paid research subjects.

“One thing that came out of the initial testing was that people don’t like to see a rating of a fact-check,” Mahone said. “I was a little skeptical of that. Most of the social science research says that people do prefer things like that because it makes it a lot easier for them to make decisions.”

In this next phase, Mahone will recruit about 500 subjects. A third will see a summary of a fact-check with a PolitiFact TRUE icon. Another third will see a summary with just the label TRUE. The rest will see just a summary text of a fact-check.

Each viewer will rank how interested they are in using an automated fact-checking tool after viewing the different displays. Mahone will compare the results.

After finding out if including ratings works, Mahone and three undergraduate students, Dora Pekec, Javan Jiang and Jia Dua, will look at the bigger picture of Squash’s user experience. They will use a company to find about 20 people to talk to, ideally individuals who consistently watch TV news and are familiar with fact-checking.

Participants will be asked what features they would want in real-time fact-checking.

“The whole idea is to ask people ‘Hey, if you had access to a tool that could tell you if what someone on TV is saying is true or false, what would you want to see in that tool?’ ” Mahone said. “We want to figure out what people want and need out of Squash.”

Figuring out how to make Squash intuitive is critical to its success, according to Chris Guess, the Lab’s lead technologist. Part of the challenge is that Squash is something new and viewers have no experience with similar products.

“These days, people do a lot more than just watch a debate. They’re cooking dinner, playing on their phone, watching over the kids,” Guess said. “We want people to be able to tune in, see what’s going on, check out the automated fact-checks and then be able to tune out without missing anything.”

Reporters’ Lab researchers hope to have Squash up and running for the homestretch of the 2020 presidential campaign. Adair, Knight Professor of the Practice of Journalism and Public Policy at Duke, has begun reaching out to television executives to gauge their interest in an automated fact-checking tool. 

“TV networks are interested, but they want to wait and see a product that is more developed,” Adair said. 


Reporters’ Lab Launches Global Effort to Expand the Use of ClaimReview

At Global Fact 6 in Cape Town, the Lab launched an effort to help standardize the tagging of fact-checks.

By stepht@duke.edu - July 17, 2019

The Duke Reporters’ Lab has launched a global effort to expand the use of ClaimReview, a standardized method of identifying fact-check articles for search engines and apps.

Funded by a grant from the Google News Initiative, The ClaimReview Project provides training and instructional materials about the use of ClaimReview for fact-checkers around the world. 

Bill Adair at the Global Fact 6 conference

ClaimReview was developed through a partnership of the Reporters’ Lab, Google, Jigsaw, and Schema.org. It provides a standard way for publishers of fact-checks to identify the claim being checked, the person or entity that made the claim, and the conclusion of the article. This standardization enables search engines and other platforms to highlight fact-checks, and can power automated products such as the FactStream and Squash apps being developed in the Reporters’ Lab.
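
For readers unfamiliar with the markup, here is an illustrative ClaimReview record (all values invented; the field names follow the schema.org ClaimReview type):

```python
import json

# Illustrative ClaimReview markup. The publisher, URLs, claim and rating
# are made up; the structure matches the schema.org ClaimReview type.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/moon-claim",
    "author": {"@type": "Organization", "name": "Example Fact Checks"},
    "datePublished": "2019-02-06",
    "claimReviewed": "The U.S. put men on the moon fifty years ago.",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "Example Politician"},
    },
    "reviewRating": {
        "@type": "Rating",
        "alternateName": "True",  # the textual verdict shown to readers
        "ratingValue": "5",
        "bestRating": "5",
        "worstRating": "1",
    },
}

markup = json.dumps(claim_review, indent=2)
```

Because every publisher fills in the same fields — who said it, what was said, and the verdict — apps like Squash and FactStream can query thousands of checks from different organizations as one database.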

“ClaimReview is the secret sauce of the future,” said Bill Adair, director of the Duke Reporters’ Lab. “It enables us to build apps and automate fact-checking in new and powerful ways.”

Slightly less than half of the 188 organizations included in our fact-checking database use ClaimReview.

Joel Luther at a Global Fact workshop

At the Global Fact 6 conference in Cape Town, the Lab led two sessions designed to recruit and train new users. During a featured talk titled The Future of ClaimReview, the Lab introduced Google’s Fact Check Markup Tool, which makes it easier for journalists to create ClaimReview. They no longer have to embed code in their articles and can instead create ClaimReview by submitting a simple web form.

In an Intro to ClaimReview workshop later in the day, the Lab provided step-by-step assistance to fact-checkers using the tool for the first time. 

The Lab also launched a website with a user guide and best practices, and will continue to work to expand the number of publishers using the tool.


Talking Point Tracker: A project to spot hot topics as they flare up on TV news

Developers will debut tracker prototype at 2019 Tech & Check Conference.

By stepht@duke.edu - March 13, 2019

When fact-checking technologists and journalists gather in Durham for the 2019 Tech & Check Conference this month, they will share new tools intended to optimize and automate fact-checking.

Dan Schultz of the Bad Idea Factory is preparing to debut a version of Talking Point Tracker.

For Dan Schultz, a founder of the Bad Idea Factory software development collective, this will be a chance to debut a “mannequin” version of the Talking Point Tracker. Created in collaboration with the Duke Tech & Check Cooperative, the tracker is intended to “capture the zeitgeist” of television news by identifying trending topics.

Duke journalism professor Bill Adair, who runs Tech & Check, launched the project by asking Schultz how fact-checkers could capture hot topics on TV news as quickly as possible. That is a simple but powerful idea. TV news is a place of vast discourse, where millions of viewers watch traditional, nonpartisan newscasts and partisan broadcasters such as Sean Hannity and Rachel Maddow. Listening in would give insight into what Schultz calls a “driver or predictor of collective consciousness.”

But executing even simple ideas can be difficult. In this case, TV news programs broadcast dense flows of media: audio, video, text and images that are not simple to track. Luckily, network and cable news outlets produce closed-caption subtitles for news shows. Talking Point Tracker scans those subtitles to identify keywords used most frequently within blocks of time. It also puts the keywords in context by showing sentences and longer passages where the keywords were found. To deepen the context, the tracker shows related keywords that often appear with the trending words.

The eventual goal is to group keywords into clusters that better capture emerging conversations. “Our hope is that it will be a useful tool for journalists who want to write in the context of what’s being discussed,“ said Schultz, who is collaborating with Justin Reese, a front-end developer with the Bad Idea Factory, on the project.

More technically, Talking Point Tracker runs closed-caption transcripts through a natural language processing pipeline that cleans the text as well as it can. An application programming interface (API) then uses a separate language-processing algorithm to find the most common keywords. These are “named entities” — usually proper nouns that can be sorted into categories such as people, organizations and locations.
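
As a rough illustration of the entity-counting step — a toy stand-in for the real NER pipeline, which uses trained models rather than capitalization patterns:

```python
import re
from collections import Counter

def trending_keywords(transcript: str, top_n: int = 5):
    """Toy stand-in for the NER step: treat runs of capitalized words as
    candidate named entities and count them. Real NER (e.g., spaCy) uses
    trained models and handles sentence-initial words far better."""
    # Match one or more consecutive Capitalized words ("Iowa", "White House").
    entities = re.findall(r"\b[A-Z][a-z]+(?:\s[A-Z][a-z]+)*", transcript)
    return Counter(entities).most_common(top_n)
```

A production version would also merge variants of the same entity and filter sentence-initial false positives — exactly the kinds of problems Schultz describes below.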

Talking Point Tracker’s prototype, to be unveiled at Tech & Check, is dense with information. But the design Reese created for viewing on a computer screen makes it readable. There’s enough white space to be easy on the eyes and a color scheme of red, blue, black and yellow that organizes text.

Talking Point Tracker packs lots of data on the current version of its screen display.

The most frequent keywords over a specified time period are listed in a column on the left. Next to that, a line graph highlights their frequency. Sentences containing the keywords are listed on the right; click there and the tool points you to longer passages of the transcripts. On the bottom are related keywords that often appear in the same sentences as a given word.

Moving from a mannequin stage to a living stage for this project will be challenging, Schultz said. As much as natural language processing has evolved over the past decade, algorithms still have trouble understanding aspects of human language. One free, open-source tool the Tracker relies on is an NLP library called spaCy. But programs like spaCy don’t always recognize that differently worded phrases refer to the same thing — say, the “Virginia legislature” and the “Virginia General Assembly.”

Another challenge is coping with the quality of news show transcripts, Schultz said. The transcripts can contain many typos, in addition to sometimes being either all caps or all lowercase, which the API can have trouble reading.

Talking Point Tracker’s logo

And the API doesn’t always know where sentences break. Too often, the system will return sentences that contain just “Mr.” because it concludes that a period signifies the end of the sentence. To get around this, Schultz is using another NLP technology to clean the transcripts he obtains.
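
The abbreviation problem can be patched with a small allow-list, as in this sketch (the list is illustrative; production pipelines use trained sentence segmenters):

```python
# Minimal abbreviation-aware sentence splitter: don't break after known
# titles like "Mr." even though they end in a period.
ABBREVIATIONS = {"Mr.", "Mrs.", "Ms.", "Dr.", "Sen.", "Gov.", "U.S."}

def split_sentences(text: str):
    sentences, current = [], []
    for token in text.split():
        current.append(token)
        if token.endswith((".", "!", "?")) and token not in ABBREVIATIONS:
            sentences.append(" ".join(current))
            current = []
    if current:  # keep any trailing fragment
        sentences.append(" ".join(current))
    return sentences
```

Even this small fix prevents the tracker from emitting orphan "sentences" that contain nothing but a title.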

To prepare for the Tech & Check Conference, Schultz is building better searching tools and further cleaning up the Tracker’s design. “It’s always good to have your feet close to the fire,” Schultz said.

The biggest question he hopes to get answered before leaving is whether Talking Point Tracker could be useful for journalists, he said.

“There’s a lot of things we can gain from feedback. If we have the capacity and interest from whoever, we will continue to iterate and build on top of that,” Schultz said.

 


During State of the Union, a failure…and a glimpse of Squash


Our app crashed under unusual traffic. But our tests of a new automated tool were a success.

By stepht@duke.edu - February 6, 2019

We tested two fact-checking products during the State of the Union address. One failed, the other showed great promise.

The failure was FactStream, our iPhone app. It worked fine for the first 10 minutes of the speech. Users received two timely “quick takes” from Washington Post Fact Checker Glenn Kessler, but then the app crashed under an unusual surge of heavy traffic that we’re still investigating. We never recovered.

The other product is a previously secret project we’ve code-named Squash. It’s our first attempt at fully automated fact-checking. It converts speech to text and then searches our database of fact-checks from the Post, FactCheck.org and PolitiFact. When it finds a match, a summary of the fact-check pops onto the screen.

We’ve been testing Squash for the last few weeks with mixed results. Sometimes it finds exactly the right fact-checks. Other times the results are hilariously bad. But that’s what progress looks like.

A screenshot of Squash, our fully automated fact-checking tool, in the live test.

We went into last night’s speech with very modest expectations. I said before the speech I’d be happy if the speech simply triggered fact-checks to pop up, even if it was a poor match.

But Squash actually performed pretty well. It had 20 pop-ups and six of them were in the ballpark.

Overall, the results were stunning. It gave us a glimpse of how good automated fact-checking can be.

We’ll have more to share once we’ve reviewed the results, so stay tuned.

As for FactStream, it now has lots of timely fact-checks from the State of the Union on the main home screen, which continues to function well. We will fix any problems we identify with the live event feature and plan to be back in action for real-time coverage for campaign events later this year.

Live fact-checking of the State of the Union address with our FactStream app


We're partnering with FactCheck.org, PolitiFact and Washington Post Fact Checker Glenn Kessler to provide real-time checks.

By stepht@duke.edu - February 3, 2019

UPDATE, Feb. 5, 11 p.m.: Our FactStream app failed during the State of the Union address. We apologize for the problems. We are still sorting out what happened, but it appears we got hit with an unexpected surge of traffic that overwhelmed our servers and our architecture.

As we noted at the bottom of this post, this was a test – only our second of the app.  We’ll fix the problems and be better next time.

—————-

The Reporters’ Lab is teaming up with the Washington Post, PolitiFact and FactCheck.org to offer live fact-checking of the State of the Union address on Tuesday night on our new FactStream app.

Journalists from the Post, PolitiFact and FactCheck.org will provide real-time updates throughout the speech in two forms:

Ratings – Links to previously published fact-checks with ratings when the president repeats a claim that has been checked before.

Quick takes – Instant updates about a statement’s accuracy. They will be labeled red, yellow and green to indicate their truthfulness.

Tuesday’s speech will be the second test of FactStream. The first test, conducted during last year’s State of the Union address, provided users with 32 updates. We got valuable feedback and have made several improvements to the app.

FactStream is part of the Duke Tech & Check Cooperative, a project to automate fact-checking that is funded by Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation. Additional support has been provided by Google.

FactStream is available for iPhone and iPad (sorry, no Android version yet!) and is a free download from the App Store.

The live event feature for the State of the Union address is marked by an icon of a calendar with a check mark.

The app has two streams. One, shown by the home symbol in the lower left of the screen, provides a constant stream of the latest fact-checks published every day throughout the year. The live event feature for the State of the Union address is marked by an icon of a calendar with a check mark.

Because this is a test, users may encounter a few glitches. We’d love to hear about any bugs you find and get your feedback at team@sharethefacts.org.

Tech & Check in the news

Coverage of the Duke Tech & Check Cooperative’s efforts to strengthen journalism

By stepht@duke.edu - December 14, 2018

It’s been more than a year since the Reporters’ Lab received $1.2 million in grant funding to launch the Duke Tech & Check Cooperative.

Our goal is to link computer scientists and journalists to better automate fact-checking and expand how many people see this vital accountability reporting.

Here’s a sampling of some of the coverage about the range of projects we’re tackling:

Tech & Check:

Associated Press, Technology Near For Real-Time TV Political Fact-Checks
Digital Trends, Real-time fact-checking is coming to live TV. But will networks use it?
Nancy Watzman, Tech & Check: Automating Fact-Checking
Poynter, Automated fact-checking has come a long way. But it still faces significant challenges.
MediaShift, The Fact-Checking Army Waging War on Fake News

FactStream:
NiemanLab, The red couch experiments, early lessons in pop-up fact-checking.
WRAL, Fake news? App will help State of the Union viewers sort out fact, fiction
MediaShift, An Experiment in Live Fact-Checking the State of the Union Speech by Trump
American Press Institute, President Trump’s first State of the Union address is Tuesday night. Here’s how to prepare yourself, factually speaking.
WRAL, App will help viewers sort fact, fiction in State of the Union
NiemanLab, Automated, live fact-checks during the State of the Union? The Tech & Check Cooperative’s first beta test hopes to pull it off
NiemanLab, FactStream debuted live fact-checking with last night’s SOTU. How’d it go?

Tech & Check Alerts:
Poynter, This Washington Post fact check was chosen by a bot

Truth Goggles:
NiemanLab, Truth Goggles are back! And ready for the next era of fact-checking

And …
NiemanLab, So what is that, er, Trusted News Integrity Trust Project all about? A guide to the (many, similarly named) new efforts fighting for journalism
MediaShift, Fighting Fake News: Key Innovations in 2017 from Platforms, Universities and More
NiemanLab, With $4.5 million, Knight is launching a new commission — and funding more new projects — to address declining public trust in media
Poynter, Knight’s new initiative to counter misinformation includes more than $1.3 million for fact-checking projects
Axios, How pro-trust initiatives are taking over the internet
Recode, Why the Craig behind Craigslist gave big bucks to a journalism program
Digital News Report (with Reuters and Oxford), Understanding the Promise and Limits of Automated Fact-Checking
Democratic Minority Staff Report, U.S. House Committee on Science, Space & Technology, Old Tactics, New Tools: A Review of Russia’s Soft Cyber Influence Operations

Reporters’ Lab students are fact-checking North Carolina politicians

Student journalists and computer scientists find claims and report articles for the N.C. Fact-Checking Project

By stepht@duke.edu - November 20, 2018

Duke Reporters’ Lab students expanded vital political journalism during a historic midterm campaign season this fall with the North Carolina Fact-Checking Project.

Five student journalists reviewed thousands of statements that hundreds of North Carolina candidates vying for state and federal offices made online and during public appearances. They collected newsy and checkable claims from what amounted to a firehose of political statements presented as fact.

Duke computer science undergraduates with the Duke Tech & Check Cooperative applied custom-made bots and the ClaimBuster algorithm to scrape and sort checkable political claims from hundreds of political Twitter feeds.
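The triage step above can be sketched in a few lines: score each tweet for "check-worthiness" and surface only the high-scoring ones for human fact-checkers. This is a minimal, hypothetical sketch — ClaimBuster's real model is a trained classifier served via an API, and the `score_claim` stub here only mimics the idea with crude keyword and digit signals.

```python
def score_claim(sentence: str) -> float:
    """Toy stand-in for a check-worthiness model like ClaimBuster.

    Favors sentences containing numbers and factual-sounding words;
    the real system uses a trained classifier, not keyword matching."""
    signals = ["percent", "million", "billion", "voted", "increased"]
    hits = sum(word in sentence.lower() for word in signals)
    has_digit = any(ch.isdigit() for ch in sentence)
    return min(1.0, 0.2 * hits + (0.4 if has_digit else 0.0))

def filter_checkable(tweets, threshold=0.4):
    """Return tweets at or above the check-worthiness threshold,
    highest-scoring first, mirroring how claims were triaged
    before editors picked which ones to assign."""
    scored = [(score_claim(t), t) for t in tweets]
    return [t for s, t in sorted(scored, reverse=True) if s >= threshold]

tweets = [
    "Great to see everyone at the rally tonight!",
    "My opponent voted to cut education funding by 20 percent.",
    "Unemployment increased by 1.5 million under this governor.",
]
print(filter_checkable(tweets))
```

The rally tweet scores zero and is dropped; the two statistical claims clear the threshold and would be queued for reporters.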

Editors and reporters then selected claims the students had logged for most of the project’s 30-plus fact-checks and six summary articles that The News & Observer and PolitiFact North Carolina published between August and November.

Duke senior Bill McCarthy

Duke senior Bill McCarthy was part of the project’s four-reporter team. The North Carolina Local News Lab Fund supported the project to expand local fact-checking during the 2018 midterms and beyond in a large, politically divided and politically active state.

“Publishing content in any which way is exciting when you know it has some value to voters, to democracy,” said McCarthy, who interned at PolitiFact in Washington, D.C. last summer. “It was especially exciting to get so many fact-checks published in so little time.”

Reporters found politicians and political groups often did not stick with the facts during an election season that fielded an unusually large number of candidates statewide and saw a surge in voter turnout.

The N.C. Fact-Checking Project produces nonpartisan journalism

“NC GOP falsely ties dozens of Democrats to single-payer health care plan,” read one project fact-check headline. “Democrat falsely links newly-appointed Republican to health care bill,” noted another. The fact-check “Ad misleads about NC governors opposing constitutional amendments” set the record straight about some Democratic-leaning claims about six proposed amendments to the state constitution.

And on and on.

Digging for the Truth

Work in the lab was painstaking. Five sophomores filled weekday shifts to scour hundreds of campaign websites, social media feeds, Facebook and Google political ads, televised debates, campaign mailers and whatever else they could put their eyes on. Often they recorded one politician’s attacks on an opponent that might, or might not, be true.

Students scanned political chatter from all over the state, tracking competitive state and congressional races most closely. The resulting journalism was news that people could use as they were assessing candidates for the General Assembly and U.S. Congress as well as six proposed amendments to the state constitution.

The Reporters’ Lab launched a mini news service to share each fact-checking article with hundreds of newsrooms across the state for free.

One of more than 30 N.C. Fact-Checking Project articles

The Charlotte Observer, a McClatchy newspaper like the N&O, published several checks. So did smaller publications such as Asheville’s Citizen-Times and the Greensboro News & Record. Newsweek cited a fact-check report by the N&O’s Rashaan Ayesh and Andy Specht about a fake photo of Justice Kavanaugh’s accuser, Christine Blasey Ford, shared by the chairman of the Cabarrus County GOP, which WRAL referenced in a roundup.

Project fact-checks influenced political discourse directly, too. Candidates referred to project fact-checks in campaign messaging on social media and even in campaign ads. Democrat Dan McCready, who lost a close race against Republican Mark Harris in District 9, used project fact-checks in two campaign ads promoted on Facebook and in multiple posts on his campaign Facebook page, for instance.

While N&O reporter Andy Specht was reporting on a deceptive ad from the Stop Deceptive Amendments political committee, the group announced plans to change it.

The fact-checking project will restart in January, when North Carolina’s reconfigured General Assembly opens its first 2019 session.

 

Lessons learned from fact-checking 2018 midterm campaigns

After monitoring political messaging, students see the need for accountability journalism more than ever

By stepht@duke.edu - November 20, 2018

Five Duke undergraduates monitored thousands of political claims this semester during a heated midterm campaign season for the N.C. Fact-Checking Project.

That work helped expand nonpartisan political coverage in a politically divided state with lots of contested races for state and federal seats this fall. The effort resumes in January when the project turns its attention to a newly configured North Carolina General Assembly.

Three student journalists who tackled this work with fellow sophomores Alex Johnson and Sydney McKinney reflect on what they’ve learned so far.

Lizzie Bond

Lizzie Bond: After spending the summer working in two congressional offices on Capitol Hill, I began my work in the Reporters’ Lab and on the N.C. Fact-Checking Project with first-hand knowledge of how carefully elected officials and their staff craft statements in press releases and on social media. This practice derives from a fear of distorting the meaning or connotation of their words. And in this social media age where so many outlets are available for sharing information and for people to consume it, this fear runs deep.

Yet it took discovering one candidate to shift my perspective on the value of our work with the N.C. Fact-Checking Project. That candidate, Peter Boykin, proved to be a much more complicated figure than any other politician whose social media we monitored. The Republican running to represent Greensboro’s District 58 in the General Assembly, Boykin is the founder of “Gays for Trump,” a former online pornography actor, a pro-Trump radio show host and an already controversial far-right online figure with tens of thousands of followers. Poring over Boykin’s nearly a dozen social media accounts, I came across everything from innocuous self-recorded music video covers to contentious content, like hostile characterizations of liberals and advocacy of conspiracy theories, including one about the Las Vegas mass shooting that he pushed with little to no corroborating evidence.

When contrasting Boykin’s posts on both his personal and campaign social media accounts with the more cautious and mild statements from other North Carolina candidates, I realized that catching untruthful claims has a more ambitious goal than simply detecting and reporting falsehoods. By reminding politicians that they should be accountable to the facts in the first place, fact-checking strives to improve their commitment to truth-telling. The push away from truth and decency in our politics and toward sharp antagonism and even alternate realities becomes normalized when Republican leaders support candidates like Boykin as simply another GOP candidate. The N.C. Fact-Checking Project is helping to revive truth and decency in North Carolina’s politics and to challenge the conspiracy theories and pants-on-fire campaign claims that threaten the self-regulating, healthy political society we seek.

Ryan Williams

Ryan Williams: I came into the Reporters’ Lab with relatively little journalism experience. I spent the past summer working on social media outreach and strategy at a nonprofit, where I drafted tweets and wrote the occasional blog post. But I had never tuned into the clipped brevity of political messaging during an election season. The N.C. Fact-Checking Project showed me the importance of people who not only find the facts but report them in a nonpartisan, objective manner that is accessible to the average person.

Following the 2016 election, some people blamed journalists and pollsters for creating false expectations about who would win the presidency. I was one of those critics. In the two and a half months I spent fact-checking North Carolina’s midterm races, I learned how hard fact-checkers and reporters work. My fellow fact-checkers and I compiled a litany of checkable claims made by politicians this midterm cycle. Those claims, along with claims found by the automated claim-finding algorithm ClaimBuster, were the raw material for many fact-checks of some of North Carolina’s hottest races. Those checks were made available to voters ahead of the polls.

Now that election day has come and gone, I am more than grateful for this experience in fact-finding and truth-reporting. Not only was I able to hone my research skills, I gained a deeper understanding of the intricacies of political journalism. I can’t wait to see what claims come out of the next two years leading up to what could be the presidential race of my lifetime.

Jake Sheridan

Jake Sheridan: I’m a Carolina boy who has grown up on the state’s politics. I’ve worked on campaigns, attended the 2012 Democratic National Convention in my hometown of Charlotte and am the son of a longtime news reporter. I thought I knew North Carolina politics before working in the Reporters’ Lab. I was wrong.

While trying to wrap my head around the 300-plus N.C. races, I came to better understand the politics of this state. What matters in the foothills of the Piedmont, I found out, is different than what matters on the Outer Banks and in Asheville. I discovered that campaigns publicly release b-roll so that PACs can create ads for them and saw just how brutal attack ads can be. I got familiar with flooding and hog farms, strange politicians and bold campaign claims.

There was no shortage of checkable claims. That was good for me. But it’s bad for us. I trust politicians less now. The ease with which some N.C. politicians make up facts troubles me. Throughout this campaign season in North Carolina, many politicians lied, misled and told half truths. If we want democracy to work — if we want people to vote based on what is real so that they can pursue what is best for themselves and our country — we must give them truth. Fact-checking is essential to creating that truth. It has the potential to place an expectation of explanation upon politicians making claims. That’s critical for America if we want to live in a country in which our government represents our true best interests and not our best interests in an alternate reality.