Press "Enter" to skip to content


The lessons of Squash, our groundbreaking automated fact-checking platform

Squash began as a crazy dream.

Soon after I started PolitiFact in 2007, readers began suggesting a cool but far-fetched idea. They wanted to see our fact checks pop up on live TV.

That kind of automated fact-checking wasn’t possible with the technology available back then, but I liked the idea so much that I hacked together a PowerPoint of how it might look. It showed a guy watching a campaign ad when PolitiFact’s Truth-O-Meter suddenly popped up to indicate the ad was false.

Bill Adair’s original depiction of pop-up fact-checking.

It took 12 years, but our team in the Duke University Reporters’ Lab managed to make the dream come true. Today, Squash (our code name for the project, chosen because it is a nutritious vegetable and a good metaphor for stopping falsehoods) has been a remarkable success. It displays fact checks seconds after politicians utter a claim and it largely does what those readers wanted in 2007.

But Squash also makes lots of mistakes. It converts politicians’ speech to the wrong text (often with funny results) and it frequently stays idle because there simply aren’t enough claims that have been checked by the nation’s fact-checking organizations. It isn’t quite ready for prime time.

As we wrap up four years on the project, I wanted to share some of our lessons to help developers and journalists who want to continue our work. There is great potential in automated fact-checking and I’m hopeful that others will build on our success.

When I first came to Duke in 2013 and began exploring the idea, it went nowhere. That’s partly because the technology wasn’t ready and partly because I was focused on the old way that campaign ads were delivered — through conventional TV. That made it difficult to isolate ads the way we needed to.

But the technology changed. Political speeches and ads migrated to the web and my Duke team partnered with Google, Jigsaw and Schema.org to create ClaimReview, a tagging system for fact-check articles. Suddenly we had the key elements that made instant fact-checking possible: accessible video and a big database of fact checks.

I wasn’t smart enough to realize that, but my colleague Mark Stencel, the co-director of the Reporters’ Lab, was. He came into my office one day and said ClaimReview was a game changer. “You realize what you’ve done, right? You’ve created the magic ingredient for your dream of live fact-checking.” Um … yes! That had been my master plan all along!

Fact-checkers use the ClaimReview tagging system to indicate the person and claim being checked, which not only helps Google highlight the articles in search results but also builds a big database of checks that Squash can tap.
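
For readers who haven't seen it, here's a minimal example of what ClaimReview markup looks like in schema.org's JSON-LD format; the names, claim, rating and URL are invented for illustration:

```json
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://www.politifact.com/factchecks/2020/example/",
  "author": { "@type": "Organization", "name": "PolitiFact" },
  "claimReviewed": "The U.S. added 500,000 manufacturing jobs last year.",
  "itemReviewed": {
    "@type": "Claim",
    "author": { "@type": "Person", "name": "Jane Candidate" }
  },
  "reviewRating": { "@type": "Rating", "alternateName": "False" }
}
```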

It would be difficult to overstate the technical challenge we were facing. No one had attempted this kind of work beyond doing a demo, so there was no template to follow. Fortunately we had a smart technical team and some generous support from the Knight Foundation, Craig Newmark and Facebook.

Christopher Guess, our wicked-smart lead technologist, had to invent new ways to do just about everything, combining open-source tools with software that he built himself. He designed a system to ingest live TV and process the audio for instant fact-checking. It worked so fast that we had to slow down the video.

To reduce the massive amount of computer processing, a team of students led by Duke computer science professor Jun Yang came up with a creative way to filter out sentences that did not contain factual claims. They used ClaimBuster, an algorithm developed at the University of Texas at Arlington, to act like a colander that kept only good factual claims and let the others drain away.

Squash works by converting audio to text and then matching the claim against a database of fact-checks.

Today, this is how Squash works: It “listens” to a speech or debate, sending audio clips to Google Cloud that are converted to text. That text is then run through ClaimBuster, which identifies sentences the algorithm believes are good claims to check. They are compared against the database of published fact checks to look for matches. When one is found, a summary of that fact check pops up on the screen.
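
For developers who want to build on this work, here's a minimal Python sketch of that loop. The helper names are stand-ins, not Squash's actual code, and the 0.7 cutoff borrows the ClaimBuster check-worthiness threshold described in the Tech & Check Alerts post later on this page:

```python
import re

CHECKWORTHY_THRESHOLD = 0.7  # ClaimBuster-style scores run from 0 to 1.0

def split_into_sentences(text):
    """Naive splitter; the real pipeline leans on Google's inferred punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def process_clip(transcribe, score_claim, find_fact_check, display, audio_clip):
    """One pass of the pipeline: audio -> text -> claim filter -> match -> pop-up."""
    text = transcribe(audio_clip)                  # e.g. Google Cloud Speech-to-Text
    for sentence in split_into_sentences(text):
        if score_claim(sentence) < CHECKWORTHY_THRESHOLD:
            continue                               # probably not a factual claim
        match = find_fact_check(sentence)          # search the ClaimReview database
        if match is not None:
            display(f"Related fact-check: {match}")
```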

The first few times you see the related fact check appear on the screen, it’s amazing. I got chills. I felt I was getting a glimpse of the future. The dream of those PolitiFact readers from 2007 had come true.

But …

Look a little closer and you will quickly realize that Squash isn’t perfect. If you watch in our web mode, which shows Squash’s AI “brain” at work, you will see plenty of mistakes as it converts voice to text. Some are real doozies.

Last summer during the Democratic convention, former Iowa Gov. Tom Vilsack said this: “The powerful storm that swept through Iowa last week has taken a terrible toll on our farmers …”

But Squash (it was really Google Cloud) transcribed it as “Armpit sweat through the last week is taking a terrible toll on our farmers.”

Squash’s matching algorithm also makes too many mistakes finding the right fact check. Sometimes it is right on the money. It often correctly matched then-President Donald Trump’s statements on China, the economy and the border wall.

But other times it comes up with bizarre matches. Guess and our project manager Erica Ryan, who spends hours analyzing the results of our tests, believe this often happens because Squash mistakenly thinks an individual word or number is important. (Our all-time favorite was in our first test, when it matched a sentence by President Trump about men walking on the moon with a Washington Post fact-check about the bureaucracy for getting a road permit. The match occurred because both included the word “years.”)

Squash works by detecting politicians’ claims and matching them with related fact-checks. (Screengrab from Democratic debate)

To reduce the problem, Guess built a human editing tool called Gardener that enables us to weed out the bad matches. That helps a lot because the editor can choose the best fact check or reject them all.

The most frustrating problem is that much of the time, Squash just sits there, idle, even when politicians are spewing sentences packed with factual claims. Squash is working properly, Guess assures us; it just isn’t finding any fact checks that are even close. This happened in our latest test, a news conference by President Joe Biden, when Squash could muster only two matches in more than an hour.

That problem is a simple one: There simply are not enough published fact checks to power Squash (or any other automated app).

We need more fact checks – As I noted in the previous section, this is a major shortcoming that will hinder anyone who wants to draw from the existing corpus of fact checks. Despite the steady growth of fact-checking in the United States and around the world, and despite the boom that occurred in the Trump years, there simply are not enough fact checks of enough politicians to provide enough matches for Squash and similar apps.

We had our greatest success during debates and party conventions, events when Squash could draw from a relatively large database of checks on the candidates from PolitiFact, FactCheck.org and The Washington Post. But we could not use Squash on state and local events because there simply were not enough fact-checks for possible matches.

Ryan and Guess believe we need dozens of fact checks on a single candidate, across a broad range of topics, to have enough to make Squash work.

More armpit sweat is needed to improve voice-to-text – We all know the limitations of Siri, which still gets a lot of things wrong despite years of tweaks and improvements by Apple. That’s a reminder that improving voice-to-text technology remains a difficult challenge. It’s especially hard in political events when audio can be inconsistent and when candidates sometimes shout at each other. (Identifying speakers in debates is yet another problem.)

As we currently envision Squash and this type of automated fact-checking, we rely on voice-to-text transcription, but given the difficulty of automated “hearing,” we’ll have to accept a certain error level for the foreseeable future.

Matching algorithms can be improved – This is one area where we’re optimistic. Most of our tests relied on off-the-shelf search engines to do the matching, until Guess began experimenting with a new approach that relies on subject tags (which unfortunately are not included in ClaimReview) to help the algorithm make smarter choices and avoid irrelevant matches.

The idea is that if Squash knows the claim is about guns, it would find the best matches from published fact checks that have been tagged under the same subject. Guess found this approach promising but did not get a chance to try it at scale.
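
Here's a rough sketch of what that subject-constrained matching could look like; classify_subject, the similarity function and the 0.5 floor are our illustrative stand-ins, not Guess's implementation:

```python
def match_with_subject(claim, classify_subject, fact_checks, similarity, min_score=0.5):
    """Only consider fact checks that share the claim's subject tag."""
    subject = classify_subject(claim)  # e.g. "guns", "immigration"
    candidates = [fc for fc in fact_checks if subject in fc.get("subjects", ())]
    if not candidates:
        return None  # better to stay idle than pop up an irrelevant match
    best = max(candidates, key=lambda fc: similarity(claim, fc["claim_reviewed"]))
    return best if similarity(claim, best["claim_reviewed"]) >= min_score else None
```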

Until the matching improves, we’ve found humans are still needed to monitor and manage anything that gets displayed — as we did with our Gardener tool.

Ugh, UX – The simplest part of my vision, the Truth-O-Meter popping up on the screen, ended up being one of our most complex challenges. Yes, Guess was able to make the meter or the Washington Post Pinocchios pop up, but what were they referring to? This question of user experience was tricky in several ways.

First, we were not providing an instant fact check of the statement that was just said. We were popping up a summary of a related fact check that was previously published. Because politicians repeat the same talking points, the statements were generally similar and in some cases, even identical. But we couldn’t guarantee that, so we labeled the pop-up “Related fact-check.”

Second, the fact check appeared during a live, fast-moving event. So we realized it could be unclear to viewers which previous statement the pop-up referred to. This was especially tricky in a debate when candidates traded competing factual claims, because the pop-up could refer to either of them. The visual design that seemed so simple for my PowerPoint a decade earlier didn’t work in real life. Was that “False” Truth-O-Meter for the immigration statement Biden made? Or the one Trump made?

Another UX problem: To give people time to read all the text (the related fact checks sometimes had lengthy statements), Guess had them linger on the screen for 15 seconds. And our designer Justin Reese made them attractive and readable. But by the end of that time the candidates might have made two more factual claims, further confusing viewers who saw the “False” meter.

So UX wasn’t just a problem, it was a tangle of many problems involving limited space on the screen (What should we display and where? Will readers understand the concept that the previous fact check is only related to what was just said?), time (How long should we display it in relation to when the politician spoke?) and user interaction (Should our web version allow users to pause the speech or debate to read a related fact check?). It’s an enormously complicated challenge.

* * *

Looking back at my PowerPoint vision of how automated fact-checking would work, we came pretty close. We succeeded in using technology to detect political speech and make relevant fact checks automatically pop up on a video screen. That’s a remarkable achievement, a testament to groundbreaking work by Guess and an incredible team.

But there are plenty of barriers that make it difficult for us to realize the dream and will challenge anyone who tries to tackle this in the future. I hope others can build on our successes, learn from our mistakes, and develop better versions in years to come.


Pop-up fact-checking moves online: Lessons from our user experience testing

We initially wanted to build pop-up fact-checking for a TV screen. But for nearly a year, people have told us in surveys and in coffee shops that they like live fact-checking but they need more information than they can get on a TV.

The testing is a key part of our development of Squash, our groundbreaking live fact-checking product. We started by interviewing a handful of users of our FactStream app. We wanted to know how they found out about the app, how they find fact checks about things they hear on TV, and what they would need to trust live fact-checking. As we saw in our “Red Couch Experiments” in 2018, they were excited about the concept but they wanted more than a TV screen allowed. 

We supplemented those interviews with conversations in coffee shops – “guerilla research” in user experience (UX) terms. And again, the people we spoke with were excited about the concept but wanted more information than a 1740×90 pixel display could accommodate.

The most common request was the ability to access the full published fact-check. Some wanted to know if more than one fact-checker had vetted the claim, and if so, did they all reach the same conclusion? Some just wanted to be able to pause the video. 

Since those things weren’t possible with a conventional TV display, we pivoted and began to imagine what live fact-checking would look like on the web. 

Bringing Pop-Up Fact-Checking to the Web

In an online whiteboard session, our Duke Tech & Check Cooperative team discussed many possibilities for bringing live fact-checking online. Then our UX team — students Javan Jiang and Dora Pekec, and I — designed a new interface for live fact-checking and tested it in a series of simple open-ended preference surveys.

In total, 100 people responded to these surveys, in addition to the eight interviews above and a large experiment with 1,500 participants we did late last year about whether users want ratings in on-screen displays (they do). 

A common theme emerged in the new research: Make live fact-checking as non-disruptive to the viewing experience as possible. More specifically, we found three things that users want and need from the live fact-checking experience.

  • Users prefer a fact-checking display beneath the video. In our initial survey, users could choose between a display beside the video and one beneath it. About three-quarters of respondents said that a display beneath the video was less disruptive to their viewing, with several telling us that this placement was similar to existing video platforms such as YouTube.
  •  Users need “persistent onboarding” to make use of the content they get from live fact-checking. A user guide or FAQ is not enough. Squash can’t yet provide real-time fact-checking. It is a system that matches claims made during a televised event to claims previously checked. But users need to be reminded that they are seeing a “related fact-check,” not necessarily a perfect match to the claim they just heard. “Persistent onboarding” means providing users with subtle reminders in the display. For example, when a user hovers over the label “Related Fact Check,” a small box could explain that this is not a real-time fact check but an already published fact check about a similar claim made in the past. This was one of the features users liked most because it kept them from having to find the information themselves.
  • Users prefer to have all available information on the initial screen. Our first test allowed users to expand the display to see more information about the fact check, such as the publisher of the fact check and an explanation of what statement triggered the system to display it. But users said that having to toggle the display to see this information was disruptive.
Users told us they wanted more on-screen explanations, sometimes called “persistent onboarding.”

More to Learn

Though we’ve learned a lot, some big questions remain. We still don’t know what live fact-checking looks like under less-than-ideal conditions. For example, how would users react to a fact check when the spoken claim is true but the relevant fact check is about a claim that was false? 

And we need to figure out timing, particularly for multi-speaker events such as debates. When is the right time to display a fact-check after a politician has spoken? And what if the screen is now showing another politician?

And how can we appeal to audiences that are skeptical of fact-checking? One respondent specifically said he’d want to be able to turn off the display because “none of the fact-checkers are credible.” What strategies or content would help make such audiences more receptive to live fact-checking? 

As we wrestle with those questions, moving live fact-checking to the web still opens up new possibilities, such as the ability to pause content (we call that “DVR mode”), read fact-checks and return to the event. We are hopeful this shift in platform will ultimately bring automated fact-checking to larger audiences.


Squash report card: Improvements during State of the Union … and how humans will make our AI smarter

Squash, the experimental pop-up fact-checking product of the Reporters’ Lab, is getting better.

Our live test during the State of the Union address on Feb. 4 showed significant improvement over our inaugural test last year. Squash popped up 14 relevant fact-checks on the screen, up from just six last year.

That improvement matches a general trend we’ve seen in our testing. We’ve had a higher rate of relevant matches when we use Squash on videos of debates and speeches.

But we still have a long way to go. This month’s State of the Union speech also had 20 non-relevant matches, which means Squash displayed fact-checks that weren’t related to what the president said. If you’d been watching at that moment, you probably would have thought, “What is Squash thinking?”

We’re now going to try two ways to make Squash smarter: a new subject tagging system that will be based on a wonderfully addictive game developed by our lead technologist Chris Guess; and a new interface that will bring humans into the live decision-making. Squash will recommend fact-checks to display, but an editor will make the final judgment.

Some background in case you’re new to our project: Squash, part of the Lab’s Tech & Check Cooperative, is a revolutionary new product that displays fact-checks on a video screen during a debate or political speech. Squash “hears” what politicians say, converts their speech to text and then searches a database of previously published fact-checks for one that’s related. When Squash finds one, it displays a summary on the screen.

For our latest tests, we’ve been using Elasticsearch, a tool for building search engines that we’ve made smarter with two filters: ClaimBuster, an algorithm that identifies factual claims, and a large set of common synonyms. ClaimBuster helps Squash avoid wasting time and effort on sentences that aren’t factual claims, and the synonyms help it make better matches.
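
To illustrate the synonym idea, here's roughly how an Elasticsearch index can be configured with a synonym filter so that, say, "POTUS" and "president" match; the index name, field names and synonym pairs are invented for the example, not taken from Squash:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumes a local cluster

# Create an index whose text analyzer expands common political synonyms.
es.indices.create(
    index="fact_checks",
    settings={
        "analysis": {
            "filter": {
                "claim_synonyms": {
                    "type": "synonym",
                    "synonyms": [
                        "potus, president",
                        "obamacare, affordable care act",
                    ],
                }
            },
            "analyzer": {
                "claim_analyzer": {
                    "tokenizer": "standard",
                    "filter": ["lowercase", "claim_synonyms"],
                }
            },
        }
    },
    mappings={
        "properties": {
            "claim_reviewed": {"type": "text", "analyzer": "claim_analyzer"}
        }
    },
)

# A spoken claim is then matched with an ordinary full-text query.
hits = es.search(
    index="fact_checks",
    query={"match": {"claim_reviewed": "POTUS doubled the deficit"}},
)
```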

Guess, assisted by project manager Erica Ryan and student developers Jack Proudfoot and Sanha Lim, will soon be testing a new way of matching that uses natural language processing based on the subject of the fact-check. We believe that we’ll get more relevant matches if the matching is based on subjects rather than just the words in the politicians’ claims.

But to make that possible, we have to put subject tags on thousands of fact-checks in our ClaimReview database. So Guess has created a game called Caucus that displays a fact-check on your phone and then asks you to assign subject tags to it. The game is oddly addictive. Every time you submit one, you want to do another…and another. Guess has a leaderboard so we can keep track of who is tagging the most fact-checks. We’re testing the game with our students and staff, but hope to make it public soon.

We’ve also decided that Squash needs a little human help. Guess, working with our student developer Matt O’Boyle, is building an interface for human editors to control which matches actually pop up on users’ screens.

The new interface would let them review the fact-check that Squash recommends and decide whether to let it pop up on the screen, which should help us filter out most of the unrelated matches.

That should eliminate the slightly embarrassing problem when Squash makes a match that is comically bad. (My favorite: one from last year’s State of the Union when Squash matched the president’s line about men walking on the moon with a fact-check on how long it takes to get a permit to build a road.)

Assuming the new interface works relatively well, we’ll try to do a public demo of Squash this summer. 

Slowly but steadily, we are making progress. Watch for more improvements soon.


Beyond the Red Couch: Bringing UX Testing to Squash

Fact-checkers have a problem.

They want to use technology to hold politicians accountable by getting fact-checks in front of the public as quickly as possible. But they don’t yet know the best ways to make their content understood. At the Duke Reporters’ Lab, that’s where Jessica Mahone comes in.

Jessica Mahone is designing tests to help Duke Reporters’ Lab researchers figure out how to clearly share fact-checks live during broadcasts. Photo by Andrew Donohue

The Lab is developing Squash, a tool built to bring live fact-checking of politicians to TV. Mahone, a social scientist, was brought on board to design experiments and conduct user experience (UX) tests for Squash. 

UX design is the discipline focused on making new products easy to use. A clear UX design means that a product is intuitive and new users get it without a steep learning curve. 

“If people can’t understand your product or find it hard to use, then you are doomed from the start. With Squash, this means that we want people to comprehend the information and be able to quickly determine whether a claim is true or not,” Mahone said.

For Squash, fact-check content that pops up on screens needs to be instantly understood since it will only be visible for a few seconds. So what’s the best way?

Bill Adair, the director of the Duke Tech & Check Cooperative, organized some preliminary testing last year that he dubbed the red couch experiments. The tests revealed more research was needed to understand the best way to inform viewers. 

“I originally thought that all it would take is a Truth-O-Meter popping up on screen,” Adair said. “Turns out it’s much more complicated than that.”

Sixteen people watched videos of Barack Obama and Donald Trump delivering State of the Union speeches while fact-checks of some of what they said appeared on the screen. Ratings were true, false or something in between. Blink, a company specializing in UX testing, found that participants loved the concept of real-time fact-checking and would welcome it on TV broadcasts. But the design of the pop-up fact-checks often confused them.

It’s not just the quality of content that counts. Viewers must understand what they see very quickly. Squash may one day share fact-checks during live events, including State of the Union addresses.

Some viewers didn’t understand the fact-check ratings such as true or false when they were displayed. Others assumed the presidents’ statements must be true if no fact-check was shown. That’s a problem because Squash doesn’t fact-check all claims in speeches. It displays previously published fact-checks for only the claims that match Squash’s finicky search algorithm.

The red couch experiments were “a very basic test of the concept,” Mahone said. “What they found mainly is that there was a need to do more diving in and digging into some questions about how people respond to this. Because it’s actually quite complex.”

Mahone has developed a new round of tests scheduled to begin this week. These tests will use Amazon Mechanical Turk, an online platform that relies on people who sign up to be paid research subjects.

“One thing that came out of the initial testing was that people don’t like to see a rating of a fact-check,” Mahone said. “I was a little skeptical of that. Most of the social science research says that people do prefer things like that because it makes it a lot easier for them to make decisions.”

In this next phase, Mahone will recruit about 500 subjects. A third will see a summary of a fact-check with a PolitiFact TRUE icon. Another third will see a summary with just the label TRUE. The rest will see just the summary text of a fact-check.

Each viewer will rank how interested they are in using an automated fact-checking tool after viewing the different displays. Mahone will compare the results.

After finding out if including ratings works, Mahone and three undergraduate students, Dora Pekec, Javan Jiang and Jia Dua, will look at the bigger picture of Squash’s user experience. They will use a company to find about 20 people to talk to, ideally individuals who consistently watch TV news and are familiar with fact-checking.

Participants will be asked what features they would want in real-time fact-checking.

“The whole idea is to ask people ‘Hey, if you had access to a tool that could tell you if what someone on TV is saying is true or false, what would you want to see in that tool?’ ” Mahone said. “We want to figure out what people want and need out of Squash.”

Figuring out how to make Squash intuitive is critical to its success, according to Chris Guess, the Lab’s lead technologist. Part of the challenge is that Squash is something new and viewers have no experience with similar products.

“These days, people do a lot more than just watch a debate. They’re cooking dinner, playing on their phone, watching over the kids,” Guess said. “We want people to be able to tune in, see what’s going on, check out the automated fact-checks and then be able to tune out without missing anything.”

Reporters’ Lab researchers hope to have Squash up and running for the homestretch of the 2020 presidential campaign. Adair, Knight Professor of the Practice of Journalism and Public Policy at Duke, has begun reaching out to television executives to gauge their interest in an automated fact-checking tool. 

“TV networks are interested, but they want to wait and see a product that is more developed,” Adair said.

 


Tech & Check in the news

It’s been more than a year since the Reporters’ Lab received $1.2 million in grant funding to launch the Duke Tech & Check Cooperative.

Our goal is to link computer scientists and journalists to better automate fact-checking and expand how many people see this vital, accountability reporting.

Here’s a sampling of some of the coverage about the range of projects we’re tackling:

Tech & Check:
Associated Press, Technology Near For Real-Time TV Political Fact-Checks
Digital Trends, Real-time fact-checking is coming to live TV. But will networks use it?
Nancy Watzman, Tech & Check: Automating Fact-Checking
Poynter, Automated fact-checking has come a long way. But it still faces significant challenges.
MediaShift, The Fact-Checking Army Waging War on Fake News

FactStream:
NiemanLab, The red couch experiments, early lessons in pop-up fact-checking.
WRAL, Fake news? App will help State of the Union viewers sort out fact, fiction
Media Shift, An Experiment in Live Fact-Checking the State of the Union Speech by Trump
American Press Institute, President Trump’s first State of the Union address is Tuesday night. Here’s how to prepare yourself, factually speaking.
WRAL, App will help viewers sort fact, fiction in State of the Union
NiemanLab, Automated, live fact-checks during the State of the Union? The Tech & Check Cooperative’s first beta test hopes to pull it off
NiemanLab, FactStream debuted live fact-checking with last night’s SOTU. How’d it go?

Tech & Check Alerts:
Poynter, This Washington Post fact check was chosen by a bot

Truth Goggles:
NiemanLab, Truth Goggles are back! And ready for the next era of fact-checking

And …
NiemanLab, So what is that, er, Trusted News Integrity Trust Project all about? A guide to the (many, similarly named) new efforts fighting for journalism
MediaShift, Fighting Fake News: Key Innovations in 2017 from Platforms, Universities and More
NiemanLab, With $4.5 million, Knight is launching a new commission — and funding more new projects — to address declining public trust in media
Poynter, Knight’s new initiative to counter misinformation includes more than $1.3 million for fact-checking projects
Axios, How pro-trust initiatives are taking over the internet
Recode, Why the Craig behind Craigslist gave big bucks to a journalism program
Digital News Report (with Reuters and Oxford), Understanding the Promise and Limits of Automated Fact-Checking
Democratic Minority Staff Report, U.S. House Committee on Science, Space & Technology, Old Tactics, New Tools: A Review of Russia’s Soft Cyber Influence Operations


Duke students tackle big challenges in automated fact-checking

This summer, three Duke computer science majors advanced the quest for what some computer scientists call the Holy Grail of fact-checking.

Caroline Wang, Ethan Holland and Lucas Fagan tackled major challenges in creating an automated system that can detect factual claims while politicians speak and instantly provide fact-checks.

That required finding and customizing state-of-the-art computing tools that most journalists would not recognize. A collective fondness for that sort of challenge helped a lot.

Duke junior Caroline Wang

“We had a lot of fun discussing all the different algorithms out there, and just learning what machine learning techniques had been applied to natural language processing,” said Wang, a junior also majoring in math.

Wang and her partners took on the assignment for a Data+ research project. Part of the Information Initiative at Duke, Data+ invites students and faculty to find data-driven solutions to research challenges confronting scholars on campus.

The fact-checking team convened in a Gross Hall conference room from 9 a.m. to 4 p.m. every weekday for 10 weeks, helping each other figure out how to achieve live fact-checking, a goal of Knight journalism professor Bill Adair and other practitioners of accountability journalism.

Their goal was to do something of a “rough cut” of end-to-end automated fact-checking: to convert a political speech to text, identify the most “checkable” sentences in the speech and then match them with previously published fact-checks.

The students concluded that Google Cloud Speech-to-Text API was the best available tool to automate audio transcriptions. They then submitted the sentences to ClaimBuster, a project at the University of Texas at Arlington that the Duke Tech & Check Cooperative uses to identify statements that merit fact-checking. ClaimBuster acted as a helpful filter that reduced the number of claims submitted to the database, which in turn reduced processing time.

They chose Google Cloud speech-to-text because it can infer where punctuation belongs, Holland said. That yields text divided into complete thoughts. Google speech-to-text also shares transcription results while it is still processing the audio, rather than waiting until the transcription is done. That speeds up how quickly new text moves to the next steps in the fact-checking pipeline.

Duke junior Ethan Holland

“Google will say: This is my current take and this is my current confidence that take is right. That lets you cut down on the lag,” said Holland, a junior whose second major is statistics.
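
Both features Holland describes — inferred punctuation and interim results with confidence values — map to real options in Google's streaming speech API. A minimal sketch, with the audio encoding and sample rate as assumptions about the source feed:

```python
from google.cloud import speech

client = speech.SpeechClient()  # assumes credentials are already configured

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_automatic_punctuation=True,  # the punctuation inference mentioned above
)
streaming_config = speech.StreamingRecognitionConfig(
    config=config,
    interim_results=True,  # provisional transcripts arrive before the audio ends
)

def transcribe(audio_chunks):
    """audio_chunks: iterable of raw PCM byte strings from the broadcast."""
    requests = (speech.StreamingRecognizeRequest(audio_content=c) for c in audio_chunks)
    for response in client.streaming_recognize(streaming_config, requests):
        for result in response.results:
            # Interim results are Google's "current take"; final results are
            # what would be handed to the claim-detection step.
            print(result.is_final, result.alternatives[0].transcript)
```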

Their next step was finding ways to match the claims from that speech with the database of fact-checks that came from the Lab’s Share the Facts project. (The database contains thousands of articles published by the Washington Post, FactCheck.org and PolitiFact, each checking an individual claim.)

To do that, the students adapted an algorithm that the research lab OpenAI released in June, after the students started working together. The algorithm builds on The Transformer, a neural network architecture that Google researchers had published the year before.

Duke sophomore Lucas Fagan

The architecture changes how computers organize the task of understanding written language. Instead of processing a sentence word by word, The Transformer weighs the importance of each word to the meaning of every other word. That approach helps machines discern meaning in more sentences, more quickly.

“It’s a lot more like learning English. You grow up hearing it and you learn it,” said Fagan, a sophomore also majoring in math.
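
For the technically curious, here's a toy numpy sketch of that idea: single-head self-attention, in which each word's new representation is a weighted mix of every word in the sentence. A real Transformer adds learned query/key/value projections, multiple heads and many stacked layers:

```python
import numpy as np

def self_attention(X):
    """Minimal self-attention over X, an (n_words, d) matrix of word vectors."""
    Q, K, V = X, X, X  # untrained toy projections; real models learn these
    scores = Q @ K.T / np.sqrt(X.shape[1])   # relevance of word j to word i
    scores -= scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over the sentence
    return weights @ V  # each word becomes a weighted mix of all the words
```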

Work by Wang, Holland and Fagan is expected to help jumpstart a Bass Connections fact-checking team that started this fall. Students on that team will continue the hunt for better strategies to find statements that are good fact-check candidates, produce pop-up fact-checks and create apps to deliver this accountability journalism to more people.

Tech & Check has $1.2 million in funding from the John S. and James L. Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation to tackle that job.


FactStream app now shows latest fact-checks from Post, FactCheck.org and PolitiFact

FactStream, our iPhone/iPad app, has a new feature that displays the latest fact-checks from FactCheck.org, PolitiFact and The Washington Post.

FactStream was conceived as an app for live fact-checking during debates and speeches. (We had a successful beta test during the State of the Union address in January.) But our new “daily stream” makes the app valuable every day. You can check it often to get summaries of the newest fact-checks and then click through to the full articles.

The new version of FactStream lets users get notifications of the latest fact-checks.

By viewing the work of the nation’s three largest fact-checkers in the same stream, you can spot trends, such as which statements and subjects are getting checked, or which politicians and organizations are getting their facts right or wrong.

The new version of the app includes custom notifications so users can get alerts for every new fact-check or every “worst” rating, such as Four Pinocchios from Washington Post Fact Checker Glenn Kessler, a False from FactCheck.org or a False or Pants on Fire from PolitiFact.

The daily stream shows the latest fact-checks.

The new daily stream was suggested by Eugene Kiely, the director of FactCheck.org. The app was built by our lead technologist Christopher Guess and the Durham, N.C., design firm Registered Creative. It gets the fact-check summaries from ClaimReview, our partnership with Google that has created a global tagging system for fact-checking. We plan to expand the daily stream to include other fact-checkers in the future.

The app also allows users to search the latest fact-checks by the name of the person or group making the statement, by subject or keyword.

Users can get notifications on their phones and on their Apple Watch.

FactStream is part of the Duke Tech & Check Cooperative, a $1.2 million project to automate fact-checking supported by Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation.

FactStream is available as a free download from the App Store.

 


At Global Fact V: A celebration of community

My opening remarks at Global Fact V, the fifth annual meeting of the world’s fact-checkers, organized by the International Fact-Checking Network, held June 20-22 in Rome.

A couple of weeks ago, a photo from our first Global Fact showed up in my Facebook feed. Many of you will remember it: we had all been crammed into a classroom at the London School of Economics. When we went outside for a group photo, there were about 50 of us.

To show how our conference has grown, I posted that photo on Twitter along with one from our 2016 conference that had almost twice as many people. I also posted a third photo that showed thousands of people gathered in front of the Vatican. I said that was our projected crowd for this conference.

I rate that photo Mostly True.

What all of our conferences have in common is that they are really about community. It all began in that tiny classroom at the London School of Economics when we realized that whether we were from Italy or the U.K. or Egypt, we were all in this together. We discovered that even though we hadn’t talked much before or in many cases even met, we were facing the same challenges — fundraising and finding an audience and overcoming partisanship.

It was also a really powerful experience because we got a sense of how some fact-checkers around the world were struggling under difficult circumstances — under governments that provide little transparency, or, much worse, governments that oppress journalists and are hostile toward fact-checkers.

Throughout that first London conference there was an incredible sense of community. We’d never met before, but in just a couple of days we formed strong bonds. We vowed to keep in touch and keep talking and help each other.

It was an incredibly powerful experience for me. I was at a point in my career where I was trying to sort out what I would do in my new position in academia. I came back inspired and decided to start an association of fact-checkers – and hold these meetings every year.

The next year we started the IFCN and Poynter generously agreed to be its home. And then we hired Alexios as the leader.

Since then, there have been two common themes. One you hear so often that it’s become my mantra: Fact-checking keeps growing. Our latest census of fact-checking in the Reporters’ Lab shows 149 active fact-checking projects, and I’m glad to see that number keep going up and up.

The other theme, as I noted earlier, is community. I thought I’d focus this morning on a few examples.

Let’s start with Mexico, where more than 60 publishers, universities and civil society organizations have started Verificado 2018, a remarkable collaboration. It was originally focused largely on false news, but they’ve put more emphasis on fact-checking because of public demand. Daniel Funke wrote a great piece last week about how they checked a presidential debate.

In Norway, an extraordinary team of rivals has come together to create Faktisk, which is Norwegian for “actually” and “factually.” It launched nearly a year ago with four of the country’s biggest news organizations — VG, Dagbladet, NRK and TV 2 – and it’s grown since then. My colleague Mark Stencel likened it to the New York Times, The Washington Post and PBS launching a fact-checking project together.

 

At Duke, both of our big projects are possible because of the fact-checkers’ commitment to help each other. The first, Share the Facts and the creation of the ClaimReview schema, grew out of an idea from Glenn Kessler, the Washington Post Fact Checker, who suggested that Google put “fact-check” tags on search results.

That idea became our Duke-Google-Schema.org collaboration that created what many of you now use so search engines can find your work. And one unintended consequence: it makes automated fact-checking more possible. It all started because of one fact-checker’s sense of community.

Also, FactStream, the new app of our Tech & Check Cooperative, has been a remarkable collaboration between the big US fact-checkers — the Post, FactCheck.org and PolitiFact. All three took part in the beta test of the first version, our live coverage of the State of the Union address back in January. Getting them together on the same app was quite a feat. But our new version of the app — which we’re releasing this week — is even cooler. It’s like collaboration squared, or collaboration to the second power!

It took Glenn’s idea, which created the Share the Facts widget, and combined it with an idea from Eugene Kiely, the head of FactCheck.org, who said we should create a new feature on FactStream that shows the latest U.S. widgets every day.

So that’s what we did. And you know what: it’s a great new feature that reveals new things about our political discourse. Every day, it shows the latest fact-checks in a constant stream and users can click through, driving new traffic to the fact-checking sites. I’ll talk more about it during the automated demo session on Friday. But it wouldn’t be possible if it weren’t for the commitment to collaboration and community by Glenn and Eugene.

We’ve got a busy few days ahead, so let’s get on with it. There sure are a lot of you!

As we know from the photographs: fact-checking keeps growing.

 


Tech & Check Alerts aim to ease the workload of fact-checkers

Students in the Duke Reporters’ Lab have built a bot that is like an intern who watches TV around the clock.

Asa Royal, a junior at Duke University, and Lucas Fagan, a freshman, have created Tech & Check Alerts, a new tool in a series of innovations the Reporters’ Lab is creating to help simplify the fact-checking process.

Using Tech & Check Alerts, the Lab can identify check-worthy claims in television news transcripts and send them to fact-checkers in daily email alerts.

“We’re going to save fact-checkers a lot of time and help them find things that they would otherwise miss,” said Mark Stencel, co-director of the Reporters’ Lab.

Though the fact-checking industry is growing worldwide, the organizations doing that work are typically small, even one-person enterprises, and the workload can be burdensome. Fact-checkers often have to sift through pages of text to find claims to check. This time-consuming process can create a substantial time gap between when statements are made and when fact-checks are available to viewers or readers.

The Tech & Check Alerts automate that process. Royal and Fagan, who are both computer science majors, created a program that scans transcripts of TV news channels, such as CNN, for claims that fact-checkers may want to investigate. It then compiles the check-worthy claims and sends them in a daily email to fact-checkers at The Washington Post, PolitiFact, the Associated Press, FactCheck.org and The New York Times, among others. Thus far, there have been seven fact-checks performed based on these alerts.

“Journalists don’t have to watch 15 hours of CNN or read the entire congressional report,” Royal said. “We’ll do it for them.”

Royal and Fagan created Tech & Check Alerts using ClaimBuster, an algorithm created by computer scientist Chengkai Li from the University of Texas at Arlington. ClaimBuster scans blocks of text and identifies “check-worthy” claims, based on indicators such as past-tense verbs, numbers, dates or statistics. It ranks statements from 0 to 1.0 based on how likely they are to be checkable; any statements that score a 0.7 or higher are typically considered check-worthy.
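
In code, that filtering step boils down to something like this sketch, where claimbuster_score stands in for a call to the ClaimBuster service:

```python
CHECKWORTHY_THRESHOLD = 0.7  # per the article, scores of 0.7 or higher are check-worthy

def build_daily_alert(transcript_sentences, claimbuster_score):
    """Filter a day's TV transcripts down to the sentences worth an email.

    claimbuster_score is a stand-in for the ClaimBuster service; it should
    return a number between 0 and 1.0 for a single sentence.
    """
    scored = [(claimbuster_score(s), s) for s in transcript_sentences]
    checkworthy = [(score, s) for score, s in scored if score >= CHECKWORTHY_THRESHOLD]
    checkworthy.sort(reverse=True)  # highest-scoring claims first in the email
    return "\n".join(f"{score:.2f}  {s}" for score, s in checkworthy)
```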

According to Royal, Li’s technology had yet to be used much outside of academia, so leaders of the Tech & Check Cooperative decided to utilize it for daily alerts.

“There’s already software that can find factual claims, and there are already fact-checkers who can check them,” Royal said. “We’re just solving the last-mile problem.”

The creation of Tech & Check Alerts is an important step for the Duke Tech & Check Cooperative, a two-year research project funded by the John S. and James L. Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation.

The broader purpose of this initiative is to bring together journalists, academics and computer scientists from across the country to innovate and automate the fact-checking industry. Over the course of two years, the Reporters’ Lab will develop tools that ease the job of fact-checkers and make fact-checking more accessible to consumers. Another tool the Lab is currently working on is FactStream, an app that provides instant fact-checking during live events.

Alongside other student researchers, Fagan and Royal are working to improve Tech & Check Alerts to include additional sources such as daily floor speeches and debates from the Congressional Record, and social media feeds from endangered incumbents running in this year’s closest House and Senate races. Fact-checkers will have input on how these additional alerts will be deployed.

Fagan is also building a web interface that would give fact-checking partners a way to dig deeper into these feeds and perhaps even customize certain alerts. Freshman Helena Merk, another student researcher in the Lab, is building a tool that would deliver the daily alerts directly to a channel on Slack, a communication platform used in many newsrooms.
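
One plausible way to deliver those alerts is Slack's standard incoming webhooks; this sketch is our illustration rather than Merk's actual tool, and the webhook URL is a placeholder:

```python
import requests

# Placeholder URL; each newsroom would configure its own Slack incoming
# webhook and share the channel URL with the alert service.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def post_alert_to_slack(checkworthy_claims):
    """Send the day's check-worthy claims to a newsroom Slack channel."""
    text = "*Tech & Check daily alert*\n" + "\n".join(f"• {c}" for c in checkworthy_claims)
    resp = requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)
    resp.raise_for_status()  # Slack returns HTTP 200 on success
```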

Once these improvements are completed, and Tech & Check Alerts are deployed more widely, they should help fact-checkers across the country.

“This project is a stepping stone in our process of using real-time claims and existing fact-checks to automate fact-checking in real time,” Stencel said.


Journalists, computer scientists gather for Tech & Check Conference at Duke

About 40 fact-checkers, journalists, computer scientists and academics gathered at Duke University March 29-30 for the Tech & Check Conference, a meeting hosted by the Reporters’ Lab.

As part of its Tech & Check Cooperative, the Reporters’ Lab is serving as a hub for automated fact-checking to connect journalists and technologists around the world. The conference gave them an opportunity to demonstrate current projects and discuss the big challenges of automation.

Some highlights of the conference:

* Eleven demos of past and current projects. Technologists and computer scientists showed off projects they’ve been developing to either automate fact-checking or improve the flow of accurate information on the internet.

Topics included new tools such as Chequeabot, an automated service that detects factual claims for the Argentinian fact-checker Chequeado; the Bad Idea Factory’s update of the Truth Goggles tool; and the perils of misinformation, including a real-life example from Penn State professor S. Shyam Sundar, whose research project about fake news was inaccurately described in widespread news coverage.

Tech & Check Conference

* Two Q&A panels. Alexios Mantzarlis, director of the International Fact-Checking Network, led a discussion with three fact-checkers about the potential tools and processes that could make fact-checking more efficient in the future.

Reporters’ Lab co-director Bill Adair moderated a conversation about challenges in automated fact-checking, including the pitfalls of voice-to-text technology and natural language processing.

Attendees also participated in breakout sessions to discuss ways to develop international standards and consistent terminology.

Photos by Colin Huth.
