
Tag: Tech & Check Cooperative

The lessons of Squash, our groundbreaking automated fact-checking platform

Squash began as a crazy dream.

Soon after I started PolitiFact in 2007, readers began suggesting a cool but far-fetched idea. They wanted to see our fact checks pop up on live TV.

That kind of automated fact-checking wasn’t possible with the technology available back then, but I liked the idea so much that I hacked together a PowerPoint of how it might look. It showed a guy watching a campaign ad when PolitiFact’s Truth-O-Meter suddenly popped up to indicate the ad was false.

Bill Adair’s original depiction of pop-up fact-checking.

It took 12 years, but our team in the Duke University Reporters’ Lab managed to make the dream come true. Today, Squash (our code name for the project, chosen because it is a nutritious vegetable and a good metaphor for stopping falsehoods) has been a remarkable success. It displays fact checks seconds after politicians utter a claim and it largely does what those readers wanted in 2007.

But Squash also makes lots of mistakes. It converts politicians’ speech to the wrong text (often with funny results) and it frequently stays idle because there simply aren’t enough claims that have been checked by the nation’s fact-checking organizations. It isn’t quite ready for prime time.

As we wrap up four years on the project, I wanted to share some of our lessons to help developers and journalists who want to continue our work. There is great potential in automated fact-checking and I’m hopeful that others will build on our success.

When I first came to Duke in 2013 and began exploring the idea, it went nowhere. That’s partly because the technology wasn’t ready and partly because I was focused on the old way that campaign ads were delivered — through conventional TV. That made it difficult to isolate ads the way we needed to.

But the technology changed. Political speeches and ads migrated to the web and my Duke team partnered with Google, Jigsaw and Schema.org to create ClaimReview, a tagging system for fact-check articles. Suddenly we had the key elements that made instant fact-checking possible: accessible video and a big database of fact checks.

I wasn’t smart enough to realize that, but my colleague Mark Stencel, the co-director of the Reporters’ Lab, was. He came into my office one day and said ClaimReview was a game changer. “You realize what you’ve done, right? You’ve created the magic ingredient for your dream of live fact-checking.” Um … yes! That had been my master plan all along!

Fact-checkers use the ClaimReview tagging system to indicate the person and claim being checked, which not only helps Google highlight the articles in search results but also builds a big database of checks that Squash can tap.

It would be difficult to overstate the technical challenge we were facing. No one had attempted this kind of work beyond doing a demo, so there was no template to follow. Fortunately we had a smart technical team and some generous support from the Knight Foundation, Craig Newmark and Facebook.

Christopher Guess, our wicked-smart lead technologist, had to invent new ways to do just about everything, combining open-source tools with software that he built himself. He designed a system to ingest live TV and process the audio for instant fact-checking. It worked so fast that we had to slow down the video.

To reduce the massive amount of computer processing, a team of students led by Duke computer science professor Jun Yang came up with a creative way to filter out sentences that did not contain factual claims. They used ClaimBuster, an algorithm developed at the University of Texas at Arlington, to act like a colander that kept only good factual claims and let the others drain away.

Squash works by converting audio to text and then matching the claim against a database of fact-checks.

Today, this is how Squash works: It “listens” to a speech or debate, sending audio clips to Google Cloud that are converted to text. That text is then run through ClaimBuster, which identifies sentences the algorithm believes are good claims to check. They are compared against the database of published fact checks to look for matches. When one is found, a summary of that fact check pops up on the screen.
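
For the technically curious, here is that loop in a condensed Python sketch. It is a toy for illustration, not our production code: the real system calls Google Cloud Speech-to-Text and ClaimBuster, and every helper name and threshold below is a stand-in.

```python
# Toy sketch of the Squash pipeline: transcribe -> filter -> match -> display.
from dataclasses import dataclass

@dataclass
class FactCheck:
    claim: str    # the statement as originally checked
    rating: str   # e.g. "False" or "Four Pinocchios"
    url: str      # the published fact-check article

def claim_score(sentence: str) -> float:
    """Stand-in for ClaimBuster: score how 'checkable' a sentence looks."""
    words = sentence.lower().split()
    return 1.0 if any(w.isdigit() for w in words) or "percent" in words else 0.2

def best_match(sentence: str, db: list[FactCheck]) -> FactCheck | None:
    """Stand-in matcher: pick the published check sharing the most words."""
    if not db:
        return None
    tokens = set(sentence.lower().split())
    best = max(db, key=lambda fc: len(tokens & set(fc.claim.lower().split())))
    overlap = len(tokens & set(best.claim.lower().split()))
    return best if overlap >= 3 else None   # demand more than a stray word

def squash_step(transcribed_text: str, db: list[FactCheck]):
    """One pass over transcribed audio: filter non-claims, look for matches."""
    for sentence in transcribed_text.split(". "):   # real splitting is NLP-based
        if claim_score(sentence) < 0.5:             # the ClaimBuster "colander"
            continue
        match = best_match(sentence, db)
        if match is not None:
            yield sentence, match                   # pop up the related check
```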

The first few times you see the related fact check appear on the screen, it’s amazing. I got chills. I felt I was getting a glimpse of the future. The dream of those PolitiFact readers from 2007 had come true.

But …

Look a little closer and you will quickly realize that Squash isn’t perfect. If you watch in our web mode, which shows Squash’s AI “brain” at work, you will see plenty of mistakes as it converts voice to text. Some are real doozies.

Last summer during the Democratic convention, former Iowa Gov. Tom Vilsack said this: “The powerful storm that swept through Iowa last week has taken a terrible toll on our farmers …”

But Squash (it was really Google Cloud) translated it as “Armpit sweat through the last week is taking a terrible toll on our farmers.”

Squash’s matching algorithm also makes too many mistakes finding the right fact check. Sometimes it is right on the money. It often correctly matched then-President Donald Trump’s statements on China, the economy and the border wall.

But other times it comes up with bizarre matches. Guess and our project manager Erica Ryan, who spends hours analyzing the results of our tests, believe this often happens because Squash mistakenly thinks an individual word or number is important. (Our all-time favorite was in our first test, when it matched a sentence by President Trump about men walking on the moon with a Washington Post fact-check about the bureaucracy for getting a road permit. The match occurred because both included the word “years.”)
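
The failure mode is easy to reproduce. Here is a bare-bones bag-of-words similarity, which is not Squash’s actual matcher, showing how a single shared token like “years” can push an unrelated fact check to the top when nothing else in the database overlaps at all.

```python
# Toy demonstration: with no stop words or subject filtering, one shared
# token ("years") makes the road-permit check the top match.
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb)

statement = "it has been fifty years since men walked on the moon"
candidates = [
    "getting a road permit can take ten years of bureaucracy",
    "china tariffs raise prices for american consumers",
]
for c in candidates:
    print(round(cosine(statement, c), 3), "-", c)
# The first candidate outscores the second purely because it shares the
# word "years" with the statement.
```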

Squash works by detecting politicians’ claims and matching them with related fact-checks. (Screengrab from Democratic debate)

To reduce the problem, Guess built a human editing tool called Gardener that enables us to weed out the bad matches. That helps a lot because the editor can choose the best fact check or reject them all.
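
In outline, Gardener is an approval gate between the matcher and the screen: nothing displays until a human signs off. This sketch is purely illustrative; the names and structure are invented for this post, not Gardener’s actual code.

```python
# Minimal sketch of a human-in-the-loop review queue for candidate matches.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    spoken_sentence: str   # what the politician just said
    fact_check_url: str    # the algorithm's proposed related fact check

@dataclass
class ReviewQueue:
    pending: list[Candidate] = field(default_factory=list)

    def propose(self, candidate: Candidate) -> None:
        """The matcher adds candidates here instead of straight to the screen."""
        self.pending.append(candidate)

    def approve(self, index: int) -> Candidate:
        """The editor picks the best match; only this one gets displayed."""
        chosen = self.pending[index]
        self.pending.clear()   # discard the also-rans for this claim
        return chosen

    def reject_all(self) -> None:
        """Or the editor decides none of the matches are good enough."""
        self.pending.clear()
```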

The most frustrating problem is that a lot of the time, Squash just sits there, idle, even when politicians are spewing sentences packed with factual claims. Squash is working properly, Guess assures us; it just isn’t finding any fact checks that are even close. This happened in our latest test, a news conference by President Joe Biden, when Squash could muster only two matches in more than an hour.

That problem is a simple one: There simply are not enough published fact checks to power Squash (or any other automated app).

We need more fact checks – As I noted in the previous section, this is a major shortcoming that will hinder anyone who wants to draw from the existing corpus of fact checks. Despite the steady growth of fact-checking in the United States and around the world, and despite the boom that occurred in the Trump years, there simply are not enough fact checks of enough politicians to provide enough matches for Squash and similar apps.

We had our greatest success during debates and party conventions, events when Squash could draw from a relatively large database of checks on the candidates from PolitiFact, FactCheck.org and The Washington Post. But we could not use Squash on state and local events because there simply were not enough fact-checks for possible matches.

Ryan and Guess believe we need dozens of fact checks on a single candidate, across a broad range of topics, to have enough to make Squash work.

More armpit sweat is needed to improve voice to text – We all know the limitations of Siri, which still translates a lot of things wrong despite years of tweaks and improvements by Apple. That’s a reminder that improving voice-to-text technology remains a difficult challenge. It’s especially hard in political events when audio can be inconsistent and when candidates sometimes shout at each other. (Identifying speakers in debates is yet another problem.)

As we currently envision Squash and this type of automated fact-checking, we are reliant on voice-to-text translations, but given the difficulty of automated “hearing,” we’ll have to accept a certain error level for the foreseeable future.

Matching algorithms can be improved – This is one area where we’re optimistic. Most of our tests relied on off-the-shelf search engines to do the matching, until Guess began experimenting with a new approach that relies on subject tags (which unfortunately are not included in ClaimReview) to help the algorithm make smarter choices and avoid irrelevant ones.

The idea is that if Squash knows the claim is about guns, it would find the best matches from published fact checks that have been tagged under the same subject. Guess found this approach promising but did not get a chance to try it at scale.
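
Here is the shape of that idea in a short sketch. The tags field is the addition we wish ClaimReview carried, the similarity function is whatever matcher is already in place, and all the names are illustrative.

```python
# Sketch of tag-filtered matching: rank only fact checks that share the
# claim's detected subject, so a stray shared word can no longer win.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TaggedFactCheck:
    claim: str
    url: str
    tags: set[str]   # subject tags, e.g. {"guns"}, assigned by the fact-checker

def match_with_tags(
    sentence: str,
    subject: str,
    db: list[TaggedFactCheck],
    similarity: Callable[[str, str], float],
) -> Optional[TaggedFactCheck]:
    pool = [fc for fc in db if subject in fc.tags]
    if not pool:
        return None   # better to stay silent than to show a bad match
    return max(pool, key=lambda fc: similarity(sentence, fc.claim))
```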

Until the matching improves, we’ve found humans are still needed to monitor and manage anything that gets displayed — as we did with our Gardener tool.

Ugh, UX – The simplest part of my vision, the Truth-O-Meter popping up on the screen, ended up being one of our most complex challenges. Yes, Guess was able to make the meter or the Washington Post Pinocchios pop up, but what were they referring to? This question of user experience was tricky in several ways.

First, we were not providing an instant fact check of the statement that was just said. We were popping up a summary of a related fact check that was previously published. Because politicians repeat the same talking points, the statements were generally similar and in some cases, even identical. But we couldn’t guarantee that, so we labeled the pop-up “Related fact-check.”

Second, the fact check appeared during a live, fast-moving event. So we realized it could be unclear to viewers which previous statement the pop-up referred to. This was especially tricky in a debate when candidates traded competing factual claims. The pop-up could be helpful with either of them. But the visual design that seemed so simple for my PowerPoint a decade earlier didn’t work in real life. Was that “False” Truth-O-Meter for the immigration statement Biden made? Or the one Trump made?

Another UX problem: To give people time to read all the text (the related fact checks sometimes had lengthy statements), Guess had them linger on the screen for 15 seconds. And our designer Justin Reese made them attractive and readable. But by the end of that time the candidates might have said two more factual claims, further confusing viewers who saw the “False” meter.

So UX wasn’t just a problem, it was a tangle of many problems involving limited space on the screen (What should we display and where? Will readers understand the concept that the previous fact check is only related to what was just said?), time (How long should we display it in relation to when the politician spoke?) and user interaction (Should our web version allow users to pause the speech or debate to read a related fact check?). It’s an enormously complicated challenge.

* * *

Looking back at my PowerPoint vision of how automated fact-checking would work, we came pretty close. We succeeded in using technology to detect political speech and make relevant fact checks automatically pop up on a video screen. That’s a remarkable achievement, a testament to groundbreaking work by Guess and an incredible team.

But there are plenty of barriers that make it difficult for us to realize the dream and will challenge anyone who tries to tackle this in the future. I hope others can build on our successes, learn from our mistakes, and develop better versions in years to come.


Update: 237 fact-checkers in nearly 80 countries … and counting

Fact-checking has expanded to 78 countries, where the Duke Reporters’ Lab counts at least 237 organizations that actively verify the statements of public figures, track political promises and combat misinformation.

So far, that’s a 26% increase in the 10 months since the Reporters’ Lab published its 2019 fact-checking census. That was on the eve of last summer’s annual Global Fact summit in South Africa, when our international database and map included 188 active fact-checkers in more than 60 countries.

We know that’s an undercount because we’re still counting. But here’s where we stand by continent:

Africa: 17
Asia: 53
Australia: 4
Europe: 68
North America: 69
South America: 26

About 20 fact-checkers listed in the database launched since last summer’s census. One of the newest launched just last week: FACTA, a spinoff of longtime Italian fact-checker Pagella Politica that will focus broadly on online hoaxes and disinformation.

The Lab’s next annual census will be published this summer, when the International Fact Checking Network hosts an online version of Global Fact. On Wednesday, the network postponed the in-person summit in Norway, scheduled for June, because of the coronavirus pandemic.

Several factors are driving the growth of fact-checking. 

One is the increasing spread of misinformation on large digital media platforms, some of which are turning to fact-checkers for help — directly and indirectly. That includes a Facebook partnership that enlists participating “third-party” fact-checkers to help respond to some categories of misleading information flagged by its users. Another example is ClaimReview, an open-source tagging system the Reporters’ Lab helped develop that makes it easier for Google and other platforms to spotlight relevant fact-checks and contradict falsehoods. The Reporters’ Lab is developing a related new tagging system, MediaReview, that will help flag manufactured and misleading use of images, including video and photos. (Disclosure: Facebook and Google are among the funders of the Lab, which develops and deploys technology to help fact-checkers. The Lab collaborated with Schema.org and Google to establish the ClaimReview framework and encourage its adoption.)

Another factor in the growth of fact-checking is the increasing role of collaboration. That includes fact-checking partnerships that involve competing news outlets and media groups that have banded together to share fact-checks or jointly cover political claims, especially during elections. It also includes growing collaboration within large media companies. Examples of those internal partnerships range from Agence France-Presse, the French news service that has established regional fact-checking sites with dedicated reporters in dozens of its bureaus around the world, to U.S.-based TEGNA, whose local TV stations produce and share “Verify” fact-checking segments across more than four dozen outlets.

Sharing content and processes is a positive thing — though it means it’s more difficult for our Lab to keep count. These multi-outlet fact-checking collaborations make it complicated for us to determine who exactly produces what, or to keep track of the individual outlets where readers, viewers and listeners can find this work. We’ll be clarifying our selection process to address that.

We’ll have more to say about the trends and trajectory of fact-checking in our annual census when the Global Fact summit convenes online. Working with a student researcher, Reporters’ Lab director Bill Adair began tallying fact-checking projects for the first Global Fact summit in 2014. That gathering of about 50 people in London ultimately led a year later to the formation of the International Fact Checking Network, which is based at the Poynter Institute, a media studies and training center in St. Petersburg, Florida.

The IFCN summit itself has become a measure of fact-checking’s growth. Before IFCN decided to turn this year’s in-person conference into an online event, more than 400 people had confirmed their participation. That would have been about eight times larger than the original London meeting in 2014.

IFCN director Baybars Örsek told fact-checkers Wednesday that the virtual summit will be scheduled in the coming weeks. Watch for our annual fact-checking census then.


A better ClaimReview to grow a global fact-check database

It’s now much easier for fact-checkers to use ClaimReview, a tagging tool that logs fact-checks published around the world into one database. The tool helps search engines — and readers — find non-partisan fact-checks published globally. It also organizes fact-check content into structured data that automated fact-checking will require.

Currently, only half of the roughly 160 fact-checking organizations that the Duke Reporters’ Lab tracks globally use ClaimReview. In response, Google and the Duke Reporters’ Lab have developed an easier method of labeling the articles to help both recruit more users and expand a vital fact-check data set.

Only some of the fact-checkers tracked by the Reporters’ Lab are visible on this map. A revised ClaimReview may help more log their fact-checks into a growing, global database.

ClaimReview was created in 2015 after a conversation between staff at Google and Glenn Kessler, the Washington Post fact-checker. Kessler wanted Google to highlight fact-checks in its search results. Bill Adair, director of the Duke Reporters’ Lab, was soon brought in to help.

Dan Brickley from Schema.org, Justin Kosslyn from Google and Adair developed a tagging system based on the schemas maintained by Schema.org, an organization that develops structured ways of organizing information. They created a universal system for fact-checkers to label their articles to include the claim checked, who said it and a ruling on its accuracy. “It’s the infrastructure that provides the atomic unit of fact-checking to search engines,” Adair said.

Initially, ClaimReview produced a piece of code that fact-checkers copied and pasted into their online content management systems. Google and other search engines look for the code when crawling content. Later, Chris Guess of Adair’s team developed a ClaimReview widget called Share the Facts, a content box summarizing fact-checks that PolitiFact, FactCheck.org and the Washington Post can publish online and share on social media.

The latest version of ClaimReview no longer requires users to copy and paste the code, which can behave inconsistently on different content management systems. Instead, fact-checkers fill out form fields in a Google tool, similar to those they previously used to produce the code.
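
For readers who have not seen the markup, this is roughly what a schema.org/ClaimReview record captures. The example below is hand-written to show the shape of the structure, with invented values; it is not output from the new Google tool.

```python
# Hand-written example of the core fields in a ClaimReview record.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/corporate-tax-rate",
    "author": {"@type": "Organization", "name": "Example Fact-Check"},
    "claimReviewed": "The United States has the highest corporate tax rate.",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "A. Politician"},  # who said it
    },
    "reviewRating": {
        "@type": "Rating",
        "alternateName": "False",  # the fact-checker's ruling
    },
}

# Embedded in the article page as application/ld+json, this is what search
# engines and fact-checking apps read.
print(json.dumps(claim_review, indent=2))
```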

While the concept of ClaimReview is simple, it opens the door to more innovation in fact-checking. It organizes data in ways that can be reused. By “structuring journalism, we can present content in more valuable ways to people,” said Adair.

By labeling fact-checks, the creators effectively created a searchable database of fact-checks, numbering about 24,000 today. The main products under development at the Reporters’ Lab, from FactStream to Squash, rely on fact-check databases. Automated fact-checking especially requires a robust database to quickly match untrue claims to previously published fact-checks.

Bill Adair presenting at Tech & Check 2019

The database ClaimReview builds offers even more possibilities. Adair hopes to tweak the fields fact-checkers fill in to provide better summaries of the fact-checks and provide more information to readers. In addition, Adair envisions ClaimReview being used to tag types of misinformation, as well as authors and publishers of false content. It could also tag websites that have a history of publishing false or misleading articles.

The tagging is already benefiting some fact-check publishers. “ClaimReview helps to highlight and surface our fact-checks on Google, more than the best SEO skills or organic search would be able to achieve,” said Laura Kapelari, a journalist with Africa Check. ClaimReview has increased traffic on Africa Check’s website and helped the smaller Africa Check compete with larger media houses, she said. It also helps fact-checkers know which facts have already been investigated, which reduces redundant checks.

Joel Luther, the ClaimReview project manager in the Reporters’ Lab, expects this new ClaimReview format will save fact-checkers time and decrease errors when labeling fact-checks. However, there is still room to grow. Kapelari wishes there was a way for the tool to automatically grab key fields such as names in order to save time.

The Reporters’ Lab has a plan to promote ClaimReview globally. Adair is already busy on that front. Early this month, a group of international fact-checkers and technologists met in Durham for Tech & Check 2019, an annual conference where people on this quest share progress on automated fact-checking projects intended to fight misinformation. Adair, an organizer of Tech & Check, emphasized new developments with ClaimReview, as well as its promise for automating fact-checking.

Not much would be possible without this tool, he stressed. “It’s the secret sauce.”


Talking Point Tracker: A project to spot hot topics as they flare up on TV news

When fact-checking technologists and journalists gather in Durham for the 2019 Tech & Check Conference this month, they will share new tools intended to optimize and automate fact-checking.

Dan Schultz of the Bad Idea Factory is preparing to debut a version of Talking Point Tracker.

For Dan Schultz, a founder of the Bad Idea Factory software development collective, this will be a chance to debut a “mannequin” version of the Talking Point Tracker. Created in collaboration with the Duke Tech & Check Cooperative, the tracker is intended to “capture the zeitgeist” of television news by identifying trending topics.

Duke journalism professor Bill Adair, who runs Tech & Check, launched the project by asking Schultz how fact-checkers could capture hot topics on TV news as quickly as possible. That is a simple but powerful idea. TV news is a place of vast discourse, where millions of viewers watch traditional, nonpartisan newscasts and partisan broadcasters such as Sean Hannity and Rachel Maddow. Listening in would give insight into what Schultz calls a “driver or predictor of collective consciousness.”

But executing even simple ideas can be difficult. In this case, TV news programs broadcast dense flows of media: audio, video, text and images that are not simple to track. Luckily, network and cable news outlets produce closed-caption subtitles for news shows. Talking Point Tracker scans those subtitles to identify keywords used most frequently within blocks of time. It also puts the keywords in context by showing sentences and longer passages where the keywords were found. To deepen the context, the tracker shows related keywords that often appear with the trending words.

The eventual goal is to group keywords into clusters that better capture emerging conversations. “Our hope is that it will be a useful tool for journalists who want to write in the context of what’s being discussed,” said Schultz, who is collaborating with Justin Reese, a front-end developer with the Bad Idea Factory, on the project.

More technically, Talking Point Tracker runs closed-caption transcripts through a natural language processing pipeline that cleans the text as well as it can. An application programming interface, or API, uses a separate language-processing algorithm to find the most common keywords. These are “named entities” — usually proper nouns that can be sorted into categories such as people, organizations and places.
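
In rough terms, the entity-counting step looks something like the sketch below, which uses spaCy, one of the open-source NLP tools the Tracker relies on. The transcript is invented, and the real pipeline cleans the closed captions before this stage.

```python
# Sketch of counting named entities in a transcript with spaCy.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model, installed separately

transcript = (
    "The Virginia General Assembly debated gun legislation today. "
    "Lawmakers in Virginia said the White House had not commented."
)

doc = nlp(transcript)
# spaCy tags "named entities" with categories such as PERSON, ORG and GPE.
counts = Counter((ent.text, ent.label_) for ent in doc.ents)
for (text, label), n in counts.most_common():
    print(n, label, text)
# Nothing here merges differently worded references to the same thing, which
# is the alias problem described below.
```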

Talking Point Tracker’s prototype, to be unveiled at Tech & Check, is dense with information. But the design Reese created for viewing on a computer screen makes it readable. There’s enough white space to be easy on the eyes and a color scheme of red, blue, black and yellow that organizes text.

Talking Point Tracker packs lots of data on the current version of its screen display.

The most frequent keywords over a specified time period are listed in a column on the left. Next to that, a line graph highlights their frequency. Sentences containing the keywords are listed on the right. If you click there, the tool points you to longer passages of transcripts. On the bottom are related keywords that often appear in the same sentences as a given word.

Moving from a mannequin stage to a living stage for this project will be challenging, Schultz said. As much as natural language processing has evolved over the past decade, algorithms still have trouble understanding aspects of human language. One free, open-source system the Tracker relies on is an API called spaCy. But programs like spaCy don’t always recognize that two different phrasings, say the “Virginia legislature” and the “Virginia General Assembly,” refer to the same thing.

Another challenge is coping with the quality of news show transcripts, Schultz said. The transcripts can contain many typos, in addition to sometimes being either all caps or all lowercase, which the API can have trouble reading.


And the API doesn’t always know where sentences break. Too often, the system will return sentences that contain just “Mr.” because it concludes that a period signifies the end of the sentence. To get around this, Schultz is using another NLP technology to clean the transcripts he obtains.

To prepare for the Tech & Check Conference, Schultz is building better searching tools and further cleaning up the Tracker’s design. “It’s always good to have your feet close to the fire,” Schultz said.

The biggest question he hopes to get answered before leaving the conference is whether Talking Point Tracker could be useful for journalists, he said.

“There’s a lot of things we can gain from feedback. If we have the capacity and interest from whoever, we will continue to iterate and build on top of that,” Schultz said.


Tech & Check in the news

It’s been more than a year since the Reporters’ Lab received $1.2 million in grant funding to launch the Duke Tech & Check Cooperative.

Our goal is to link computer scientists and journalists to better automate fact-checking and expand how many people see this vital accountability reporting.

Here’s a sampling of some of the coverage about the range of projects we’re tackling:

Tech & Check:
Associated Press, Technology Near For Real-Time TV Political Fact-Checks
Digital Trends, Real-time fact-checking is coming to live TV. But will networks use it?
Nancy Watzman, Tech & Check: Automating Fact-Checking
Poynter, Automated fact-checking has come a long way. But it still faces significant challenges.
MediaShift, The Fact-Checking Army Waging War on Fake News

FactStream:
NiemanLab, The red couch experiments, early lessons in pop-up fact-checking.
WRAL, Fake news? App will help State of the Union viewers sort out fact, fiction
Media Shift, An Experiment in Live Fact-Checking the State of the Union Speech by Trump
American Press Institute, President Trump’s first State of the Union address is Tuesday night. Here’s how to prepare yourself, factually speaking.
WRAL, App will help viewers sort fact, fiction in State of the Union
NiemanLab, Automated, live fact-checks during the State of the Union? The Tech & Check Cooperative’s first beta test hopes to pull it off
NiemanLab, FactStream debuted live fact-checking with last night’s SOTU. How’d it go?

Tech & Check Alerts:
Poynter, This Washington Post fact check was chosen by a bot

Truth Goggles:
NiemanLab, Truth Goggles are back! And ready for the next era of fact-checking

And …
NiemanLab, So what is that, er, Trusted News Integrity Trust Project all about? A guide to the (many, similarly named) new efforts fighting for journalism
MediaShift, Fighting Fake News: Key Innovations in 2017 from Platforms, Universities and More
NiemanLab, With $4.5 million, Knight is launching a new commission — and funding more new projects — to address declining public trust in media
Poynter, Knight’s new initiative to counter misinformation includes more than $1.3 million for fact-checking projects
Axios, How pro-trust initiatives are taking over the internet
Recode, Why the Craig behind Craigslist gave big bucks to a journalism program
Digital News Report (with Reuters and Oxford), Understanding the Promise and Limits of Automated Fact-Checking
Democratic Minority Staff Report, U.S. House Committee on Science, Space & Technology, Old Tactics, New Tools: A Review of Russia’s Soft Cyber Influence Operations


Reporters’ Lab students are fact-checking North Carolina politicians

Duke Reporters’ Lab students expanded vital political journalism during a historic midterm campaign season this fall with the North Carolina Fact-Checking Project.

Five student journalists reviewed thousands of statements that hundreds of North Carolina candidates vying for state and federal offices made online and during public appearances. They collected newsy and checkable claims from what amounted to a firehose of political claims presented as fact.

Duke computer science undergraduates with the Duke Tech & Check Cooperative applied custom-made bots and the ClaimBuster algorithm to scrape and sort checkable political claims from hundreds of political Twitter feeds.

Editors and reporters then selected claims the students had logged for most of the project’s 30-plus fact-checks and six summary articles that the News & Observer and PolitiFact North Carolina published between August and November.

Duke senior Bill McCarthy

Duke senior Bill McCarthy was part of the four-reporter team on the project, which the North Carolina Local News Lab Fund supported to expand local fact-checking during the 2018 midterms and beyond in a large, politically divided and politically active state.

“Publishing content in any which way is exciting when you know it has some value to voters, to democracy,” said McCarthy, who interned at PolitiFact in Washington, D.C. last summer. “It was especially exciting to get so many fact-checks published in so little time.”

Reporters found politicians and political groups often did not stick with the facts during an election season that fielded an unusually large number of candidates statewide and a surge in voter turnout.

The N.C. Fact-Checking Project produces nonpartisan journalism

“NC GOP falsely ties dozens of Democrats to single-payer health care plan,” read one project fact-check headline. “Democrat falsely links newly-appointed Republican to health care bill,” noted another. The fact-check “Ad misleads about NC governors opposing constitutional amendments” set the record straight about some Democratic-leaning claims about six proposed amendments to the state constitution.

And on and on.

Digging for the Truth

Work in the lab was painstaking. Five sophomores filled weekday shifts to scour hundreds of campaign websites, social media feeds, Facebook and Google political ads, televised debates, campaign mailers and whatever else they could put their eyes on. Often they recorded one politician’s attacks on an opponent that might, or might not, be true.

Students scanned political chatter from all over the state, tracking competitive state and congressional races most closely. The resulting journalism was news that people could use as they were assessing candidates for the General Assembly and U.S. Congress as well as six proposed amendments to the state constitution.

The Reporters’ Lab launched a mini news service to share each fact-checking article with hundreds of newsrooms across the state for free.

One of more than 30 N.C. Fact-Checking Project articles

The Charlotte Observer, a McClatchy newspaper like the N&O, published several checks. So did smaller publications such as Asheville’s Citizen-Times and the Greensboro News and Record. Newsweek cited a fact-check report by the N&O’s Rashaan Ayesh and Andy Specht about a fake photo of Justice Kavanaugh’s accuser, Christine Blasey Ford, shared by the chairman of the Cabarrus County GOP, which WRAL referenced in a roundup.

Project fact-checks influenced political discourse directly too. Candidates referred to project fact-checks in campaign messaging on social media and even in campaign ads. Democrat Dan McCready, who lost a close race against Republican Mark Harris in District 9, used project fact-checks in two campaign ads promoted on Facebook and in multiple posts on his Facebook campaign page, for instance.

While N&O reporter Andy Specht was reporting on a deceptive ad from the Stop Deceptive Amendments political committee, the group announced plans to change it.

The fact-checking project will restart in January, when North Carolina’s reconfigured General Assembly opens its first 2019 session.


Lessons learned from fact-checking 2018 midterm campaigns

Five Duke undergraduates monitored thousands of political claims this semester during a heated midterm campaign season for the N.C. Fact-Checking Project.

That work helped expand nonpartisan political coverage in a politically divided state with lots of contested races for state and federal seats this fall. The effort resumes in January when the project turns its attention to a newly configured North Carolina General Assembly.

Three student journalists who tackled this work with fellow sophomores Alex Johnson and Sydney McKinney reflect on what they’ve learned so far.

Lizzie Bond

Lizzie Bond: After spending the summer working in two congressional offices on Capitol Hill, I began my work in the Reporters’ Lab and on the N.C. Fact-Checking Project with first-hand knowledge of how carefully elected officials and their staff craft statements in press releases and on social media. This practice derives from a fear of distorting the meaning or connotation of their words. And in this social media age where so many outlets are available for sharing information and for people to consume it, this fear runs deep.

Yet it took discovering one candidate for my perspective to shift on the value of our work with the N.C. Fact-Checking Project. That candidate, Peter Boykin, proved to be a much more complicated figure than any other politician whose social media we monitored. The Republican running to represent Greensboro’s District 58 in the General Assembly, Boykin is the founder of “Gays for Trump,” a former online pornography actor, a pro-Trump radio show host and an already controversial, far-right online figure with tens of thousands of followers. Poring through nearly a dozen of Boykin’s social media accounts, I came across everything from innocuous self-recorded music video covers to contentious content, like hostile characterizations of liberals and advocacy of conspiracy theories, like one regarding the Las Vegas mass shooting that he pushed with little to no corroborating evidence.

When contrasting Boykin’s posts on both his personal and campaign social media accounts with the more cautious and mild statements from other North Carolina candidates, I realized that catching untruthful claims has a more ambitious goal than simply detecting and reporting falsehoods. By reminding politicians that they should be accountable to the facts in the first place, fact-checking strives to improve their commitment to truth-telling. The push away from truth and decency in our politics and toward sharp antagonism and even alternate realities becomes normalized when Republican leaders support candidates like Boykin as simply another GOP candidate. The N.C. Fact-Checking Project is helping to revive truth and decency in North Carolina’s politics and to challenge the conspiracy theories and pants-on-fire campaign claims that threaten the self-regulating, healthy political society we seek.

Ryan Williams

Ryan Williams: I came into the Reporters’ Lab with relatively little journalism experience. I spent the past summer working on social media outreach and strategy at a non-profit, where I drafted tweets and wrote the occasional blog post. But I’d never tuned into writing with the immense brevity of political messages during an election season. The N.C. Fact-Checking Project showed me the importance of people who not only find the facts but also report them in a nonpartisan, objective manner that is accessible to an average person.

Following the 2016 election, some people blamed journalists and pollsters for creating false expectations about who would win the presidency. I was one of those critics. In the two and a half months I spent fact-checking North Carolina’s midterm races, I learned how hard fact-checkers and reporters work. My fellow fact-checkers and I compiled a litany of checkable claims made by politicians this midterm cycle. Those claims, along with claims found by the automated claim-finding algorithm ClaimBuster, were raw material for many fact-checks of some of North Carolina’s hottest races. Those checks were made available for voters ahead of polling.

Now that election day has come and gone, I am more than grateful for this experience in fact-finding and truth-reporting. Not only was I able to hone research skills, I gained a deeper understanding of the intricacies of political journalism. I can’t wait to see what claims come out of the next two years leading up to what could be the presidential race of my lifetime.

Jake Sheridan

Jake Sheridan: I’m a Carolina boy who has grown up on the state’s politics. I’ve worked on campaigns, went to the 2012 Democratic National Convention in my hometown of Charlotte and am the son of a long-time news reporter. I thought I knew North Carolina politics before working in the Reporters’ Lab. I was wrong.

While trying to wrap my head around the 300-plus N.C. races, I came to better understand the politics of this state. What matters in the foothills of the Piedmont, I found out, is different than what matters on the Outer Banks and in Asheville. I discovered that campaigns publicly release b-roll so that PACs can create ads for them and saw just how brutal attack ads can be. I got familiar with flooding and hog farms, strange politicians and bold campaign claims.

There was no shortage of checkable claims. That was good for me. But it’s bad for us. I trust politicians less now. The ease with which some N.C. politicians make up facts troubles me. Throughout this campaign season in North Carolina, many politicians lied, misled and told half truths. If we want democracy to work — if we want people to vote based on what is real so that they can pursue what is best for themselves and our country — we must give them truth. Fact-checking is essential to creating that truth. It has the potential to place an expectation of explanation upon politicians making claims. That’s critical for America if we want to live in a country in which our government represents our true best interests and not our best interests in an alternate reality.


FactStream app now shows latest fact-checks from Post, FactCheck.org and PolitiFact

FactStream, our iPhone/iPad app, has a new feature that displays the latest fact-checks from FactCheck.org, PolitiFact and The Washington Post.

FactStream was conceived as an app for live fact-checking during debates and speeches. (We had a successful beta test during the State of the Union address in January.) But our new “daily stream” makes the app valuable every day. You can check it often to get summaries of the newest fact-checks and then click through to the full articles.

The new version of FactStream lets users get notifications of the latest fact-checks.

By viewing the work of the nation’s three largest fact-checkers in the same stream, you can spot trends, such as which statements and subjects are getting checked, or which politicians and organizations are getting their facts right or wrong.

The new version of the app includes custom notifications so users can get alerts for every new fact-check or every “worst” rating, such as Four Pinocchios from Washington Post Fact Checker Glenn Kessler, a False from FactCheck.org or a False or Pants on Fire from PolitiFact.

The daily stream shows the latest fact-checks.

The new daily stream was suggested by Eugene Kiely, the director of FactCheck.org. The app was built by our lead technologist Christopher Guess and the Durham, N.C., design firm Registered Creative. It gets the fact-check summaries from ClaimReview, our partnership with Google that has created a global tagging system for fact-checking. We plan to expand the daily stream to include other fact-checkers in the future.

The app also allows users to search the latest fact-checks by the name of the person or group making the statement, by subject or keyword.

Users can get notifications on their phones and on their Apple Watch.

FactStream is part of the Duke Tech & Check Cooperative, a $1.2 million project to automate fact-checking supported by Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation.

FactStream is available as a free download from the App Store.


At Global Fact V: A celebration of community

My opening remarks at Global Fact V, the fifth annual meeting of the world’s fact-checkers, organized by the International Fact-Checking Network, held June 20-22 in Rome.

A couple of weeks ago, a photo from our first Global Fact showed up in my Facebook feed. Many of you will remember it: we were all crammed into a classroom at the London School of Economics. When we went outside for a group photo, there were about 50 of us.

To show how our conference has grown, I posted that photo on Twitter along with one from our 2016 conference that had almost twice as many people. I also posted a third photo that showed thousands of people gathered in front of the Vatican. I said that was our projected crowd for this conference.

I rate that photo Mostly True.

What all of our conferences have in common is that they are really about community. It all began in that tiny classroom at the London School of Economics when we realized that whether we were from Italy or the U.K. or Egypt, we were all in this together. We discovered that even though we hadn’t talked much before or in many cases even met, we were facing the same challenges — fundraising and finding an audience and overcoming partisanship.

It was also a really powerful experience because we got a sense of how some fact-checkers around the world were struggling under difficult circumstances — under governments that provide little transparency, or, much worse, governments that oppress journalists and are hostile toward fact-checkers.

Throughout that first London conference there was an incredible sense of community. We’d never met before, but in just a couple of days we formed strong bonds. We vowed to keep in touch and keep talking and help each other.

It was an incredibly powerful experience for me. I was at a point in my career where I was trying to sort out what I would do in my new position in academia. I came back inspired and decided to start an association of fact-checkers – and hold these meetings every year.

The next year we started the IFCN and Poynter generously agreed to be its home. And then we hired Alexios as the leader.

Since then, there have been two common themes. One you hear so often that it’s become my mantra: Fact-checking keeps growing. Our latest census of fact-checking in the Reporters’ Lab shows 149 active fact-checking projects and I’m glad to see that number keep going up and up.

The other theme, as I noted earlier, is community. I thought I’d focus this morning on a few examples.

Let’s start with Mexico, where more than 60 publishers, universities and civil society organizations have started Verificado 2018, a remarkable collaboration. It was originally focused largely on false news, but they’ve put more emphasis on fact-checking because of public demand. Daniel Funke wrote a great piece last week about how they checked a presidential debate.

In Norway, an extraordinary team of rivals has come together to create Faktisk, which is Norwegian for “actually” and “factually.” It launched nearly a year ago with four of the country’s biggest news organizations — VG, Dagbladet, NRK and TV 2 — and it’s grown since then. My colleague Mark Stencel likened it to the New York Times, The Washington Post and PBS launching a fact-checking project together.


At Duke, both of our big projects are possible because of the fact-checkers’ commitment to help each other. The first, Share the Facts and the creation of the ClaimReview schema, grew out of an idea from Glenn Kessler, the Washington Post Fact Checker, who suggested that Google put “fact-check” tags on search results.

That idea became our Duke-Google-Schema.org collaboration that created what many of you now use so search engines can find your work. And one unintended consequence: it makes automated fact-checking more possible. It all started because of one fact-checker’s sense of community.

Also, FactStream, the new app of our Tech & Check Cooperative, has been a remarkable collaboration between the big US fact-checkers — the Post, FactCheck.org and PolitiFact. All three took part in the beta test of the first version, our live coverage of the State of the Union address back in January. Getting them together on the same app was pretty remarkable. But our new version of the app — which we’re releasing this week — is even cooler. It’s like collaboration squared, or collaboration to the second power!

It took Glenn’s idea, which created the Share the Facts widget, and combined it with an idea from Eugene Kiely, the head of FactCheck.org, who said we should create a new feature on FactStream that shows the latest U.S. widgets every day.

So that’s what we did. And you know what: it’s a great new feature that reveals new things about our political discourse. Every day, it shows the latest fact-checks in a constant stream and users can click through, driving new traffic to the fact-checking sites. I’ll talk more about it during the automated demo session on Friday. But it wouldn’t be possible if it weren’t for the commitment to collaboration and community by Glenn and Eugene.

We’ve got a busy few days ahead, so let’s get on with it. There sure are a lot of you!

As we know from the photographs: fact-checking keeps growing.


New Tech & Check projects will provide pop-up fact-checking

For years, fact-checkers have been working to develop automated “pop-up” fact-checking. The technology would enable users to watch a political speech or a campaign debate while fact-checks pop onto their screens in real time.

That has always seemed like a distant dream. A 2015 report on “The Quest to Automate Fact-Checking” called that innovation “the Holy Grail” but said it “may remain far beyond our reach for many, many years to come.”

Since then, computer scientists and journalists have made tremendous progress and are inching closer to the Holy Grail. Here in the Reporters’ Lab, we’ve received $1.2 million in grants to make automated fact-checking a reality.

The Duke Tech & Check Cooperative, funded by Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation, is an effort to use automation to help fact-checkers research factual claims and broaden the audience for their work. The project will include about a half-dozen pop-up apps that will provide fact-checking on smartphones, tablets and televisions.

One key to the pop-up apps is a uniform format for fact-checks called the ClaimReview schema. Developed through a partnership of Schema.org, the Reporters’ Lab, Jigsaw and Google, it provides a standard tagging system for fact-checking articles that makes it easier for search engines and apps to identify the details of a fact-check. ClaimReview, which can be created using the Share the Facts widget developed by the Reporters’ Lab, will enable future apps to quickly find relevant fact-checking articles.

“Now, I don’t need to scrape 10 different sources and try to wrangle permission because there’s this database that will be growing increasingly,” says Dan Schultz, senior creative technologist at the Internet Archive.

This works because politicians repeat themselves. For example, many politicians and analysts have claimed that the United States has the highest corporate tax rate.

The Reporters’ Lab is developing several pop-up apps that will deliver fact-checking in real time. The apps will include:

  • FactStream, which will display relevant fact-checks on mobile devices during a live event. The first version, to be tested this month during the State of the Union address Jan. 30, will be a “manual” version that will rely on fact-checkers. When they hear a claim that they’ve checked before, the fact-checkers will compose a message containing the URL of the fact-check or a brief note about the claim. That message will appear in the FactStream app on a phone or tablet.
  • FactStream TV, which will use platforms such as Chromecast or Apple TV for similar pop-up apps on television. The initial versions will also be manual, enabling fact-checkers to trigger the notifications.

Another project, Truth Goggles, will be a plug-in for a web browser that will automatically scan a page for content that users should think about more carefully. Schultz, who developed a prototype of Truth Goggles as a grad student at the MIT Media Lab, will use the app to experiment with different ways to present accurate information and help determine which methods are most valuable for readers.

The second phase of the pop-up apps will take the human fact-checker out of the equation. For live events, the apps will rely on voice-to-text software and then match with the database of articles marked with ClaimReview.

The future apps will also need natural language processing (NLP) abilities. This is perhaps the biggest challenge because NLP is necessary to reflect the complexities of the English language.

“Human brains are very good at [NLP], and we’re pretty much the only ones,” says Chris Guess, the Reporters’ Lab’s chief technologist for Share the Facts and the Tech & Check Co-op. Programming a computer to understand negation or doublespeak, for instance, is extremely difficult.

Another challenge comes from the fact that there are few published fact-checks relative to all of the claims made in conversation or articles. “The likelihood of getting a match to the 10,000 or so stored fact-checks will be low,” says Bill Adair, director of the Reporters’ Lab.

Ideally, computers will eventually research and write the fact checks, too. “The ultimate goal would be that it could pull various pieces of information out, use that context awareness to do its own research into various data pools across the world, and create unique and new fact-checks,” Guess says.

The Reporters’ Lab is also developing tools that can help human fact-checkers. The first such tool uses ClaimBuster, an algorithm that can find claims fact-checkers might want to examine, to scan transcripts of newscasts and public events and identify checkable claims.
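
For developers who want to experiment with that approach, ClaimBuster exposes a public scoring API. The sketch below shows the general shape of such a call; treat the endpoint path, the response format and the 0.5 cutoff as assumptions to verify against ClaimBuster’s current documentation.

```python
# Sketch of filtering transcript sentences through ClaimBuster's scoring API.
from urllib.parse import quote
import requests

API = "https://idir.uta.edu/claimbuster/api/v2/score/text/"  # assumed endpoint
API_KEY = "your-api-key"  # issued on registration with the ClaimBuster project

def looks_checkable(sentence: str, cutoff: float = 0.5) -> bool:
    resp = requests.get(API + quote(sentence), headers={"x-api-key": API_KEY})
    resp.raise_for_status()
    # Higher scores mean the sentence reads more like a checkable factual
    # claim; lower scores suggest opinion, questions or filler.
    return any(r["score"] >= cutoff for r in resp.json().get("results", []))

transcript = [
    "Our economy added half a million jobs last month.",
    "What a wonderful crowd this is tonight!",
]
print([s for s in transcript if looks_checkable(s)])
```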

“These are really hard challenges,” Schultz says. “But there are ways to come up with creative ways around them.”
