Integrating On-demand Fact-checking with Public Dialogue
CSCW 2014 – Mobilizing for Action
Travis Kriplean – University of Washington
Caitlin Bonnar – University of Washington
Alan Borning – University of Washington
Bo Kinney – Seattle Public Library
Brian Gill – Seattle Pacific University <- statistician.
A description of the development and use of a Value Sensitive Design approach to a referendum fact-checking website. Users developed pro and con lists and could request a fact-check, which was performed by librarians at the Seattle Public Library.
Things to Remember
- “Our goal is to enhance public dialogue by providing authoritative information from a trusted third party.”
- “Crowdsourcing systems can benefit from reintroducing professionals and institutions that have been up to now omitted. As socio-technical architects, we should be combining old and new to create entirely new possibilities.”
- The journalistic fact-checking frame did not map smoothly to librarian reference practice.
- Clever use of simulation statistics to compensate for comment drop-off as the election approached.
- Librarians are very trustworthy.
- This is a second or third generation site, and is still incorporating lessons learned.
- LVG – Living Voters Guide
We explore the design space for introducing authoritative information into public dialogue, with the goal of supporting constructive rather than confrontational discourse.
We also present a specific design and realization of an archetypal sociotechnical system of this kind, namely an on-demand fact-checking service integrated into a crowdsourced voters guide powered by deliberating citizens.
Public deliberation is challenging, requiring communicators to consider tradeoffs, listen to others, seek common ground, and be open to change given evidence.
A few interfaces such as Opinion Space and ConsiderIt have demonstrated that it is possible to create lightweight communication interfaces that encourage constructive interactions when discussing difficult issues.
We describe a new approach for helping discussants decide which factual claims to trust. Specifically, we designed and deployed a fact-checking service staffed by professional librarians and integrated into a crowdsourced voters guide.
It also serves as an archetype of a more general class of systems that integrate the contributions of professionals and established institutions into what Benkler calls “commons-based peer production.”
One key design element is that the fact-checks are performed at the behest of discussion participants, rather than being imposed from outside.
To help explore design alternatives and evaluate this work, we turn to Value Sensitive Design (VSD), a methodology that accounts for human values in a principled and systematic way throughout the design process [6, 18]. As with prior work involving VSD in the civic realm, we distinguish between stakeholder values and explicitly supported values. Stakeholder values are important to some but not necessarily all of the stakeholders, and may even conflict with each other.
Explicitly supported values, on the other hand, guide designers’ choices in the creation of the system: here, the values are democratic deliberation, respect, listening, fairness, and civility.
Most commonly used communication interfaces, especially online comment boards, implicitly support liberal individualist and communitarian values through the particular interface mechanisms they provide.
Because communicative behaviors are context sensitive and can be altered by interface design [35, 38, 44], we believe we can design lightweight interfaces that gently nudge people toward finding common ground and avoiding flame wars.
In ConsiderIt, participants are first invited to create a pro/con list that captures their most important considerations about the issue. This encourages users to think through tradeoffs — a list with only pros or cons is a structural nudge to consider both sides. Second, ConsiderIt encourages listening to other users by enabling users to adopt into their own pro/con lists the pros and cons contributed by others.
Finally, a per-point discussion facility was added in the 2011 deployment to help participants drill down into a particular pro/con point and have a focused conversation about it. We call special attention to this functionality because one component of our evaluation is an examination of how the focused discussion progressed before and after fact-checks.
We have run the LVG for the past three elections in Washington State, with 30,000 unique visitors from over 200 Washington cities using LVG for nearly ten minutes on average. Our analysis of behavioral data has revealed a high degree of deliberative activity; for example, 41.4% of all submitted pro/con lists included both pros and cons [17, 28, 30]. Moreover, the tone of the community discussion has been civil: although CityClub actively monitored the site for hate speech and personal attacks, fewer than ten of the 424 total comments have been removed over the three deployments.
Participants have difficulty understanding what information to trust. Content analysis of the pro/con points in 2010 found that around 60% contained a verifiable statement of fact, such as a claim about what the ballot measure would implement, or included a reference to numerical data from an external source. Anyone can state a claim, but how do others know whether that claim is accurate?
Suitable primary sources are often unavailable, and most deliberating bodies do not have the ability to commission a report from a dedicated organization like the Congressional Budget Office.
Fact-checkers produce an evaluation of verifiable claims made in public statements through investigation of primary and secondary sources. A fact-check usually includes a restatement of the claim being investigated, some context for the statement, a report detailing the results of the investigation, and a summative evaluation of the veracity of the claim (e.g., Politifact’s “Truth-O-Meter” ranging from “True” to “Pants-on-Fire”).
Establishing the legitimacy of fact-checks can be challenging because the format juxtaposes the original claim with a verdict on it, which can read as a direct challenge to the claim’s author.
Those whose prior beliefs are threatened by the result of the fact-check are psychologically prone to dismiss counter-attitudinal information and delegitimate the source of the challenging information [31, 33, 34, 46], sometimes even strengthening belief in the misinformation.
Another approach is to synthesize information available in reliable secondary sources. This differs from fact-checking in that (1) the investigation does not create new interpretations of original sources and (2) the report does not explicitly rate the veracity of the claims.
One of the main roles of librarians is to help patrons find the information they seek amidst a sometimes overwhelming amount of source material. Librarians assess the content of resources for accuracy and relevance, determine the authority of these resources, and identify any bias or point of view in the resource.
Establishing trust in the results of these crowdsourced efforts is often a challenge [7, 9], but can be accomplished to some extent by transparency of process.
Authoritative information is usually published and promoted by dedicated entities. For example, fact-checking is often provided as a stand-alone service, as with Snopes, Politifact, and factcheck.org.
Professionals facilitating a discussion can shepherd authoritative information directly into a discussion.
The ALA’s Code of Ethics emphasizes the role of libraries in a democracy: “In a political system grounded in an informed citizenry, we are members of a profession explicitly committed to intellectual freedom and the freedom of access to information. We have a special obligation to ensure the free flow of information and ideas to present and future generations.”
The specific guidelines that the librarians settled on were: (1) We will not conduct in-depth legal or financial analysis, but we will point users to research that has already been conducted; (2) We will not evaluate the merits of value or opinion statements, but we will evaluate their factual components; (3) We will not evaluate the likelihood of hypothetical statements, but we will evaluate their factual components.
For this work, we call out the following direct stakeholders: (1) users of the Living Voters Guide, (2) authors of points that are fact-checked and (3) the reference librarians.
The primary design tension we faced was enabling LVG users to easily get a sense of which factual claims in the pro/con points were accurate (in support of deliberative values), while not making the fact-checking a confrontational, negative experience that would discourage contributors from participating again (or at all).
The service was on-demand: any registered user could request a fact-check, submitted with a brief description of what he or she wanted to have checked.
By relying on LVG participants themselves to initiate a fact-check, we hypothesized that the degree of confrontation with an authority would be diffused. Further, we hoped that requests would come from supporters of the point to be checked, not just opponents — for example, a supporter (or even the author) might request a check as a way of bolstering the point’s credibility.
Each fact-check comprised (1) a restatement of each factual claim in the pro or con, (2) a brief research report for each claim, and (3) an evaluation of the accuracy of each claim.
We settled on a simple scheme of “accurate,” “unverifiable,” and “questionable.” Each of the evaluation categories was accompanied by a symbol representing the result: a checkmark for accurate, an ellipsis for unverifiable, and a question mark for questionable.
For each fact-checked pro or con, an icon representing the most negative evaluation was shown at the bottom. When a user hovered over the icon, a fact-check summary was shown (Figure 1).
The full fact-check was presented immediately below the text of the point when users drilled into the point details and discussion page.
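The “most negative evaluation” display rule can be sketched in a few lines. This is a minimal illustration, not the paper’s implementation: the severity ordering and icon characters are assumptions based on the description above.

```python
# Hedged sketch: pick the icon for the most negative evaluation among a
# point's checked claims. Severity ordering is an assumption.
SEVERITY = {"accurate": 0, "unverifiable": 1, "questionable": 2}
ICON = {"accurate": "✓", "unverifiable": "…", "questionable": "?"}

def summary_icon(claim_evaluations):
    """Return the icon for the most negative evaluation in the list."""
    worst = max(claim_evaluations, key=SEVERITY.__getitem__)
    return ICON[worst]
```

So a point with one accurate and one questionable claim would surface the question mark, signaling readers to drill into the full fact-check.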
Every fact-check request generated a notification e-mail to the librarians. Librarians then logged into a custom fact-checking dashboard. The dashboard listed each point where a user had requested a fact-check, showing the fact-check status (e.g., completed), which librarian was taking responsibility for it, and the result of the fact-check (if applicable). Librarians could claim responsibility for new requests, and then conduct the fact-check.
The fact-checking page enabled librarians to restate each factual claim made in a pro or con as a researchable and verifiable question, and then answer the research question.
The fact-checking team decided that the librarians should always identify as many researchable, verifiable questions as a pro or con point contained, even if the request was only for a very specific claim to be checked.
Every fact-check was a collaboration between at least two librarians. One librarian would write the initial fact-check, which was then reviewed by a second librarian before being published.
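The request lifecycle described above (submit, claim, draft, second-librarian review, publish) can be sketched as a simple data structure. All class and field names here are illustrative assumptions, not the system’s actual code:

```python
# Illustrative sketch of the fact-check request lifecycle: a request is
# claimed by one librarian and must be reviewed by a different one before
# publication. Names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FactCheckRequest:
    point_id: int
    requester: str
    description: str                        # what the user wants checked
    status: str = "new"                     # new -> claimed -> reviewed
    author_librarian: Optional[str] = None
    reviewer_librarian: Optional[str] = None

    def claim(self, librarian: str) -> None:
        """A librarian takes responsibility for the request."""
        self.author_librarian = librarian
        self.status = "claimed"

    def review(self, librarian: str) -> None:
        """A second, different librarian reviews before publishing."""
        assert librarian != self.author_librarian
        self.reviewer_librarian = librarian
        self.status = "reviewed"
```

The `review` assertion encodes the two-librarian rule: the reviewer can never be the author of the draft.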
This communication facilitated learning and coherence for the service, and also drove functionality changes during the early stage of the pilot.
After each fact-check was published, a notification e-mail was sent out to the requester of the fact-check, the author of the pro/con point, and any other interested party (such as people who were participating in a discussion on that point).
In an informal analysis, librarians found that approximately half of all submitted pros and cons in 2011 contained claims checkable by their criteria.
Not all pros and cons are examined with the same degree of scrutiny by other users. For example, some are poorly worded or even incoherent. Because of ConsiderIt’s PointRank algorithm, these points rapidly fall to the bottom and are only seen by the most persistent users.
One factor mitigating this possibility is a structural resiliency in ConsiderIt that disincentivizes political gaming of the fact-checking service: if a strong advocate decides to request fact-checks of all the points he or she disagrees with, the claims could be evaluated as accurate and end up bolstering the opposing side’s case. The more likely risk of overloading the librarians stems from pranksters operating without a political agenda. This could be handled by triaging requests based on the user requesting the fact-check.
One reason for the large number of “unverifiable” claims is that, in the political domain, what one might think are straightforward factual questions turn out to be more nuanced on closer examination. Another reason was SPL’s policy of not doing legal research.
Users generally agreed with the librarians’ analysis and found value in it (8.6% strongly agreed, 65.7% agreed, 25.7% neutral, 0% disagreed, 0% strongly disagreed). People who were fact-checked felt that librarians generally assessed their points in a fair manner (62.5% “fair,” 0% “unfair,” 37.5% “neither”).
Users did express a desire for better communication with the librarians.
The extensive positive press coverage that the service received also suggests that the legitimacy of LVG increased. For example, a Seattle Times column praised the librarians’ contribution to the LVG, stating that “there’s something refreshing in such a scientific, humble approach to information.”
To conduct the permutation test, Monte Carlo simulation was used to repeatedly reassign, at random, the 47 fact-checks and their original timestamps to 47 of the 294 points. In the randomization process, each fact-check was assigned at random to a point that had at least one view prior to the timestamp at which the original request for that fact-check was submitted.
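The randomization step described above can be sketched as follows. This is a hedged reconstruction with toy inputs: the function name, data shapes, and use of the standard-library `random` module are assumptions, and the paper’s test statistic is not reproduced.

```python
# Sketch of the Monte Carlo reassignment: each fact-check keeps its original
# request timestamp but is assigned to a random distinct point that had at
# least one view before that timestamp.
import random

def permute_fact_checks(points, fact_checks, n_sims=1000, rng=None):
    """points: {point_id: [view timestamps]}; fact_checks: [request timestamps].
    Returns a list of simulated assignments {point_id: request_timestamp}."""
    rng = rng or random.Random(0)
    assignments = []
    for _ in range(n_sims):
        run = {}
        for ts in fact_checks:
            # Eligible: unused points with at least one view before the request.
            eligible = [p for p, views in points.items()
                        if p not in run and any(v < ts for v in views)]
            run[rng.choice(eligible)] = ts
        assignments.append(run)
    return assignments
```

Comparing the observed discussion activity after real fact-checks against the distribution over these simulated assignments is what lets the test compensate for comment drop-off near the election.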
Some librarians raised concerns about the ability of the library to maintain their reputation as a neutral institution.
The lack of a communication mechanism also prevented librarians from knowing how their fact-checks were received by the author of the point, the requester(s) of the fact-check, and anyone reading the fact-check, leading to a general disconnect between librarian and user.
Librarians felt able to provide authoritative, relevant information to the public. They felt that this project was not only a good way to showcase the skills that they possess in terms of providing answers to complex questions, but also a way to reach a wider audience.
Users welcomed the librarians’ contributions, even those whose statements were challenged. Our perspective is that correcting widely held, deep misperceptions is something that cannot be quickly fixed (e.g., with a link to a Snopes article), but is rather a long process of engagement that requires a constructive environment.
The journalistic fact-checking frame did not map smoothly to librarian reference practice. Librarianship is rooted in guiding people to sources rather than evaluating claims. Discomfort stepping outside these bounds was magnified by the lack of opportunities for librarians to communicate with users to clarify the request and the original point. This points to an evolution of our approach to introducing authoritative information into public dialogue that we call interactive fact-seeking.