In the aftermath of the 2012 election, campaign prognosticators Nate Silver, Simon Jackman, Drew Linzer, and Sam Wang made preliminary quantitative assessments of how well their final predictions played out. Others have posted comparisons of these and other election prediction and poll aggregation outfits. Hopefully, we'll one day compare and combine the models based on their long-term predictive power. To compare and combine models effectively, we need a good quantitative measure of their accuracy. The prognosticators have used something called the Brier score to measure the accuracy of their election eve predictions of state-level outcomes. Despite its historical success in measuring forecast accuracy, the Brier score fails in at least two ways as a forecast score. I'll review its inadequacies and suggest a better method.

The Brier score measures the accuracy of binary probabilistic predictions. To calculate it, take the average squared difference between the forecast probability of a given outcome (e.g., Obama winning the popular vote in California) and the observed probability that the event occurred (e.g., one if Obama won, zero if he didn't). The higher the Brier score, the worse the predictive accuracy. As Nils Barth suggested to Sam Wang, you can also calculate a normalized Brier score by subtracting four times the Brier score from one. The normalized Brier score compares a model's predictive accuracy to that of a perfect predictor: it equals one for a model that assigned full certainty to every outcome that occurred, and zero for a model that assigned every outcome a 50% probability. The higher the normalized Brier score, the greater the predictive accuracy.
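To make the calculation concrete, here's a minimal sketch in Python, applied to a few hypothetical state-level forecasts (all numbers below are made up for illustration):

```python
def brier_score(forecasts, outcomes):
    """Average squared difference between forecast probabilities and
    observed outcomes (1 if the event occurred, 0 if it didn't)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def normalized_brier(forecasts, outcomes):
    """Nils Barth's normalization: 1 minus 4 times the Brier score."""
    return 1.0 - 4.0 * brier_score(forecasts, outcomes)

# Hypothetical forecasts of P(Obama wins) for three states, and what happened.
forecasts = [0.95, 0.30, 0.999]
outcomes = [1, 0, 1]
print(brier_score(forecasts, outcomes))       # lower is better
print(normalized_brier(forecasts, outcomes))  # higher is better
```

Note that a model that assigns 0.5 to every outcome has a Brier score of 0.25, which is exactly a normalized score of zero.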

Because the Brier score (and its normalized cousin) measures predictive accuracy, I've suggested that we can use it to construct certainty weights for prediction models, which we could then use when calculating an average model that combines the separate models into a meta-prediction. Recently, I've discovered research in the weather forecasting community on a better way to score forecast accuracy. This new score ties directly to a well-studied model averaging mechanism. Before introducing the new scoring method, let's review the problems with the Brier score.

Jewson (2004) notes that the Brier score doesn't deal adequately with very improbable or very probable events. For example, suppose that the probability that a Black Democrat wins Texas is 1 in 1,000. Suppose we have one forecast model that predicts Obama will surely lose Texas, whereas another model predicts that Obama's probability of winning is 1 in 400. Well, Obama lost Texas. The Brier score would tell us to prefer the model that predicted a sure loss for Obama. Yet the model that gave him a small probability of winning is closer to the "truth" in the sense that it estimates he has a small probability of winning. In addition to its poor performance scoring highly improbable and probable events, the Brier score doesn't perform well when scoring very poor forecasts (Benedetti 2010; sorry for the paywall).

These issues with the Brier score should give prognosticators pause for two reasons. First, they suggest that the Brier score will not perform well in the "safe" states of a given party. Second, they suggest that Brier scores will not perform well for models whose predictions were poor (here's lookin' at you, Bickers and Berry). So what should we do instead? It's all about the likelihood. Well, actually its logarithm.

Both Jewson and Benedetti convincingly argue that the proper score of forecast accuracy is something called the log likelihood. A likelihood is the probability of a set of observations given the model of reality that we assume produced those observations. As Jewson points out, the likelihood in our case is the probability of a set of observations (i.e., which states Obama won) given the forecasts associated with those observations (i.e., the forecast probability that Obama would win those states). A score based on the log likelihood penalizes forecasts that are very certain one way or the other, giving the lowest possible score to a model that was perfectly certain of an outcome that did not occur.
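As a sketch with hypothetical numbers, the log likelihood of a set of binary outcomes given the forecasts is just the sum of the log probabilities the forecasts assigned to what actually happened:

```python
import math

def log_likelihood_score(forecasts, outcomes):
    """Sum of the log probabilities each forecast assigned to the
    observed outcome. Higher is better; the best possible score is 0
    (perfectly certain and right), and a forecast that was perfectly
    certain of an outcome that didn't occur scores negative infinity."""
    total = 0.0
    for p, won in zip(forecasts, outcomes):
        prob_of_observed = p if won else 1.0 - p
        if prob_of_observed == 0.0:
            return float("-inf")  # perfectly certain, and wrong
        total += math.log(prob_of_observed)
    return total

outcomes = [1, 0, 1]
print(log_likelihood_score([0.90, 0.20, 0.95], outcomes))
print(log_likelihood_score([1.00, 0.00, 1.00], outcomes))  # 0.0: certain and right
print(log_likelihood_score([1.00, 1.00, 1.00], outcomes))  # -inf: certain and wrong
```

The last two lines show the asymmetry that makes the score work: certainty is rewarded only when it's vindicated.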

To compare the accuracy of two models, simply take the difference in their log likelihoods. To calculate model weights, first subtract the minimum log likelihood score across all the models from each model's score. Then exponentiate the difference you just calculated. Then divide each model's exponentiated difference by the sum of those values across all the models. Voila. A model averaging weight.
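In Python, the weight calculation looks like this (the code subtracts the best score rather than the worst before exponentiating; because the weights are normalized, the choice of baseline doesn't change them, but the best score is the numerically safest):

```python
import math

def model_weights(scores):
    """Convert log likelihood scores (higher = better) into normalized
    model-averaging weights."""
    best = max(scores)
    raw = [math.exp(s - best) for s in scores]  # exponentiated differences
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical log likelihood scores for three forecasting models.
print(model_weights([-3.2, -3.9, -7.0]))  # best-scoring model gets the most weight
```

This is the same arithmetic as Akaike weights in information-theoretic model averaging, minus the complexity penalty discussed below.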

Some problems remain. For starters, we haven't factored Occam's razor into our scoring of models. Occam's razor, of course, is the idea that, all else equal, simpler models are better than complex ones. Some of you might notice that the model weight calculation in the previous paragraph is identical to the model weight calculation based on the information criterion scores of models that have the same number of variables. I argue that we can ignore Occam's razor for our purposes. What we're doing is measuring a model's predictive accuracy, not its fit to previous observations. I leave it up to the first-order election prognosticators to decide which parameters they include in their models. In making meta election forecasts, I'll let the models' actual predictive performance decide which ones should get more weight.
 
 
Tomorrow is election day. At this point, the probability is vanishingly small that any single fact checked statement from any of the 2012 presidential or vice presidential candidates could sway anyone's decision. It's similarly unlikely that an analysis of the aggregate truthfulness of the candidates will influence votes. Since everyone's already decided, now is a good time to reflect. On the malarkey scale, how do the presidential candidates this election year compare to the candidates in 2008? Specifically, did the candidates spew more malarkey or less? I'll use Malark-O-Meter's factuality scale and statistical analysis tools to address this question. The question is an important one because it gives us some insight into how the shifts in political climate since Obama entered office have influenced campaign politics.

The malarkey score uses fact checker rulings from PolitiFact's Truth-O-Meter, The Washington Post's Fact Checker's Pinocchio scale, or both to measure the average falsehood of the statements that individuals or groups make. These organizations rate the factuality of statements using categories that range from true to false. Malark-O-Meter turns these categories into numbers, then averages the numeric ratings of an individual's or group's statements. The result is a score that ranges from 0, which suggests that 0% of what comes out of your mouth is malarkey, to 100, which suggests that 100% of what you say is malarkey. For more details on its calculation, read this. For caveats to the validity of this measure, read this. For a justification of comparing truthfulness among individuals like I do, read this.
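As a sketch of that averaging, here's the calculation in Python. The evenly spaced category-to-number mapping and the report card below are assumptions for illustration; Malark-O-Meter's actual mapping is described in the details linked above.

```python
# Assumed, evenly spaced rating values; see the linked details for the
# mapping Malark-O-Meter actually uses.
RATING_VALUES = {
    "true": 0, "mostly true": 20, "half true": 40,
    "mostly false": 60, "false": 80, "pants on fire": 100,
}

def malarkey_score(report_card):
    """Count-weighted average rating value for a report card that maps
    each rating category to a number of statements."""
    total_statements = sum(report_card.values())
    weighted = sum(RATING_VALUES[cat] * n for cat, n in report_card.items())
    return weighted / total_statements

# Hypothetical report card.
card = {"true": 20, "mostly true": 30, "half true": 25,
        "mostly false": 15, "false": 8, "pants on fire": 2}
print(round(malarkey_score(card)))  # → 33
```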

Karen S. at Politi-Psychotics shared with me her collection of all PolitiFact rulings for Obama, Biden, Romney, Ryan, McCain, and Palin. I used that data to construct malarkey scores that estimated the falseness of Obama, Biden, McCain, and Palin as of October 30, 2008. I would have included The Fact Checker's rulings, but Kessler's column wasn't a permanent part of WaPo until 2011, and I was unable to collect all the necessary data in time for election eve. To calculate the malarkey scores for the 2012 candidates, I calculated a malarkey score separately from each candidate's Truth-O-Meter report card as of October 30, 2012.

You might wonder why I don't limit the scope of the 2012 malarkey scores to the campaign season. It's because I'm not trying to measure the malarkey spewed during a campaign season. I'm trying to measure the overall factuality of a presidential hopeful. Back in 2008, Obama and Biden might have been more or less factual than they have become in the last four years. Yet we shouldn't be blind to the malarkey that they or the two 2012 Republican candidates spewed before 2011. Moreover, we shouldn't punish the 2008 candidates for statements that they hadn't yet made.

With those caveats out of the way, let's see what the observed malarkey scores are before measuring our statistical uncertainty in them. Our aim is to get a better understanding of how our beliefs about the factuality of the two campaigns compare at identical points in their history.

2008 candidates

candidate malarkey
Obama 42
Biden 48
McCain 54
Palin 46

2012 candidates

candidate malarkey
Obama 44
Biden 48
Romney 55
Ryan 58

If you take the observed data at face value, it suggests that there are some candidate-level differences between the two election years. The differences aren't big. Still, the 2012 candidates score between zero and 12 points more full of malarkey than their 2008 counterparts. Curiously, the data also suggest that, according to her PolitiFact report card, Palin spewed less malarkey during the campaign than McCain (my how things have changed since then). But with what degree of certainty can we make such statements given the evidence we have? Enter Malark-O-Meter's statistical methods for estimating and comparing malarkey scores.

Basically, we treat fact checker report cards as a sort of experiment that gauges the factuality of a particular candidate. Because fact checkers rate only a small number of an individual's statements, there's uncertainty in these measures that arises from sampling error. This is particularly important because sampling error is higher in smaller samples. There were fewer statements collected for Obama in 2008 than in 2012, which increases our uncertainty in the comparisons we'd make between Obama's performance in the two years. The number of statements that have been fact-checked also differs across the four candidates, with presidential candidates fact-checked more than vice presidential candidates and Democratic candidates fact-checked more than Republican candidates.

Anyway, we can use probability theory to simulate the universe of possible malarkey scores given the data that fact checkers have collected, then estimate the likelihood that a candidate's malarkey score is a particular value. Let's start with the observation that each 2012 candidate's malarkey score was at least as high as that of his or her 2008 counterpart.
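One way to sketch that idea is a simple bootstrap: treat the report card as a sample of statements, resample it with replacement many times, and look at the spread of the resulting scores. (Malark-O-Meter's actual simulation scheme is described in its methods posts; the rating values and report card below are hypothetical.)

```python
import random

random.seed(0)  # reproducible simulation

values = [0, 20, 40, 60, 80, 100]  # assumed rating values
counts = [20, 30, 25, 15, 8, 2]    # hypothetical report card
statements = [v for v, n in zip(values, counts) for _ in range(n)]

# Resample the report card 10,000 times, scoring each resample.
sims = []
for _ in range(10_000):
    resample = random.choices(statements, k=len(statements))
    sims.append(sum(resample) / len(resample))
sims.sort()

print("observed score:", sum(statements) / len(statements))
print("95% interval:", sims[250], "to", sims[9750])
```

With fewer fact-checked statements, the interval widens, which is exactly the sampling-error problem described above.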

It turns out that we can be 95% confident that Barack Obama's malarkey score was between 6% smaller and 14% larger going up to this election year than it was up to 2008. The odds are about 3 to 1 that if we repeated this experiment, we'd find Obama to have spewed more malarkey by 2012 than he had by 2008.

The comparison isn't as clear for Biden because we have less data than we do for Obama. We can be 95% confident that Biden's malarkey score was between 18% smaller and 24% larger by 2012 than it was by 2008. If we repeated this experiment, it is a coin toss whether we'd again find that Biden spewed more or less malarkey in 2012 than 2008. So we can't tell a difference between Biden 2008 and Biden 2012.

Is Romney today more or less truthful as a Republican presidential candidate than McCain was by 2008? We can be 95% confident that Romney's malarkey score was between 6% smaller and 13% larger by 2012 than McCain's was by 2008. The odds are about 3 to 1 that if we did this experiment again, we'd find Romney to have spewed more malarkey by 2012 than McCain had by 2008.

Is Ryan more or less truthful as a Republican vice presidential candidate than Palin was in 2008? We can be 95% certain that Ryan's malarkey score was between 5% smaller and 56% larger by 2012 than Palin's was by 2008. The odds are better than 15 to 1 that Ryan spewed more malarkey by 2012 than Palin had by 2008.

The finding about Ryan and Palin strikes me because Palin has spewed numerous falsehoods in her selfish bid for wingnut fame since the 2008 election. Yet Ryan is touted as the facts man of the GOP. Remember, however, that Palin hadn't gone rogue until late in the campaign, and her truthiness has only exploded since then. Moreover, the story in 2008 wasn't so much that Palin was false. It was more that she didn't know her ass from her elbow (which is a reminder that factuality isn't the only important characteristic to look for in a candidate). Recall that in the 2008 vice presidential debate, she said little that was even worthy of fact checking. By comparison, Ryan's rhetoric makes fact checkers salivate because he often ties numbers and report findings to his arguments. He's actually quite knowledgeable, albeit bullshittingly so.

That's the picture for the individual positions on the campaign ticket. What is the picture for the tickets as a whole? Here are the observed collated malarkey scores for the party tickets in 2008 versus 2012. For each party, collated malarkey sums up the statements in each category that the two members of a ticket made. For this reason, collated malarkey measures the average amount of malarkey in the statements made collectively by the members of a ticket.

2008 tickets (collated)

ticket malarkey
Obama/Biden 43
McCain/Palin 53

2012 tickets (collated)

ticket malarkey
Obama/Biden 44
Romney/Ryan 56

Again, it looks like there are small differences between the two years. Let's see what statistical confidence we can place in that assessment.

We can be 95% certain that Obiden's collated malarkey score is between 6% smaller and 13% larger by 2012 than it was by 2008. If we repeated this experiment, the odds are a bit less than 5 to 2 that we would again conclude that Obiden 2012 has spewed more malarkey than Obiden 2008.

What about Rymney versus McPalin? We can be 95% certain that Rymney's collated malarkey score is between 2% smaller and 18% larger by 2012 than McPalin's was by 2008. The odds are just under 19 to 1 that if we repeated this experiment, we'd find Rymney to have spewed more malarkey than McPalin.

At the ticket level, we can be fairly confident that each party's ticket collectively spewed more malarkey by election 2012 than its counterpart had by election 2008. Also note that we can be more certain that Rymney spews more malarkey than McPalin than that Obiden 2012 spews more malarkey than Obiden 2008.

The collated malarkey score rates the average falsehood of the statements a ticket makes. What about the average falsehood of the members of each ticket? That's what the member average malarkey score measures. Simply calculate the malarkey for each candidate on a ticket, then average the malarkey scores of the candidates on that ticket. Here are the observed malarkey scores.

2008 tickets 
(member average)

ticket malarkey
Obama/Biden 42
McCain/Palin 54

2012 tickets
(member average)

ticket malarkey
Obama/Biden 44
Romney/Ryan 55
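
The difference between the two ticket-level scores is easy to see in code: collating sums the category counts across ticket members before scoring, so a heavily fact-checked member dominates, while the member average weights each member equally. (The rating values and report cards below are hypothetical.)

```python
RATING_VALUES = {"true": 0, "mostly true": 20, "half true": 40,
                 "mostly false": 60, "false": 80, "pants on fire": 100}

def score(card):
    """Count-weighted average rating value of one report card."""
    return sum(RATING_VALUES[c] * n for c, n in card.items()) / sum(card.values())

def collated_score(cards):
    """Sum category counts across ticket members, then score."""
    combined = {c: sum(card.get(c, 0) for card in cards) for c in RATING_VALUES}
    return score(combined)

def member_average_score(cards):
    """Score each member separately, then average the scores."""
    return sum(score(card) for card in cards) / len(cards)

# Hypothetical ticket: a heavily fact-checked presidential candidate and
# a lightly fact-checked running mate.
pres = {"true": 40, "mostly true": 60, "half true": 50,
        "mostly false": 30, "false": 16, "pants on fire": 4}
vp = {"true": 2, "mostly true": 3, "half true": 5,
      "mostly false": 6, "false": 3, "pants on fire": 1}
print(round(collated_score([pres, vp])))        # → 35 (pulled toward pres)
print(round(member_average_score([pres, vp])))  # → 41
```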

Again, we observe small differences. Given the data, how certain can we be that these differences exist?

We can be 95% confident that Obama's and Biden's malarkey scores are on average between 10% smaller and 14% larger by the 2012 election than they were by the 2008 election. The odds are only about 3 to 2 that we're right in saying that Obama and Biden were on average more full of malarkey by 2012 than they were by 2008.

For the Republicans, we can be 95% confident that Romney's and Ryan's malarkey scores are on average between 2% smaller and 27% larger by this election year than McCain's and Palin's were by election 2008. The odds are 19 to 1 that we'd be right in saying that Romney and Ryan were on average more full of malarkey by election 2012 than McCain and Palin were by election 2008.

Again, the evidence that the newest Republican ticket is less factual than the 2008 Republican ticket is stronger than the evidence that Obiden 2012 is on average less factual than Obiden 2008. Except this time, the difference is in the average factuality of the members of each ticket. Together, these findings are consistent with Politi-Psychotics' evidence that Republicans have become less factual since Obama took office, a phenomenon that can't be interpreted as evidence that PolitiFact has become more partisan since its separation from Congressional Quarterly.

Two steps remain in this analysis, both pertaining to the overall truthfulness of the candidates in each election year. First, let's compare the collated malarkey score of all candidates, regardless of party, between election 2008 and election 2012 (remember, collated scores add up all the statements in each category for all the individuals included, then calculate a malarkey score from the collated report card). For both years, the collated malarkey score is 48, just under half full of malarkey (the score is actually very slightly smaller for the 2008 election, but both round to the same whole number). We might as well toss a coin to decide whether the candidates' statements were collectively more or less full of malarkey running up to the 2012 election than the 2008 election.

The differences are clearer when we look at the average malarkey score of the candidates by election year. In 2008, the candidates were on average 42% full of malarkey. The 2012 candidates are on average 44% full of malarkey. How statistically confident can we be in saying that the 2008 candidates were on average more truthful than the 2012 candidates? Sadly, we can be about 92% confident in this conclusion. 

The difference in the candidates' average malarkey between this election year and election 2008 is only two points on the malarkey scale. Yet if the trend continues over the next four presidential elections, then most of what our presidential and vice presidential candidates say could be false by the time my daughter can decide whether or not to give a president a chance at a second term. If the trend continues another six or seven elections after that, then my unborn grandchild will be choosing between two sets of liars.

So what explains the trend? There are at least two hypotheses. First, PolitiFact's rulings could have become tougher since 2008. I doubt this, but I can't rule it out completely without analyzing the full set of PolitiFact rulings.

An alternative explanation reflects current political reality. Our country has become increasingly polarized in recent years, at the same time that the stakes of obtaining our country's highest political office have increased. Consequently, the premium on strategic deception is higher. This hypothesis is consistent with my finding that neither ticket has a solid record of factuality. The hypothesis also jibes with the chilling fact that, as actor Rainn Wilson recently tweeted,
There's a figure more damning than any malarkey score.
 

    about

    Malark-O-blog published news and commentary about the statistical analysis of the comparative truthfulness of the 2012 presidential and vice presidential candidates. It has since closed down while its author makes bigger plans.

    author

    Brash Equilibrium is an evolutionary anthropologist and writer. His real name is Benjamin Chabot-Hanowell. His wife calls him Babe. His daughter calls him Papa.

    what is malarkey?

    It's a polite word for bullshit. Here, it's a measure of falsehood. 0 means you're truthful on average. 100 means you're 100% full of malarkey. Details.

    what is simulated malarkey?

    Fact checkers only rate a small sample of the statements that politicians make. How uncertain are we about the real truthfulness of politicians? To find out, treat fact checker report cards like an experiment, and use random number generators to repeat that experiment a lot of times to see all the possible outcomes. Details.

    malark-O-glimpse

    Can you tell the difference between the 2012 presidential election tickets from just a glimpse at their simulated malarkey score distributions?

    [Figure: dark = pres, light = vp]

    fuzzy portraits of malarkey

    Simulated distributions of malarkey for each 2012 presidential candidate with 95% confidence interval on either side of the simulated average malarkey score. White line at half truthful. (Rounded to nearest whole number.)

    • 87% certain Obama is less than half full of malarkey.
    • 100% certain Romney is more than half full of malarkey.
    • 66% certain Biden is more than half full of malarkey.
    • 70% certain Ryan is more than half full of malarkey.
    (Probabilities rounded to nearest percent.)

    fuzzy portraits of ticket malarkey

    Simulated distributions of collated and average malarkey for each 2012 presidential election ticket, with 95% confidence interval labeled on either side of the simulated malarkey score. White line at half truthful. (Rounded to nearest whole number.)

    • 81% certain Obama/Biden's collective statements are less than half full of malarkey.
    • 100% certain Romney/Ryan's collective statements are more than half full of malarkey.
    • 51% certain the Democratic candidates are less than half full of malarkey.
    • 97% certain the Republican candidates are on average more than half full of malarkey.
    • 95% certain the candidates' statements are on average more than half full of malarkey.
    • 93% certain the candidates themselves are on average more than half full of malarkey.
    (Probabilities rounded to nearest percent.)

    Comparisons

    Simulated probability distributions of the difference between the malarkey scores of one 2012 presidential candidate or party and another, with 95% confidence interval labeled on either side of the simulated mean difference. Blue bars are when Democrats spew more malarkey, red when Republicans do. White line and purple bar at equal malarkey. (Rounded to nearest hundredth.)

    • 100% certain Romney spews more malarkey than Obama.
    • 55% certain Ryan spews more malarkey than Biden.
    • 100% certain Romney/Ryan collectively spew more malarkey than Obama/Biden.
    • 94% certain the Republican candidates spew more malarkey on average than the Democratic candidates.
    (Probabilities rounded to nearest percent.)

    2012 prez debates

    presidential debates

    Simulated probability distribution of the malarkey spewed by individual 2012 presidential candidates during debates, with 95% confidence interval labeled on either side of simulated mean malarkey. White line at half truthful. (Rounded to nearest whole number.)

    • 66% certain Obama was more than half full of malarkey during the 1st debate.
    • 81% certain Obama was less than half full of malarkey during the 2nd debate.
    • 60% certain Obama was less than half full of malarkey during the 3rd debate.
    (Probabilities rounded to nearest percent.)

    • 78% certain Romney was more than half full of malarkey during the 1st debate.
    • 80% certain Romney was less than half full of malarkey during the 2nd debate.
    • 66% certain Romney was more than half full of malarkey during the 3rd debate.
    (Probabilities rounded to nearest percent.)

    aggregate 2012 prez debate

    Distributions of malarkey for collated 2012 presidential debate report cards and the average presidential debate malarkey score.
    • 68% certain Obama's collective debate statements were less than half full of malarkey.
    • 68% certain Obama was less than half full of malarkey during the average debate.
    • 67% certain Romney's collective debate statements were more than half full of malarkey.
    • 57% certain Romney was more than half full of malarkey during the average debate.
     (Probabilities rounded to nearest percent.)

    2012 vice presidential debate

    • 60% certain Biden was less than half full of malarkey during the vice presidential debate.
    • 89% certain Ryan was more than half full of malarkey during the vice presidential debate.
    (Probabilities rounded to nearest percent.)

    overall 2012 debate performance

    Malarkey score from collated report card comprising all debates, and malarkey score averaged over candidates on each party's ticket.
    • 72% certain Obama/Biden's collective statements during the debates were less than half full of malarkey.
    • 67% certain the average Democratic ticket member was less than half full of malarkey during the debates.
    • 87% certain Romney/Ryan's collective statements during the debates were more than half full of malarkey.
    • 88% certain the average Republican ticket member was more than half full of malarkey during the debates.

    (Probabilities rounded to nearest percent.)

    2012 debate self comparisons

    Simulated probability distributions of the difference in malarkey that a 2012 presidential candidate spews normally compared to how much he spewed during a debate (or the aggregate debate), with 95% confidence interval labeled on either side of the simulated mean difference. Light bars mean less malarkey was spewed during the debate than usual; dark bars mean more. White bar at equal malarkey. (Rounded to nearest hundredth.)

    individual 2012 presidential debates

    • 80% certain Obama spewed more malarkey during the 1st debate than he usually does.
    • 84% certain Obama spewed less malarkey during the 2nd debate than he usually does.
    • 52% certain Obama spewed more malarkey during the 3rd debate than he usually does.
    • 51% certain Romney spewed more malarkey during the 1st debate than he usually does.
    • 98% certain Romney spewed less malarkey during the 2nd debate than he usually does.
    • 68% certain Romney spewed less malarkey during the 3rd debate than he usually does.

    (Probabilities rounded to nearest percent.)

    aggregate 2012 presidential debate

    • 58% certain Obama's statements during the debates were more full of malarkey than they usually are.
    • 56% certain Obama spewed more malarkey than he usually does during the average debate.
    • 73% certain Romney's statements during the debates were less full of malarkey than they usually are.
    • 86% certain Romney spewed less malarkey than he usually does during the average debate.

    (Probabilities rounded to nearest percent.)

    vice presidential debate

    • 70% certain Biden spewed less malarkey during the vice presidential debate than he usually does.
    • 86% certain Ryan spewed more malarkey during the vice presidential debate than he usually does.

    (Probabilities rounded to nearest percent.)

    2012 opponent comparisons

    Simulated probability distributions of the difference in malarkey between the Republican candidate and the Democratic candidate during a debate, with 95% confidence interval labeled on either side of simulated mean comparison. Blue bars are when Democrats spew more malarkey, red when Republicans do. White bar at equal malarkey. (Rounded to nearest hundredth.)

    individual 2012 presidential debates

    • 60% certain Romney spewed more malarkey during the 1st debate than Obama.
    • 49% certain Romney spewed more malarkey during the 2nd debate than Obama.
    • 72% certain Romney spewed more malarkey during the 3rd debate than Obama.

    (Probabilities rounded to nearest percent.)

    aggregate 2012 presidential debate

    • 74% certain Romney's statements during the debates were more full of malarkey than Obama's.
    • 67% certain Romney was more full of malarkey than Obama during the average debate.

    (Probabilities rounded to nearest percent.)

    vice presidential debate

    • 92% certain Ryan spewed more malarkey than Biden during the vice presidential debate.

    (Probabilities rounded to nearest percent.)

    overall 2012 debate comparison

    Party comparison of 2012 presidential ticket members' collective and individual average malarkey scores during debates.
    • 88% certain that Republican ticket members' collective statements were more full of malarkey than Democratic ticket members'.
    • 86% certain that the average Republican candidate spewed more malarkey during the average debate than the average Democratic candidate.

    (Probabilities rounded to nearest percent.)

    observe & report

    Below are the observed malarkey scores and comparisons from the malarkey scores of the 2012 presidential candidates.

    2012 prez candidates

    Truth-O-Meter only (observed)

    candidate malarkey
    Obama 44
    Biden 48
    Romney 55
    Ryan 58

    The Fact Checker only (observed)

    candidate malarkey
    Obama 53
    Biden 58
    Romney 60
    Ryan 47

    Averaged over fact checkers

    candidate malarkey
    Obama 48
    Biden 53
    Romney 58
    Ryan 52

    2012 Red prez vs. Blue prez

    Collated bullpucky

    ticket malarkey
    Obama/Biden 46
    Romney/Ryan 56

    Average bullpucky

    ticket malarkey
    Obama/Biden 48
    Romney/Ryan 58

    2012 prez debates

    1st presidential debate

    opponent malarkey
    Romney 61
    Obama 56

    2nd presidential debate (town hall)

    opponent malarkey
    Romney 31
    Obama 33

    3rd presidential debate

    opponent malarkey
    Romney 57
    Obama 46

    collated presidential debates

    opponent malarkey
    Romney 54
    Obama 46

    average presidential debate

    opponent malarkey
    Romney 61
    Obama 56

    vice presidential debate

    opponent malarkey
    Ryan 68
    Biden 44

    collated debates overall

    ticket malarkey
    Romney/Ryan 57
    Obama/Biden 46

    average debate overall

    ticket malarkey
    Romney/Ryan 61
    Obama/Biden 56

    the raw deal

    You've come this far. Why not just check out the raw data Malark-O-Meter is using? I promise you: it is as riveting as a phone book.

