## The gist of the malarkey score

Malarkey measures the average degree of falsehood in an individual's or group's statements, on a scale from 0 to 100. Zero means everything you say (that has been rated by fact checkers) is true, or at least it's been rated as true. One hundred means you're full of malarkey: everything you've said that's been rated by a couple of professional fact-checking groups is false.

We measure malarkey from the report cards of two fact-checking systems: the Truth-O-Meter at PolitiFact.com, and the Pinocchio Tracker of *The Washington Post*'s Fact Checker, Glenn Kessler.

## Calculating malarkey

**Step 1:** Assign comparable values to each of the categories on each report card.

| Truth-O-Meter          | Pinocchio Tracker  | Value |
| ---------------------- | ------------------ | ----- |
| True                   | Geppetto Checkmark | 0     |
| Mostly True            | One Pinocchio      | 25    |
| Half True              | Two Pinocchios     | 50    |
| Mostly False           | Three Pinocchios   | 75    |
| False or Pants on Fire | Four Pinocchios    | 100   |

**Step 2:** For each type of report card, count the number of statements in each of its categories.

**Step 3:** For each type of report card, multiply the number of statements in each category by the value of that category.

**Step 4:** For each type of report card, sum the results from Step 3 over the categories.

**Step 5:** For each type of report card, divide the result from Step 4 by the total number of statements on that report card.

**Step 6:** Average the results from Step 5 over the available report cards. Voilà: malarkey. We average the two measures instead of pooling the statements from comparable categories because we are estimating the truthfulness rating across checker groups, so we want to give each fact checker equal weight. This is a rare case when averaging averages is the right thing to do.
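
As a sanity check, the six steps can be sketched in a few lines of Python. The counts below are made up for illustration; each report card is represented as a dict keyed by the category values from Step 1:

```python
# Step 1: both rating scales map onto the comparable values 0, 25, 50, 75, 100.
# A report card is a dict mapping a category's value to its statement count.

def card_score(card):
    """Steps 2-5: value-weighted average over one report card's statements."""
    total = sum(card.values())                        # total rated statements
    return sum(value * n for value, n in card.items()) / total

def malarkey(cards):
    """Step 6: average per-card scores so each fact checker gets equal weight."""
    return sum(card_score(c) for c in cards) / len(cards)

# Hypothetical counts: PolitiFact (True .. Pants on Fire) and
# the Post's Fact Checker (Geppetto .. Four Pinocchios).
politifact = {0: 10, 25: 20, 50: 30, 75: 25, 100: 15}
fact_checker = {0: 5, 25: 10, 50: 10, 75: 15, 100: 10}

print(card_score(politifact))                 # 53.75
print(malarkey([politifact, fact_checker]))   # 55.625
```

Note that the final score (55.625) is not what you would get by pooling all 150 statements into one card; the smaller Fact Checker card counts just as much as the larger PolitiFact card.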

## Extension to two or more report cards

### Collated malarkey

Collated malarkey measures the average falsehood of the statements made collectively by a group of individuals. Simply sum the number of statements in each category over all group members, then measure malarkey as if the group were one person. With collated malarkey, the more statements an individual makes, the greater that individual's influence on the group's score. Because collating report cards increases the sample size of statements, we'll be less uncertain about collated malarkey than about any one individual's malarkey.
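
A minimal sketch of collation, with made-up counts (each card keyed by category value, as in the values table above):

```python
from collections import Counter

def card_score(card):
    """Value-weighted average over a report card's statements (values 0-100)."""
    total = sum(card.values())
    return sum(value * n for value, n in card.items()) / total

def collated_malarkey(member_cards):
    """Pool all members' statement counts, then score the group as one person."""
    pooled = Counter()
    for card in member_cards:
        pooled.update(card)            # Counter.update adds counts per key
    return card_score(pooled)

# Hypothetical members: a prolific speaker and a quiet one.
alice = {0: 40, 50: 40, 100: 20}   # 100 statements, individual score 40.0
bob = {0: 1, 100: 9}               # 10 statements, individual score 90.0

print(collated_malarkey([alice, bob]))
```

The pooled score (about 44.5) lands much closer to alice's 40 than to bob's 90, because she contributes ten times as many statements.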

### Average malarkey

Average malarkey measures the average falsehood of the individuals within a group. To estimate it, calculate malarkey for each individual in the group, then take the average over the group. With average malarkey, every individual has the same weight of influence on the score. Probability theory says that the uncertainty in an average of values that are themselves uncertain will be greater than the uncertainty of those values alone.
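
By contrast, average malarkey can be sketched as (same made-up card format, keyed by category value):

```python
def card_score(card):
    """Value-weighted average over a report card's statements (values 0-100)."""
    total = sum(card.values())
    return sum(value * n for value, n in card.items()) / total

def average_malarkey(member_cards):
    """Score each member individually, then average: equal weight per person."""
    return sum(card_score(c) for c in member_cards) / len(member_cards)

# Two hypothetical members with illustrative report cards.
alice = {0: 40, 50: 40, 100: 20}   # individual score 40.0
bob = {0: 1, 100: 9}               # individual score 90.0

print(average_malarkey([alice, bob]))   # 65.0
```

Here bob's 10 statements count exactly as much as alice's 100, which is the point: each person, not each statement, gets one vote.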

## In defense of malarkey

The malarkey scale makes sense. True statements aren't malarkey. False statements are 100% malarkey. We evenly space everything in between. You might say, "You can't assign a numeric value to truth!" To that I respond, "Yes, you can, so long as you know what assumptions you're making and that you're not doing rocket science," adding, "Plus, fact checkers implicitly do it anyway."

You might also say, "You don't know exactly how true something rated 'Mostly True' really is!" You're right. I could just collapse everything into two categories, true and false. But that would be a disservice to the hard work fact checkers do counting the grains of truth in people's statements.

I assume that statements rated with four Pinocchios are completely false, which is reasonable; otherwise Kessler would have a five-Pinocchio rating. This implies that two Pinocchios make a statement half malarkey, which is comparable to being half true. I also assume that the categories between True, Half True, and False (and their equivalents) have an average level of truthfulness between the categories on either side.

If you really want to make a federal case about it, contact me.

Just be prepared to have a rational discussion.

And keep learning about the methods by exploring how and why we measure uncertainty.
