I've recently noticed that my early writing about comparisons of malarkey scores, which I report as ratios, is easily misinterpreted. For example, I might say that candidate X spews 17% more malarkey than candidate Y, or that PolitiFact has a 17% bias against party Z relative to party W. What I really mean is that the malarkey score is 17% larger for candidate X or party Z, not that the scores differ by 17 points along the malarkey scale. The confusion arises because I report comparisons as ratios. I'm not going to do that anymore. Why? Because reporting comparisons as differences makes more sense given that the malarkey score ranges from 0 to 100 and can be interpreted as the percentage of one's utterances that are malarkey-laden. I'll make the changes to the sidebar reports sometime this week.
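A quick worked example of the distinction, with invented numbers (only the 0-to-100 scale comes from the post):

```python
# Invented malarkey scores on the 0-100 scale, for illustration only.
score_y = 50.0                  # candidate Y's malarkey score (made up)

# "17% more malarkey" as I meant it: a ratio comparison.
score_x_ratio = 1.17 * score_y  # 58.5 -> only 8.5 points above Y

# How it can be misread: a 17-point difference along the scale.
score_x_diff = score_y + 17     # 67.0

print(score_x_ratio, score_x_diff)  # 58.5 67.0
```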
Call me partisan, since "malarkey" is Joe Biden's favorite word. Call me unoriginal, since everyone started making malarkey jokes after the vice presidential debate. But it just makes sense that a website called Malark-O-Meter measures something called malarkey. Amirite?
I said I would launch the site tonight. I'm not doing that. Here's why:
Yesterday, I decided I would no longer analyze the average bullpucky of a group, but instead would analyze the collated report cards of a group.
I just changed my mind again on that. I'm going to do both! Here's why.
If I simulate the probability distribution of average bullpucky, I weight every member of a group equally. That answers an important question, especially for the people on a presidential ticket: how truthful are they, on average? An analysis of collated report cards instead asks how truthful the statements made by a group are, on average. That's also an important question.
So from here on out, there are two aggregate measures of bullpucky. Average bullpucky measures the average truthfulness of the people in a group. Collated bullpucky measures the average truthfulness of the statements the group makes.
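Here's a minimal sketch of the distinction, assuming each person's report card is just a list of per-statement bullpucky values on the 0-100 scale. The names and numbers below are invented for illustration, not the site's actual data or code:

```python
# Hypothetical report cards: per-statement bullpucky values (0-100 scale).
report_cards = {
    "candidate_a": [80, 60, 70],          # 3 rated statements
    "candidate_b": [20, 40, 30, 10, 50],  # 5 rated statements
}

def person_score(statements):
    """Bullpucky score for one person: mean over that person's statements."""
    return sum(statements) / len(statements)

# Average bullpucky: average the per-person scores, so every person
# counts equally no matter how many statements they have.
average_bullpucky = sum(person_score(s) for s in report_cards.values()) / len(report_cards)

# Collated bullpucky: pool (collate) every statement from the group into one
# big report card, so every statement counts equally.
all_statements = [v for s in report_cards.values() for v in s]
collated_bullpucky = sum(all_statements) / len(all_statements)

print(average_bullpucky)   # 50.0  (mean of 70 and 30)
print(collated_bullpucky)  # 45.0  (mean of all 8 statements)
```

The two measures diverge whenever people contribute different numbers of rated statements.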
Here's a rhetorical stats question. If you get it right, I'll think you're nifty. The question: which of these two measures will usually carry more uncertainty?
*sigh* Now I have to edit some of my methods web pages and sidebar reports.
More things I've changed
Check out the methods section for more details.
Things I need to do
Because it made the Truth-O-Meter-based bullpucky score less comparable to the Pinocchio-based bullpucky score, and because it is one of the most controversial things about the Truth-O-Meter, I've decided to stop treating the "Pants on Fire" category differently from the "False" category. From now on, the two ratings have the same value on the bullpucky scale.
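To make the change concrete, here's a sketch of one way a rating-to-score mapping could look. The numeric values and the rescaling to 0-100 are placeholders I made up, not Malark-O-Meter's actual scale; the only detail taken from the post is that "Pants on Fire" and "False" now get the same value.

```python
# Illustrative rating-to-value mapping; the numbers are placeholders,
# NOT Malark-O-Meter's actual values. The one constraint taken from the
# post is that "Pants on Fire" and "False" now share the same value.
TRUTH_O_METER_VALUES = {
    "True":          0,
    "Mostly True":   1,
    "Half True":     2,
    "Mostly False":  3,
    "False":         4,
    "Pants on Fire": 4,   # no longer scored worse than "False"
}
MAX_VALUE = 4

def bullpucky_score(ratings):
    """Mean rating value for a report card, rescaled to the 0-100 scale."""
    values = [TRUTH_O_METER_VALUES[r] for r in ratings]
    return 100 * sum(values) / (MAX_VALUE * len(values))

print(bullpucky_score(["True", "Half True", "Pants on Fire"]))  # 50.0
```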