I’ll admit, this fundraiser isn’t exactly twisting my arm. I’ve been mulling over how I’d teach Bayesian statistics for a few years. Overall, I’ve been most impressed with E.T. Jaynes’ approach, which draws inspiration from Cox’s Theorem. You’ll see a lot of similarities between my approach and Jaynes’, though I diverge on a few points.
For instance, after introducing a reasoning robot he then proposes these "Desiderata":
I. Degrees of Plausibility are represented by real numbers.
II. Qualitative Correspondence with common sense.
IIIa. If a conclusion can be reasoned out in more than one way, then every possible way must lead to the same result.
IIIb. The robot always takes into account all of the evidence it has relevant to a question. It does not arbitrarily ignore some of the information, basing its conclusions only on what remains. In other words, the robot is completely non-ideological.
IIIc. The robot always represents equivalent states of knowledge by equivalent plausibility assignments. That is, if in two problems the robot’s state of knowledge is the same (except perhaps for the labeling of the propositions), then it must assign the same plausibilities in both.
Some of those make sense, but others are mere value judgements. For instance, on a technical level we never actually represent plausibility with real numbers: computer storage is finite, so we instead use floating-point numbers. These have all sorts of quantization and precision issues, and yet the math works out fine. Flip to the Appendix and you'll see Jaynes equivocates between real and rational numbers, which are very different beasts. I've got a different approach, one that borrows some concepts from de Finetti to arrive at the same basic outcome but (I think) gets there more intuitively.
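To make the floating-point point concrete, here's a minimal sketch (the specific numbers are my own illustration, not anything from Jaynes):

```python
# Floats are quantized approximations of the reals, so exact identities
# from real arithmetic can fail:
print(0.1 + 0.2 == 0.3)   # False: 0.1 and 0.2 have no exact binary form
print(0.1 + 0.2)          # 0.30000000000000004

# And yet, for practical probability math, the error is negligible:
# ten probabilities of 0.1 each sum to 1 within a hair's breadth.
weights = [0.1] * 10
total = sum(weights)
print(abs(total - 1.0) < 1e-12)   # True: off by ~1e-16, not by anything that matters
```

In other words, plausibilities in practice live in a finite, quantized number system, and the machinery still works.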
Thinking Through the Basics
It also doesn’t take Jaynes’ book long to start filling up with symbols. I have no problem with that, but it can make for an intimidating read. Instead, I’d start by building intuition. I know it’s possible to convey the basics without resorting to much formalism, because I’ve done it. The highest grade I got on a philosophy essay was for one where I derived Bayes from first principles. All the math was implied! That makes it a great starting point for those who are math-phobic or only care about the basics, so it’s how I’d kick off the series.
I don’t think you can truly understand Bayesian statistics without invoking math or logical symbols, though, so from there I’ll lay out the basics of deductive logic. Most of this is pretty straightforward propositional logic, though I’d like to get into the weeds just to make sure the basics are hammered home. By luck I stumbled on some relevant concepts via zero-knowledge proofs that make a great bridge to what comes next:
Inductive logic, or reasoning that incorporates uncertainty. The transition between these two should be relatively smooth, though there are a lot of little gotchas we’ve got to be mindful of. Rather than front-loading those modifications, I think it’s more powerful to see why they’re necessary before I propose them. Once we’ve gotten to this point, it’s kind of shocking how trivial Bayes’ Theorem becomes.
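As a preview of that punchline, here's Bayes' Theorem applied to a stock example (the diagnostic-test numbers are invented purely for illustration):

```python
# Hypothetical setup: a condition affects 1% of a population; a test for it
# has a 95% true-positive rate and a 5% false-positive rate.
p_h = 0.01              # prior P(H): person has the condition
p_e_given_h = 0.95      # P(E|H): positive test given the condition
p_e_given_not_h = 0.05  # P(E|¬H): positive test without the condition

# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) is found by summing over both hypotheses.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))   # 0.161
```

One line of arithmetic, and a positive test only lifts the plausibility from 1% to about 16%, because false positives swamp the rare condition.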
What is Probability?
Alas, getting to that point requires sweeping a lot under the carpet. Here I’ll start pointing out some of the assumptions we’ve made, and really dig into the nature of probability. We’ll explore some alternatives, if only because modern approaches to Bayesian statistics rely on them. This will also help make sense of the conflict between frequentist and Bayesian statistics.
My plans after this point aren’t very solid. I suppose the natural thing would be to cover how to use Bayesian statistics in practice, primarily through some worked examples. This would also be the ideal place to cover conjugate priors and Markov chain Monte Carlo, which are essential for working with Bayesian stats. Hopefully, if or when we get to this point, writing the earlier sections will have given me a better idea of what to put here.
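To give a taste of why conjugate priors earn their place in that practical section, here's a toy coin-flip example of my own (not from any planned post):

```python
# Beta-Binomial conjugacy: a Beta(a, b) prior on a coin's bias, updated
# with binomial flip data, stays in the Beta family. The posterior is
# simply Beta(a + heads, b + tails) -- no integration required.
a, b = 1.0, 1.0        # Beta(1, 1) prior: uniform over all biases
heads, tails = 7, 3    # hypothetical observed flips

post_a, post_b = a + heads, b + tails   # the entire update step
posterior_mean = post_a / (post_a + post_b)
print(posterior_mean)   # 8/12, about 0.667
```

That closed-form shortcut is exactly what conjugacy buys; when no conjugate pair exists, Markov chain Monte Carlo steps in to approximate the posterior numerically.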
That was kind of jumbled, so let’s hammer it into a point-form outline.
Thinking Through The Basics: A symbol-free attempt at developing an inductive logic.
Deductive Logic: Basic propositional logic, and the associated symbolic language.
Inductive Logic: Extending deductive logic to cover uncertainty.
What is Probability?: Coming to terms with the assumptions made to get to this point.
Consequences: What we can do with this inductive logic.
I really like this approach, and I hope I’ll be able to do it justice.