Among my many forms of cobbled-together self-employment, I provide specialized tutoring to graduate students in ancient history and philosophy around the world. This is rewarding in lots of ways, one of which is when a student ends up correcting an error of mine. That’s when you know you are a successful teacher, and they are starting to surpass you in knowledge and acumen. So I’m actually excited to report on this and correct the record. Gratitude goes to Nick Clarke.
The short of it is that years ago, in a comments thread on my blog, I was incorrect in my analysis of Gettier Problems. I was on to the right solution, but I made the mistake of assuming an unsound conclusion could not be considered justified (without realizing that’s what I was doing). Gettier Problems rely on false premises to reach true conclusions. I was right about that. But I wasn’t right that this was grounds to dismiss them.
Backstory is required.
The Definition of Knowledge
Since ancient times the most popular definition of “knowledge” has been “justified true belief.” The notion is credited to Plato, although it’s doubtful he was the first to float the idea. That knowledge is a belief is obvious (clearly if you don’t believe x, it can’t be said that you know x is true). And that it’s only knowledge if it’s true is almost as obvious (although there are philosophers who would challenge this requirement). But we generally also require that it be justified. And the catch hangs on just what that means. What counts as a justified belief? Usually philosophers settle on justification being a conclusion reached by a valid line of reasoning, although that’s not the only way to resolve the matter.
Justification can more broadly be defined as “deriving a belief by a reliable means,” the term reliable allowing for some error (since absolute certainty is impossible). So we are justified in believing our car is in the garage if our means of determining that is reliable enough to make it highly probable, given all the information we have access to at the time, that our car really is there. But of course, we could be wrong. For all we really know, our car could have been stolen, or quantum mechanically dissolved. But those are very unlikely to have occurred given all we do know (e.g. the garage is well nigh impenetrable without our knowledge; the QM probability of spontaneous dissolution is vanishingly small; and so on). So our belief can still be justified. But a belief can certainly be justified and false. That’s why knowledge is usually required to be both justified and true.
A problem arises, however, when philosophers choose to define “justified” as being conclusions reached by valid reasoning. Because conclusions reached by valid reasoning can still be reached by unsound reasoning. In technical parlance a conclusion is valid when it follows logically from the premises, but it is only sound when the premises are all true (see Validity and Soundness). This is typically meant, though, of deductive reasoning, and most reasoning is actually inductive (see Deduction vs. Induction). That my car is in my garage is not a conclusion of deductive logic. It could be, but only if stated as a probability, as in fact it should be (the point I was getting at in my original discussion of this). In fact, when we start rephrasing knowledge claims as claims about probabilities, everything changes.
The fact that in reality we can only ever know the probabilities of things, and not the things themselves (see Proving History, pp. 23-26), I think exposes a fundamental mistake in common philosophical thinking about the nature of knowledge. (Indeed, even an omniscient God is in the same boat, since there will always be some nonzero probability that He is being tricked into thinking He is omniscient and infallible by a Cartesian Demon, so even for the greatest conceivable God all knowledge would still be probabilistic.) But let’s suppose we could reduce all knowledge claims to claims about probabilities that are arrived at by deductive logic. I think they could be, although most would doubt it. But here I’m just asking you to suppose it for the sake of argument, since it would entail the best possible state of knowledge: all conclusions are the products of deductive logic.
What has been shown is that even if that were the case, the definition of knowledge still has a serious flaw.
This was cleverly demonstrated by Edmund Gettier in 1963. His method is highly convoluted and involves esoteric aspects of deductive logic that have even at times been questioned, but you can learn all about this elsewhere (see Gettier Problems for a really good treatment). For I can demonstrate the same point he was making using a much more straightforward approach (below). The key to doing this is the brilliant analysis of Linda Zagzebski, in “The Inescapability of Gettier Problems,” The Philosophical Quarterly 44.174 (1994), pp. 65-73. Zagzebski demonstrates that Gettier problems actually just illustrate (and thus reduce to) a broader and more basic insight: that it is possible to have “justified true belief” entirely by chance coincidence. And that seems to grate on philosophers (and indeed most people), since it seems strange to say you know something, when really it’s only by sheer luck that what you claim to know happens to be correct.
Now, some philosophers don’t have a problem with this. They are happy to allow that accidental knowledge is knowledge. It’s true, after all. So what’s the big deal? Most philosophers, however, are intuitively disturbed by the idea that accidental knowledge can be credited as knowledge. Although when knowledge is stated as a knowledge of a probability this is less disturbing. If I say I know there is a 99.99% chance my car is in my garage, and it just happened to be the case that my car was not in my garage, I have not actually been contradicted. My belief was still true. Because my actual belief already entailed a 0.01% chance that my car is not in the garage, so its not being there is a possibility fully included in my belief. What has been contradicted is my expectation that my car is in my garage. So my prediction that it is there is falsified by its not being there, but my prediction that it only had a small chance of not being there is not falsified by its not being there.
By similar logic, if I use a belief-forming method that is highly reliable, let’s say it produces conclusions with 99.99% certainty, I can say I know that any x produced by that method has a 99.99% chance of being true. Yet if that method had a flaw–a flaw that rarely affects its conclusions (hence the method’s high reliability)–which resulted in my mistakenly believing my car was in my garage when in fact it was in my garage, I can still legitimately say I know there is a 99.99% probability that my car is in my garage. Because that is entailed by the method’s reliability, and that measure of reliability already includes a full accounting of any such flaws (like being accidentally correct). This is less disturbing.
But philosophers often confuse “I know x is probably true” with “I know x is true” (even layfolk often confuse those two statements), even though the latter, in all practical respects, is either saying the same thing as the former, or else can never actually be true–because it then would translate as “I know the probability of x is 100%,” which is knowledge no one ever has about anything (other than immediate uninterpreted experiences, but that’s not what we usually need to know). So when we get knowledge right, as a belief in epistemic probabilities, Gettier Problems already look a lot less problematic.
The Impending Death of Socrates the Lizard
I will illustrate this in a manner much more straightforward than Gettier’s (and here I am indebted to Zagzebski, although she does not use this approach herself). The stock example of the most basic deductive reasoning is this:
- P1. Socrates is a man.
- P2. All men are mortal.
- C1. Therefore, Socrates is mortal.
This is a valid argument. If the premises are true, then it is also sound. In traditional parlance, my belief that Socrates is mortal would be justified as long as I arrive at it by a valid means, and here all that requires is the two premises P1 and P2. It does not require those premises to be true. They could be false, and my conclusion would still be valid. It just wouldn’t be sound. I would then have a justified false belief that Socrates was mortal.
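The validity/soundness distinction just described can be sketched in a few lines of code. This is only a toy model with invented names, not a logic engine: validity is a property of the argument’s form, while soundness additionally requires every premise to be true in fact.

```python
# Toy model of the Socrates syllogism (hypothetical names, for illustration).
# conclusion() applies the argument's *form*: it tells us what follows from
# the premises as believed, whether or not those premises are actually true.

def conclusion(premises: dict) -> bool:
    """Apply the syllogism's form: P1 and P2 together yield C1."""
    return premises["socrates_is_a_man"] and premises["all_men_are_mortal"]

def is_sound(valid: bool, premises_actually_true: dict) -> bool:
    # Sound = valid form AND all premises true in fact.
    return valid and all(premises_actually_true.values())

# What I justifiably believe:
believed = {"socrates_is_a_man": True, "all_men_are_mortal": True}
# The actual facts, supposing P1 happens to be false:
facts = {"socrates_is_a_man": False, "all_men_are_mortal": True}

print(conclusion(believed))   # True: my reasoning from my premises is valid
print(is_sound(True, facts))  # False: the argument is nevertheless unsound
```

The point the toy makes concrete: nothing in the form-checking step ever consults the facts, which is exactly why a valid argument can deliver a justified belief from a false premise.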
Of course usually a little more than that is required. No philosopher would consider my belief justified if I knew (or even so much as strongly suspected) that either premise was false. So generally a belief is only considered justified if our belief in the premises is also justified. And so on all the way down the ladder of reasoning (which ladder ends in the basement of raw immediate uninterpreted experience: see Epistemological End Game). But it’s still possible to have a justified belief in the premises (to be really sure they are true, by a very reliable means) and those premises still be false.
For example, suppose I overhear a conversation about a certain Socrates. The nature of the conversation is such that the parties involved are “obviously” talking about a man named Socrates. I therefore have a very reliable belief that this Socrates is a man. I also have a very reliable belief that all men are, at least currently, mortal (given my grasp of such things as biology). And I can validly reach the conclusion that this Socrates is mortal from those two premises. So I have a justified belief that Socrates is mortal.
But suppose, lo and behold, those people were actually talking about a pet lizard named Socrates. It’s highly improbable that anyone would speak the way they did about a lizard, but alas that’s what happened. Unbeknownst to me. And I certainly can’t be expected to have known that. I can’t even have been expected to know it was in any way probable. To the contrary, I know (like the improbably undetected theft of my car) that it’s very improbable, which is why my mistaken belief that this Socrates is a man remained justified. But this means the deductive reasoning I am engaging in, P1 + P2 = C1, is valid but unsound. Because P1 happens to be false. But P1 being false does not make my belief in C1 unjustified, because justified beliefs can be false. My belief in P1 was justified, and C1 justifiably follows from P1 (and P2), so my belief in C1 is justified as well.
But guess what? All lizards also happen to be mortal. So in fact C1 is true. I just reached it by assuming a false premise. The fact that my belief in C1 is true is simply an accident. I got lucky–the mistake I made (believing P1 is true when it’s not) didn’t change the conclusion I reached. This is the same situation Gettier Problems aim to demonstrate, although by a far more esoteric and circuitous route. We can have a justified true belief…that we derived from false beliefs!
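The luck involved here can be made vivid with a quick simulation. The numbers below are invented for illustration: on rare occasions the overheard “Socrates” is a lizard, so my premise P1 is false, yet my conclusion C1 (“Socrates is mortal”) comes out true anyway, since lizards are mortal too.

```python
import random

random.seed(42)  # fixed seed for a reproducible run

P_LIZARD = 0.0001   # assumed: chance the overheard "Socrates" is a pet lizard
TRIALS = 1_000_000

accidentally_true = 0  # conclusion true despite a false premise (Gettier cases)
for _ in range(TRIALS):
    socrates_is_a_man = random.random() >= P_LIZARD
    # My premise P1 ("Socrates is a man") is false in the rare lizard cases,
    # but the conclusion C1 ("Socrates is mortal") is true either way,
    # since men and lizards are both mortal.
    conclusion_is_true = True
    if conclusion_is_true and not socrates_is_a_man:
        accidentally_true += 1

# The belief is always true, yet a tiny fraction of the time it is true
# only by luck: justified true belief arrived at through a false premise.
print(f"Gettier-style cases: {accidentally_true} in {TRIALS:,}")
```

Note the structure of the simulation: the conclusion’s truth never depends on the premise’s truth, which is precisely what makes the rare hits “accidental” in Zagzebski’s sense.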
So is my belief that this Socrates is mortal knowledge? In other words, can I honestly say I know that Socrates the lizard is mortal? Even though in fact what I believed was that a certain Socrates the man was mortal? I had no belief involving a lizard. Yet my belief that this Socrates is mortal is true. And also, as it happens, justified. One might object that if I have a belief about Socrates the man, I do not have a belief about Socrates the lizard, except via a masked man fallacy. But by the same token, in Gettier Problems, the rule of disjunction introduction is used, where “if P, then P or Q.” The inference is valid regardless of whether P is true, but if P is false, any deduction relying on it is unsound. And being mistaken that P is true is just the product of another fallacy somewhere down the line (it can even be a masked man fallacy, though it does not have to be).
Philosophers are troubled by the idea that I can “know” Socrates is mortal when in fact I sort of kind of really don’t, since although my belief that Socrates is mortal is both true and justified, I really only came by it by accident. It’s just a lucky coincidence that lizards are also mortal. In Gettier Problems, the lucky coincidence each example hinges on is even more improbable than that.
This is actually easily solved. Of course, semantically, it solves itself. If knowledge simply is “justified true belief,” then the beliefs generated in Gettier cases (like my belief that Socrates the lizard is mortal) are simply knowledge. Full stop. The objection to that cannot be semantic, because justified true belief is justified true belief, whether arrived at accidentally or not. The objection therefore is really at root pragmatic: philosophers don’t want accidental knowledge to be knowledge, so the fact that their definition of knowledge allows that is something that annoys them. More charitably, we can say it creates a problem for us when we want to distinguish knowledge reached non-accidentally from knowledge reached accidentally.
But that problem is solved by changing the definition of knowledge accordingly. Zagzebski overlooked the obvious way to do that: just say knowledge is “justified true belief not arrived at accidentally.” This simply brackets away all Gettier cases. Since they reach “justified true belief” only by accident, if we simply declare that any belief reached by accident is not knowledge, then all Gettier cases are eliminated. As is my knowledge that Socrates the lizard is mortal: I do not really know that, because its being true is simply a coincidence, and I am not aware of that. In other words, I do not know “it’s true only by coincidence,” therefore I do not really know it’s true. (Of course, I will go on believing it’s knowledge, but that’s true of all justified false beliefs.)
Zagzebski demonstrates that what Gettier Problems really show is that “since justification does not guarantee truth, it is possible for there to be a break in the connection between justification and truth, but for that connection to be regained by chance,” so any theory of knowledge that accounts for chance error (both the false positive and the false negative) will solve the Gettier problem. Her recommended solution is thus to do that (pp. 72-73), although she doesn’t appear to realize that her suggested solution can be reduced to a very simple redefinition of knowledge as justified true belief not arrived at accidentally. Her conclusion is that any “truth + x” redefinition of knowledge is vulnerable to Gettier Problems, but that’s not the case when the “random chance” element is made a disqualifier. You might say that’s a “truth – x” redefinition of knowledge, but if we define x as “not generated by random chance” then it’s actually a “truth + x” redefinition of knowledge. Zagzebski’s conundrum is resolved. Her conclusion was incorrect after all. Though in a way she came close to discerning this herself.
It’s worth noting that disagreement on this is perfectly legitimate because all we are really talking about is how we want to define the word “knowledge” (and cognate terms like “to know”), which is actually not a question of objective fact but simply a cultural or practical choice about what symbols to assign to what concepts and when. We can define “knowledge” any way we want. So philosophers who disagree about how to define it are really just doing nothing more than making pragmatic proposals about which definition is most useful in practice. So we have Jonathan Weinberg, Shaun Nichols, and Stephen Stich in “Normativity and Epistemic Intuitions,” Philosophical Topics 29 (2001), pp. 429-60, and Stephen Hetherington in “Actually Knowing,” Philosophical Quarterly 48 (1998), pp. 453-69, arguing that Gettier knowledge, indeed any lucky knowledge, simply is knowledge, so get over it already. And they’re right. So long as you don’t mind the consequences of that definition, there can be no objection to it.
Philosophers who oppose that outcome are merely saying that they find that definition confusing, because its consequences muddy up what they usually want to use the word “knowledge” for, and so they’d rather, for convenience, restrict “knowledge” to beliefs that aren’t arrived at by mere luck. And they are welcome to do that. Because again, we can define words any way we want to. As long as our audience knows how we have defined them. Which can present a problem when the most popular linguistic convention is not the definition you are using. If you want to speak or write without having to teach your audience a new language, you have to rely on the lexicons already in their brains, which is what dictionaries attempt to document empirically. I discuss these issues in more requisite detail in Sense and Goodness without God (II.2, pp. 27-48, esp. 2.2.1, pp. 35-37). For the present point, there is no English convention on whether “knowledge” vocabulary is inclusive of lucky knowledge, so the philosophical debate over which to prefer can’t be resolved by appealing to how the words are used in practice. Most people never even think about it. And those who do are divided on the matter.
So you can choose to be content with either. And that’s fine. You can agree that my accidental knowledge about the mortality of Socrates the lizard is knowledge (by one definition, that being the most popular and traditional definition in the formal systematic study of knowledge), or you can say that it is not knowledge because when you use the word “knowledge” you mean to exclude true beliefs formed by accident (my reformed definition of knowledge). Either is correct. You just have to explain which it is. When it matters. Which is rarely.