How can we, or should we, learn what is true and what is false? This is one of the most enduring and basic questions in philosophy – “basic” because it is fundamental to so many others, not because the answers are in any way easy or simple.

The question, or some form of it, came up a number of times in recent discussions of “common sense”: if common sense isn’t reliable, I was asked, what is? I’m going to try to avoid the word “reliable” as I think its different uses became confusing in the previous debate; I have little stake in its use as a term. But the basic question of determining truth from falsehood is a crucial one and worth asking.

That’s not to say, however, that it admits easy answers, for I don’t think we should expect easy answers on the most basic philosophical questions. If the answers were easy, it would be a stunning and bizarre fact that so many intelligent people have spent so long trying to answer them and explain them without coming to a resolution (as indeed has, so far, been the case in the recent debates, though these have lasted only weeks and not centuries). This is one reason why I don’t identify knowledge of the truth as deriving from a single source like “common sense” – though my posts and comments should make clear I have many more specific problems with that concept, especially as defined by Thill and other commenters on this blog.

How should we identify truth instead? The question of how we should discern truth is closely linked to the question of how, in practice, we do discern it. I like to say that we start where we are: we assess newly learned information by reasoning out its coherence with the information we have already accepted. The new information comes in through sense perception one way or another, though the perception might be of someone else’s testimony: I observe you telling me something.

So I think the Vedānta schools are probably right when they describe the means of knowledge (pramāṇas) as perception, inference and authority – that is, the testimony of sources we trust. But that’s not to say any of these sources are always right. Rather, they’re right often enough to be worthy of our belief unless there is some reason to mistrust them in a particular case: for example, I would normally believe my eyes telling me that there is a large yellow stick floating in front of me, but I can’t touch the stick and I have heard that this perception is a symptom of eye diseases, so I don’t.

When a particular belief is in question, though, it’s not enough to refute it merely by saying we learned its contrary through any of these means of knowledge; for they can be, and often are, wrong. Moreover, this is not a matter of one means taking precedence over another. Yes, my senses tell me that the sun revolves around the earth; but because I trust the authority of trained astronomers, I know that this is not the case. Or, alternatively: a scientist friend (in this case our esteemed commenter Ben) tells me there’s a new article in a refereed psychology journal telling us that caffeine doesn’t actually increase alertness; but I don’t accept this claim because it is so completely contrary to my felt and observed experience of caffeine’s effects on me. The conclusions must have been misreported, or there must be something wrong with the methodology, or the sample must have been unrepresentative, or the journal’s definition of “alertness” must be something very different from what I understand by it.

But how do I, or should I, make the decision in those cases where means of knowledge conflict with each other or with themselves? I don’t think a hard-and-fast rule can be provided. Providing an easy and definitive answer to the question “How can I tell true from false?” is like providing an easy and definitive answer to the question “How can I become a better fiddle player?” Discernment of true and false is a virtue, a skill learned with time and practice; there is a wealth of tips and advice one can offer about how to do it better, but one can’t provide a formula for it that will settle disputes in advance. (Or rather, one can; it’s just that one will be wrong.) In saying this, I’m expressing agreement with a contemporary school of analytic philosophy known as virtue epistemology.

Thill disputes such claims:

If you have easy answers to determine what is unreliable, indeed, if you can go to the absurd length of deeming common sense (on which you rely for your very survival) unreliable, you can surely specify what you consider reliable and what you depend on to function in the world…. your claim that it is not easy to ascertain what is reliable implies that it is not easy to ascertain what is unreliable. This is at odds with your easy dismissal of the appeal to common sense on the grounds that it is unreliable.

But I’ve made no such easy dismissal. The easy answer Thill asked for, as far as I can tell, is a statement of the form “that which is X is reliable and that which is not-X is not,” an exaltation of one single source of knowledge in the way that Thill exalts common sense; and that is exactly what I’ve refused to provide, here and elsewhere. My refutation of “common sense” as a reliable source of knowledge didn’t rely on a single-sentence knockdown; more importantly, it didn’t say simply “all X is true and all not-X is false,” but tried to show the complexity of the world and of knowledge. I have never said that the items of knowledge included in “common sense” are always wrong; indeed, I suspect most of them are right. The point was that we have no special reason to believe a claim merely because it is said to belong to “common sense” (in the sense of knowledge learned without training).

If my alternative view can be described in a sentence, it is probably this: we need to engage in the complex process of knowing as best we can. And if that sounds vague, that’s because it is, intentionally. You should be suspicious of anyone who claims to give you a single easy tip that sums up the whole of how to play the fiddle, do successful biology experiments, or pick up romantic partners. You should be similarly suspicious of anyone who claims to easily sum up how to tell truth from falsehood in the general case.

There is, of course, plenty to be learned in each of these practices; that’s one of the reasons they’re not easy. There are various tips and tricks that can aid in each: play emphasized notes with a down stroke of the bow; control as many variables as you can; groom your hair carefully; trust the conclusions of scientists with expertise in their fields. All of these tips are generally wise, but still admit exceptions: sometimes two emphasized notes of the same pitch come in a row; controlling an additional variable would cost so much that you’d have to hire fewer staff, who would then make careless mistakes; you’re courting someone who likes the dishevelled look; the scientist misspoke because she’s having a bad day. And in each field there is also advice offered that is well meaning but inappropriate, advice we should not take: play as fast as you can; fudge your data a bit and nobody will notice; pretend to be wealthier than you are; treat a claim as true because one can learn it without specialized training. The acceptance or refutation of any one of these tips may be a relatively simple matter by itself; but that doesn’t make the whole practice an easy one.

Is this a definitive account of how we can discern truth? No, it’s just a start. But that’s the point.