
Perhaps the trickiest thing about trying to be a philosopher today is the explosion of information in natural science: we are in the era of “rapid-discovery science,” as Randall Collins calls it in The Sociology of Philosophies. Aristotle could write not merely a Metaphysics but a Physics, and his wide range of general knowledge was enough to make him one of the experts on the subject. Even as recently as the 19th century, Schelling and Hegel could have a decent shot at writing “philosophies of nature,” in which they tried to think philosophically through the whole scope of the way the natural world works. But today, not even a professor of natural science can know all the science that’s out there, even in relatively general terms. To some extent, we need to rely on the authority of experts we trust to know their fields well – what Indian philosophers called śabda-pramāṇa, testimony: a source of knowledge beyond inference and personal experience. And even if we somehow could know all the science for a moment, we’d lose it almost instantly as the science changes. Ken Wilber, trained as a biochemist, tries to isolate science from mysticism and enlightenment in order to make sure that his conception of mysticism is protected when the science inevitably changes.

I have my doubts about Wilber’s approach. It seems to me hypothetically possible that carefully defined controlled experiments could prove that the Buddha’s path does not actually reduce our suffering. I prefer to affirm knowledge but deny certainty: we must learn enough to have confidence in our views, but accept that some of them will nevertheless be proven wrong. New experimental evidence is one way we can be proven wrong; so is existing evidence that we weren’t always aware of; so even are new arguments that don’t depend on evidence. Descartes thought that the one thing he could be certain of was that he existed, but Buddhists have raised powerful challenges even to that view. If Descartes could be wrong in his certainty on the self, how can we really be certain of anything else? (Some teachers like to tell their students: “25% of what I’m telling you is wrong. I just don’t know which 25%.”)

To deny certainty is not to deny knowledge; it’s just to deny certain knowledge. We need, it seems to me, to accept knowledge as in some respects provisional, but as no less knowledge for that. It’s self-contradictory to deny the existence of truth or knowledge, but there is no contradiction in denying certain knowledge. (It’s perfectly consistent to be uncertain that there’s no certainty.) The knowledge derived from controlled experiment is, in this respect, not different in kind from any other knowledge.

The irresponsible option is to simply avoid the science. I’ve previously lambasted Stephen Jay Gould’s concept of “non-overlapping magisteria” (NOMA), in which “religion” and “science” don’t overlap, so “religion” can proceed without thinking of science. (The NOMA view is sometimes called “Averroism,” but that’s an awful term for the view because its namesake Averroës, ibn Rushd, never actually held it.) It’s not merely the “religious” who are tempted by the NOMA option, either. John Doris, in his Lack of Character and the resulting debates over it, complained that philosophers too often see themselves as examining a priori questions of pure reason, waving scientific research away with “That’s an empirical question.” I don’t agree with Doris’s specific claim that current psychological research strongly undercuts virtue ethics, but I think he’s right on the more general point: experimental research matters for philosophy. Experimental studies of cognition matter for the theory of knowledge; experimental studies of happiness matter for practical philosophy. Experimentally derived knowledge does not and cannot exhaust philosophical reflection, as E.O. Wilson seems to think it does, but it does matter.

It matters not only for philosophical reflection, but also for political participation. And here the question of authority comes to the forefront. What’s put these issues fresh in my mind is George Monbiot’s discussion of climate change denial. As you likely know, the climate talks in Copenhagen are now operating in the shadow of a scandal to the effect that climate scientists fudged data to exaggerate the evidence for global warming. Their behaviour was surely wrong, and it calls the authority of these particular scientists into question. But Monbiot notes another, earlier leak in the opposite direction. It’s not news that well-funded consortia of energy companies have been trying to push public opinion against action on climate change, partially by denying that it exists or that it’s human-caused. But what the leak reveals is the consortium’s rhetorical strategy: “members of the public feel more confident expressing opinions on others’ motivations and tactics than they do expressing opinions of scientific issues.” Portray climate scientists as sleazy and dishonest and you will sow public doubt about the existence of climate change, no matter how solid the evidence for climate change remains.

Such an ad hominem approach, unfortunately, seems to be working all too well for the energy companies. But there’s another problem: unearthing the energy companies’ motivation is merely an ad hominem attack in the other direction. Both sides push rhetoric that departs from the actual evidence. Why? Because the political leaders who make most of the decisions about action on climate change know so little about the science involved, and they are elected by a citizenry that knows even less. The motivations of the participants in the debate are not irrelevant – they affect the degree to which those participants can be considered reliable authorities for knowledge – but they must be less important than the evidence the participants use to generate their authority.

So what’s the responsible thing to do about science for laypeople, in politics as in philosophy? We cannot but act on the knowledge we presently have, as uncertain as it may be. We need an epistemological humility; we need to allow for the possibility that we may be wrong. We also need to consider exactly what the fudging of data implies, and what it doesn’t imply. Ideally we would fully examine the evidence ourselves; to the extent that we can’t do that, we must still rely on authority. The particular scientists involved in this scandal have had their authority compromised; but there have been plenty of others who haven’t.