Tags
Aristotle, Edward O. Wilson, Friedrich Schelling, G.W.F. Hegel, George Monbiot, John Doris, Ken Wilber, natural environment, Randall Collins, René Descartes, Stephen Jay Gould
Perhaps the trickiest thing about trying to be a philosopher today is the explosion of information in natural science: we are in the era of “rapid-discovery science,” as Randall Collins calls it in The Sociology of Philosophies. Aristotle could write not merely a Metaphysics but a Physics, and his wide range of general knowledge was enough to make him one of the experts on the subject. Even as recently as the 19th century, Schelling and Hegel could have a decent shot at writing “philosophies of nature,” in which they tried to think philosophically through the whole scope of the way the natural world works. But today, not even a professor of natural science can know all the science that’s out there, even in relatively general terms. To some extent, we need to rely on the authority of experts we trust to know their fields well – what Indian philosophers called the śabdapramāṇa, the source of knowledge beyond inference and personal experience. And even if we somehow could know all the science for a moment, we’d lose it almost instantly as the science changes. Ken Wilber, trained as a biochemist, tries to isolate science from mysticism and enlightenment in order to make sure that his conception of mysticism is protected when the science inevitably changes.
I have my doubts about Wilber’s approach. It seems to me hypothetically possible that carefully defined controlled experiments could prove that the Buddha’s path does not actually reduce our suffering. I prefer to affirm knowledge but deny certainty: we must learn enough to have confidence in our views, but accept that some of them will nevertheless be proven wrong. New experimental evidence is one way we can be proven wrong; so is existing evidence that we weren’t always aware of; so even are new arguments that don’t depend on evidence. Descartes thought that the one thing he could be certain of was that he existed, but Buddhists have raised powerful challenges even to that view. If Descartes could be wrong in his certainty on the self, how can we really be certain of anything else? (Some teachers like to tell their students: “25% of what I’m telling you is wrong. I just don’t know which 25%.”)
To deny certainty is not to deny knowledge; it’s just to deny certain knowledge. We need, it seems to me, to accept knowledge as in some respects provisional, but as no less knowledge for that. It’s self-contradictory to deny the existence of truth or knowledge, but there is no contradiction in denying certain knowledge. (It’s perfectly consistent to be uncertain that there’s no certainty.) The knowledge derived from controlled experiment is, in this respect, not different in kind from any other knowledge.
The irresponsible option is to simply avoid the science. I’ve previously lambasted Stephen Jay Gould’s concept of “non-overlapping magisteria” (NOMA), in which “religion” and “science” don’t overlap, so “religion” can proceed without thinking of science. (The NOMA view is sometimes called “Averroism,” but that’s an awful term for the view because its namesake Averroës, ibn Rushd, never actually held it.) It’s not merely the “religious” who are tempted by the NOMA option, either. In his Lack of Character, and the debates that followed it, John Doris complained that philosophers too often see themselves as examining a priori questions of pure reason, waving scientific research away with “That’s an empirical question.” I don’t agree with Doris’s specific claim that current psychological research strongly undercuts virtue ethics, but I think he’s right on the more general point: experimental research matters for philosophy. Experimental studies of cognition matter for the theory of knowledge; experimental studies of happiness matter for practical philosophy. Experimentally derived knowledge does not and cannot exhaust philosophical reflection, as E.O. Wilson seems to think it does, but it does matter.
It matters not only for philosophical reflection, but also for political participation. And here the question of authority comes to the forefront. What’s put these issues fresh in my mind is George Monbiot’s discussion of climate change denial. As you likely know, the climate talks now underway in Copenhagen are operating in the shadow of a scandal alleging that climate scientists fudged data to exaggerate the evidence for global warming. Their behaviour was surely wrong, and it calls the authority of these particular scientists into question. But Monbiot notes another, earlier leak in the opposite direction. It’s not news that well-funded consortia of energy companies have been trying to push public opinion against action on climate change, partly by denying that it exists or that it’s human-caused. But what the leak reveals is the consortium’s rhetorical strategy: “members of the public feel more confident expressing opinions on others’ motivations and tactics than they do expressing opinions of scientific issues.” Portray climate scientists as sleazy and dishonest and you will sow public doubt about the existence of climate change, no matter how solid the evidence for it remains.
Such an ad hominem approach, unfortunately, seems to be working all too well for the energy companies. But there’s another problem: unearthing the energy companies’ motivation is merely an ad hominem attack in the other direction. Both sides push rhetoric that departs from the actual evidence. Why? Because the political leaders who make most of the decisions about action on climate change know so little about the science involved, and they will be elected by a citizenry that knows even less. The motivations of the participants in the debate are not irrelevant – they affect the degree to which those participants can be considered reliable authorities for knowledge – but they must be less important than the evidence the participants use to generate their authority.
So what’s the responsible thing to do about science for laypeople, in politics as in philosophy? We cannot but act on the knowledge we presently have, as uncertain as it may be. We need an epistemological humility; we need to allow for the possibility that we may be wrong. We also need to consider exactly what the fudging of data implies, and what it doesn’t imply. Ideally we would fully examine the evidence ourselves; to the extent that we can’t do that, we must still rely on authority. The particular scientists involved in this scandal have had their authority compromised; but there have been plenty of others who haven’t.
Scientists don’t make it especially easy for laypeople to follow the subject: internecine feuds and vicious disagreements are common, making it hard to know what’s actually going on. If a reporter picks 2 scientists to interview for an article, it’s impossible to tell which guy has a peripheral/fringe opinion, which person is producing better-respected work. Then again, as I can rant about at length, science reporting is also pretty terrible, frequently skewed by sensationalism.
Of course, none of that addresses the problem of expanding knowledge; there’s just too much out there to even be passingly familiar with all of it. Personally, I dream of some agency of career scientist-reviewers who publish reviews every few years that summarize and contrast the lines of competing and converging research in all sorts of fields. But I suspect in practice, these would end up getting as politicized and polarized as the decentralized sources of science information do now.
Very likely. The sociology of science is a fascinating field – one I also wish I knew more about…
Amod: After Kum?rila’s sarcasm against the Buddha’s being reliable as far as dharma, just because he has proven reliable in other fields, Bochenski explicitly says that to be reliable in one field does not say anything at all about one’s authority in others. This simple rule may be of help in distinguishing the epistemic authority of scientists’ reports. They are reliable as far as data, etc., but they are just simple citizens whenever they try to draw conclusions out of them. So, I am glad to read about the embryo’s development but I do not believe that an embryologist should decide whether abortion is legal or not. The same applies to climate change, I believe. (I am not speaking about fake data, which may be just a criminal act).
Ben: great idea (to be applied in all fields of knowledge, by the way). Apart from the one you mention, a further problem is that such reports are hard to write BUT are not valued as a major contribution to research (so-called “originality” being over-valued instead). I pointed that out in a previous post of mine (http://elisafreschi.blogspot.com/2009/07/in-praise-of-reading.html).
The tricky question here is what counts as a “field.” I’m not sure it’s quite so easy to separate data and conclusion. One scientist’s conclusion is another’s data; and if we don’t have time to evaluate the data ourselves, we will also have a hard time evaluating the conclusions. There’s a general point that you’re getting right here (e.g. climate scientists do not know enough about policy to be the ones to set it) but it’s tough to figure out how to phrase that point in the general case.
I agree that the academy overvalues writing and undervalues reading. That seems unlikely to change soon, alas.
They are reliable as far as data, etc., but they are just simple citizens whenever they try to draw conclusions out of them.
So, I am glad to read about the embryo’s development but I do not believe that an embryologist should decide whether abortion is legal or not.
These are two very distinct claims: I would posit that the first is dead wrong, though the second is more or less correct. Learning how to draw conclusions from data is exactly what science is: taking messy and incomplete information and fitting it into a sensible story. Note that the only disagreement here may basically be over different definitions of “conclusions”; but it’s a word with a clear meaning in the domain of science: interpretations and stories that turn data into meaningful (and testable) statements about the world.
However, that’s distinct from the question of drawing policy or other philosophical lessons from their conclusions. It may be that scientists are useful for bringing actual data into public debate, and interpreting that data, but policy and ethical questions involve plenty of other issues on which scientists are not necessarily experts.
You’re certainly right that reviews of that sort are not highly valued: that’s why I “proposed” that the reviewers do so as a career, so the reviews are the source of their professional advancement.