Tags
Aristotle, Ayn Rand, Denis Dutton, Ethan Mills, G.E. Moore, game theory, Jesse (commenter), Neil Sinhababu
I’ll be the first to admit that last week’s post was insufficiently argued. But I think it may have been helpful as a springboard for further (potentially more carefully argued) reflection; I expect that next week’s post, as well as this one, will follow up on it. I argued last week that attempts to explain value judgements seem to run into trouble when they don’t ground those judgements in a deeper metaphysical reality. I looked at this problem there largely in terms of the early twentieth-century analytic tradition. But I didn’t address one of the most common non-metaphysical attempts to explain value judgements: the evolutionary explanation.
Several comments from Jesse took this approach. “Morality,” he claims, “has existed in some form or other since the first self-replicating proteins formed in the primordial ocean.” Citing game theory, he notes that organisms which helped each other out would have been far more likely to survive and thrive. Ethan Mills, while somewhat skeptical of the game-theoretic explanation, still cites James Rachels for another kind of evolutionary explanation: at the social rather than individual level, societies wouldn’t have lasted long without morality.
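To make the game-theoretic claim concrete, here is a minimal sketch (my own toy model, with the conventional textbook payoffs; nothing Jesse specified) of the standard story: in a repeated prisoner’s dilemma, organisms that reciprocate cooperation accumulate more payoff – a crude stand-in for evolutionary fitness – than unconditional defectors manage against each other:

```python
# A toy iterated prisoner's dilemma. "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,  # (my move, their move) -> my payoff
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(a, b, rounds=200):
    """Total payoffs for strategies a and b over repeated play."""
    score_a = score_b = 0
    moves_a, moves_b = [], []  # each strategy sees the other's past moves
    for _ in range(rounds):
        move_a, move_b = a(moves_b), b(moves_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        moves_a.append(move_a)
        moves_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): mutual aid pays
print(play(always_defect, always_defect))  # (200, 200): mutual defection doesn't
print(play(tit_for_tat, always_defect))    # (199, 204): exploitation gains little
```

Nothing philosophical hangs on the exact numbers; the point is only that the causal story Jesse tells is easy to model – which is precisely why I want to ask what such a model does and does not explain.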
Now I am not and was not speaking only of “morality” in the sense of aiding (or refusing to harm) others. (There was a reason the word “morality” didn’t appear in that post.) As I noted in my comment, I was also speaking of other kinds of value – including virtues like self-discipline and patient endurance that would be valuable whether or not anyone else is around, and for that matter of aesthetic value, the value in good art or the beauty of nature.
But that’s not the big issue here, for it’s not so hard to come up with evolutionary explanations for these other kinds of value either. Self-disciplined creatures would very likely have adapted better to their environments. There are plenty of people, perhaps most notably Denis Dutton, who have even tried to find evolutionary explanations for aesthetics.
I am not going to pass judgement here on whether evolution is a correct or adequate causal explanation for the origins of human value judgements. For the sake of argument, in this post, I am going to assume that such accounts get the causal origin of value judgements basically correct. Because far more important is a deeper criticism: they miss the point.
The error being made here is parallel to the one that tries to prove God’s existence merely as a First Cause of the universe, not as a First Explanation. In this bastardized version of the cosmological argument, the causal processes of the universe must have a starting point, identified with God – a rather useless God, one that doesn’t mean anything more than the Big Bang. But the intellectually respectable form of the cosmological argument isn’t just about causes, but about other kinds of explanation: not just where the world comes from, but what its essence is and what it’s for.
Return more directly to the present topic: to explain the causes of value judgements, to identify where they have their origin, is not actually to explain value. What this kind of explanation explains is the bare fact that people happen to make judgements of value. What it doesn’t and can’t explain is the truth or falsity of those judgements. To have an adequate account of ethics and values, we need to know not merely why people happen to think some things good and some things bad (or why they act accordingly). We need to know why things actually are good and bad. (Our mode of explanation needs to be ethics, not ethics studies.)
Few are seriously prepared to jettison this distinction, between what actually is good or bad and what people merely believe to be so. We want to say that Pol Pot was wrong when he thought it was a good thing to commit genocide on his own people. To consistently say such a thing requires that we believe value judgements can be correct or incorrect; they need to have a referent, to refer to the action having a real goodness or badness independent of whether the agent takes the action or believes in its goodness. (Some do try and advocate a thoroughgoing value relativism, of course; I have responded to some such arguments here, here and here.)
I think the early analytic philosophers, more than those who find evolutionary explanations sufficient, at least grappled with this problem – they just failed to solve it. They asked: what do we mean when we call something good or bad? Those who try to reduce judgements of good or bad to a simple descriptive property – good is what fosters the species, produces pleasure, etc. – run into trouble pretty quickly, for it’s pretty clear that a great many usages of “good” do not simply mean any of these things. One could try and argue that those who use “good” to mean anything other than species preservation or the production of pleasure are mistaken, but they’ll have a pretty hard time making the case. Neil Sinhababu made a valiant effort, but I have argued that he failed, and have offered some additional thoughts.
I have many problems with G.E. Moore’s concept of the naturalistic fallacy, and especially with the inadequate alternative he provided (as I discussed last week) – but I suspect that this important point is where he was coming from when he came up with it. Moore took the idea of the naturalistic fallacy much too far; I think one can legitimately make inferences from “is” to “ought” statements, but one should still be careful about doing so, especially when one puts a particular kind of descriptive claim at the heart of one’s ethics. The problem is nicely illustrated by Ayn Rand in this deeply problematic passage:
the fact that living entities exist and function necessitates the existence of values and of an ultimate value which for any given living entity is its own life. Thus the validation of value judgments is to be achieved by reference to the facts of reality. The fact that a living entity is, determines what it ought to do. So much for the issue of the relation between “is” and “ought.” (The Virtue of Selfishness, p.17)
What Rand doesn’t seem to have thought of is that such a view can have absolutely nothing to say to the person who chooses to kill herself – in suicide, in war, in civil disobedience. If your system of values comes out of the desire to live, it is irrelevant to anyone who takes that desire as unimportant. I think there’s a similar problem with the point Ethan makes in his comment that Buddhism is ultimately based on our desire to end suffering; not everybody treats that desire as decisively important, and I think Buddhists have a problem addressing those who don’t. (I intend to take up this point more next week.)
How to get around this problem? I note that Ethan lists Aristotle as having a “naturalist” theory of ethics comparable to Rand’s or Sinhababu’s. But Aristotle’s theory is a bit different from theirs, in that he sees value as an inextricable part of the natural world. The idea of God as First Explanation ultimately derives from his thought, because for him explanation needs to be teleological as well as causal: you need to explain things in terms of their purposes, what they’re for, as well as their (efficient) causes, what put them there. Aristotle’s views of nature are of course tied up with beliefs about causes that we cannot share in a scientific age. But it seems to me that we may still need something like them in order to make sense of value.
This is a really interesting post. Thanks.
As for Rachels, I think he might be on to something, but he’s the first to admit it isn’t completely fleshed out. But it’s a place to start.
As for Aristotle, you are absolutely right that value is inherent to reality given the idea of a final cause. In calling him a “naturalist” I just meant that he starts with how human beings are. This is the whole point of the function argument. Of course, Aristotle can go on to give an account of ultimate value in a way that we can’t, living as we do in the aftermath of the banishment of final causes in the scientific revolution of the 16th and 17th centuries. You might look at Philippa Foot’s book, Natural Goodness, in which she tries to do what you suggest: give an updated Aristotelian account of goodness. I’m not sure she succeeds, but I’m not entirely sure I understood her many subtleties.
As for evolutionary accounts of value in general, I agree that a causal explanation, while incredibly interesting in itself, isn’t really what philosophers are after. Also, evolution seems to have given us some pretty nasty tendencies toward things many of us think are bad (violence, tribalism, adultery, bad taste, etc.) while at the same time giving us the ability to re-evaluate our own tendencies or to “rebel against our genes” as Richard Dawkins puts it (which in itself is pretty amazing). Nonetheless, I think evolutionary accounts may be promising if we give them a chance. There may be some option we haven’t thought of yet: some sort of non-transcendent, non-relativist account of value. I haven’t studied enough of this literature to have much of an idea of what the proposals are, but someday I hope to do so.
We could also go the way that I think Hume and some of those who appeal to evolutionary accounts go, and say that there simply is no further answer to the ultimate question of value aside from how we happen to be. Hume is bashed for not being normative enough, but I’m inclined to take him to be skeptical of this whole project of ultimate value while getting on with ethics anyway.
But I’m not sure that’s going to work either, because it’s still perfectly reasonable to ask: but why is x a good thing? (where x = the survival/replication/evolutionary fitness/happiness/non-suffering/flourishing of human individuals/genes/groups). Being of a skeptical bent, I’m willing to entertain the thought that we may not be able to answer that question, although the fact that we can even ask it at all is amazing in itself!
Right, I think we’re mostly in agreement here. To the extent that evolutionary psychology provides an adequate causal account of human behaviour (and I am skeptical of that point as well), it is relevant to accounts of the human good; but there is a big piece of the picture that it does not and cannot reveal.
Also, Foot’s book does sound interesting. I’ll have to go back and have a look.
Oh, you quoted Rand, now my skin is crawling.
Rand is – to put it bluntly – an idiot. Her assumptions regarding individuality, virtue and selfishness are so horrifically simplified and so poorly grounded that if you follow them to their logical conclusions you would have no choice but to conclude that society should never have formed at all.
Her simplistic view of the human psyche assumes that there can be no greater good than the personal – and I mean solely personal – good. Her work fails to recognize the benefit of any social instinct or capacity for self-sacrifice whatsoever in humans (or other animals), and treats any signs of such as negative aberrations.
Even a trivial examination of her work against the record of history exposes its fallacy without much wasted effort. No war could ever have been fought or won by the human beings she describes. No cities built, no countries formed. Nothing. The world she describes would be inhabited by solitary hunters.
In fact, you have to move quite far away from humanity on the evolutionary scale to even find *animals* that behave in the manner she espouses – solitary great cats being the best example. Any creature that lives in bands, packs, hives, colonies or tribes engages in more social behavior and personal sacrifice than Rand’s absurdly nonsensical philosophy would allow for.
Frankly, libertarians of all stripes start off on the worst possible foot philosophically when they espouse strong protection of private property ownership – that being *the single greatest sacrifice of freedom that humans have ever undertaken*.
Before personal property, you could touch and use *anything in the world* that another person was not holding. Pass that one law, and suddenly 99.999% of the world is closed to you – you may no longer touch or intrude upon anything that is not yours. You literally give up the entire world with that one law, and yet libertarians defend it vigorously.
Think about that for a moment and let it sink in – it’s a very important point, and one that very few people understand when they discuss freedom, the role of laws and so on.
As for evolution being our guide to society – well, no. It moves too slowly.
Human social and technological advancement changed everything, in every way. It changed the pace of the entire world irrevocably.
Up until a couple million years ago, change – real change – was incredibly slow, punctuated only rarely by epochal events such as asteroid impacts or volcanic upheavals. Evolution worked at its own pace as all life competed for its various places on the globe, and change was measured in eons.
No longer. When proto-humans started regularly using tools and basic language about two million years ago, all the rules changed and things began to speed up. Other species suddenly had difficulty keeping up with human development and many began to go extinct – there were some that simply couldn’t evolve fast enough to deal with us.
Then about 14,000 years ago, early societies and then writing appeared, and everything went absolutely haywire: the entire world began changing at thousands of times the normal rate. Suddenly *nothing* could keep up with us, and wholesale extinction began. This was still well before the age of Reason.
Then the scientific enlightenment began, and the entire world is in extreme crisis as a result. Rates of global change are now quite literally millions of times the ‘natural’ rate at which evolution normally operates. Nothing can keep up with it – not even us.
The effect upon our psychology is stark. Our inborn values and emotions are in severe crisis. Nothing we feel lines up with the reality around us anymore; everything changes millions of times too quickly for evolution to teach us how to think and behave. Even our vaunted ability to learn and adapt as individuals is strained to its utmost limits as our world careens out of control around us.
Now my main argument regarding your primary thesis – that things and concepts have inherent objective value – is that you’re most likely asking a meaningless question.
Value (almost by the very definition of the word) is generally taken to be subjective – it may be a value that a great many people share, but it remains subjective nonetheless. The term value itself implies that one is applying a quantitative judgement upon something – not that it has a price sticker applied to it by God at the dawn of creation that defines its value.
For example, there is no virtue that cannot be warped into a negative contradiction – ANY virtue, including such stalwart values as bravery, self-sacrifice, and love (actually, love is probably the worst of the lot – just ask Shakespeare…).
Bravery or confidence is of no value in the face of unbeatable odds – under such conditions it transforms into pointless bravado or suicidal foolishness. In the opposite condition (overwhelming strength), bravery may transform into cruelty or bullying.
Self-sacrifice is the ultimate hallmark of the suicide bomber or the kamikaze pilot. Clearly this virtue has absolutely opposing values depending on which group is viewing the action.
Wisdom or caution can become paralytic, should the practitioner indulge in ever greater degrees of self-reflection. They ultimately trap themselves in a mental hall of mirrors, recursively retreating further and further from observable reality – each layer of conjecture having less real meaning than the last.
I prefer to call this Vizzini’s Trap, and honestly I think intellectuals end up here a lot more often than they are comfortable admitting – especially when dealing with squishy subjective concepts like ‘virtue’. ;)
http://www.youtube.com/watch?v=E2y40U2LvKY
I guess I would need some sign of a truly incorruptible virtue, some unmitigated good in the world for which there is no negative, no possible drawback, no trade-off. I’m not really aware of any at all.
The idea that value is subjective is part of what I’ve tried to refute; I don’t think it’s a position that’s ultimately sustainable. If value is purely subjective, we cannot truthfully say that Pol Pot or Hitler or any other villain is actually wrong. They just happen to be different from us. We like working with computers and talking philosophy; they like killing millions of people. You say to-may-toe, I say to-mah-toe, let’s call the whole thing off. We might happen to decide to fight them, but our grounds for doing so are no better than their justifications for their actions. Is that a claim you’re prepared to defend?
Most of your comment deals with a quite different point than the subjectivity of virtue, namely that virtue is variable by circumstance and/or difficult to define. This is why the ancient Greeks said virtue is a mean between vices; foolhardiness or bravado isn’t real courage, but an excess with respect to courage. But this isn’t to say virtue doesn’t exist (or that it is subjective), only that it is difficult – both to identify and to practise. And that’s also a central feature of the Greek account; virtue is basically like a skill, something you can’t just identify intellectually before the fact but must develop with practice and habit.
Coincidentally, there is an interesting book review in this Sunday’s NY Times Book Review of a new book by Robert Bellah on religion and evolution.
So then, enough with evolution, social logic and all that crap.
You’re talking about First Explanation – which to my mind is basically another way of asking ‘Why Are We Here?’, which is about as heavyweight as it gets in philosophy – interestingly it is also one of the first truly philosophical questions most of us ask as children.
I like to ask that question a bit differently – because first off I couldn’t care less ‘why’ we are here. If the universe has seen fit to plonk me down without a user’s manual to answer that for me, I’ll happily decide the answer for myself – *I* will decide why I am here, thank you very much. No deity, philosopher, or priest gets to decide that for me.
But by changing the question slightly, it returns to relevance, at least in the intellectual sense, and that formulation is:
‘Why is Anything here?’
Not me, not humanity, or the world, or even this universe – the real question is why did anything ever exist, at all, in any form whatsoever?
That question by definition cannot be answered by standard causal arguments, and so I rather doubt it *can* be answered in a conclusively logical fashion. Existence itself is technically paradoxical – the one seemingly magical act in the otherwise logical play of reality.
Now the only thing I am familiar with that might retain some meaning outside of the flow of Causality is the abstract realm of Mathematics. One can quite easily describe numbers in an absolute void with or without Time, beginning with the concepts of 0 and 1. Existence and Non-Existence.
Nothing else shares this property – everything else loses its meaning in the void, bereft of its relative relationships to all other things. The more philosophical concepts do not even exist in the void, much less retain any meaning.
Thus, I suspect this basic mathematical duality may be the only truly Objective concept. ‘I Think, therefore I Am’ is the usual proof for this in terms of personal existence.
Some philosophies do seem to hint at this sort of Yin/Yang duality and I suspect that if we ever do find a solution it is likely to lie somewhere in there.
My own vague thoughts on the matter are that causative reality has always existed in one form or another – simply because of the inescapable objective reality of Duality – the rest of existence evolving from that one ultimate fact.
Alas, that proposition offers relatively little in the way of philosophical guidance regarding virtues or values or morality – save that the rules are probably quite a bit more arbitrary than we might prefer to live by.
My thoughts on this matter are NOT very philosophically developed, but that’s precisely why I’m going to throw them out there without further ruminating: I’d love to hear other peoples’ inputs.
Scientists are beginning to understand the neurobiological mechanisms that underlie moral behavior; I heard a talk by Pat Churchland about it a few years back. In essence, moral behavior and sentiment are based on bonding. “Good” things are those which promote positive relationships between you and another person, and on a larger scale lead towards harmonious and successful societies.
Is that sufficient (or at least a useful start) as a groundwork for ‘what is good’: that which enables us to get along? I think it still falls into the basic trap Amod lays out here: to say “this is how we evolved” is not synonymous with “this is truly best.” But I think (hope?) it’s still philosophically useful, to better understand the common intuitions that push us towards morality.
(Obviously, this doesn’t at all touch on some meanings of the word ‘good’, like good art. Personally I feel like those are linguistic confusions, that there’s no fundamental reason why all those concepts (e.g. moral good and functional good) should share the same word.)
On another (related) issue: I think Jesse has a point in deriding “objective value” as “a price sticker applied to it by God.” But I don’t think incorruptible virtues are of interest; a virtue isn’t a dial that you have to worry about setting too high – the virtue is the correct setting on the dial. A better question might be: What would an answer even look like? If there were some external objective good, how might we find, recognize, or measure it? I don’t see how, unless we’re all willing to buy into the same sticker-applying god.
It’s an interesting claim that the different uses of “good” are a linguistic confusion. I would tend to disagree with you strongly, but spelling out the reasons would probably require its own post… hm, perhaps something to start working on.
“Is that sufficient (or at least a useful start) as a groundwork for ‘what is good’: that which enables us to get along? I think it still falls into the basic trap Amod lays out here: to say “this is how we evolved” is not synonymous with “this is truly best.” But I think (hope?) it’s still philosophically useful, to better understand the common intuitions that push us towards morality.”
Ah, this is a fruitful looking avenue of inquiry.
I would suggest that most discussions of morality, virtue and so on must be framed first by goals to provide a context to judge things by.
If I don’t know what I want, then I certainly can’t rightly judge how I can best achieve it.
Here we receive surprisingly little instinctive guidance beyond the fundamental Rule of Continuity (survive & reproduce) – though that is a pretty strong one in and of itself.
Beyond that, we all seem to have surprisingly different innate likes and dislikes, goals and visions. Some of us form concrete visions of what we wish to achieve, while many never do – content to allow others to choose for them.
But until we make that choice for ourselves, there is little basis upon which to develop any but the most basic moral framework. We have little basis by which to judge choices until we have some goal.
Given that most people never do develop a strong sense of what they wish to achieve in life, it is little wonder that many must rely upon defined moralities laid out by their peers to function comfortably.
It isn’t surprising that we are left ill-directed in this manner. Our primary strength as a species is Adaptability, and any built-in instinctive goal would have greatly constrained our potential for exploration and problem solving.
An analogy with music: I was watching a documentary called The Music Instinct: (http://www.pbs.org/wnet/musicinstinct/), which reminded me of a book I once read (http://www.amazon.com/This-Your-Brain-Music-Obsession/dp/0525949690), and the discussion turned to music and human evolution. “What is music FOR?” is a perfectly legitimate evolutionary question (as long as you can cash it out without final causes, a big issue in philosophy of biology). Possible answers are that music is for group cohesion, looking more attractive to mates, etc. But do any of those really tell us what music is and why we should pursue it? Or what is “good” about music?
By analogy, would an evolutionary account of value tell us what value is and why we should or shouldn’t value certain things? Or is there something more to be said about value in addition to its evolutionary origin? These questions, it seems to me, are the real crux of the debate we’ve been having here.
I think evolution is fascinating and can offer real insight into philosophical topics (see Dennett’s Darwin’s Dangerous Idea for some good examples), but what I’ve never been entirely convinced of is that evolutionary explanations are complete explanations. That is, is there more to say after evolutionary biologists have done all their work? And could an evolutionary explanation ever become a philosophical justification? I tend to think evolution can be used as a premise in a philosophical argument, but not as the whole argument.
What about desire-based theories of value? i.e., where that which is valuable is that which tends to fulfill some set of desires held by intentional agents. They seem quite sufficient to me.
I’ve been heavily influenced by Alonzo Fyfe’s “desire utilitarianism”: http://atheistethicist.blogspot.com/
In his ethical theory, desires are the object of evaluation, and are judged “good” or “bad” (or neutral) according to all other relevant desires: i.e., does the presence of this desire in a population tend to fulfill or thwart other desires in that population? Each agent’s own desires give that agent reasons to use the tools of praise and reward, condemnation and punishment, to shape the desires of others in the surrounding population. It’s a form of both rule-utilitarianism and virtue ethics: the “rules” being evaluated are the behavioral dispositions of intentional agents.
He doesn’t put it quite this way, but while some desires might be “good” only relative to the desires of a particular species or culture, other desires are arguably necessarily “good” from the viewpoint of any intentional agent, based merely on the fact that the agent has desires at all, regardless of what they are. Regardless of what my desires are, I won’t be able to fulfill them if I cease to exist, hence it’s “good” for other agents to have an aversion to killing. Assuming I’m less than omnipotent and therefore rely on other agents for information, it’s unlikely I’ll be able to fulfill my desires if I have inaccurate information, hence it’s “good” for other agents to have an aversion to lying. Etc.
He has harsh words for “morality evolved!” arguments, which is why I thought of him here. He presents the problem as just another form of the Euthyphro Dilemma: are certain species-wide inherited dispositions good because we evolved to have them, or did we evolve them because they’re good by some independent standard? It seems to me that the answer is clearly the latter, and that, of course, it was far from a reliable process (i.e., we also evolved many dispositions that aren’t good, at least not in our modern environment). Of course, we did evolve the ability to have many of our dispositions influenced by each other, which is pretty essential to our ability to behave morally. And I’d argue that empathy, or more generally the ability to imagine others as basically the same as ourselves, is also key.
Zerotarian, I have a fair bit of sympathy for desire-based accounts, and have often held them before myself. The problem I keep bumping into with them (or other similarly internalist accounts) is that they slide too easily into relativism. It’s very difficult on this score to privilege one desire above another – to say, for example, that a heroin addict’s desire for a healthy and productive life is more important than his desire for the next fix. Deciding between these seems to require some sort of criterion that goes beyond desire per se.
The problem becomes more acute when it’s extended into an other-regarding desire utilitarianism, which I have less sympathy for. The big appeal of starting from desires is that it works with the motivations we already have; the question “Why should I follow my desires?” doesn’t seem to need to be asked. The problem is that when we start with our existing desires, these often do not include the desire to satisfy others’ desires. At least, not the desires of others in general (we probably desire to satisfy the desires of our children or our friends). From my perspective, your desire to hit my head with a large hammer is bad; but from your perspective, it is a desire like any other, perhaps one that would give you sadistic satisfaction. In an account that derives purely from desire, it’s difficult to see how my desire can adequately function as a critique of yours if you had no preexisting reason to desire that my desires be satisfied. Does that make sense?
Amod, I think that, at the very least, the problems with a desire-based view don’t crop up as early as you’re saying.
First off, there’s not necessarily a need for anyone to have a desire to fulfill others’ desires; there’s only a need to have desires that tend to fulfill others’ desires indirectly (or that at least tend to restrain one from thwarting others’ desires). The desire to fulfill others’ desires is but one such desire among many.
If internal desires are the only reasons for action that exist, then it’s true that you might not be able to convince a particular individual not to hit your head with a hammer, if that’s what he wants to do. There’s no desire of his that you can appeal to that would override his desire to hit you.
But many desires are malleable. What you can do — hopefully long before you find yourself in that situation — is convince the vast majority of other people that their existing desires would be better fulfilled in a society where an aversion to hitting people over the head (or letting others do so, for that matter) is strong and widespread. Their own desires (aversion to pain, plus any and all desires that would be thwarted if hospitalized or brain-damaged) give them reasons for action to use moral tools (praise, reward, condemnation, punishment) to change others’ desires in the direction of not hitting people.
Such change tends to be a long-term process and not possible for all individuals. In the meantime, the same desires also give us all reasons to establish institutions that reward and punish people so as to indirectly motivate them. i.e., the original guy may not have any direct internal reason to refrain from hitting me, but he probably does have internal reasons to avoid jail or to eat chocolate, so we might establish the threat of jail or the promise of chocolate as indirect motivations.
So in a desire-utilitarian view, I can’t necessarily convince others to perform or not perform certain acts; but it’s likely that I can convince others to help me change the prevalent desires and institutions in our shared society in a particular direction. Given the common physical/biological constraints that humans have, there are certain ways a society needs to be arranged to facilitate the fulfillment of desires in general, such that there are significantly more people who have their own diverse reasons to arrange society in that way than there are people who share my specific desires.
Even in what Fyfe calls the “1000 Sadists Problem,” where a large majority (Group A) wants to torture a small minority (Group B), there are three mitigating facts: (1) The motivation for anyone to maintain the distinction between the majority and the minority is almost certainly weak and arbitrary, so even the people in Group A are less likely to be tortured in the future if they change their society to have a general aversion to torture rather than only a specific aversion to torturing others in Group A. (2) In a society with fewer people who have a desire to torture, fewer desires will be thwarted overall; fewer torturers’ desires are thwarted simply because there are fewer of them, and fewer torturees’ desires are thwarted because fewer people get tortured. (3) The desire to torture is probably (if these agents are anything like humans) far more malleable than the aversion to pain, so it’s far easier to change society to have fewer people with a desire to torture than to change society to have fewer people with an aversion to pain.
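To put point (2) in crude arithmetic (my own illustration under stated assumptions, not Fyfe’s own formalism): suppose each frustrated sadist has one desire thwarted, while each torture victim has many. Then thwarted desires fall as the number of sadists falls, whether or not torture is permitted:

```python
# Toy tally of thwarted desires in the "1000 Sadists" scenario.
# Assumptions (mine, for illustration only): each suppressed sadist has
# exactly one desire thwarted; each torture victim has ten.

def thwarted(sadists, victims_per_sadist=1, desires_per_victim=10,
             torture_permitted=True):
    """Total desires thwarted, under the crude assumptions above."""
    if torture_permitted:
        # Sadists are satisfied, but victims' many desires are thwarted.
        return sadists * victims_per_sadist * desires_per_victim
    # Torture suppressed: only each sadist's single desire is thwarted.
    return sadists

for n in (1000, 100, 10):
    print(n, thwarted(n), thwarted(n, torture_permitted=False))
# 1000 10000 1000
# 100 1000 100
# 10 100 10
```

Either way you read the columns, a society that reduces the number of sadists thwarts fewer desires overall, which is all point (2) claims.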
To be continued, maybe…
Sorry for the delayed replies.
I agree that one can get quite far by acknowledging that desires to benefit other people typically do tend to benefit oneself as well; more generally (and this includes many self-regarding cases of the drug-addiction sort), virtue and happiness often tend to coincide. This is a point well worth stressing. I have often thought in the past that it was sufficient myself, and you’re right to push me on it. However, precisely because it is an empirically based claim, it is open to falsification in a worrisome manner. I think you see that problem in the self-regarding hypothetical cases you cite below; the same, I think, holds true in other-regarding cases. It may turn out that some people’s sadism (or even just greed) really is stronger and more basic than the desire to help or not harm other people. Are we then willing to bite the bullet and say that such people are not wrong to act on that sadism?
So in other words, your intuition that my desire to avoid pain should be privileged above your desire to hit me over the head is grounded in certain empirical assumptions that are very likely to be objectively true: (1) that aversion to pain is less malleable than sadism, and (2) that most people have desires (possibly diverse desires that are not the same as my own) to live in a society with fewer rather than more sadists.
The heroin addict problem is a more interesting case to me. With a weaker addiction such as nicotine, the balance of my internal motivations is easily in favor of quitting: direct reasons such as discomfort and pain down the road, and indirect reasons such as all the desires I have that will be harder to fulfill if I have medical problems or am dead.
But suppose heroin is strong enough (is it?), or we have a stronger hypothetical drug, such that the addict cycles between a state of having other desires that are thwarted by the addiction, and a state of having only the desire for another fix. In this case, yes, there’s a problem — it’s almost like we have two different agents, the “diverse desires” agent and the “I only want my fix” agent. “Whose” desires take priority?
Or suppose a woman is pregnant and has a hypothetical pill that will prevent her future child from having any desires whatsoever — no aversion to pain, no desire to take any action that might be thwarted by others. In what sense can we say, on a desire-utilitarian account, that letting the child be born normally is objectively “better” than giving it the pill that extinguishes all desires?
I have to admit Amod, I’m drawing a blank on the source of your confusion regarding the rather straightforward points Zerotarian offered (s/he’s a good bit more eloquent than I).
You state that you keep running into the ‘problem’ of relativism – and a problem it is indeed, for it seems that you have a pre-formulated conclusion stating that morality *cannot* be relative, even though a vast preponderance of the world’s evidence appears to suggest otherwise.
From my point of view, Zerotarian is correct to state that the fundamental basis of moral decision making transcends evolution, and that what we see as evolved behavior is mainly an encoding of more essential mathematical truths (re: my prior discussions of Game Theory).
Unfortunately, that still doesn’t make morality objective – those mathematical underpinnings still shift to reflect a vast array of possible states, within which various moral behaviors change, sometimes to positions that we normally find rather abhorrent.
I mean, lets face it – Cats are EVIL. Cute, Fuzzy, Evil.
Jesse, let me repeat my comment above:
“If value is purely subjective, we cannot truthfully say that Pol Pot or Hitler or any other villain is actually wrong. They just happen to be different from us. We like working with computers and talking philosophy; they like killing millions of people. You say to-may-toe, I say to-mah-toe, let’s call the whole thing off. We might happen to decide to fight them, but our grounds for doing so are no better than their justifications for their actions. Is that a claim you’re prepared to defend?”
This is a central objection to relativism of any kind, and one I haven’t seen you answering yet. So I will repeat: are you prepared to defend the claim that Pol Pot was not wrong but just different?
Also, I’m not sure what exactly you mean by “pre-formulated”… But if you mean to say that I conclude morality isn’t relative without any argument to establish it, I don’t think that’s a fair claim, especially when in this post I link to three previous posts where I did make arguments for the point.
(For reference:
https://loveofallwisdom.com/2010/06/a-relativist-gongfu-ethics/
https://loveofallwisdom.com/2010/02/what-does-postmodernism-perform/
https://loveofallwisdom.com/2010/01/why-worry-about-contradictions/
… though the latter doesn’t quite deal with moral relativism itself, but with a view I think is closely related.)
I haven’t thought about aesthetics nearly as much as morality. But I don’t see a reason to value art aside from its fulfillment of desires, either.
It seems to me that art may well only be good relative to a particular species, based on the quirks of that species’ perceptual abilities and the ways that their perceptions are wired to their emotions. It’s conceivable that if we ever met alien life and found aesthetic (near-)universals, it would be for one of the following two reasons:
(1) There are fairly universal physical/biological constraints on the way these things (perceptual abilities and their connections to emotion) can evolve alongside agency. e.g., most creatures evolve to see roughly the same frequencies of electromagnetic radiation because those are the most useful frequencies for navigation, and most creatures associate longer wavelengths with heat and combustion, which almost always play a key role in the evolution of intelligent lifeforms. (Similarly, maybe plants are green everywhere — are there chemical alternatives to chlorophyll?)
(2) There are certain things that fall out of being the kind of entity that has aesthetics at all, in some way analogous to the likely examples of moral universals I mentioned above.
Interesting post and comments, thanks.
I am fascinated by the issue (and I have to admit that I tend to favour the “ought does not derive from is” side). As for evolution applied to value, it seems to work only a posteriori. I can’t remember in which book I read that people find flowers “beautiful” because it is an advantage for their species to like them, since where there are flowers there will also be fruits. But suppose people usually found flowers “ugly”. One could consistently argue that this is an advantage for our species, since they are not edible!
Pingback: The Buddhist problem of value | Love of All Wisdom
Pingback: Value as proof of God | Love of All Wisdom