The Moral Landscape
Sam Harris initiated the modern intellectual movement that many refer to as “The New Atheism” with his book, The End of Faith. In his new book, The Moral Landscape: How Science Can Determine Human Values, Dr. Harris hopes to spark a new, more important movement. Too many scientists and secular liberals, he believes, have willingly allowed religion to monopolize the discourse of morality. Science and reason, he argues, are the only tools we have for analyzing how we ought to behave. A science of morality strikes many people as impossible – how, they might ask, can a subject so burdened by cultural diversity and incompatibility be standardized and studied objectively? Harris believes that we can throw out much of what many people mean when they talk about morality.
We should observe the double standard in place regarding the significance of consensus: those who do not share our scientific goals have no influence on scientific discourse whatsoever; but, for some reason, people who do not share our moral goals render us incapable of even speaking about moral truth. It is, perhaps, worth remembering that there are trained “scientists” who are Biblical Creationists, and their “scientific” thinking is purposed toward interpreting the data of science to fit the Book of Genesis. Such people claim to be doing “science,” of course, but real scientists are free, and indeed obligated, to point out that they are misusing the term. Similarly, there are people who claim to be highly concerned about “morality” and “human values,” but when we see that their beliefs cause tremendous misery, nothing need prevent us from saying that they are misusing the term “morality” or that their values are distorted. How have we convinced ourselves that, on the most important questions in human life, all views must count equally?
So what does Harris mean by “values” and “morality?” He observes that despite all this assumed disagreement, almost everyone is really concerned about human well-being (his argument applies to all conscious creatures). What else could anyone even possibly care about that doesn’t affect well-being? Even the most devoutly religious care about it – sure, it comes after death, but they worry about the “well-being” of our eternal souls. If they’re right about the supernatural nature of reality, Harris concedes, then they are also right that the most moral thing we could do is bow to God and do everything we can to get into heaven and avoid hell, whatever the temporal cost – eternity is a lot longer, after all.
Luckily, there is no evidence for that religious worldview, so for purposes of our discussion and this proposed discipline, we’ll concern ourselves with this world and our terrestrial lives. If you imagine the worst possible misery for all people all the time, that’s clearly “bad,” while the opposite is clearly “good.” If you don’t grant Harris that, there is probably nothing that can convince you, but I can’t think of any way someone could deny that the worst possible misery for all people all of the time is bad – it is bad by every measure, by definition. Since humans’ well-being corresponds at a fundamental level to their brain states and the reality around them, we should in principle be able to scientifically study the ways that lead to better and worse well-being. Yes, “well-being” is loosely defined, but so is “health,” and that doesn’t prevent scientists from discovering objective truths about whether a medical procedure or personal action is beneficial or harmful to a person’s health.
I wonder if there is anyone on earth who would be tempted to attack the philosophical underpinnings of medicine with questions like: “What about all the people who don’t share your goal of avoiding disease and early death? Who is to say that living a long life free of pain and debilitating illness is ‘healthy’? What makes you think that you could convince a person suffering from fatal gangrene that he is not as healthy as you are?” And yet these are precisely the kinds of objections I face when I speak about morality in terms of human and animal well-being. Is it possible to voice such doubts in human speech? Yes. But that doesn’t mean we should take them seriously.
Above you see the incisive humor and insight characteristic of Sam’s writing.
Critics don’t appreciate what they see as arrogance and condemn his argument for it (the non sequitur doesn’t seem to bother them). Here’s John Horgan, blogging in Scientific American.
Harris asserts in Moral Landscape that ignorance and humility are inversely proportional to each other; whereas religious know-nothings are often arrogant, scientists tend to be humble, because they know enough to know their limitations. “Arrogance is about as common at a scientific conference as nudity,” Harris states. Yet he is anything but humble in his opus. He castigates not only religious believers but even nonbelieving scientists and philosophers who don’t share his hostility toward religion.
Harris further shows his arrogance when he claims that neuroscience, his own field, is best positioned to help us achieve a universal morality. “The more we understand ourselves at the level of the brain, the more we will see that there are right and wrong answers to questions of human values.” Neuroscience can’t even tell me how I can know the big, black, hairy thing on my couch is my dog Merlin. And we’re going to trust neuroscience to tell us how we should resolve debates over the morality of abortion, euthanasia and armed intervention in other nations’ affairs?
We may have read different books, but I was actually disappointed by how many questions of morality Harris didn’t attempt to resolve. He even highlights many of the toughest questions we face in order to show the types of problems science needs to tackle. Harris goes to great lengths to explain that these questions are extremely difficult and complex and may never be answerable to us. But that does not mean they don’t have answers in principle. At his recent talk at Tufts University, which I attended, he asked (as he has before), “how many birds are in flight right now in the world?” The question is trivially easy to understand and has an equally trivial numerical answer, but science may never be able to answer it in practice. Many questions of morality could be the same, especially when trying to settle disputes about how to balance one individual’s well-being with the well-being of everyone else. That call for humility need not, he reminds us, render us silent on all questions. It doesn’t take a moral genius to notice that a society that degrades women and engages in violent feuds doesn’t maximize well-being and therefore doesn’t represent a peak on his “moral landscape.”
If well-being corresponds to our underlying biology, what are we to make of our biological differences? First of all, the differences are often greatly exaggerated. None of us is better off poor, starving, and running from machete-wielding killers, as life is for many people in failed states throughout our world, no matter what our biological differences. In principle, it is perfectly possible that different ways of organizing society could lead to different moral peaks that might be better for different people. I couldn’t help but recollect Malcolm Gladwell’s TED talk on spaghetti sauce, where he observes that there is no perfect sauce, only perfect sauces. Why? Different people have different tastes.
Yet, as Sam notes, there is a big difference between food and poison. If I put arsenic in spaghetti sauce, no one would be tempted to argue that it was an equally valid spice choice that my culture sensibly considers healthy and delicious. We still have boundaries on what does and does not constitute food, and even on what constitutes spaghetti sauce itself.
Back to Horgan’s quip about neuroscience’s shortcomings. Again, Harris isn’t arguing that we currently have all the answers; does Horgan think that science in principle can’t tell him “how [he] can know the big, black, hairy thing on [his] couch is [his] dog Merlin?” The rest of Horgan’s piece complains that science can’t replace religion as the arbiter of moral truth. What other way do we have to arbitrate truth? If we can agree that there is anything provisionally called truth (postmodernists be damned), then science and reason are the only tools we have. If I make claim X, that an “armed intervention” leads to greater suffering, and claim Y, that a diplomatic resolution leads to greater human flourishing, those claims can be investigated and falsified using the scientific method. Are such scenarios extremely difficult to judge given all the possible variables? Of course, but does anyone doubt that those claims have answers that correspond to measurable effects on individuals and societies (made up of individuals)? It is odd to condemn a discipline that is in its infancy for not having all the answers. Would he have denounced medicine or any other domain of science in the same way had he been around at its inception?
When I first learned of Sam’s argument I questioned the role liberty had in his calculation. I wrote,
[It] seems a strong case can be made that liberty is a moral value that doesn’t rely on well-being as its foundation. Sure, support can be garnered to strengthen the moral case for liberty, but humans, for example, could theoretically be worse off because of liberty and a strong case can still be made for its moral value. Kant, of course, made a strong moral case that humans are ends, not means. Therefore, conscious beings as autonomous agents might make suboptimal decisions, but restricting their free choice through a benevolent paternalism might be less moral even if it leads to greater well-being.
Sam responded to such criticisms before his book was published, writing,
And those philosophical efforts that seek to put morality in terms of duty, fairness, justice, or some other principle that is not explicitly tied to the wellbeing of conscious creatures—are, nevertheless, parasitic on some notion of wellbeing in the end.
I had my doubts, but this argument in his book convinced me.
Some people worry that a commitment to maximizing a society’s welfare could lead us to sacrifice the rights and liberties of the few wherever these losses would be offset by the greater gains of the many. Why not have a society in which a few slaves are continually worked to death for the pleasure of the rest? The worry is that a focus on collective welfare does not seem to respect people as ends in themselves. And whose welfare should we care about? The pleasure that a racist takes in abusing some minority group, for instance, seems on all fours with the pleasure a saint takes in risking his life to help a stranger. If there are more racists than saints, it seems the racists will win, and we will be obliged to build a society that maximizes the pleasure of unjust men.
But such concerns clearly rest on an incomplete picture of human well-being. To the degree that treating people as ends in themselves is a good way to safeguard human well-being, it is precisely what we should do. Fairness is not merely an abstract principle – it is a felt experience. We all know this from the inside, of course, but neuroimaging has also shown that fairness drives reward-related activity in the brain, while accepting unfair proposals requires the regulation of negative emotion. Taking others’ interests into account, making impartial decisions (and knowing that others will make them), rendering help to the needy – these are experiences that contribute to our psychological and social well-being. It seems perfectly reasonable, within a consequentialist framework, for each of us to submit to a system of justice in which our immediate, selfish interests will often be superseded by considerations of fairness. It is only reasonable, however, on the assumption that everyone will tend to be better off under such a system. As, it seems, they will.
He goes on about fairness and other values that he argues can be reduced to concerns about well-being. The following excerpt put me over the edge to his side on the question of liberty. He’s writing about Rawls and fairness, but it applies equally to liberty, so I’ve edited in liberty/freedom for the reader’s ease.
How would we feel if, after structuring our ideal society from behind a veil of ignorance, we were told by an omniscient being that we had made a few choices that, though [maximizing individual freedom], would lead to the unnecessary misery of millions, while parameters that were ever-so-slightly less [free] would entail no such suffering? Could we be indifferent to this information? The moment we conceive of justice [or liberty] as being fully separable from human well-being, we are faced with the prospect of there being morally “right” actions and social systems that are, on balance, detrimental to the welfare of everyone affected by them. To simply bite the bullet on this point, as Rawls seemed to do, saying “there is no reason to think that just institutions will maximize the good” seems a mere embrace of moral and philosophical defeat.
It may be useful to start from a default position such as liberty, but that doesn’t mean that liberty as a value is good in and of itself regardless of the consequences.
As I explained Sam’s argument to a friend who is a self-described “moral skeptic,” I wrote, “imagine if a culture ritually murdered an innocent child at random; it seems very unlikely that would lead to greater well-being.” He responded by asking, “what if in that scenario, a future Hitler was killed?” In other words, he’s wondering whether, if what we see as an immoral action leads to greater well-being, Harris’s argument means it is actually moral. Let’s unpack that scenario a bit. If a society somehow knew, in advance, that killing a currently innocent child would prevent him from one day leading a war that resulted in the deaths of tens of millions of humans, along with the suffering of millions more, then, yes, it might actually be moral to kill that child. If someone told you that if we didn’t sanction the killing of this one child, many millions would suffer and die, would you really not agree to take that action?
Yet, notice that this wouldn’t necessarily be the most moral action. First of all, there is almost no way we could actually predict such a thing, but even if we could, why would we need to kill all those other children, which leads to unnecessary suffering? Also, if we knew in advance that a child was predisposed to such evil, we might be able to find better ways to mitigate that potential, such as counseling or even incarceration at an older age. It seems very unlikely (maybe impossible) that the best way to maximize overall well-being would be to ritually sacrifice random children in the hope that one would be a Hitler (as opposed to another Einstein). Regardless of all this, it is maybe more important to recognize that this question has an answer (killing kids will lead to more or less suffering) whether we can realistically know the answer or not.
Another critic, Kwame Anthony Appiah, seems to have bought a copy of the book with whole sections cut out, or seems to want Harris to personally answer every one of his pet philosophers – note to Appiah: he’s arguing that the consequences of moral actions matter, not that he personally knows how to resolve every moral paradox.
Such puzzles merely suggest that certain moral questions could be difficult or impossible to answer in practice; they do not suggest that morality depends upon something other than the consequences of our actions and intentions. This is a frequent source of confusion: consequentialism is less a method of answering moral questions than it is a claim about the status of moral truth. Our assessment of consequences in the moral domain must proceed as it does in all others: under the shadow of uncertainty, guided by theory, data, and honest conversation. The fact that it may often be difficult, or even impossible, to know what the consequences of our thoughts and actions will be does not mean that there is some other basis for human values that is worth worrying about.
Harris, of course, isn’t the only person ever to argue for a morality grounded in natural facts about our actual experience. Many critics complain that Harris doesn’t deal directly with much of contemporary academic philosophy. He received similar objections for not dealing with contemporary theology in his attacks on faith. Harris avoids much of this terrain because, he argues, it would bore readers and isn’t necessary for a popular case for science studying morals. Besides his being right about the attention span of the average reader, I also think that focusing on the minutiae of academic philosophy is irrelevant to the larger case he’s trying to make. If there is an argument that suggests we should value something other than well-being and the consequences of our actions, we probably won’t find it by looking through an endless catalogue of moral paradoxes. If someone has an argument for values that reduce to something other than consequences, they’re welcome to put it forward.
For those more inclined to delve into more technical philosophy, it’s clear that Harris relies on the work of philosopher William Casebeer (among others). So if Moore’s Open-Question Argument or the analytic/synthetic distinction are sticking points for you, I encourage you to pick up Casebeer’s helpful book, Natural Ethical Facts. Yet, I’m forever frustrated that critics (many of whom I previously catalogued here) too often fail to come out and say exactly what their knock-down arguments against Harris’s premises are. Generously, Harris provides ways his premises and thesis could be falsified. For example,
A neural correlate of human well-being might exist, but it could be invoked to the same degree by antithetical states of the world. In this case, there could be no connection between a person’s inner life and his or her outer circumstances.
It is also conceivable that a science of human flourishing could be possible, and yet people could be made equally happy by very different “moral” impulses. Perhaps there is no connection between being good and feeling good – and, therefore, no connection between moral behavior (as generally conceived) and subjective well-being. […] However, if evil turned out to be as reliable a path to happiness as goodness is, my argument about the moral landscape would still stand, as would the likely utility of neuroscience for investigating it. It would no longer be an especially “moral” landscape; rather it would be a continuum of well-being, upon which saints and sinners would occupy equivalent peaks.
Worries of this kind seem to ignore some very obvious facts about human beings: we have all evolved from common ancestors and are, therefore, far more similar than we are different; brains and primary human emotions clearly transcend culture, and they are unquestionably influenced by states of the world (as anyone who has ever stubbed his toe can attest). No one, to my knowledge, believes that there is so much variance in the requisites of human well-being as to make the above concerns seem plausible.
Despite the overall strength and clarity of his wider argument, Harris spends too much time attacking religious scientist Francis Collins. Most of that criticism was published before, and although it is somewhat relevant (I also happen to agree with it), the extended combativeness felt a bit ungracious for a book on morality.
More of that space could have been better spent talking more explicitly about the role of intention in moral action. How much does intention matter? Is it immoral if someone had the reasonable expectation that their actions would lead to greater happiness but they were wrong?
Also, how much do we have to universalize an action to judge its morality? If an action in a particular instance would lead to greater or lesser well-being, but the opposite resulted if everyone did it, how would that affect its goodness or badness? It seems likely we’d just judge each case on its particular merits, as no particular action has an inherent morality. But I wish this were spelled out a bit more clearly.
If, for example, we take the classic moral thought experiment in which a doctor kills one relatively healthy individual in order to distribute his good organs to five patients in need, what would the morality of that action be? In Sam’s Tufts talk he brought this scenario up himself and suggested it would clearly be immoral, because who would want to live in a world where you could be sacrificed for the greater good – that’s not a recipe for maximizing overall well-being. Yet, if the action could somehow be limited to that one case and it led to greater well-being, is it still immoral? Now, it’s obvious that there is no way to reasonably conceive of the situation as all good (leading to the most well-being). Killing someone in this way will always subtract from well-being in some way, but the separation between an action in particular and that action universalized is fuzzy.
Of course, it is not Harris’s responsibility to know how to resolve all these questions. He’s surely correct when he writes, “If we are not able to perfectly reconcile the tension between personal and collective well-being, there is still no reason to think that they are generally in conflict.”
Some Personal Interest
I’ve written a few blog posts on free will, and Sam’s experience with meditation and neuroscience allows him to provide some subtle insight on the topic. His whole discussion of the subject is worthwhile, but this bit of wisdom stuck out.
The problem is that no account of causality leaves room for free will. Thoughts, moods, and desires of every sort simply spring into view – and move us, or fail to move us, for reasons that are, from a subjective point of view, perfectly inscrutable. Why did I use the term “inscrutable” in the previous sentence? I must confess that I do not know. Was I free to do otherwise? What could such a claim possibly mean? Why, after all, didn’t the word “opaque” come to mind? Well, it just didn’t – and now that it vies for a place on the page, I find that I am still partial to my original choice. Am I free with respect to this preference? Am I free to feel that “opaque” is the better word, when I just do not feel that it is the better word? Am I free to change my mind? Of course not. It can only change me.
It means nothing to say that a person would have done otherwise had he chosen to do otherwise, because a person’s “choices” merely appear in his mental stream as though sprung from the void. In this sense, each of us is like a phenomenological glockenspiel played by an unseen hand. From the perspective of your conscious mind, you are no more responsible for the next thing you think (and therefore do) than you are for the fact that you were born into this world.
This aptly fits my definition of insightful and wise: something that, once said, seems obvious and captivating. I’ve argued for determinism before and, of course, I think every day, but I never fully appreciated the nature of each thought’s genesis until I pictured it in that way.
Harris has another interesting take on probability in morality. Before reading this I created my own moral thought experiment:
Say you could save 2 children who might fall off a bridge by jumping out and grabbing them, but another child is standing near the edge and, for whatever reason, would be at risk of being knocked off in your attempt to save the other two. If you knock the 1 child off and save the other 2, is that OK for a consequentialist? If 2 children were knocked off to save the 1, is that now morally wrong? Is it the same moral wrongness as deliberately killing 2 children to save 1? If not, doesn’t that show that intentions matter? Furthermore, does the level of risk affect the morality of the choice and, if so, why? If you can save 2 kids but you understood that you’d have only a 1% chance of knocking the single kid off, is that more morally acceptable than if you thought you had a 99% chance? Also, does the probability of saving the kid(s) affect the morality of the choice as well?
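The probability questions in that thought experiment can at least be framed as a simple expected-value comparison. Here is a minimal sketch of that framing – my own toy model, not anything from the book – which assumes (purely for illustration) that each child’s life counts equally and that grabbing the two falling children always succeeds:

```python
def expected_children_saved(p_knock_off: float,
                            n_saved: int = 2,
                            n_at_risk: int = 1) -> float:
    """Expected net number of children saved by attempting the rescue,
    relative to doing nothing (in which case the two children fall).

    Toy model: the rescue of n_saved children is certain, while the
    bystander is knocked off with probability p_knock_off.
    """
    return n_saved - p_knock_off * n_at_risk

# A 1% chance of knocking the bystander off versus a 99% chance:
low_risk = expected_children_saved(0.01)   # 2 - 0.01 = 1.99
high_risk = expected_children_saved(0.99)  # 2 - 0.99 = 1.01
```

Notice that both rescues come out well ahead in expectation under these assumptions, so a raw consequentialist tally alone doesn’t capture the intuitive difference between the 1% and 99% cases; whatever moral weight intentions and risk-taking carry has to enter the accounting some other way.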
Here is one example from Sam on the topic.
If I were asked, for instance, whether I would sanction the murder of an innocent person if it would guarantee a cure for cancer, I would find it very difficult to say “yes,” despite the obvious consequentialist argument in favor of such an action. If I were to impose a one in a billion risk of death on everyone for this purpose, however, I would not hesitate. The latter course would be expected to kill six or seven people, and yet it still strikes me as obviously ethical. In fact, such a diffusion of risk aptly describes how medical research is currently conducted. And we routinely impose far greater risks than this on friends and strangers whenever we get behind the wheel of our cars. If my next drive down the highway were guaranteed to deliver a cure for cancer, I would consider it the most ethically important act of my life. No doubt the role that probability is playing here could be experimentally calibrated.
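Harris’s figures here are straightforward expected-value arithmetic. A minimal sketch of the calculation, assuming (as seems implied by the “six or seven people”) a world population of roughly 6.7 billion around the time the book was written:

```python
def expected_deaths(population: int, risk_per_person: float) -> float:
    """Expected number of deaths when each person independently
    faces the same small risk of dying."""
    return population * risk_per_person

# A one-in-a-billion risk of death imposed on everyone on earth:
cost_of_cure = expected_deaths(6_700_000_000, 1e-9)  # about 6.7 expected deaths
```

Which is exactly his “six or seven people” – the intuitive gulf between that diffuse risk and deliberately killing one identified person is what he suggests could be experimentally calibrated.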
Read the Book
The Moral Landscape should be read by anyone who cares about the ways we can better plan our moral lives. I hope readers of this review or of the book will share their thoughts here. Also, if anyone thinks they have challenges to his ideas, I’m very curious to hear them. The book fully persuaded me after my initial skepticism, but I’d like to subject the thesis to further interrogation. What else could we value besides what affects our well-being? Something is of value only if it is of value to someone, as Sam argues, correct? Are there any other reliable ways to arbitrate the effects of our actions on our well-being besides science and, more generally, reason? Feel free to try to answer these questions or others that stick out to you.