
The Moral Landscape

Sam Harris initiated the modern intellectual movement that many refer to as “The New Atheism” with his book, The End of Faith. In his new book, The Moral Landscape: How Science Can Determine Human Values, Dr. Harris hopes to enliven a new, more important movement. Too many scientists and secular liberals, he believes, have willingly allowed religion to monopolize the discourse of morality. Science and reason, he argues, are the only tools we have to analyze how we ought to behave. A science of morality strikes many people as impossible – how, they might ask, can a subject so burdened by cultural diversity and incompatibility be standardized and studied objectively? Harris believes that we can throw out what many people mean when they talk about morality.

We should observe the double standard in place regarding the significance of consensus: those who do not share our scientific goals have no influence on scientific discourse whatsoever; but, for some reason, people who do not share our moral goals render us incapable of even speaking about moral truth. It is, perhaps, worth remembering that there are trained “scientists” who are Biblical Creationists, and their “scientific” thinking is purposed toward interpreting the data of science to fit the Book of Genesis. Such people claim to be doing “science,” of course, but real scientists are free, and indeed obligated, to point out that they are misusing the term. Similarly, there are people who claim to be highly concerned about “morality” and “human values,” but when we see that their beliefs cause tremendous misery, nothing need prevent us from saying that they are misusing the term “morality” or that their values are distorted. How have we convinced ourselves that, on the most important questions in human life, all views must count equally?

So what does Harris mean by “values” and “morality?” He observes that, despite all this assumed disagreement, what almost everyone is really concerned about is human well-being (his argument applies to all conscious creatures). What else could anyone possibly care about that doesn’t affect well-being? Even the most devoutly religious care about it – sure, it comes after death, but they worry about the “well-being” of our eternal souls. If they’re right about the supernatural nature of reality, Harris concedes, then they are also right that the most moral thing we could do is bow to God and do everything we can to get into heaven and avoid hell, whatever the temporal cost – eternity is a lot longer, after all.

Luckily, there is no evidence for that religious worldview, so for the purposes of our discussion and this proposed discipline, we’ll concern ourselves with this world and our terrestrial lives. If you imagine the worst possible misery for all people all the time, that’s clearly “bad,” while the opposite is clearly “good.” If you don’t grant Harris that, there is probably nothing that can convince you – but I can’t think of any way someone could deny that the worst possible misery for all people, all of the time, is bad; it is bad by every measure, by definition. Since humans’ well-being corresponds at a fundamental level to their brain states and the reality around them, we should in principle be able to scientifically study which ways of living lead to better and worse well-being. Yes, “well-being” is loosely defined, but so is “health,” and that doesn’t prevent scientists from discovering objective truths about whether a medical procedure or personal action is beneficial or harmful to a person’s health.

I wonder if there is anyone on earth who would be tempted to attack the philosophical underpinnings of medicine with questions like: “What about all the people who don’t share your goal of avoiding disease and early death? Who is to say that living a long life free of pain and debilitating illness is ‘healthy’? What makes you think that you could convince a person suffering from fatal gangrene that he is not as healthy as you are?” And yet these are precisely the kinds of objections I face when I speak about morality in terms of human and animal well-being. Is it possible to voice such doubts in human speech? Yes. But that doesn’t mean we should take them seriously.

The passage above shows the incisive humor and insight characteristic of Sam’s writing.

Some Challenges

Critics don’t appreciate what they see as arrogance and condemn his argument for it (the non sequitur doesn’t bother them). Here’s John Horgan blogging in Scientific American.

Harris asserts in Moral Landscape that ignorance and humility are inversely proportional to each other; whereas religious know-nothings are often arrogant, scientists tend to be humble, because they know enough to know their limitations. “Arrogance is about as common at a scientific conference as nudity,” Harris states. Yet he is anything but humble in his opus. He castigates not only religious believers but even nonbelieving scientists and philosophers who don’t share his hostility toward religion.

Harris further shows his arrogance when he claims that neuroscience, his own field, is best positioned to help us achieve a universal morality. “The more we understand ourselves at the level of the brain, the more we will see that there are right and wrong answers to questions of human values.” Neuroscience can’t even tell me how I can know the big, black, hairy thing on my couch is my dog Merlin. And we’re going to trust neuroscience to tell us how we should resolve debates over the morality of abortion, euthanasia and armed intervention in other nations’ affairs?

We may have read different books, because I was actually disappointed by how many questions of morality Harris didn’t attempt to resolve. He even highlights many of the toughest questions we face in order to show the types of problems science needs to tackle. Harris goes to great lengths to explain that these questions are extremely difficult and complex and may never be answerable in practice. But that does not mean that they don’t have answers in principle. At his recent talk at Tufts University, which I attended, he asked (as he has before), “How many birds are in flight right now in the world?” The question is trivially easy to understand and has an equally trivial numerical answer, but science may never be able to answer it in practice. Many questions of morality could be the same, especially when trying to settle disputes about how to balance one individual’s well-being with the well-being of everyone else. His call for that type of humility need not, he reminds us, render us silent on all questions. It doesn’t take a moral genius to notice that a society that degrades women and engages in violent feuds doesn’t maximize well-being and therefore doesn’t represent a peak on his “moral landscape.”

If well-being corresponds to our underlying biology, what are we to make of our biological differences? First of all, the differences are often greatly exaggerated. None of us is better off poor, starving, and running from machete-wielding killers – as life is for many people in failed states throughout our world – no matter what our biological differences. In principle, it is perfectly possible that different ways of organizing society could lead to different moral peaks that might be better for different people. I couldn’t help but recollect Malcolm Gladwell’s TED talk on spaghetti sauce, where he observes that there is no perfect sauce, only perfect sauces. Why? Different people have different tastes.

Yet, as Sam notes, there is a big difference between food and poison. If I put arsenic in spaghetti sauce, no one would be tempted to argue that it was an equally valid spice choice that my culture sensibly considers healthy and delicious. We still have boundaries on what does and does not constitute food, and even on what constitutes spaghetti sauce itself.

Back to Horgan’s quip about neuroscience’s shortcomings. Again, Harris isn’t arguing that we currently have all the answers; so does Horgan think that science in principle can’t tell him “how [he] can know the big, black, hairy thing on [his] couch is [his] dog Merlin”? The rest of Horgan’s piece complains that science can’t replace religion as the arbiter of moral truth. What other way can we arbitrate truth? If we can agree that there is anything provisionally called truth (postmodernists be damned), then science and reason are the only tools we have. If I make claim X, that an “armed intervention” leads to greater suffering, and claim Y, that a diplomatic resolution leads to greater human flourishing, those claims can be investigated and falsified using the scientific method. Are such scenarios extremely difficult to judge given all the possible variables? Of course, but does anyone doubt that those claims have answers that correspond to measurable effects on individuals and societies (made up of individuals)? It is odd to condemn a discipline that is in its infancy for not having all the answers. Would Horgan have denounced medicine or any other domain of science in the same way had he been around at its inception?

When I first learned of Sam’s argument, I questioned the role liberty played in his calculation. I wrote,

[It] seems a strong case can be made that liberty is a moral value that doesn’t rely on well-being as its foundation. Sure, support can be garnered to strengthen the moral case for liberty, but humans, for example, could theoretically be worse off because of liberty and a strong case could still be made for its moral value. Kant, of course, made a strong moral case that humans are ends, not means. Therefore, conscious beings as autonomous agents might make suboptimal decisions, but restricting their free choice through a benevolent paternalism might be less moral even if it leads to greater well-being.

Sam responded to such criticisms before his book was published writing,

And those philosophical efforts that seek to put morality in terms of duty, fairness, justice, or some other principle that is not explicitly tied to the wellbeing of conscious creatures—are, nevertheless, parasitic on some notion of wellbeing in the end.

I had my doubts, but this argument in his book convinced me.

Some people worry that a commitment to maximizing a society’s welfare could lead us to sacrifice the rights and liberties of the few wherever these losses would be offset by the greater gains of the many. Why not have a society in which a few slaves are continually worked to death for the pleasure of the rest? The worry is that a focus on collective welfare does not seem to respect people as ends in themselves. And whose welfare should we care about? The pleasure that a racist takes in abusing some minority group, for instance, seems on all fours with the pleasure a saint takes in risking his life to help a stranger. If there are more racists than saints, it seems the racists will win, and we will be obliged to build a society that maximizes the pleasure of unjust men.

But such concerns clearly rest on an incomplete picture of human well-being. To the degree that treating people as ends in themselves is a good way to safeguard human well-being, it is precisely what we should do. Fairness is not merely an abstract principle – it is a felt experience. We all know this from the inside, of course, but neuroimaging has also shown that fairness drives reward-related activity in the brain, while accepting unfair proposals requires the regulation of negative emotion. Taking others’ interests into account, making impartial decisions (and knowing that others will make them), rendering help to the needy – these are experiences that contribute to our psychological and social well-being. It seems perfectly reasonable, within a consequentialist framework, for each of us to submit to a system of justice in which our immediate, selfish interests will often be superseded by considerations of fairness. It is only reasonable, however, on the assumption that everyone will tend to be better off under such a system. As, it seems, they will.

He goes on about fairness and other values that he argues can be reduced to concerns about well-being. The following excerpt put me over the edge to his side on the question of liberty. He’s writing about Rawls and fairness, but it applies equally to liberty, so I’ve edited in liberty/freedom for the reader’s ease.

How would we feel if, after structuring our ideal society from behind a veil of ignorance, we were told by an omniscient being that we had made a few choices that, though [maximizing individual freedom], would lead to the unnecessary misery of millions, while parameters that were ever-so-slightly less [free] would entail no such suffering? Could we be indifferent to this information? The moment we conceive of justice [or liberty] as being fully separable from human well-being, we are faced with the prospect of there being morally “right” actions and social systems that are, on balance, detrimental to the welfare of everyone affected by them. To simply bite the bullet on this point, as Rawls seemed to do, saying “there is no reason to think that just institutions will maximize the good” seems a mere embrace of moral and philosophical defeat.

It may be useful to start from a default position such as liberty, but that doesn’t mean that liberty as a value is good in and of itself regardless of the consequences.

As I explained Sam’s argument to a friend who is a self-described “moral skeptic,” I wrote, “imagine if a culture ritually murdered an innocent child at random; it seems very unlikely that would lead to greater well-being.” He responded by asking, “what if in that scenario, a future Hitler was killed?” In other words, he’s wondering whether, if what we see as an immoral action leads to greater well-being, Harris’s argument means it is actually moral. Let’s unpack that scenario a bit. If a society somehow knew, in advance, that a currently innocent child would one day go on to lead a war resulting in the deaths of tens of millions of humans, along with millions more suffering, then, yes, it might actually be moral to kill that child. If someone told you that if we didn’t sanction the killing of this one child, many millions would suffer and die, would you really not agree to take that action?

Yet notice that this wouldn’t necessarily be the most moral action. First of all, there is almost no way we could actually predict such a thing, but even if we could, why would we need to kill all those other children, which only leads to unnecessary suffering? Also, if we knew in advance that a child was predisposed to such evil, we might be able to find better ways to mitigate that potential, such as counseling or even incarceration at an older age. It seems very unlikely (maybe impossible) that the best way to maximize overall well-being would be to ritually sacrifice random children in the hope that one would be a Hitler (as opposed to another Einstein). Regardless, it is perhaps more important to recognize that this question has an answer (killing kids will lead to more or less suffering) whether we can realistically know the answer or not.

Another critic, Kwame Anthony Appiah, seems to have bought a copy of the book with whole sections cut out, or seems to want Harris to personally answer every one of his pet philosophers – note to Appiah: he’s arguing that the consequences of moral actions matter, not that he personally knows how to resolve every moral paradox.

Such puzzles merely suggest that certain moral questions could be difficult or impossible to answer in practice; they do not suggest that morality depends upon something other than the consequences of our actions and intentions. This is a frequent source of confusion: consequentialism is less a method of answering moral questions than it is a claim about the status of moral truth. Our assessment of consequences in the moral domain must proceed as it does in all others: under the shadow of uncertainty, guided by theory, data, and honest conversation. The fact that it may often be difficult, or even impossible, to know what the consequences of our thoughts and actions will be does not mean that there is some other basis for human values that is worth worrying about.

Harris, of course, isn’t the only person ever to argue for a morality grounded in natural facts about our actual experience. Many critics complain that Harris doesn’t deal directly with much of contemporary academic philosophy. He received similar objections that he didn’t deal with contemporary theology in his attacks on faith. Harris avoids much of this terrain because, he argues, it would bore readers and isn’t necessary for a popular case for a science of morals. Besides his being right about the attention span of the average reader, I also think that focusing on the minutiae of academic philosophy is irrelevant to the larger case he’s trying to make. If there is an argument that we should value something other than well-being and the consequences of our actions, we probably won’t find it in an endless catalogue of moral paradoxes. If someone has an argument for values that reduce to something other than consequences, they’re welcome to put it forward.

For those more inclined to delve into technical philosophy, it’s clear that Harris relies on the work of philosopher William Casebeer (among others). So if Moore’s Open-Question Argument or the analytic/synthetic distinction are sticking points for you, I encourage you to pick up Casebeer’s helpful book, Natural Ethical Facts. Yet I’m forever frustrated that critics (many of whom I previously catalogued here) too often fail to just come out and say exactly what their knock-down arguments are against Harris’s premises. Generously, Harris provides ways that his premises and thesis could be falsified. For example,

A neural correlate of human well-being might exist, but it could be invoked to the same degree by antithetical states of the world. In this case, there could be no connection between a person’s inner life and his or her outer circumstances.

[…]

It is also conceivable that a science of human flourishing could be possible, and yet people could be made equally happy by very different “moral” impulses. Perhaps there is no connection between being good and feeling good – and, therefore, no connection between moral behavior (as generally conceived) and subjective well-being. […] However, if evil turned out to be as reliable a path to happiness as goodness is, my argument about the moral landscape would still stand, as would the likely utility of neuroscience for investigating it. It would no longer be an especially “moral” landscape; rather it would be a continuum of well-being, upon which saints and sinners would occupy equivalent peaks.

Worries of this kind seem to ignore some very obvious facts about human beings: we have all evolved from common ancestors and are therefore far more similar than we are different; brains and primary human emotions clearly transcend culture, and they are unquestionably influenced by states of the world (as anyone who has ever stubbed his toe can attest). No one, to my knowledge, believes that there is so much variance in the requisites of human well-being as to make the above concerns seem plausible.

Some Weaknesses

Despite the overall strength and clarity of his wider argument, Harris spends too much time attacking the religious scientist Francis Collins. Most of that criticism was published before and, although somewhat relevant (I also happen to agree with it), the length of the combativeness felt a bit ungracious for a book on morality.

More of that space could have been better spent talking more explicitly about the role of intention in moral action. How much does intention matter? Is it immoral if someone had the reasonable expectation that their actions would lead to greater happiness but they were wrong?

Also, how much do we have to universalize an action to judge its morality? If an action in a particular instance would lead to greater or lesser well-being, but if everyone did it the opposite resulted, how would that affect its goodness or badness? It seems likely we’d judge each case on its particular merits, as no action has an inherent morality. But I wish this were spelled out a bit more clearly.

If, for example, we take the classic moral thought experiment in which a doctor kills one relatively healthy individual in order to distribute his good organs to five patients in need, what would the morality of that action be? In Sam’s Tufts talk he brought this scenario up himself and suggested it would clearly be immoral, because who would want to live in a world where you could be sacrificed for the greater good? That’s not a recipe for maximizing overall well-being. Yet if the action could somehow be limited to that one case and it led to greater well-being, would it still be immoral? Now, it’s obvious that there is no way to reasonably conceive of a version of the situation that is all good (leading to the most well-being). Killing someone in this way will always subtract from well-being in some way, but the separation between an action in particular and that action universalized is fuzzy.

Of course, it is not Harris’s responsibility to know how to resolve all these questions. He’s surely correct when he writes, “If we are not able to perfectly reconcile the tension between personal and collective well-being, there is still no reason to think that they are generally in conflict.”

Some Personal Interest

I’ve written a few blog posts on free will, and Sam’s experience with meditation and neuroscience allows him to provide some subtle insight on the topic. His whole discussion of the subject is worthwhile, but this bit of wisdom stuck out.

The problem is that no account of causality leaves room for free will. Thoughts, moods, and desires of every sort simply spring into view – and move us, or fail to move us, for reasons that are, from a subjective point of view, perfectly inscrutable. Why did I use the term “inscrutable” in the previous sentence? I must confess that I do not know. Was I free to do otherwise? What could such a claim possibly mean? Why, after all, didn’t the word “opaque” come to mind? Well, it just didn’t – and now that it vies for a place on the page, I find that I am still partial to my original choice. Am I free with respect to this preference? Am I free to feel that “opaque” is the better word, when I just do not feel that it is the better word? Am I free to change my mind? Of course not. It can only change me.

It means nothing to say that a person would have done otherwise had he chosen to do otherwise, because a person’s “choices” merely appear in his mental stream as though sprung from the void. In this sense, each of us is like a phenomenological glockenspiel played by an unseen hand. From the perspective of your conscious mind, you are no more responsible for the next thing you think (and therefore do) than you are for the fact that you were born into this world.

This aptly fits my definition of insightful and wise: something that, once said, seems obvious and captivating. I’ve argued for determinism before and, of course, I think every day, but I never fully appreciated the nature of each thought’s genesis until I pictured it in that way.

Harris has another interesting take on probability in morality. Before reading this, I had created my own moral thought experiment:

Say you could save 2 children that might fall off a bridge by jumping out and grabbing them, but another child is standing near the edge, and for whatever reason that child would be at risk of being knocked off in your attempt to save the other two. If you knock the 1 child off and save the other 2, is that OK for a consequentialist? If 2 children were knocked off to save the 1, is that now morally wrong? Is it the same moral wrongness as deliberately killing 2 children to save 1? If not, doesn’t that show that intentions matter? Furthermore, does the level of risk affect the morality of the choice and, if so, why? If you can save 2 kids but you understand that you’d have only a 1% chance of knocking the single kid off, is that more morally acceptable than if you thought you had a 99% chance? Also, does the probability of saving the kid(s) affect the morality of the choice as well?
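Under a consequentialist accounting, the probability questions in the thought experiment above can be made concrete by comparing expected outcomes. A minimal sketch (the function and the numbers are my own illustration, not anything from the book):

```python
# Expected number of children saved, net of expected losses, for the
# bridge scenario: a rescue attempt on two children that carries some
# probability of knocking a third child off the bridge.
def expected_net_saved(p_save_two, p_knock_off_one):
    """Expected children saved minus expected children lost."""
    return 2 * p_save_two - 1 * p_knock_off_one

# A rescuer certain to save the two, at varying risk to the bystander:
low_risk = expected_net_saved(1.0, 0.01)   # 1.99
high_risk = expected_net_saved(1.0, 0.99)  # 1.01
```

Notably, even at a 99% chance of knocking off the bystander the expected net is still positive, which is exactly why bare expected-value accounting feels incomplete here and why the questions about risk and intention have bite.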

Here is one example from Sam on the topic.

If I were asked, for instance, whether I would sanction the murder of an innocent person if it would guarantee a cure for cancer, I would find it very difficult to say “yes,” despite the obvious consequentialist argument in favor of such an action. If I were to impose a one in a billion risk of death on everyone for this purpose, however, I would not hesitate. The latter course would be expected to kill six or seven people, and yet it still strikes me as obviously ethical. In fact, such a diffusion of risk aptly describes how medical research is currently conducted. And we routinely impose far greater risks than this on friends and strangers whenever we get behind the wheel of our cars. If my next drive down the highway were guaranteed to deliver a cure for cancer, I would consider it the most ethically important act of my life. No doubt the role that probability is playing here could be experimentally calibrated.
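The arithmetic behind Harris’s “six or seven people” is straightforward: a uniform one-in-a-billion risk, multiplied across the world’s population. A quick sketch (the 2010 population figure is my rough assumption):

```python
# Expected deaths from imposing a one-in-a-billion risk of death on
# everyone on Earth, as in Harris's hypothetical.
world_population = 6.9e9  # approximate 2010 world population (assumption)
risk_per_person = 1e-9

expected_deaths = world_population * risk_per_person  # ≈ 6.9
```

The same diffusion-of-risk logic scales either way: a one-in-a-million risk imposed on everyone would be expected to kill thousands, which is presumably where most people’s intuitions would start to resist.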

Read the Book

The Moral Landscape should be read by anyone who cares about the ways we can better plan our moral lives. I hope readers of this review or of the book will share their thoughts here. Also, if anyone thinks they have challenges to his ideas, I’m very curious to hear them. The book fully persuaded me after my initial skepticism, but I’d like to subject the thesis to further interrogation. What else could we value besides what affects our well-being? Something is of value only if it is of value to someone, as Sam argues – correct? Are there any other reliable ways to arbitrate the effects of our actions on our well-being other than science and, more generally, reason? Feel free to try to answer these questions or others that stick out to you.

  1. John Corlian
    October 19, 2010 at 9:18 pm

    Is this supposed to be a review or a 4,000 word summary? You couldn’t trim it down to a nice three paragraphs and a conclusion? I feel like there is no point in reading the damn thing now.

    Good God… (Irony intended).

  2. October 19, 2010 at 10:03 pm

    Ya sorry about that. Honestly, I wasn’t trying to write a “review” in a conventional sense. I was trying to explain his thesis and deal with many of the criticisms while pointing out things I thought were interesting along the way.

    I’m trying to advance the argument in order to persuade people not to just get you to “buy the book.” As much as I hope people read the book (and they really should), I think that getting people to buy into the theory is much more important.

    Most of my blog posts are very short. Unfortunately, people can’t seem to read anything that takes more than a couple minutes anymore. I could have written, “science can study well-being so science can determine morality, the book was well written.” That would have the benefit of being short – not sure it’d convince too many people though.

    I really do apologize for the length; I want people to read it, of course. If you find the topic interesting, I think it’ll be worth your time.

  3. Fraser
    October 21, 2010 at 1:22 am

    Continuing from our facebook conversation… by the way, I totally agree with your comment above regarding being disappointed over having to write short blog posts – there’s too much ADD in online reading!

    In your most recent response to our Facebook conversation, you identified Harris’s scientific model as asserting “moral actions lead to greater well-being and immoral actions lead to greater suffering.” Saying that alone doesn’t require any science. In fact, I think it’s tautological, if you define morality as well-being, which Harris does.

    Rather, what I think you mean to say is that we can scientifically reveal what actions lead to greater well being, and then label those actions as moral.

    Hopefully I got that right? I haven’t read the book; I’m just going off of your blog post and our facebook conversation.

    My original hangup was that such science seemed impossible. I don’t mean impossible to measure – I took the neuro research Harris cites for granted. Rather, applying those measurements into production of a robust “morality algorithm” that would spit out the optimal decision in a given scenario was too computationally complex. Explaining why was the crux of my first three comments on the FB thread.

    Your concern was that I misunderstood what Harris’s goal for the science was: Harris doesn’t intend to make such a model. He just wants to go off the idea that we measure well-being in experiments, then take the causal relationship we observed and apply it to any moral scenario we come across.

    Doing that wouldn’t be scientific, because we don’t know how to apply the results of those experiments to every circumstance without first having a general theory (which would be a model) with which to match up all of the inputs. We can’t be objective with examples found outside of the experiment without a general theory, so such application would be arbitrary.

    This is especially true when you consider the research Harris is citing that identifies this well-being to begin with. I haven’t read the paper, but I suspect it relies on a long list of controls, as well as statistical analysis to suggest this causality, meaning the authors are only suggesting the causality to a degree of certainty, and are not proving this phenomenon under any other circumstances (different controls).

    Say the circumstances were complicated – there were three players whose well-beings conflicted based on the outcome. We would need to depend on quantified, cardinal inputs to run the calculations and find the decision which creates the net optimal group well-being. This would be like game theory, which is also utilitarian when applied to morality. In such a circumstance, we wouldn’t know where to begin if all we had to go by was trends from experimental data.

    I’ve left some contentions out to focus on the core of our discussion; let me know if there is anything I mentioned previously that you would like me to address directly. In particular, I wouldn’t mind discussing falsification and philosophy of science, because I think reaching a common understanding there is central to drawing any conclusions about how Harris can connect something like moral truth with something like the scientific method.

  4. October 21, 2010 at 1:53 am

    I think the crux of our argument rests on how you’re defining “science.” Maybe it would help if you knew Harris is using the term broadly. When he says science he means an empirical understanding of the world. He’s not claiming that this is science like physics. Again, I’ll quote him on this topic. I really advise reading the book, of course, as he delves much more deeply into all of these topics we’re discussing – but I hope this helps.
    “Some people maintain this view by defining “science” in exceedingly narrow terms, as though it were synonymous with mathematical modeling or immediate access to experimental data. However, this is to mistake science for a few of its tools. Science simply represents our best effort to understand what is going on in this universe, and the boundary between it and the rest of rational thought cannot always be drawn.”

    Further on he makes a comparison:

    “Science cannot tell us why, scientifically, we should value health. But once we admit that health is the proper concern of medicine, we can then study and promote it through science. Medicine can resolve specific questions about human health – and it can do this even while the very definition of “health” continues to change. Indeed, the science of medicine can make marvelous progress without knowing how much its own progress will alter our conception of health in the future. […]
    It is essential to see that the demand for radical justification leveled by the moral skeptic could not be met by any branch of science. Science is defined with reference to the goal of understanding the processes at work in the universe. Can we justify this goal scientifically? Of course not. Does this make science itself unscientific? If so, we appear to have pulled ourselves down by our bootstraps.”

    I think the philosophy of science is interesting, but I tend to think it gets lost in its own debates. As Sam notices, no other branch of science has to meet the same standards. We’re ignoring the most powerful tools we humans have on the most important subject. I’m not saying epistemology isn’t important, but what’s the harm if we attempt to use science to help us maximize well-being and to study whether certain actions actually make things better than they otherwise would be? This argument does not mean we shouldn’t be skeptical of findings or humble in their application, but how many more years must pass before we throw off the yoke of faith-based moralizing?

  5. Fraser
    October 21, 2010 at 3:06 am

    Dan can, of course, define science however he likes, but he has to be consistent. Because I suspect the neuro research about well-being follows the proper, narrow definition of science (otherwise it wouldn’t get published), Dan has to as well. Without being consistent with the definitions he accepts by using this research, Dan would be applying science arbitrarily and inconsistently. Simply put, the research would not apply.

    And these strict scientific rules don’t only apply to physics – I only used examples from physics because they are the easiest to discuss.

    Further, your quote regarding medicine is not addressing the same concerns as those which I raised. The definition of “health” is a target for scientists to direct their theory towards on a meta-level, but it is not connected directly to medical (scientific) research. Rather, medical research is directed towards solving one particular, clearly defined issue (such as hypertension, or blindness, or whatever). It needs to be this way, because the resulting theory must be reproducible (something particularly important in medicine, for obvious reasons).

    You most likely know much more about philosophy of science than I do, but I am not familiar with any instances of science getting lost in its own debates – could you give me an example of this?

    And I’m not sure what you mean by “As Sam notices, no other branch of science has to meet the same standards.” As I’ve mentioned, science is defined by these standards. Where those standards disappear, we begin to tread into “soft sciences” like economics or sociology.

    “I’m not saying epistemology isn’t important, but what’s the harm if we attempt to use science to help us maximize well-being and to study whether certain actions actually make things better than they otherwise would be?” Like I said on the Facebook thread – there is nothing wrong with this, unless you are trying to be scientific 🙂

  6. October 21, 2010 at 1:08 pm

    So are you telling me that a science of morality can’t even be a soft science? Also, we study “health” using different disciplines of science (e.g. medicine, biology, nutrition, psychology, etc.) – are you saying that the study of “health” is “arbitrary” and anti-scientific? If you are, I’m afraid you’re misusing the term “arbitrary” and narrowing the definition of “scientific” so much that it’s difficult to see what would qualify. If you accept that studying health in this way is scientific, you should be free to look at morality the same way. We can look at morality scientifically through different disciplines that still have to follow the same rules as every other branch (e.g. biology, neuroscience, psychology, economics, sociology, etc.).

    Let’s not tangent off too far into a discussion of the philosophy of science; just realize that very few practicing scientists of any kind actually know very much about the philosophy, and they somehow manage to be considered scientific and do their jobs.

    Back to “health” being a target for scientists… well, let me try to persuade you that that is the type of concern you should be focusing on. Morality and well-being would be targets for scientists in different sub-disciplines to focus on. They could notice, for example, in an fMRI machine, that people suffer losses more acutely than forsaken gains, even when the result is exactly the same. That has moral implications (to minimize suffering, we can take this research into account and try to correct that bias, or frame things in different ways to minimize the feeling of loss). Also, it turns out that if you prolong a small bit of suffering at the end of a colonoscopy, as opposed to ending it quickly after some of the sharper associated pain, people report their overall experience as better despite a small net increase in suffering during the procedure. Therefore, it may be more moral to increase suffering temporarily, in a controlled way, in order to minimize the memory of suffering (the memory is a felt experience). With greater advances in this type of research, we’ll be able to draw the line more accurately, exactly where total suffering is best minimized (between the experiencing self and the remembering self).

    So if science has shown that cooperation corresponds to positive neural correlates and cruelty leads to more depression, can’t we say science has shown that, generally speaking, cooperation is good and cruelty is bad?

  7. Fraser
    October 21, 2010 at 10:12 pm

    I didn’t say that morality can’t be a soft science; I said that the hard neuroscience that has been done can only be applied within the boundaries it sets for itself. Again, consistency is important, because scientific conclusions tend to be very rigid. And, with what limited context I have, Harris appears to be inconsistently making general claims that the research doesn’t actually try to support. I can’t tell if you are debating my point about the need for consistency. Are you?

    In regards to our conversation about health:

    I certainly did not say anything about the study of health being arbitrary. But, I suppose I am saying that the study of health isn’t scientific. This is because the study of health includes non-scientific pursuits, such as therapy. This doesn’t mean therapy doesn’t work – of course it does. And this doesn’t mean everyone, including scientists, can’t be motivated to improve the health of their subjects.

    My point, again, is that science chooses precise targets for the sake of measurement and reproducibility. “Health” isn’t precise enough, for loads of reasons.

    In regards to scientists knowing how to separate good science from bad:

    I haven’t narrowed the definition of science so much that no one can understand what would qualify as science. The guidelines that good science tries to follow are strict and well known (appropriate inference from statistical analysis, identification and maintenance of controls, double-blind experiment structure, sample selection, definition of inputs, repeatability of experiments, measurability of outputs and inputs – the list goes on). While they may be unfamiliar to you (I’m not sure of your background), scientists are quite capable of following these rules, and the scientific community has proven to be a good judge of when they are being broken.

    Soft sciences aren’t very scientific because they break some of these rules – normally, the ones relating to the population sample and the maintenance of controls. Why? Because economists and sociologists can’t do experiments in the lab, so they have limited control. Psychologists have control, but are studying systems that are far too complicated to even begin to reproduce in a lab. This is why these practices haven’t made anything like the progress of the “hard” sciences. This should provide you with some intuition as to why Harris’s application of some neuroscience papers to the wide and complicated world of human affairs strikes me as inconsistent.

    In regards to my using the word arbitrary:

    A scientific paper would at least implicitly acknowledge the limitations of its conclusions. A good scientist wouldn’t draw wider conclusions from that research, but would instead try to use that research as insight for more generalizable research that could draw wider conclusions. I obviously haven’t read the neuro paper Harris is citing, but I strongly suspect that Harris is drawing the conclusions that he wants, rather than ones the research actually supports. I would define that as arbitrary, because his conclusions are based on what he wants, rather than on what he objectively has the authority to say.

  8. October 22, 2010 at 8:09 pm

    The difficulty here for me is that you’re arguing (questioning) in the abstract. For example, you write, “Harris appears to be inconsistently making general claims that the research doesn’t actually try to support” and “I strongly suspect that Harris is drawing conclusions that he wants, rather than ones the research actually supports” – if you could give specific examples of either inconsistent unsupported general claims or unsupported conclusions it’d be helpful to see where our divergence really is.

    On therapy, I have no doubt that a lot of therapy is unscientific. But that doesn’t mean therapy can’t in principle be scientific. Do you mean to claim that it can’t be? Can’t scientists study particular therapeutic practices using control groups and placebo-style comparisons (and all the rest) and determine scientifically if the therapy is effective?

    I also dispute your contention that science must be done in a lab to be “scientific.” Field science isn’t science now? What Darwin did in the Galapagos isn’t science? NASA is practicing what exactly? It still seems your definition is amazingly narrow – psychology really can’t be scientific? If the systems they are studying are “too complicated” to reproduce in a lab that doesn’t mean that they can’t be in principle studied scientifically – only that it is incredibly difficult. What if you had an unlimited amount of time and resources and a massive F***ing lab? 😉 When exactly does something become too complex to be studied scientifically?

    That something is impossible in practice does not mean that it is impossible in principle. Refer back to the blog post where I retell Sam’s illustration of this, about counting the number of birds in flight. If we were trying to learn how many birds were in flight, we all understand that the tools of science could help us discover such a number. No one would think of arguing that counting birds is philosophically incompatible with science. Yet that is exactly what people do with morality. Does the complexity of moral problems really suggest to you that some discipline other than scientific and rational inquiry is better equipped to investigate these issues?

  9. Fraser
    October 23, 2010 at 3:30 pm

    When I said that economics and sociology aren’t very scientific because they can’t be done in a lab, I was only trying to point to the fact that they are unable to set up their own experiments, and also have too little sample data from which to establish a good set of controls (and what data they do have is subject to someone else’s standards of data collection). Hidden variables thereby come into play that they cannot account for.

    Although I did a poor job of explaining, this doesn’t mean that field research can’t be used. Darwin, for example, accounted for the vagaries of data collected in the field through the sheer variety of his data. He collected thousands of examples across dozens of species in different environments in order to substantiate his theory of natural selection. Had he not, people might have argued that his theory only applies to finches.

    To further illustrate the point: despite the fact that he made this discovery on the voyage of the Beagle in 1835, Darwin didn’t actually publish On the Origin of Species until 1859, and only then because another scientist was close to publishing the same discovery.

    And even then, we didn’t understand the real mechanism behind evolution – genetics.

    Again, when I refer to Harris’s inconsistency, I am predicting that the research Harris is citing is constrained by its authors to only apply to particular moral circumstances, and doesn’t speak about well-being in general, which has far more going on than people sensing cooperation, or other people sensing physical suffering or fear.

    But you’re absolutely right – I’m questioning in the abstract, and I really don’t have the authority to make these claims without actually reading his book, or reading the research Harris is using.

    However, I’ll try to answer your question about being more specific with more shameless hypothesizing 🙂 so that I can at least explain my intuition on the matter to you:

    The neuro experiment was done in a controlled lab setting. In it, people performed strictly defined tasks meant to simulate a series of actions that the researchers defined as “cooperation.” Monitoring their brains during these tasks revealed that this, this, and that happened. Other experiments that monitored people being “happy” showed similar events happening in the brain (“happiness,” by the way, is a term that will have its own definitions within those experiments, and which I also ought to scrutinize here). The similarity is consistent and close enough to statistically suggest that “cooperation” and “happiness” will always produce the same neurological effect. This would be a nominal-value result – does the “well-being”/“happiness” state exist or doesn’t it?

    Conclusion: A simulation of what we identify generally as cooperation neurologically improves well-being, as defined by this and this and that paper (other experiment).

    Would this apply to a real-world scenario where we have multilateral talks between Israel, Palestine, and the United States? Of course not – these three parties enter the negotiations with all sorts of vested interests. The neuro experiment couldn’t account for such a dynamic and deep-rooted situation, because it had to assume that every player in the “cooperation” game was unbiased and had no vested interest in the results. A generalizable theory of moral well-being will have to account for this, and could only do so by quantifying this player bias.

    Further, as soon as you add a third player, you can’t depend on ordinal or nominal values of well-being; you have to compute cardinal values, because you need to net out the positive and negative “well-being” states of each player.

    You ask another fantastic question about whether or not therapy could ever be scientific, by how I am defining it. Could we, in principle, have a science of therapy?

    There is a spectrum of what I will call complexity (a bit of a misnomer) to the sciences: Math < Physics < Chemistry < Biology < Psychology < Sociology < Economics. With each step, the systems that define these fields of research increase in complexity and diversity by orders of magnitude. Science, as a practice, is very good at the Physics-to-Biology part of the spectrum. It isn't needed at the math end, for obvious reasons. And it breaks down at the social-sciences end, because of the tension between rigid standards for experimentation and limited resources for data collection. But, in principle, science ought to work along the whole spectrum. So could Harris's moral science.

    My original contention was that the sheer limitations of computing make some of these fields impossible to model in reality. This is why I said on the FB thread that any algorithm we make (assuming we could make it) would likely be NP-complete.

  10. October 24, 2010 at 12:56 am

    I’ve actually been struggling most of this time to get you to acknowledge just that: that “in principle, science ought to work along the whole spectrum” (which would include a science of morality). If you recognize that, I’m content. I think the practical science is a little more possible than you give it credit for, but Harris himself stresses throughout the entire book that a lot of moral science is extremely complex and difficult in practice (at times impossible). So he doesn’t appear to be as far apart from you as you assume.

    If you’ve come that far, you’ve accepted a major premise of his thesis and parted ways with many comrades on the secular left. Welcome.

  11. Fraser
    October 24, 2010 at 12:06 pm

    Perhaps you stopped reading my comment at that point? My next sentence implied that what can be done in principle is moot, considering that it is impossible in application.

    Would I be correct in saying that you understand and agree that Harris’s current claims – applying the results of a neuroscience experiment to morality as he has – sound inconsistent and arbitrary (as I defined those two terms)? Again, considering I haven’t read the book or the neuro paper, I can’t say this with certainty, but I would be (very pleasantly) surprised if I were incorrect.

    Again, my original comment on the whole matter (the basis of my first two comments on the FB thread, and then mentioned intermittently as we diverged), was that modeling such a system is likely too computationally complex. Despite the fact that the principles of science suggest there exists a means to attaining a solution – those means cannot always be applied, due to constraints that science generally takes for granted.

    So the major premise of Harris’s thesis is that mankind will one day use science to answer all questions of morality through some sort of utilitarian calculation of well-being? Well, my points were: 1) Harris doesn’t yet have a basis from current neuro-experiments to make that claim (the inconsistent and arbitrary point), and 2) we will likely never be able to make that claim, because any model that could do so is likely too computationally complex. I was under the impression that you agreed with this, and it sounds like Harris agrees as well.

    The constraints I’m referring to are the ones understood in computer science as computational complexity. I don’t know how interested you are in this topic, so suffice it to say that some problems are intractable, meaning the time required to solve them is longer than the time left in the universe. They lack a solution in what is called polynomial time.
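    A quick sketch of why intractability bites here (the numbers are only illustrative): if each of n parties can occupy any of k discrete “well-being states,” exhaustively scoring every joint state costs k^n evaluations – exponential in n, not polynomial.

```python
# Brute-force search over joint well-being states blows up exponentially:
# with k discrete states per party and n parties, there are k**n joint
# states to score before any "net well-being" comparison can even begin.
def joint_states(n_parties, k_states):
    return k_states ** n_parties

for n in (3, 10, 50):
    print(n, joint_states(n, 10))
# 3 parties -> 1,000 joint states; 10 -> 10 billion; 50 -> 10**50,
# far beyond any conceivable computation.
```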

    Interestingly, the general algorithm that Harris suggests science is capable of developing would probably fit many definitions we have for artificial intelligence. By that I mean that if we did have this algorithm, and if it was computable, then we’d likely be able to use it for AI.

    And I would already call myself a staunch member of the secular left. I didn’t take this side in our conversation because I believe god or something is the only true source of our morality. Rather, I’ve been committed to writing these comments because what makes secular thought so powerful is the appropriate use of its tools – science being premier among them. And secular thought breaks down when these tools are misused, as I suspect Harris is doing.

  12. October 24, 2010 at 12:31 pm

    You need to read the book.

    That said, Harris isn’t suggesting we are on the verge of some “algorithm” to decide morality. I really don’t see how it is anti-scientific to notice that our states of being (from max suffering to max well-being) have neuro-correlates in our brains which are ultimately determined through a mixture of our inner biology and the effects on us from events in the world. Also, for something to have value, it has to be of value to somebody (what else would it mean to have value?). It’s obvious that conscious creatures value well-being over misery, so we can say that moral choices increase well-being and immoral choices create needless suffering. If research in neuroscience, economics, sociology, and psychology has all shown that beating and degrading women leads to depression, poorer societies, less social cohesion, and less individual and collective well-being, it seems science has shown (as well as science can show just about anything) that beating and degrading women is an immoral choice that leads in almost all cases to more suffering.

    Science doesn’t need a precise model mapping all cases to be scientific. Science uses what we can call the preponderance of evidence. Evolution is as strong a theory as any in science, and we don’t have an ironclad model that describes all its mechanisms and takes in every possible variable – no scientist is tempted to therefore conclude that it is anti-scientific and arbitrary. By the way, here’s a less radically strict definition of science: “Science is an enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the natural world.”

    And no, I don’t think he’s being inconsistent and arbitrary. You’d have to tell me specifically how he’s being those things before I’m persuaded.

  13. Fraser
    October 25, 2010 at 11:38 pm

    I know we’ve been chasing each other’s tails on these points, and I’m sure we could have reached a conclusion far more quickly if this weren’t in writing, but I’ll have another go, using your language as much as I can.

    I’m just glad we didn’t try to argue about the merits/shortfalls of utilitarianism 🙂

    We can take as given what you said: “our states of being (from max suffering to max well-being) have neuro-correlates in our brains which are ultimately determined through a mixture of our inner biology and the effects on us from events in the world.” But the fact that this phenomenon exists doesn’t help us make decisions or cast judgment about morality in general until we are able to identify and quantify any neuro-correlate.

    Why? Because the spectrum of scenarios that involve states of suffering (and, by Harris’s utilitarian definition, moral ramifications) is unfathomably wide. And while every scenario may have a corresponding neuro-state, all of those neuro-states are going to differ dramatically in order to reflect the diversity of inputs. Also, the number and direction (meaning: helps them, hurts me) of the neuro-states being considered in any given moral decision is similarly complex. Finally, in order to measure and quantify, you need a standard neuro-state (like the standard lengths in the metric system) – how do we choose that?

    How do we repeatedly and accurately identify those neuro states? How can we quantify them so they can be compared/netted against each other? I doubt the research that Harris cites begins to address these things – they are likely out of the scope of the research because the research isn’t talking about morality. To avoid arguing in the abstract, I’ve presented at the bottom some specific examples that I think illustrate my point.

    You seem to be arguing accuracy (correctly), while I seem to be arguing application. I’ve learned from our conversation that this can be a very fuzzy line, depending on how you look at it, and I think we’re arguing in that fuzz. I’ll explain what I mean below:

    I agree entirely that science doesn’t need a precise model mapping to function. In fact, your point here is a fundamental tenet of science that gives it so much power: science is only an approximation of reality, presented in a way that can be proven wrong (falsifiable), so that we can always get better and better models, rather than cling to dogma we have grown comfortable with.

    So while science doesn’t need perfect accuracy, it also is not useful unless it is universal – meaning it has to reflect the entirety of the topic it addresses. A science OF morality has to be very robust and cover a lot of ground, simply because so many things fall under the umbrella of morality. The very nice definition of science you gave includes the goal of providing testable predictions about the world; that means the theory must apply to anything it generally talks about. To use a theory that can’t be predictive in this way would be inconsistent, and the intention to do so would be arbitrary.

    Similarly, the algorithm needed doesn’t have to be 100% accurate before science can put it to good use – but it does have to be universally applicable to any neuro-state.

    The specific example you asked for: could our understanding of the neuro-state of someone being abused, or of two unbiased parties “cooperating,” be used to analyze the neuro-states in the following two examples, in order to form an opinion on the most moral outcome:
    a) Multi-lateral agreements between the US, Israel, and Palestine;
    b) Whether I should donate my year-end bonus to the United Way.

    Regarding algorithms: Why would a science of morality be an algorithm rather than a principle, like evolution? Because moral decisions involve multiple parties with competing interests, and the decision chosen among multiple potential outcomes has to be the best by some degree. This means the outcome would have to be a calculation, requiring an algorithm.
