Fully Satisfied

In an interesting portion of his brilliant work Reasons and Persons, Derek Parfit considers an ethical problem in utilitarian philosophy. Allow me to use Parfit’s discussion of the mere addition paradox and the repugnant conclusion to expand on how I believe the concepts of value, good and bad, and better and worse should be looked at. I’ll also explain why I think the mere addition paradox and the repugnant conclusion make some fundamental errors. I encourage everyone to read his book, but if you haven’t already it might be helpful to read the Wikipedia section on these problems.

When most people consider different moral situations they compare them as better or worse, or one as good and the other as bad. When utilitarians do this they attempt to compare the well-being (or “happiness” or some other similar concept) of the people in each scenario. Sam Harris’s Moral Landscape is a good example of this – my discussion of it might help some readers follow the concept more clearly. Simplistically speaking, well-being is what is valuable, so if case A has more well-being than case B, case A is better.

Parfit identifies what he sees as a paradox in certain cases using that framework. If we consider a greater sum of happiness (I’m using this term interchangeably with “well-being”) to be always better, a repugnant conclusion could be drawn:

For any population of at least ten billion, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better, even though its members have lives that are barely worth living.

I dispute this conclusion and the existence of any paradox in appropriate utilitarian thinking. To make that case it is essential to understand that for something to have value it must be valuable to somebody. Sam Harris writes,

Consciousness is the only intelligible domain of value. What is the alternative? I invite you to try to think of a source of value that has absolutely nothing to do with the (actual or potential) experience of conscious beings. Take a moment to think about what this would entail: whatever this alternative is, it cannot affect the experience of any creature (in this life or in any other).

For the purposes of our discussion let’s take for granted that well-being (or, phrased another way, for life to go as well as possible) is the ultimate goal. You’re welcome to dispute that, but these problems within utilitarianism use that as a starting point. If you completely reject all possible forms of utilitarianism then my resolving these paradoxes might be of little interest to you anyway. Feel free to read on if you’re curious though.

If something only has value if it is valuable to a conscious creature, then well-being is only valuable subjectively (in the sense that it needs a subject to experience it). Well-being isn’t of value inherently or “for its own sake.” I’m not even sure what well-being could mean if it’s not a state of an actual being. So how do we approach measuring whether one situation is better or worse in terms of well-being? As Parfit argues, it’s not worse for more people to exist. Or is it?

If I’m right and the goal is to maximize the well-being of conscious beings, do we have to conclude that more people existing is always better if there is more total well-being? Referring to the Wikipedia charts (“The group’s size is represented by column width, and the group’s happiness represented by column height”), would Z really be better than, or no worse than, A?
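
To make the arithmetic behind that question concrete, here is a minimal sketch of the strict total-sum view. The population sizes and well-being levels are made up, chosen only to mirror the shape of the chart (a wide, short bar versus a narrow, tall one):

```python
# A toy calculation of the "total view" behind the repugnant conclusion.
# The population sizes and well-being levels are hypothetical.

def total_wellbeing(population, wellbeing_per_person):
    """Total view: sum well-being across everyone who exists."""
    return population * wellbeing_per_person

# World A: ten billion people, all with a very high quality of life.
a_total = total_wellbeing(10_000_000_000, 100.0)

# World Z: a vastly larger population whose lives are barely worth living.
z_total = total_wellbeing(100_000_000_000_000, 0.1)

print(f"A: {a_total:.1e}, Z: {z_total:.1e}")          # A: 1.0e+12, Z: 1.0e+13
print("Total view ranks Z above A:", z_total > a_total)  # True
```

On that calculation Z comes out “better,” and that is exactly the comparison I want to argue is the wrong one.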

[Figure: RepugnantConclusion.svg, from the Wikipedia article]

The reason this isn’t a repugnant conclusion or a paradox is that, if we’re correctly using a utilitarian framework that holds consciousness to be the only proper domain of value, we’re comparing the wrong things. Some questions don’t make sense even though they can be said. Consider the question, “What happened before time?” Even though it is difficult to grasp, physicists can explain that the question doesn’t make sense. Similarly, comparing total sums of happiness, or average happiness, side-by-side doesn’t make sense – or at least isn’t what’s important.

There is no “better” or “worse” unless we ask: better or worse for whom? If we look at the mere addition problem and ask what’s better, A or A+, you’ll see what I mean.

[Figure: MereAddition.svg, from the Wikipedia article]

It is explained that, “In situation A, everyone is happy. In situation A+, there are the extra people.” The extra people’s lives have enough well-being that they are worth living. So is A+ worse than A? Most people intuitively think, “No, how could it be worse for more people to exist? Their lives are worth living, after all.”

But we haven’t established what is being compared. What is worse, and worse for whom? As established before, we only need to consider the state of conscious beings. Something can’t just be “better” or “worse” in itself. So, you may ask, are the extra people “better off”? Well, in A they didn’t exist, so you might assume that, yes, they are automatically better off by the mere fact that they now exist and have greater than 0 well-being. As a corollary, are the extra people worse off in situation A? I say no. Here’s where we get to the comparison problem. Nonexistence isn’t a “bad” state because nonexistent people don’t have consciousness. In fact, “nonexistent people” is a contradiction – nonexistent people aren’t people, they’re nothing. There is not even a “they.”

Let’s list some true observations. Everyone in A is at their maximum well-being; their situation is perfect. More people exist in A+, and those extras could be better off. If everyone in A+ had the well-being of those in A, A+ would constitute a better situation for those within that world. Even though they are at a lower level of well-being, the extra people are not worse off compared to A because they don’t even exist in situation A. The extra people aren’t worse off in A because there are no extra people.

To help illustrate this, consider what it would mean to conclude that nonexistence is “bad” or “worse than existence.” We would have to conclude that we exist within an apparent infinity of badness because an unending number of nonexistent beings could exist but don’t. You would even have to conclude that the Holocaust is trivial in terms of badness compared to the infinity of nonexistent beings. After all, the Holocaust happened to a finite number of people, and many of the victims who suffered were still able to experience well-being greater than 0. Of course, that is a preposterous conclusion. Clearly, the Holocaust is worse than nonexistence because nonexistence doesn’t happen to somebody; the Holocaust does.

Therefore, it should be clearer that A and A+ aren’t perfectly comparable. The difficulty arises because of how our minds work. It’s kind of like the “don’t think of a pink elephant” problem. When someone tells you not to think of something, it is extremely difficult not to think of it. As such, to think of nonexistent beings makes us imagine beings. Arguing that it’s better for them because they’d not exist in the other scenario seems to fall into that type of cognitive illusion. Once we talk about “their” existence we can’t help but think of “them” as conscious beings. If a person isn’t in existence they can’t experience anything. Does a nonexistent person have worse balance or a worse sense of smell? No, no sense of balance or smell exists to be worse. Our brains are constructed in such a way that we can easily personify something and empathize with it. If I asked whether it was good or bad to be a rock, you might try to imagine yourself as a rock. Maybe you’d think that would suck. But you can’t be a rock; once “you’re a rock” you wouldn’t be you. You wouldn’t be a you. It wouldn’t suck or be good or be bad. Rocks aren’t conscious, so good and bad don’t apply.

Let’s think of this another way. If the goal weren’t to maximize well-being but rather to completely satisfy our appetites (maximize the food in our stomachs), you might be able to better see the problem with the earlier comparisons. For simplicity’s sake, let’s assume you can’t overeat. If you’re comparing it with well-being or happiness in your mind, let’s assume that the population change doesn’t have any effect on the well-being of the existing people. Let’s also assume that food (resources) is infinite – there isn’t a finite amount of available happiness either.

 

5 people stand in our house (existence). All 5 persons’ stomachs are full – we completed our goal.

Illustration by Drew Simenson

If 5 more people walk into the house with their stomachs 3/4 full, the mere fact that there is more food in bellies overall doesn’t make the situation better.

Illustration by Drew Simenson

So calculating total food seems ridiculous. The extra 5 people certainly aren’t worse off either. It just means there is more work to be done; since it doesn’t require the original 5 people to regurgitate their food to feed the new group of people, it’s not worse for anybody. It is simultaneously a situation in need of improvement (giving charity to A+ would be useful, but not to A) yet not worse for anybody.
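
To put the same point in toy-calculation form (just a sketch, with made-up numbers on a 0-to-1 “fullness” scale): summing the food in bellies makes the second situation look like an improvement, but a person-by-person check shows that nobody who already existed is any worse off – there is simply more work left to do.

```python
# Toy illustration of the house/stomach analogy (hypothetical numbers).
# "fullness" is each person's stomach on a 0.0-1.0 scale.

before = {"p1": 1.0, "p2": 1.0, "p3": 1.0, "p4": 1.0, "p5": 1.0}
after = dict(before, **{"p6": 0.75, "p7": 0.75, "p8": 0.75, "p9": 0.75, "p10": 0.75})

# The "total food" comparison the post argues is the wrong question:
print(sum(before.values()), "->", sum(after.values()))  # 5.0 -> 8.75

# The comparison that actually matters: is any particular person worse off
# than they were? (Only people present in both situations can be compared.)
anyone_worse = any(after[name] < before[name] for name in before)
print("Anyone who already existed made worse off?", anyone_worse)  # False

# And separately: is there work left to do for the people who now exist?
still_hungry = [name for name, fullness in after.items() if fullness < 1.0]
print("Still hungry:", still_hungry)  # the five newcomers
```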

Does that mean we can’t compare the two situations from a third-party perspective? No. Clearly, if you had to decide which world you lived in, assuming you’d be put in the same state of being as the others, situation A is better than A+.

I submit that it is neither good nor bad to simply add people to the population so long as it doesn’t affect the well-being of anyone else. It only makes sense to look at the conditions of each situation and ask what, if anything, we could make better.

Life

If only two people, Adam and Eve, existed and they were perfectly happy, would having a child be morally good or a moral obligation? Assume that their happiness isn’t affected positively or negatively by the addition of their child (far-fetched, I realize). A utilitarian could answer that it didn’t run counter to the goal because no conscious creature’s well-being was made worse by that decision. Therefore, I’d argue their choice to have a child wouldn’t be morally good or bad (assuming also that the child’s life would be worth living). They’d have no moral responsibility to others to have a child – there are no others. They only acquire a moral responsibility to maximize the child’s well-being once, and if, they have a child. There would only be a moral obligation to have children if having those children helped increase the well-being of other conscious beings currently in existence. Of course, once children popped into existence we couldn’t use them as slaves or something, because we would now have to be concerned with their conscious well-being. People should have children only if they believe that they, and others (assuming others exist), would be happier/better off (in a broad sense) for doing so (also with the caveat that the child’s life is “worth living”). Think of it this way: do you think people ought to have children if that means everyone is going to have worse lives?

Does any of this mean that if people were all perfectly happy it would be morally wrong to bring more people into existence? No – look at the difference between A and A+ again. The extra people’s existence didn’t result in any loss of well-being for the first group. So as long as the mere addition of extra conscious beings doesn’t cause more suffering to the initial group, nobody is made worse off. The population in A is better off than the total population of A+. But again, no one is worse off for those extra people existing, so it isn’t morally wrong to bring them into existence. The goal isn’t to keep well-being maximized for its own sake. It is to maximize the well-being of anybody who exists. Remembering that people experience life as individuals rather than as a group also helps us keep this in perspective.

Death

If this has implications for birth, it also has implications for death. We don’t increase conscious creatures’ well-being by killing people off, because that lowers their well-being and will lower the well-being of their family, friends, and anyone who could have been helped by them directly or indirectly. I discuss a related topic here. On an extreme level, this also makes it theoretically possible for voluntary human extinction to be a morally neutral or (in some more extreme cases) a morally good choice. For that to happen, it’d have to be actually voluntary and everyone would have to be no worse off for refusing to have children. It is almost certainly impossible in practice, however. I find it difficult to believe that people would truly be happier deciding not to have children – but if somehow that was the case, so be it.

If people left the stage after a reasonable run, in the fullness of time intelligence could evolve again (dolphin-people? chimp-people? orchid people?). And then, in due course, when this new species deciphered human books or reached the marker that might be left for them on the windless moon, they would know that man ended his dominion so that theirs might begin. Imagine, then, how they will regard us. It is, far and away, the greatest act of goodness ever contemplated, the ennoblement of a whole species; an act, almost, of angels.

Until that day we should be content to fill our bellies.

Special thanks to Drew Simenson for providing illustrations for me. To contact Drew about his graphic design work you can email him here.

  1. Fraser
    November 7, 2010 at 6:44 pm

    “Does that mean we can’t compare the two situations from a third party perspective? No. Clearly if you had to decide which world you lived in assuming you’d be put in the same state of being as the others, situation A is better than A+.”

    Why? I thought you had just done a very good job of explaining that there is no “better than” between the two situations. The chooser would be implanted into one of those two populations that are not comparable.

    The choice is only significant if the two worlds are comparable – the decision process would go as follows: If you choose A, you have a 100% chance of having 100 utility. If you choose A+, you have a 50% chance of 100 utility and a 50% chance of 50 utility, making for a weighted average utility of 75 for A+. But, like you explained, these utilities are not actually comparable (transitive).

    Or are they? This decision sounds quite sensible when you consider it from the third party perspective, as you had. But can’t the “more people” (+ group) contemplate this perspective of choice that the third party has the power to make? I think we have a dilemma. On one hand, from within the system, A and A+ are incomparable/neutral; but from outside of the system, we can easily choose the better world – and the people within the system can in fact analyze it from this perspective as well. With the advantage of the third party choice perspective, perhaps someone from the + group would say “I grant that the A group is not better for me since I don’t exist in the A group, but does someone in the A part of the A+ group have any more right to be there than I?” My last two sentences aren’t exactly the same, but I think it’s clear (although perhaps worth debating) that the A part of the A+ group is interchangeable with the A group in this instance.

    Say this dilemma exists- is one perspective of morality more useful or more true than the other? Or, to put it another way, does your follow-up statement hold true: “…it is neither good nor bad to simply add people to the population so long as it doesn’t affect the well-being of anyone else.”

    I think the dilemma arises because the third party addition introduces the idea of something like marginal utility, and the original A and A+ incompatibility stems from the nature of their not being a dynamic system. We can imagine instances where the third party choice exists and those where it doesn’t, making both sides of the dilemma “real” or feasible.

    But if the dilemma exists at all, then perhaps the point you are making isn’t universal, and therefore doesn’t carry the weight that you might imagine it does? I could be convinced otherwise, though, and you’ll probably have a few points that will make this more clear.

    I’d also like to take up Harris’s invitation that you quoted, but I don’t think it’s within the scope of this conversation, and my challenge is more of an intuition at this point than a reasoned argument.

  2. November 8, 2010 at 12:06 am

    The choice is only significant if your choice affects the level of your well-being. I was only trying to point out that if your choice makes you 100% likely to be fully satisfied or gives you a probability of being less than fully satisfied, that obviously constitutes a comparable difference. However, the fact that more people exist and they’re at a lower level of happiness than another group of people in another “universe” does not mean inherently that anyone is “worse off” or that the “state of the world” is inferior. In other words, a third party couldn’t say, “if we could have prevented A+ from happening that would be better because some people’s well-being isn’t maximized.” It only matters if there were two scenarios where a comparable group of people’s happiness is different (that sounds confusing… hmm…)

    For example, if just Jack and Jill existed and could know that one decision leads to their 100% satisfaction, that would be a perfect choice. If another decision led to their 100% satisfaction and a new person James’s happiness being 65%, that’d be a morally neutral alternative to scenario 1. If another choice existed that led to their happiness being 100% and James (assuming, somewhat unrealistically, that James would be the same entity) being 55%, that would be a morally neutral alternative to scenario 1 but a morally inferior choice to the more comparable scenario 2. It wouldn’t be morally evil to choose scenario 3, but it’d be an unnecessarily inferior decision to choose that path. So in certain ways they are comparable, but in other ways they are not.
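
    If it helps, here’s that comparison written out as a quick sketch (my made-up numbers, and assuming as above that James is the same entity across scenarios): two scenarios are compared only over the people who exist in both of them.

    ```python
    # A sketch of the comparison rule from the Jack/Jill/James example above.
    # Well-being scores are hypothetical, on a 0-100 scale.

    scenario_1 = {"Jack": 100, "Jill": 100}                  # James doesn't exist
    scenario_2 = {"Jack": 100, "Jill": 100, "James": 65}
    scenario_3 = {"Jack": 100, "Jill": 100, "James": 55}

    def compare(x, y):
        """Compare two scenarios only over the people who exist in both."""
        shared = set(x) & set(y)
        if all(x[p] == y[p] for p in shared):
            return "morally neutral: no shared person is better or worse off"
        if all(x[p] >= y[p] for p in shared):
            return "first is at least as good for every shared person"
        return "mixed: some shared people better off, some worse off"

    print(compare(scenario_1, scenario_2))  # neutral - Jack and Jill are unchanged
    print(compare(scenario_2, scenario_3))  # James exists in both, so 2 beats 3
    ```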

    Does that clear things up at all?

  3. Thomas Iodine
    November 8, 2010 at 3:31 am

    Intuitively I lean towards population A being a “better” situation since everyone is at maximum happiness, whereas A+ is less perfect.

    • November 8, 2010 at 10:36 am

      I don’t dispute that that is your intuitive feeling. But I tried to argue why that intuition is wrong.

  4. Fraser
    November 8, 2010 at 9:43 pm

    Right, I understand the point you made in your post, and the one you are making now, and I agreed with you until I read your sentence that I quoted about a third-party perspective, which turned everything on its head for me. Your idea about a third party led me down the train of thought I tried to describe in my comment.

    To apply my comment with your new example: should James be envious of Jack in the Jack&Jill&James world? Clearly, James can’t say he got screwed as compared to the Jack&Jill world, because he didn’t exist in that world. But James does see how much better Jack has it in the world James does exist in. Why was Jack gifted with higher utility than James in the A+ world? I suppose this is running on the assumption that everyone is equal, and + people have all of the same rights to utility as A people. This is one of those fundamental questions about justice, and I’m not looking for the answer (my guess, which I’m sure you agree with: life’s not fair and the universe doesn’t care about you).

    So that was my intuition for my last comment, but I think the answer is actually given in the wikipedia article, which I went and read after posting. I think rather than arguing A and A+, you really should be talking about A and B as they approach Z. That would nullify the complaint I had by suggesting that James and Jack and Jill all have the same utility in B, and James’ neighbors aren’t better-off.

    I think your argument is made in the wikipedia article here: “The paradox is immediately resolved by the conclusion that the “better than” relation is not transitive, meaning that our assertion that B- is better than A by way of A+ is not justified—it could very well be the case that B- is better than A+, and A+ is better than A, and yet A is better than B-. This is of course incompatible with any form of utilitarianism. Temkin argues for this approach.” You add to this by explaining why the relationship is not transitive – the new population never existed before, so they have no basis to compare their utility by. I’m not positive as to why this is incompatible with utilitarianism – do you have a sense as to why it might be?

    Something I just realized now:

    I find it interesting that, while James didn’t exist, Jack and Jill did, and they themselves can presumably compare between A and B. Can they see justice in having lower utility in order to match with this new population? The shift to B sounds a lot like asking rich people to pay more taxes, or asking everyone to be better to the environment, in order to make life better for future generations.

    Do you think this comparison applies? If so, are you effectively arguing that we aren’t responsible for how we leave the world for future generations, because those future generations don’t have a standard to compare against? I might be missing something.

  5. November 9, 2010 at 12:23 am

    On your last point first: no, I’m not arguing that. People will exist in the future and we should attempt to maximize their happiness as well, because once they do exist we’ll have to care. The examples I gave had the benefit of perfect foresight, which we don’t actually have. But when making decisions now we recognize that people will exist in the future and it is just to make decisions that attempt to maximize their well-being as well.

    It is definitely better for Jack and Jill to be in A rather than B – they have higher happiness. Why would it make sense to lower their own happiness in order to accommodate more people?

    I haven’t read Temkin’s approach, but I think he’s on to something in noticing that it isn’t transitive. I disagree that it is incompatible with utilitarianism though. People are still making moral decisions based on a rational calculation of well-being.

    James should certainly want to increase his happiness to Jack’s level. I’m not sure envy is the best approach; he certainly shouldn’t want to lower Jack’s level of well-being to make it more equitable. They’d be better off working together to increase James’s well-being. It’d be moral for Jack to help James in a way that increased happiness.

    Also, I’m not saying that James deserves his lower position, I was constructing a thought-experiment. Maybe Jack and Jill live someplace rural and had a child, James, and James won’t be fully happy unless he gets to live someplace urban. Now I don’t deny that once James exists it is possible that it’d be moral for Jack and/or Jill to accept a small loss of happiness if it were to result in a larger corresponding gain in well-being for James. They’re all conscious beings and now we can calculate the utility of various choices and their effects on well-being.

    Maybe I missed why the third party perspective threw you off. Do you not see that if two islands existed, one being A and the other B, from the third party perspective B is worse off than A? However, if a pre-society existed like A and a group of them split off and colonized the B island and had children who became that B island society, no one there is “worse” off. If no one from A split off, those B children wouldn’t exist. Maybe also, if no one from A split off, overpopulation would have happened and everyone in A would have suffered because of it. Just because B stands at a lower level doesn’t mean it wasn’t the best possible moral choice. But now that B exists, it should be the moral goal of A and B to raise everyone’s happiness as much as possible.

  6. Fraser
    November 10, 2010 at 1:23 am

    [Dan’s Edit: I updated Fraser’s comment to correct a copying malfunction]

    Let me make sure I’ve got the point of our conversation down straight: If utility is transitive between A and A+ or B, then there isn’t an argument against the mere addition paradox and the repugnant conclusion. If utility is not transitive, then utilitarianism doesn’t work. The escape would be arguing that utility is transitive in every way needed to make it work, but not transitive between A and A+ or B.

    I think I understand Temkin’s reasoning that you disagree with: he probably says utilitarianism requires transitivity between any and all moral circumstances, because otherwise utilitarians couldn’t compare circumstances and discern which was best. In other words, utility must always be transitive, otherwise it wouldn’t work as a universal measuring stick. Maybe Temkin would say that it doesn’t matter that we have introduced new people – we are still matching utility against utility. The populations A and A+ or B don’t need to have anything at all in common in order to be comparable under utilitarianism. Assuming I’m right about Temkin, and to repeat my paragraph above, perhaps you would argue he is using the idea of transitivity too loosely?

    This is what I’m grappling with: you made a sensible argument that the addition of new people (+ or B) is morally neutral because they did not exist before. At the same time, we recognize +/B people come into existence because of the actions of the original party (A), and the original party in practice has foresight that imbues them with a responsibility to optimize future life for +/B.

    I think the answer is going to come from better understanding of utility as a metric. Are we asking utility elsewhere to do things that are as transitive as comparing between a world (A) and the same world later with additional people (A+/B)? I suspect that two hypothetical worlds would be even more disparate if their entire populations were not related in any way (temporally, spatially, racially, whatever), but a utilitarian would still expect to look at policies of how they treat women, and come up with a conclusion about which is more moral. So I think a utilitarian would come down to saying that A and A+ or B are just two different population sizes – which are thus comparable in utilitarianism.

    What do you think?

    Other relevant points:

    To your first paragraph (which referred to my last): I was missing something – you argued in your first comment that Jack and Jill have an obligation to choose the better of “Scenario 2” and “Scenario 3” for James.

    My two prior comments turn out to be me trying to digest the idea that this transitivity isn’t universal, but utilitarianism still works. The third party perspective was what suggested to me that A and A+ are comparable, because the third party can only choose between A and A+ if utility is transitive between the two groups. When I identified that we ought to be considering A and B instead of A and A+, it was because that made the analysis in one way simpler -> I didn’t have to worry about A+ being internally unfair. I understand your point that it is counter-intuitive for Jack&Jill to lower their utility by choosing B over A+. This would be a sacrifice of utility for social uniformity. I’m not interested in arguing the merits of social uniformity or similar liberal political philosophies, I was just asking if you noticed how that could be portrayed as just. And I didn’t think you said James deserves a lower position – you made pretty clear that he doesn’t “deserve” anything (good or bad).

    • November 10, 2010 at 2:51 am

      Thanks for your continued thoughts on this topic. I appreciate you helping me think through these issues. I’ve just started thinking about these more technical philosophical issues so I don’t always have as strong a grasp of them as I’d like. I hesitated to even use phrases like “utilitarianism” because I think it saddles us (well, me anyway) with a lot of academic and technical baggage that I’m not prepared to rummage through. That said, I’m still enjoying this topic. I’m not even 1/5 of the way through Parfit’s book (although I have skipped around to different sections) so I’m still thinking my way through this stuff.

      I’m curious where you stand on this issue. Do you think we can say that A is either “better” or “not worse” than A+ or B, or do you think something else? Do you see a paradox or some other problem in attempting to maximize the well-being of conscious creatures?

      I have trouble finding a value that somehow trumps well-being. Is there any situation where you’d say, “promoting this value X results in a massive decrease in people’s well-being (maybe lots of death or various forms of suffering) but value X is still a good thing”? Even the values I hold dearest, like liberty or political equality, are ultimately trumped by well-being. For example, although I can’t imagine this happening in practice (which is partly why I favor those values), I can see myself saying, “wow, we completely restricted people’s personal freedoms and took away their right to vote and political assembly, but surprisingly everyone reports [assume they are accurate for sake of argument] that their lives are going much better than they ever thought possible – so I guess we should support restricting liberty and removing political equality.” I don’t believe that could ever happen to any extreme degree, but if I was convinced somehow that liberty and equality ran counter to well-being I’d have to conclude that liberty and equality weren’t good values to hold. In theory, if political oppression made people’s lives better wouldn’t you support it?

      I’ll do some more thinking on the questions you raised. I’m going to keep reading Parfit’s book. But I just want to say that we should try to get away from saying that A and B or A+ is better or worse without specifying who is better or worse. That’s a major point I was trying to make.

  7. Fraser
    November 11, 2010 at 1:00 am

    I do see a paradox – and a clever use of reductio ad absurdum (it didn’t result in a logic trap, but rather in something we didn’t want, a “repugnant conclusion”).

    You asked for my stance on morality, so I’ll say a few things, understanding that such rambles almost always sound self-indulgent and trite. And that it’s impossible for someone as wet-behind-the-ears as me to say enough to make sense, but not so much that you lose everyone’s interest.

    I’m probably more of a neophyte than you, to be honest, but I’m certainly interested in topics regarding morality. However, most of my thought goes towards arguing about morality and religion, because I have a few friends who I greatly admire and look up to who are devout Catholics.

    I’m skeptical that there are any moral “Truths.” What seems most likely to me is that some rules of conduct/mores are more successful for a society than others, and those societies end up surviving/outgrowing/defeating/convincing the less productive mores. Their relative success doesn’t make these rules universal and indisputable, and it doesn’t make them standards by which we can compare everything else: technologies change, our understanding of the universe changes, and so will our morality. I’m a product of my society, and I’m conditioned to hold this opinion about cannibalism and that opinion about charity, so I’ll follow them. I’m not going to try to alter my conscience because I intellectually believe that it has no epistemological basis, because it wouldn’t be productive for me to do so, and it would be damn difficult regardless – perhaps behaviorally impossible.

    I think the closest I can get to discovering an ideal moral philosophy is finding one that most closely matches the society I live in. And since the society I live in isn’t unanimous on some issues (say, abortion), I’m not ever going to pinpoint that correct policy, because one doesn’t exist. There’s wiggle room – and I just accept that most people may not be satisfied it exists.

    You seem very interested in politics, perhaps to the extent that your interest in philosophy is driven by your will to apply a philosophy to politics? I hope I’m not mis-labeling you. I’m not apolitical, but I suspect that the causality between political action and outcome is on average much weaker than we hope, and that (I think I’m breaking from true statistics lexicon here) the variability of that causality itself is enormous – sometimes politics works very well, but who knows when and how often that will be?

    I’m not sure how to say that concisely – did that make sense?

    It’s a different topic, but I’m curious if you hold any belief in Platonic ideals? THE MOST interesting thing I’m learning about today is the mathematical arguments for Platonic Truths in mathematics (Georg Cantor, Kurt Godel especially, Alan Turing, etc.) – things that are true despite our existence, things that are true that we actually know that we can never know, things that are true that we can understand, but our biology prevents us from intuiting. Math and science excite me a lot more than philosophy because they seem so much more fruitful.

    Having said that, the Mere Addition Paradox and Repugnant Conclusion truly were fascinating explorations (largely because I discovered that I agree with them), and I very much appreciate you sharing the idea with me.

    My intuition (and that’s all it is) about morality takes a lot from my (Popper influenced) understanding of the philosophy of science. Science is based on observation, which has limits (that we are constantly expanding). Those limits prevent us from being 100% accurate, so rather than expect we are, we expect that we are not and try to set up our theories in ways that are easily refutable (see: falsification). Philosophy (and morality within it) is much bolder than science, and makes sweeping statements that must always hold true. This is the foundation of my gripe with Harris, who seems to suggest that we can have a science OF morality. Certainly science can contribute, but…. Anyway, that’s a conversation for our other thread (which I was disappointed we ended).

    These opinions aside, I’m most skeptical of all that I have all the answers at 23 years of age, and will be rather disappointed in myself if I haven’t thoroughly and honestly changed my mind a few times over the course of things.

    • November 11, 2010 at 5:10 pm

      Do I hold any belief in Platonic ideals? If you mean in the moral sphere, I don’t think so. I don’t think a moral value can exist independent of its consequences on conscious beings. Murder isn’t wrong because murder is bad for its own sake; it is bad because it has negative consequences on the person murdered, the person’s friends and family, and for society as a whole. In math, I suspect that truths do exist independently.

      On politics, I actually agree that political action is very unlikely to have strong effects in most cases. A single person’s vote has about a lottery winner’s chance of swaying an election. Most political action conforms to perpetuate the status quo. Politicians have very little ability to strongly affect the economy. Etc. But it is a mistake to think that politics as a whole (political systems, legislation, etc) don’t have large effects on people’s lives. I’m not saying that politics has everything to do with how two countries differ (or became different) but just look at the United States and North Korea and tell me politics doesn’t make a huge difference in people’s lives.

      I actually agree that science and technological change may make the biggest differences in people’s lives. But I’d also argue that it is hard to separate cause and effect. For example, many people have argued (mostly persuasively, I believe) that advances in technology within the domestic sphere have done more for women’s rights than politics by allowing women to leave the home to get other jobs or free time to fight for political rights. But these inventions were developed, not coincidentally, in capitalist systems maintained by our particular political structure. There is an unbroken, complex, deterministic chain of events that interplay to cause future events – each serves an important role (of course, to varying degrees of importance).

      I also find science extremely interesting and read a lot about it. I don’t cover as much on my blog because I don’t think I have as much insight to offer on many of the things I read as I do with other topics more frequently covered here.

      I dispute that you are “conditioned” to feel a certain way about cannibalism and other things. Did your parents, friends, or teachers really need to spend much time at all persuading you that eating the flesh of humans is revolting? Human nature, which is mostly determined by our evolution and biology, conditioned us to feel disgust at eating people – probably because it would have been a risky behavior for our ancestors to frequently engage in given the potential for diseases, etc.

      Certainly our moralities will change, but that change isn’t arbitrary. I definitely disagree with your “moral relativism” that holds that your morality has to match the society you find yourself in. If everyone believed that, slavery would never have been extinguished from being seen as a morally appropriate practice. You write, “What seems most likely to me is that some rules of conduct/mores are more successful for a society than others, and those societies end up surviving/outgrowing/defeating/convincing the less productive mores.” But particular mores aren’t productive or unproductive arbitrarily. Societies that practice cannibalism aren’t unproductive by coincidence – cannibalism is seen to be immoral because it leads to negative consequences in those societies. The product of cannibalism and slavery is, generally, less well-being for individuals and society. You might be thinking that slavery led to certain productivity gains for some societies in history, but modern economics (among other fields) has shown us that alternatives like trade (read: voluntary cooperation) lead to healthier and richer societies (with the added bonus (!) of not condemning a class of people to suffering and servitude).

      I’d be happy to discuss the Moral Landscape further with you; it just seemed that we were hitting some roadblocks since you hadn’t read the book.

  8. Fraser
    November 11, 2010 at 10:31 pm

    I certainly agree that politics have large effects on people’s lives – it would be ludicrous for me to think otherwise.

    I agree with your point about science and technology – my reference to having an effect on our morality was only an example of the many mechanisms that affect culture (others include individuals, political parties, natural forces, domestication of our crops, etc).

    My explanation, as I expected it would, fell short of everything I needed to say: I don’t think our morality is entirely developed through behavioral conditioning – I wouldn’t dispute that we are born with some traits. But despite the sources of morality appearing to be different, both are due to the mechanism of conditioning, or perhaps more accurately, positive feedback loops. Keep in mind that the genetic code (and, as we’ve just recently discovered, some prion proteins that allow for non-Mendelian inheritance) defines who we are at birth, and develops through many generations of conditioning that we call natural selection. Nature vs nurture is really just “nurture over multiple generations” vs “nurture within one lifetime”. I worry about claiming something is “due to human nature,” because people connote that label with a permanent and universal quality, which it isn’t necessarily.

    Regardless, you seem to agree with that – even using the word “conditioned” again when discussing evolution. So I don’t think you have anything to contest really, just a clarification was needed on my behalf.

    I’m pleased you brought up the other challenge about change being arbitrary, and used slavery as an example. Most importantly, you pointed out that I admitted to being a product of my society, and you assumed that I therefore wouldn’t make a moral stance against something like slavery if I lived in a society that condoned slavery. Societies (networks of people who interact at some level, to be defined better below) are far more nebulous than you might give them credit for. Are you familiar with set theory? I like to think of it when I think of my definition of society.

    I really need to discuss two separate things at this point: what a society is, and your argument about morality not being arbitrary. I’ll start with the latter.

    In nature, change is complicated enough to appear arbitrary (a better word for it would be random), but is catalyzed through evolutionary forces. And that catalysis is what’s important, because it is what allows for the mutation to grow to a material population.

    Morality is very similar. You could make an argument that the random beginnings of a moral sentiment are due to an idea popping into one’s head after some sort of stimulation (or they may be seen as random on a different level: due to the right person being born in a society who had the right experience at the right time). Catalysis is a bit different because it can actually occur in our brains, which have a pre-frontal cortex that can simulate the future and predict outcomes, so we don’t need to test an idea, as nature does, before we can be convinced of its competitive superiority and catalyze the idea externally.

    The moral improvement of abolition of slavery is no doubt due in large part to humans’ ability to hold a “Theory of Mind,” or empathy in particular, which we developed early in our evolution. A slave-owner in the United States (or a Spartan who terrorized Helots, etc.) has the Theory of Mind, and is able to empathize with his slave, but chooses to ignore empathy over less fundamental but conflicting mores that suit his other interests (here we find another dimension of complexity – which is the stronger?). Would I be the smart one (more EQ than IQ) who recognized this, and took advantage of a chance to ride an evolutionary wave forward? I hope so, but who knows? Lots of people (the US slave owners included) turned out to be wrong morally (in that they were less productive). If they really took stock, they might have been able to recognize the more productive moral position, and changed their minds.

    I hope this has also addressed your next assertion that “… particular mores aren’t productive or unproductive arbitrarily.” This is my point as well, and we don’t disagree here.

    In regards to societies: Like I said, they are more nebulous than you give them credit for. I find it useful to liken societies to mathematical sets (see: Georg Cantor, although he did truly earth-shattering things with sets, and I’m just using them really simply). The set of all integers is part of the set of all real numbers, but not all real numbers are in the set of integers. Say we define a society by every person who shares one moral position. Lots of societies would be too similar in their makeup to discern (some may even be identical). So do many societies overlap with myself. But since most of the big moral questions seem to be solved (or maybe that’s the way it always seems?), there is an enormous amount of moral overlap between myself and everyone I interact with, so I tend to just see myself as part of one society. When we start talking socialized healthcare vs. libertarianism, it becomes useful again to see society as this nebulous mess of lots of societies.

  9. November 16, 2010 at 3:09 pm

    I think you may really enjoy this.

  10. Fraser
    November 17, 2010 at 11:59 pm

    These are great! I took a course very similar in college, where we read Bentham, Mill, Kant and Hobbes (as well as some less interesting thinkers from the 1970s who I don’t remember very much).

    I’d be interested to hear your perspective on episode 6
