Archive for the ‘Dan Ariely’ Category

Financial vs Mathematical Inequality

January 19, 2011

Dan Ariely ran a study applying John Rawls’s “veil of ignorance” to wealth distribution and posted the results on his blog.

[Chart from Ariely’s blog showing the wealth-distribution results]

Unsurprisingly, Americans’ estimates were wildly off from the actual distribution, and they preferred a more equal one. If you read into the study, it turns out most Americans favor the Swedish level of distribution.

This is all very fascinating and instructive, but I can’t help but worry that a study like this highlights another type of inequality… of math and logic skills. The respondents were asked to “indicate what percent of wealth they thought each of the quintiles ideally should hold, again starting with the top 20% and ending with the bottom 20%.” I seriously wonder whether the answers would change if you explained that having the 2nd and 3rd quintiles each hold 20% of the wealth means they would be perfectly equal, with no disparity in wealth between them, and that having the top quintile hold only about 30% of the wealth means the “richest” wouldn’t be very rich in any relative sense. If you look at the graph that averages everyone’s preferences in the study, it seems like Americans prefer that almost no differences in wealth exist at all. Do Americans understand that distributing wealth the way they did doesn’t mean wealth is progressively tiered down, but means that everyone has almost the same wealth, with the richest having slightly more and the poorest slightly less?
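
To make the arithmetic concrete, here’s a quick back-of-the-envelope sketch of my own (the quintile shares below are rough approximations of the figures discussed above, not the study’s exact data) that scores each distribution with a Gini coefficient, where 0 means perfect equality and 1 means one person holds everything:

def gini(shares):
    """Approximate Gini coefficient from quintile shares (poorest to richest)."""
    total = sum(shares)
    cum_wealth, area = 0.0, 0.0
    for s in shares:
        prev = cum_wealth
        cum_wealth += s / total
        area += 0.2 * (prev + cum_wealth)  # trapezoid slice under the Lorenz curve
    return 1 - area

# Rough quintile wealth shares, poorest to richest -- approximations of the
# figures discussed above, for illustration only.
actual_us         = [0.1, 0.2, 4, 11, 84]   # top quintile holds ~84%
respondents_ideal = [11, 14, 21, 22, 32]    # top quintile holds ~32%
perfectly_equal   = [20, 20, 20, 20, 20]

for name, dist in [("actual US", actual_us),
                   ("respondents' ideal", respondents_ideal),
                   ("perfectly equal", perfectly_equal)]:
    print(f"{name:>20}: Gini = {gini(dist):.2f}")

Perfect equality scores 0, the averaged “ideal” lands somewhere around 0.2, and the actual distribution comes out around 0.7. The preferred society isn’t a gently tiered one; it’s nearly flat.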

Unfortunately, I can’t find an actual poll asking specifically whether rich people deserve their extra wealth, but I remember reading that many Americans feel they do. [Source anyone?] Greg Mankiw’s latest paper is certainly representative of that view. Ariely is a great behavioral economist who regularly shows that the answers people give depend on how the question is framed. There is no doubt that Americans would prefer less inequality – and little doubt that less inequality would be a good thing – but I’m left wondering more about the possible disparity in their understanding.

(h/t Zach)


The Utility of Friendship

August 2, 2010

Over at BloggingheadsTV, Robert Wright and Robert George have a great discussion on natural law and morality. I think readers of this blog will enjoy the whole video, as it touches on many topics that often come up here. Free will vs determinism even comes up briefly. On the question of natural law vs utilitarianism, which was the overriding theme of the dialog, I found myself in agreement with Wright and had similar questions for George’s philosophy. I must confess, though, that I was very impressed with George – much more than I anticipated considering his connections to evangelical religion and its social positions. He’s clearly a very careful thinker. I’m not sure if I was more surprised by that or by my head nodding along with Wright, with whom I’ve had serious disagreements before. Let me share the portion of their talk that I want to comment on (but do watch the whole thing).

The sticking point here is whether friendship is intrinsically good in and of itself – independent of its positive attributes. A utilitarian would argue, as Wright does, that we value friendship because friendship is more valuable to us (makes us feel better, is useful, etc.) than not having friendship – say, having simply a business or trade relationship instead. A natural law theorist argues that we can see friendship is valuable in itself because we commit ourselves to the institution even when it may not provide us with any specific utility (and even when it’s depressing, burdensome, etc.). George also argues that its good is “intelligible” to us. That sort of begs the question for me: it seems to be saying that something is good just because we think it is good.


I was mostly with George on friendship until Wright challenged him with the evolutionary psychological explanation for friendship, to which George didn’t seem to respond adequately. George basically just argued that that explanation was reductionist, but I think he failed to grasp the ultimate vs proximate distinction implicit in the rationale, which I’ve discussed previously. I want to expand on Wright’s argument.


Wright argues that natural selection has “implicitly calculated” the utility of friendship; therefore, it feels good to commit to friendships. That seems to counter George’s example of our participating in a friendship even in specific situations that don’t have explicit utility – visiting a friend in the hospital when it is sad, takes time, etc. Why? Because if we didn’t do those things we wouldn’t get the benefits of friendship. That of course doesn’t imply that we’re just selfish frauds who fake our way through the tough parts to get the helpful and fun parts. Natural selection has made us actually desire real friendship – cheaters, fakers, and friendship free riders will be spotted and are looked on negatively by society. If by nature (and through nature) we were all phonies, friendship as an institution would be less useful. In his superb New York Times piece on our moral instincts, Steven Pinker puts it this way.

In his classic 1971 article, Trivers, the biologist, showed how natural selection could push in the direction of true selflessness. The emergence of tit-for-tat reciprocity, which lets organisms trade favors without being cheated, is just a first step. A favor-giver not only has to avoid blatant cheaters (those who would accept a favor but not return it) but also prefer generous reciprocators (those who return the biggest favor they can afford) over stingy ones (those who return the smallest favor they can get away with). Since it’s good to be chosen as a recipient of favors, a competition arises to be the most generous partner around. More accurately, a competition arises to appear to be the most generous partner around, since the favor-giver can’t literally read minds or see into the future. A reputation for fairness and generosity becomes an asset.

Now this just sets up a competition for potential beneficiaries to inflate their reputations without making the sacrifices to back them up. But it also pressures the favor-giver to develop ever-more-sensitive radar to distinguish the genuinely generous partners from the hypocrites. This arms race will eventually reach a logical conclusion. The most effective way to seem generous and fair, under harsh scrutiny, is to be generous and fair. In the long run, then, reputation can be secured only by commitment. At least some agents evolve to be genuinely high-minded and self-sacrificing — they are moral not because of what it brings them but because that’s the kind of people they are. (my emphasis)
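
To see the tit-for-tat dynamic Trivers and Pinker describe in action, here’s a toy simulation of my own (the payoff numbers are made up; nothing here comes from the article) of repeated favor exchange: a reciprocator that returns whatever it received last round, matched against an unconditional cheater and against another reciprocator.

# Toy iterated favor exchange with standard prisoner's-dilemma payoffs.
# The numbers are arbitrary; only their ordering matters.
PAYOFF = {            # (my move, their move) -> my payoff
    ("C", "C"): 3,    # both cooperate: mutual benefit
    ("C", "D"): 0,    # I do the favor, they cheat me
    ("D", "C"): 5,    # I cheat a cooperator
    ("D", "D"): 1,    # mutual defection
}

def tit_for_tat(history):
    """Cooperate first, then simply copy whatever the partner did last round."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Take favors, never return them."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []          # per-player view: (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

print("tit-for-tat vs cheater:     ", play(tit_for_tat, always_defect))  # (9, 14)
print("tit-for-tat vs tit-for-tat: ", play(tit_for_tat, tit_for_tat))    # (30, 30)

The cheater grabs a one-time windfall and then gets shut out, while two reciprocators keep compounding the gains from cooperation – the raw material for the reputational arms race Pinker goes on to describe.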

And piggybacking further on the ultimate/proximate distinction: we ultimately want friendship because of natural selection; we proximately want it because we actually value friendship – not just the “feeling” of friendship, as on George’s friendship machine. So the reason friendship seems to George like it’s just inherently good and written into the laws of nature is not because it’s a good for its own sake but because it’s good for its ultimate utility. This inner psychology helps explain why it feels weird to treat friends and other “social” relationships the way we would business relationships.

Dan Ariely argues that “social relationships have a lot of advantages. They protect us from future fluctuations, they give us trust and confidence, and all kinds of other things.”

Anyone see a flaw in this line of reasoning?

At the end of their bloggingheads dialog, Wright and George go over different moral dilemmas to try to expose the flaws in each school. Although I’m siding with Wright, this doesn’t mean I don’t have some questions for utilitarians (who are a type of consequentialist). Don’t intentions matter?
Here’s my on-the-fly thought experiment (sorry, it’s no trolley problem):

Say you could save 2 children who might fall off a bridge by jumping out and grabbing them, but another child is standing near the edge and, for whatever reason, that child would be at risk of being knocked off in your attempt to save the other two. If you knock the 1 child off and save the other 2, is that okay for a consequentialist? If 2 children were knocked off to save the 1, is that now morally wrong? Is it the same moral wrongness as deliberately killing 2 children to save 1? If not, doesn’t that show that intentions matter? Furthermore, does the level of risk affect the morality of the choice and, if so, why? If you can save the 2 kids but you understand that you’d have only a 1% chance of knocking the single kid off, is that more morally acceptable than if you thought you had a 99% chance? Also, does the probability of saving the kid(s) affect the morality of the choice as well?
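
For what it’s worth, here’s how a naive expected-value consequentialist might score the bridge case (a sketch of my own; the probabilities, and the assumption that each child’s life counts as exactly one unit, are doing all the work):

# Naive expected-value scoring of the bridge rescue, treating each child's
# life as exactly one unit of value. p_save and p_knock are assumptions
# chosen purely for illustration.
def expected_net_lives(kids_at_risk, p_save, p_knock):
    """Expected lives saved minus expected lives lost, versus doing nothing
    (in which case the at-risk kids fall and the bystander is untouched)."""
    return kids_at_risk * p_save - 1 * p_knock

scenarios = [
    ("sure rescue, 1% risk to bystander",  expected_net_lives(2, 1.0, 0.01)),
    ("sure rescue, 99% risk to bystander", expected_net_lives(2, 1.0, 0.99)),
    ("coin-flip rescue, 50% risk",         expected_net_lives(2, 0.5, 0.50)),
]
for label, value in scenarios:
    print(f"{label}: expected net lives saved = {value:+.2f}")

On this crude accounting all three versions of the rescue come out positive, and therefore “permissible,” which is exactly why the questions about intention and risk above seem to demand something more than a body count.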

"Proximate Determinism"

July 28, 2010

Earlier this week, I discussed the implications of determinism for moral responsibility and my distinction between ultimate and proximate causes of our decisions. To recap a bit: ultimately, it seems likely that all of our actions are the result of an infinite regress of prior causes, but proximately we make decisions based largely on reasons (even if those reasons are ultimately rooted in that same deterministic chain). Free will may be an illusion, but it is an illusion that we’re forced to live in.


I concluded by saying I was going to discuss a form of what I termed “proximate determinism.” Although I believe all of our actions are probably not generated by a free will, I want to distinguish between actions that are decided by our proximate will generator (i.e. our reason) and those that aren’t. To be less obtuse: I think this free will vs determinism discussion is as good a bridge as any to subjects such as behavioral economics and cognitive psychology. Originally, I just wanted to show you an excellent Dan Ariely video, but The New York Times and Jerry Coyne have expanded on their original posts, which adds some more dimension to this topic.


Most interestingly, they point to some fascinating research showing that our brains’ neurons (in other primates’ brains as well) register our decisions before we’re conscious of them. Coyne writes, “that implies that the ‘decision’ isn’t really a conscious one—that is, it doesn’t conform to our notion of free will.” He goes on to discuss that research further here.

[T]he brain activity that predicted which button would be pressed began a full seven seconds before the subject was conscious of his decision to press the left or right button. The authors note, too, that there is a delay of three seconds before the MRI records neural activity since the machine detects blood oxygenation.  Taking this into account, neuronal activity predicting which button would be pressed began about ten seconds before a conscious decision was made.

This seems to fit with findings from researchers in other fields who argue that our decisions are sometimes made “irrationally.” Here’s Dan Ariely’s TED talk, fittingly titled “Are we in control of our own decisions?”

***Note to Andreas: you may enjoy the research that used The Economist’s subscription choices (which I actually recall seeing).***


As Ariely shows us, the default settings in our lives play an enormous role in our decision making. It’s obvious that our choices are shaped by various cognitive illusions; it’s becoming clearer that free will itself may be just another one.


[update]: Jonah Lehrer shares his thoughts on free will.

The fact is, we are deeply wired to believe in our freedom. We feel like willful creatures, blessed with elbow room and endowed with the capacity to pick our own breakfast cereal.

In my last post I reached a similar conclusion: “We are hardwired by the universe to act as though we have free will.” 
