Category Archives: Decision Theory

Loss aversion and reference points. Or why mathematical formalism is important for behavioural economics.

UBC and the University of Hong Kong hold a joint economic theory workshop every summer. The 2014 edition was held over the last week; unfortunately I was only able to attend a couple of the presentations, but one in particular was very instructive.

Collin Raymond, a postdoc at Oxford, presented his work on Stochastic Reference Points, Loss Aversion and Choice Under Risk (joint with Yusufcan Masatlioglu) [1].

Raymond’s presentation focused on the relationships between a bunch of different economic models of decision making. The “standard” model of economic decision making, the Subjective Expected Utility (SEU) model, works pretty well in most situations, but there are some situations where some people regularly violate it. In response, economists have developed a number of models that generalize (expand) the SEU model. Raymond focused on one of these, the Koszegi-Rabin model, and on how it relates to the others.
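
To fix ideas (my notation, not Raymond’s), SEU says the DM evaluates an act f by its expected utility under a subjective probability p over states:

\[
V(f) \;=\; \sum_{s \in S} p(s)\, u\big(f(s)\big),
\]

where u is a utility function over outcomes and p is the DM’s subjective belief over the set of states S. The models discussed below all relax some piece of this formula.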

Now, if you are not familiar with decision theory, this might sound a little strange to you. Why would we need a presentation on how these different models relate to each other? Shouldn’t we know this already? Well, we often don’t because these things are pretty complicated.

In decision theory, a model has two parts: an axiomatization and a representation. Generally, we think of a model as starting with a bunch of axioms – statements about how people might behave that are (at least) plausibly true. An example of an axiom is: if you prefer A to B, and you prefer B to C, then you must prefer A to C. Starting from a bunch of axioms, we can then derive a representation – a formula that “represents” the axioms in a simple and useful way. Axioms are often rather complicated and only subtly different between models, and it can sometimes take years to work out how the axioms of one model relate to the axioms of another.
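
To make the axiom/representation distinction concrete, here is the textbook example (a simplification, not the axiom systems in Raymond’s paper): transitivity together with completeness (plus a continuity condition in richer settings) delivers a utility representation,

\[
A \succsim B \quad\Longleftrightarrow\quad U(A) \ge U(B),
\]

so the axioms are statements about behaviour, while the representation is a formula that summarizes exactly the behaviour those axioms permit.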

We often have strong intuitions about how models should relate to each other. One of the most striking things about Raymond’s presentation was just how wrong those intuitions can be. In other words, the presentation demonstrated just how important it is to go through the formal mathematics when dealing with decision theory models.

The Koszegi-Rabin model that Raymond considered is a model of reference dependence. In these models, the decision maker (DM) has a reference point against which they evaluate all of their options. The DM likes getting outcomes that are better than the reference point, and dislikes getting outcomes that are worse than it. The key point is that the DM dislikes outcomes below the reference point more than they like outcomes above it. We say that the DM is loss averse.
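
A stylized version of the gain–loss utility behind these models (a common simplification; the full Koszegi-Rabin model also makes the reference point stochastic and determined within the model) is

\[
u(x \mid r) \;=\; m(x) \;+\; \mu\big(m(x) - m(r)\big),
\qquad
\mu(z) =
\begin{cases}
\eta z, & z \ge 0,\\
\eta \lambda z, & z < 0,
\end{cases}
\qquad \lambda > 1,
\]

where r is the reference point, m is ordinary “consumption” utility, and \lambda > 1 is the loss-aversion coefficient: losses relative to r are scaled up by \lambda, so they hurt more than same-sized gains help.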

There are many different models of reference dependence. The key difference between them is how the reference point is determined. In what was perhaps the first model of reference dependence, which won Daniel Kahneman the Nobel Memorial Prize[2], the reference point was taken to be exogenous (i.e. determined outside of the model). Modern models of reference dependence instead include a way of determining the reference point inside the model.

Another class of models has no reference point; instead, the DM is pessimistic in the sense that they overweight bad outcomes (i.e. they behave as if bad outcomes are more likely to occur than they actually are).
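
A quick numerical illustration of what overweighting means (my example, not one from the paper): take a 50–50 lottery over 100 and 0. An SEU maximizer and a pessimist evaluate it as

\[
V_{\text{SEU}} = 0.5\,u(100) + 0.5\,u(0),
\qquad
V_{\text{pess}} = 0.4\,u(100) + 0.6\,u(0),
\]

so the pessimist acts as if the bad outcome were 60% likely rather than 50%, and will turn down some gambles that an SEU maximizer with the same u would accept.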

Raymond compared the Koszegi-Rabin model with two other reference-dependent models and two pessimistic models. Amazingly, the behaviour allowed by the Koszegi-Rabin model is actually a subset of the behaviour allowed by the two pessimistic models, but it overlaps with the other two reference-dependent models only at a single point.

What this tells us is that, depending on the exact form of the models, a reference-dependent model can sometimes act just like a pessimistic model (and not like other reference-dependent models). This is a very counter-intuitive result, and it shows the value of being careful and thorough when working with decision theory models. Representation theorems are very powerful mathematical tools, but they are full of subtleties that can take a lot of study to fully comprehend. In this case, as in many others, formal mathematics has taught us something that we would never have discovered without it.


[1] I’ve only skimmed the paper, so most of what I write here will be based off what I picked up during the presentation.

[2] His co-author Amos Tversky passed away before the prize was awarded.

Awareness of Unawareness

Awareness of unawareness is the title of a new paper by Karni and Vierø. As mentioned in my previous post, I shall start by describing the paper for a non-economist audience, before giving a more technical discussion at the end.

For non-economists:

This paper addresses the question: How should we model decision making when there are aspects of our environment about which we are unaware? This is a very delicate question to pose. If we are unaware of something, then how can it affect our decision making process? Yet if we include it in our decision model, then we are no longer unaware.

Karni and Vierø take an approach where they split the world into two different groups of states (edit – in decision theory, a state of the world is a complete description of your decision-making environment; if we don’t know something about the environment, this is equivalent to not knowing which is the true state of the world). In the first group are the states of the world that we are fully aware of and can describe completely. In the second group are the states that we cannot yet fully conceive of. In the future we might learn more about the world, and some of the states will move from the second group to the first.

To facilitate the modelling process, the states in the second group can be treated together as a lump of things that we don’t really know about yet. Although we don’t know anything about these states, we might still be able to form an estimate about how likely it is that they are important. Similarly, we might have a sense about whether these unknown outcomes are likely to be good or bad – we might be either fearful or optimistic about the unknown.
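
Very loosely (this is my shorthand for intuition, not the actual Karni–Vierø representation), you can think of an act f being evaluated as a sum over the conceivable states plus a residual term for the unconceived lump:

\[
V(f) \;=\; \sum_{s \in S_{\text{aware}}} p(s)\, u\big(f(s)\big) \;+\; p_{0}\, u_{0},
\]

where p_0 is the weight the DM attaches to “something I haven’t thought of happens” and u_0 captures how good or bad they expect that to be – a fearful DM carries a low u_0, an optimistic one a high u_0.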

The Karni and Vierø framework gives us a formal system within which we can quantify and describe these ideas of increasing or decreasing ignorance, and of fear or optimism about the unknown. This might not sound like much, but it is important to get the foundations in place before we can start answering bigger, real-world questions about how ignorance and unawareness affect decisions.

There are other models that capture the same sort of things as the Karni and Vierø framework; each of these frameworks has its own technical strengths and weaknesses. This is a good thing – modelling unawareness is a tricky thing to do, and it would be amazing if the first attempt turned out to be the best one. At this point the literature is very young, and it remains to be seen which approach will be the most useful for answering more applied questions.

For economists:

Continue reading