Monthly Archives: August 2014

World Cup betting pool: the outcomes

I have had a request to follow up on my previous post on the World Cup betting pool that I ran in my department. Specifically, I was asked to address whether the betting satisfied the four desiderata that I outlined in the previous post.

For a brief refresher: I used a model in which each team (or group of teams) in the World Cup is treated as a room in a share house, and each participant in the betting pool as a tenant in the house. The goal is then to match rooms to tenants, and to allocate shares of the rent across tenants, so that:

  • the outcome should be efficient
  • no one should envy anyone else’s room/rent combination
  • the sum of the rents should be equal to the total rent payable for the house
  • the mechanism should be incentive compatible (i.e. no one should be able to gain by lying about their preferences)

So, did my betting pool satisfy these four criteria? The short answer is that it is impossible to guarantee all four at once.

The longer answer, in summary: if we can assume that no one lied about their preferences, then the other three conditions are automatically satisfied. If we think this assumption might be violated, then the first three conditions will still be satisfied provided that people misrepresented their preferences optimally. If we think that people might have misrepresented their preferences sub-optimally, then condition three will still be satisfied, but there is nothing we can say about the first two conditions.
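To make conditions two and three concrete, here is a minimal sketch in Python that checks envy-freeness and budget balance for a given outcome. The tenants, room names, valuations and rents below are made up purely for illustration; they are not the actual numbers from the pool.

```python
# Hypothetical example: check envy-freeness and budget balance for a
# rent-division outcome. All names and numbers are illustrative only.

# valuations[tenant][room]: how much each tenant values each room
# (i.e. each team or group of teams in the betting-pool version)
valuations = {
    "Alice": {"Brazil": 60, "Germany": 30, "Rest": 10},
    "Bob":   {"Brazil": 40, "Germany": 45, "Rest": 15},
    "Carol": {"Brazil": 30, "Germany": 25, "Rest": 45},
}

assignment = {"Alice": "Brazil", "Bob": "Germany", "Carol": "Rest"}
rents = {"Brazil": 50, "Germany": 35, "Rest": 15}
total_rent = 100  # the total rent payable for the house (the pot)

# Condition 3: the individual rents sum to the total rent.
budget_balanced = abs(sum(rents.values()) - total_rent) < 1e-9

# Condition 2: no tenant envies anyone else's room/rent combination,
# i.e. each tenant's surplus from their own room is at least as large
# as the surplus they would get from any other room at its posted rent.
def surplus(tenant, room):
    return valuations[tenant][room] - rents[room]

envy_free = all(
    surplus(t, assignment[t]) >= surplus(t, assignment[other])
    for t in valuations
    for other in valuations
)

print("budget balanced:", budget_balanced)  # True with these numbers
print("envy free:", envy_free)              # True with these numbers
```

With these illustrative numbers both checks pass. Efficiency (condition one) would require checking that no other matching of rooms to tenants yields a higher total valuation, and incentive compatibility (condition four) is a property of the mechanism rather than of any single outcome, so neither can be verified from one allocation in this way.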

For more details, read on!

Continue reading

Quantity precommitment and Bertrand competition yield Cournot outcomes

This classic paper by David Kreps and Jose Scheinkman is quite possibly my favourite economics paper of all time. It’s easy to explain and understand, but still makes a very deep point that connects two of the most famous economic models in a simple way.

Way back in 1838, Antoine Augustin Cournot wrote what is, I believe, the first mathematical model of competition between firms. The model makes reasonable predictions that are, in a broad sense, supported by empirical observations. One of these predictions is that as more firms enter a market it should become more competitive and prices should decrease.
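A standard textbook illustration of that last prediction (my own addition, not taken from the paper): suppose inverse demand is $P = a - bQ$ and there are $n$ identical firms, each producing at constant marginal cost $c < a$. The symmetric Cournot equilibrium is

$$q_i^{*} = \frac{a - c}{b(n+1)}, \qquad P^{*} = \frac{a + nc}{n+1},$$

so the markup $P^{*} - c = \frac{a - c}{n+1}$ shrinks as $n$ grows: more firms, lower prices.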

However, the Cournot model has one deeply unsatisfactory dimension: firms set the quantity that they wish to sell, and then the market determines the price at which that quantity can be sold. This is most certainly not how firms actually make decisions; when a customer goes shopping, the store posts a price and the customer decides how much to buy.

In 1883, Joseph Bertrand came along and wrote a model where firms set prices, and then the market determines the quantity that will be sold at those prices. This is a much more satisfactory foundation for a model of firm behaviour. Unfortunately, the Bertrand model generates very poor predictions. One of its implications is that two firms are enough to generate extremely intense competition and low prices. Another is that adding more firms to the market doesn’t change the outcome. Neither of these implications is compatible with empirical observations.
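The logic behind both implications is the standard undercutting argument (again a textbook sketch rather than anything specific to Bertrand’s original paper): with a homogeneous good and constant marginal cost $c$, if a rival posts any price $p_j > c$, a firm can set $p_i = p_j - \varepsilon$, capture the entire market, and earn a positive profit. The only prices immune to this are

$$p_1^{*} = p_2^{*} = c,$$

so two firms already price at marginal cost, and adding a third or a tenth changes nothing.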

So we have one model with good assumptions but inaccurate implications, and one model with poor assumptions but reasonable implications. Is there a way that we can resolve this tension?

It took 100 years, but in 1983 Kreps and Scheinkman found the resolution: the key is to use a two-stage model. In the first stage, firms install production capacity. Then, in the second stage, the firms compete over prices à la Bertrand. But here’s the kicker: the outcomes produced by this model are exactly the same as those of the Cournot model.
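A minimal numerical sketch of how the two stages fit together (my own illustration with the simplest possible assumptions, glossing over the rationing-rule details that the paper treats carefully): let inverse demand be $P = 1 - Q$ with costless production. The Cournot duopoly outcome is

$$q_1^{C} = q_2^{C} = \tfrac{1}{3}, \qquad P^{C} = 1 - \tfrac{2}{3} = \tfrac{1}{3}.$$

In the two-stage game, suppose each firm installs capacity $k_i = \tfrac{1}{3}$. In the pricing stage, neither firm can do better than the market-clearing price of $\tfrac{1}{3}$: cutting its price only lowers the margin on units it was already selling at full capacity, while raising its price leaves it facing the residual demand $\tfrac{2}{3} - p$, whose optimal price is again $\tfrac{1}{3}$ with the same profit of $\tfrac{1}{9}$. The capacity choices thus reproduce the Cournot quantities and price.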

So we now have a model with both good assumptions and reasonable implications. Of course, this model is still highly stylised and leaves a lot of potentially important features unmodelled, but it does provide an extremely compact way of reconciling two very important economic models. Pretty neat.

Additional notes for economists

Continue reading

Loss aversion and reference points. Or why mathematical formalism is important for behavioural economics.

UBC and the University of Hong Kong hold a joint economic theory workshop every summer. The 2014 edition was held over the last week; unfortunately I was only able to attend a couple of the presentations, but one in particular was very instructive.

Collin Raymond, a postdoc at Oxford, presented his work on Stochastic Reference Points, Loss Aversion and Choice Under Risk (joint with Yusufcan Masatlioglu) [1].

Raymond’s presentation focused on the relationships between a bunch of different economic models of decision making. The “standard” model of economic decision making, the Subjective Expected Utility (SEU) model, works pretty well in most situations. But there are some situations in which some people regularly violate SEU. In response, economists have developed a number of models that generalize (expand) the SEU model. Raymond focused on one of these, the Koszegi-Rabin model, and how it relates to other models.

Now, if you are not familiar with decision theory, this might sound a little strange to you. Why would we need a presentation on how these different models relate to each other? Shouldn’t we know this already? Well, we often don’t because these things are pretty complicated.

In decision theory, a model has two parts: an axiomatization and a representation. Generally, we think of a model as starting with a bunch of axioms – statements about how people might behave that are (at least) plausibly true. An example of an axiom is: if you prefer A to B, and you prefer B to C, then you must prefer A to C. Starting from a bunch of axioms, we can then derive a representation – a formula that “represents” the axioms in a simple and useful way. Often axioms are rather complicated and only subtly different between models, and it can sometimes take years to work out how the axioms of one model relate to the axioms of another model.
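A classic example of this axioms-to-representation structure (my illustration, not something specific to the talk) is the von Neumann-Morgenstern theorem: if preferences over lotteries are complete and transitive, and satisfy continuity and independence, then they can be represented by expected utility, i.e. there is a utility function $u$ over outcomes such that

$$p \succsim q \iff \sum_{x} p(x)\,u(x) \ge \sum_{x} q(x)\,u(x).$$

The axioms are statements about behaviour; the formula is the compact object we actually work with when building models.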

Often we can have strong intuitions about how we think models should be related. One of the most striking things about Raymond’s presentation was just how wrong our intuition can be. In other words, Raymond’s presentation demonstrated just how important it is to go through the formal mathematics when dealing with decision theory models.

The Koszegi-Rabin model that Raymond considered is a model of reference dependence. In these types of models, the decision maker (DM) has a reference point against which they evaluate all of their options. The DM likes getting things that are better than the reference point, and dislikes getting things that are worse than the reference point. The key point is that the DM dislikes things that are worse than the reference point more than they like things that are better than it. We say that the DM is loss averse.
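To make this concrete, here is a stylized version of the kind of utility function that appears in these models (a simplified sketch in the spirit of the Koszegi-Rabin formulation, not their exact specification). For an outcome $x$ evaluated against a reference point $r$,

$$U(x \mid r) = m(x) + \mu\bigl(m(x) - m(r)\bigr), \qquad \mu(t) = \begin{cases} \eta\, t & \text{if } t \ge 0,\\ \eta \lambda\, t & \text{if } t < 0,\end{cases}$$

where $m$ is ordinary consumption utility, $\mu$ is gain-loss utility, and $\lambda > 1$ means that losses relative to the reference point are weighted more heavily than equal-sized gains, which is exactly the loss aversion described above.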

There are many different models of reference dependence. The key difference between them is how the reference point is determined. In the first(?) model of reference dependence, which won Dan Kahneman the Nobel Memorial Prize,[2] the reference point was taken to be exogenous (i.e. determined outside of the model). Modern models of reference dependence include a way of determining the reference point inside the model.

Another class of models has no reference point; instead, the DM is pessimistic in the sense that they overweight bad outcomes (i.e. they behave as if bad outcomes are more likely to occur than they actually are).
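One simple way to formalize this kind of pessimism (again my own illustration rather than the specific models in the paper): for a lottery paying $x_g$ with probability $p$ and $x_b < x_g$ otherwise, the DM evaluates

$$V = w(p)\,u(x_g) + \bigl(1 - w(p)\bigr)\,u(x_b), \qquad w(p) \le p,$$

so the bad outcome receives more weight than its true probability $1 - p$.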

Raymond compared the Koszegi-Rabin model with two other reference-dependent models and two pessimistic models. Amazingly, the behaviour allowed by the Koszegi-Rabin model is actually a subset of the behaviour allowed by the two pessimistic models, but overlaps with the other two reference-dependent models only at a single point.

What this tells us is that, depending on the exact form of the models, sometimes a reference-dependent model acts just like a pessimistic model (and not like other reference-dependent models). This is a very counter-intuitive result, and it shows the usefulness of being careful and thorough when working with decision theory models. Representation theorems are very powerful mathematical tools, but they are full of subtleties that can take a lot of study to fully comprehend. In this case, as in many others, formal mathematics has taught us something that we would never have discovered without it.

 

[1] I’ve only skimmed the paper, so most of what I write here is based on what I picked up during the presentation.

[2] His co-author Amos Tversky passed away before the prize was awarded.

Cheap talk can be valuable

I was a huge fan of the blog Cheap Talk. I say “was” only because the posting rate there has trickled down to almost nothing (sure, you could say the same thing about this blog, but I also have practically no readers). But occasionally they still put up a new post that, more often than not, includes terrific passages such as this one:

So, is Hachette, a French company, confused because in France they put price on the x-axis and quantity on the y-axis so marginal revenue is upside down? Surely Jean Tirole can sort that out for you.