The 1955 automobile price war?

 

Today’s offering is another one of my favourite papers. Tim Bresnahan has a fantastic paper that attempts to explain prices and quantities in the US automobile industry from 1954 through 1956. When you see the footnote mentioning that the paper was the second chapter of his PhD thesis (and, presumably, his job market paper), it is all the more remarkable.

I’ll begin with a caveat. The conclusions of this paper may not be correct. In fact, a much more recent paper by David Rapson makes exactly that claim. But this doesn’t detract from the contribution of Bresnahan’s work. Bresnahan developed a novel and elegant methodology to answer his stated question but, given data limitations and the state of econometrics at the time he was writing, he needed to make several assumptions in order to complete his analysis. The more modern, non-parametric approach taken in Rapson’s paper suggests that some of Bresnahan’s assumptions may be violated, but Bresnahan’s contribution should be judged within the context of the state of economics in the late 1970s.

In 1955, the quantity of cars sold in the US rose and prices fell, relative to both 1954 and 1956. Why did this occur? At the time Bresnahan wrote his paper, this was a puzzle to which no one had found a satisfactory answer. Bresnahan’s conclusion was that the automobile industry was in a state of tacit collusion in both 1954 and 1956, and that there was a price war in 1955. He was able to answer this question using only data on prices and quantities sold (broken down by make and model) and on the characteristics of the different models.

The first step Bresnahan took was to build a model of demand for automobiles. To do this, he needed to aggregate the characteristics of each model into a single quality dimension over which he could estimate preferences. The quality weights assigned to each characteristic are determined via simultaneous estimation with the supply and equilibrium conditions. In other words, Bresnahan allows the data to tell him how consumers are willing to trade off, for example, horsepower against vehicle size.
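
As a rough illustration of what this aggregation looks like (a toy sketch, not Bresnahan’s estimation code; the characteristic values and weights below are invented, whereas in the paper the weights are parameters to be estimated):

```python
# Toy sketch: collapse each model's characteristics into a single quality index.
# In the paper the weights are estimated jointly with the supply, demand and
# equilibrium conditions; here both weights and characteristics are made up.
import numpy as np

# Hypothetical characteristics: [horsepower, weight (1000s of lbs), length (feet)]
characteristics = np.array([
    [100, 2.8, 15.0],   # small hatchback
    [140, 3.4, 17.0],   # mid-size sedan
    [200, 4.2, 18.5],   # luxury sedan
])

# Illustrative quality weights (in the paper, estimated rather than chosen by hand)
weights = np.array([0.010, 0.30, 0.05])

quality = characteristics @ weights   # one number per model
print(quality)                        # each model's position on the quality ladder
```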

Once each car has been assigned a point on the quality scale, we can start to think about how intensely different cars compete with each other. For example, a small hatchback doesn’t really compete with luxury sedans so much as it competes with other small hatchbacks. Now, consider two cars that are very similar to each other. If they are in competition with each other, their prices will be driven down close to marginal cost. If they are colluding, the prices of both cars will be significantly higher. For cars that are very different from all the other cars on the market, the effect of competition is much less important.
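
A minimal sketch of this logic, assuming a simple vertical-differentiation setup in which consumers differ only in how much they value quality (the numbers are made up and this is not Bresnahan’s actual demand system):

```python
# Toy vertical-differentiation demand: a consumer with taste v for quality gets
# utility v * quality - price from each model and buys the best option (or
# nothing if every option gives negative utility). All numbers are invented.
import numpy as np

def demand(prices, qualities, tastes):
    """Number of consumers in `tastes` choosing each model."""
    surplus = np.outer(tastes, qualities) - prices          # rows: consumers, cols: models
    choice = np.where(surplus.max(axis=1) > 0, surplus.argmax(axis=1), -1)
    return np.array([(choice == j).sum() for j in range(len(prices))])

qualities = np.array([1.0, 1.1, 3.0])      # two near-identical models and one distant one
tastes = np.linspace(0.0, 5.0, 10_000)     # consumers' willingness to pay for quality

before = demand(np.array([1.00, 1.15, 5.00]), qualities, tastes)
after = demand(np.array([1.00, 1.10, 5.00]), qualities, tastes)  # the close rival cuts its price

print(before, after)
# A small price cut by the close substitute takes essentially all of the first
# model's customers, while demand for the distant high-quality model barely moves.
```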

So, which cars are priced cooperatively and which are priced competitively? Obviously, cars made by the same company will be priced cooperatively against each other. For cars made by different companies, Bresnahan estimates what prices would be if the firms were cooperating and what they would be if the firms were competing. We can then check which model (cooperative or competitive pricing) fits the data best. In 1954 and 1956 the cooperative model fits best; in 1955 the competitive model does.
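
To make the contrast concrete, here is a toy version of that comparison, again under invented demand and cost numbers rather than anything from the paper: solve for prices once assuming each firm takes its rival’s price as given, and once assuming the firms jointly maximise total profit.

```python
# Toy comparison of competitive vs. cooperative pricing for two rival firms
# selling quality-differentiated models. This only illustrates the logic of the
# test; the demand system, costs and price grid are all invented.
import numpy as np

qualities = np.array([1.0, 1.3])        # firm A's model and firm B's model
costs = np.array([0.5, 0.7])            # constant marginal costs
tastes = np.linspace(0.0, 5.0, 2_001)   # consumers' taste for quality
grid = np.linspace(0.5, 6.0, 56)        # candidate prices (steps of 0.1)

def demands(prices):
    surplus = np.outer(tastes, qualities) - prices
    choice = np.where(surplus.max(axis=1) > 0, surplus.argmax(axis=1), -1)
    return np.array([(choice == j).sum() for j in range(2)])

def profits(prices):
    return (prices - costs) * demands(prices)

# Competitive benchmark: each firm repeatedly best-responds to the other's price.
comp = np.array([2.0, 2.0])
for _ in range(25):
    for j in range(2):
        candidates = []
        for g in grid:
            trial = comp.copy()
            trial[j] = g
            candidates.append(profits(trial)[j])
        comp[j] = grid[int(np.argmax(candidates))]

# Cooperative benchmark: pick the price pair that maximises joint profit.
coop, best = None, -np.inf
for pa in grid:
    for pb in grid:
        total = profits(np.array([pa, pb])).sum()
        if total > best:
            coop, best = (pa, pb), total

print("competitive prices:", comp)   # markups over cost stay modest
print("cooperative prices:", coop)   # joint pricing pushes both prices well above cost
```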

Bresnahan did an amazing amount with very limited data, but to do so he needed to make some strong functional form assumptions. For example, he assumed that marginal costs were log-linear in quality and that the relationship between product characteristics and quality had a square-root form. These are rather arbitrary functional form assumptions, but they were necessary to turn the raw data into an estimable set of equations. As the sophistication of non-parametric econometric techniques has increased over time, it is now possible to get a lot more mileage out of a given set of data without making such arbitrary assumptions.
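
Written out, assumptions of this flavour look roughly like the following (an illustrative parameterisation only; the exact functional forms in the paper may differ in detail), where z_{jk} denotes characteristic k of model j:

```latex
% Illustrative only -- not necessarily the paper's exact parameterisation.
% Quality x_j as a square-root index of characteristics z_{jk},
% and marginal cost log-linear in quality:
x_j \;=\; \Bigl(\sum_k \beta_k \, z_{jk}\Bigr)^{1/2},
\qquad
\ln MC_j \;=\; \mu_0 + \mu_1 \, x_j
```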

Bresnahan’s paper has made two important contributions to my understanding of empirical industrial organization (as I am not really up to speed with the modern empirical IO literature, I will refrain from commenting on the impact the paper has had on the broader literature). Firstly, it demonstrates just how much can be achieved with limited data and a rigorous, theory-driven empirical approach. Secondly, and perhaps even more importantly, it shows the limitations of econometric specifications that impose arbitrary functional forms.

It seems to me that empirical industrial organization is one area of economics with a lot of potential to take advantage of advances in non-parametric statistical techniques. Reading more empirical IO papers is one of those things that is always on my to-do list but that I never get around to actually doing… hopefully this will change.

 

 
