The financial crisis caused many economists to re-examine old questions in microeconomics. One is the extent to which people are rational in the economic sense ‒ choosing the best option available within their budget. If they aren’t, a number of results, such as the predictions about what kind of equilibria will exist in a market, no longer hold ‒ and over 100 years of microeconomic research has to be thrown out.
Fortunately, the news so far isn’t catastrophic! Instead, it points to people being boundedly rational: that is, they more or less try to do the best they can, but their decision-making isn’t perfect and they’re prone to a number of systematic biases.
Economists have been drawing on results from psychology to refine their models. The area — largely within microeconomics — that studies this problem is called behavioural economics. It uses lab experiments and other data to work out how to adapt the utility-maximising model, making it more realistic and improving its predictions.
Some interesting examples have already come out of how people’s biases affect their decisions. For a start, take the way a problem is framed. If your doctor told you that you could take medicine to clear up a cold in seven days, but if you left it to itself it could take a week, would you take the medicine? Seven days and a week are, of course, exactly the same length of time, but the first option is framed more positively, and so behavioural economics predicts that you’d choose the medicine!
Picking up some behavioural economics is a good idea, whether you want to model those choices or just arm yourself with some self-defence against certain selling tactics. When, for example, restaurant sommeliers describe a more expensive bottle of wine to you, they’re framing your choice in some way. If you know a wine is expensive, you’re more likely to rate it as higher quality, even though you’d be unable to distinguish it from cheap plonk in a blind test.
Finding people’s biases and exploring their effects leads to interesting results. One is that more choice isn’t necessarily a good thing in all circumstances. In one experiment, people were more likely to buy from a stall offering a small number of products than from one with a larger range. Why? Well, although more people stopped at the larger display, they were overwhelmed by the amount of choice available and didn’t buy anything!
When risk or uncertainty is involved, the biases are sometimes stunning! Ask yourself whether you’d prefer to be given £100 or enter a lottery where you have a 1 in 10 chance of winning £1,000.
You may not realise it but, in expected-value terms, you’ve been offered the same thing in both cases: you find the expected outcome by multiplying each outcome by its probability, so the lottery is worth 0.1 × £1,000 = £100, exactly the same as the certain offer. Even so, most people prefer the certain money to the risk: 9 times out of 10 the lottery pays nothing!
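The calculation above is simple enough to sketch in a few lines. The following snippet (the function name and offer labels are illustrative, not from the text) computes the expected value of each offer by multiplying every outcome by its probability and summing:

```python
def expected_value(outcomes):
    """Expected value of a gamble: sum of payoff x probability.

    outcomes: list of (payoff, probability) pairs whose
    probabilities sum to 1.
    """
    return sum(payoff * prob for payoff, prob in outcomes)

# The certain offer: £100 with probability 1.
certain_offer = [(100, 1.0)]

# The lottery: a 1-in-10 chance of £1,000, otherwise nothing.
lottery_offer = [(1000, 0.1), (0, 0.9)]

print(expected_value(certain_offer))   # 100.0
print(expected_value(lottery_offer))   # 100.0
```

Both offers come out at £100, which is exactly why a purely expected-value-maximising agent would be indifferent between them, while real people typically are not.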
If people aren’t rational in the way the utility model says, does it matter? Well, the answer depends on what you want to model and why you want to do so. If you want to see how markets work on average and don’t mind having to qualify your answer with a ‘more or less’, it probably doesn’t particularly matter.
Even if you do use behavioural data to inform your model, it may not predict what a given individual does in a given situation. But when you need to explain why a predicted equilibrium isn’t holding, adapting your models in the light of behavioural findings may be just what you need.