Models have a habit of missing the target
Troy Media – by Pat Murphy
It seems these days we’re deluged with news stories and opinion pieces citing expert conclusions. And a critical part of the argument is often based on the “findings” of a model. It’s as if that’s supposed to clinch the deal.
But a recent paper from Australian economist Richard Denniss reminds us that models can be less than they seem. And while Denniss focuses on the use of economic modelling in Australia, the lessons apply well beyond it.
Begin with a definition. A model is just a mathematical representation of the linkages between specific variables. For instance, a simple economic model might set out to predict the impact that changes in economic growth would have on taxation revenue.
Models are very complex
In this case, modelling jargon would describe economic growth as the independent (influencing) variable while taxation revenue would be the dependent (influenced) variable. Let’s call them influencers and influenced. And let’s also bear in mind that most models are much more complex than one influencer and one influenced.
In practice, models rely on a number of assumptions, the first of which is that the variables identified as the influencers are truly the significant determinants. But just because two variables move together doesn’t necessarily mean that one is driving the other. Correlation isn’t causality.
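The correlation trap is easy to demonstrate with a toy sketch (all figures invented for illustration): two series can move together strongly simply because a third factor drives both, so a model that treats one as the "influencer" of the other has assumed its causal story, not discovered it.

```python
import random

random.seed(0)

# Two series driven by a common third factor (hot weather): they
# correlate strongly, yet neither one drives the other.
weather = [random.random() for _ in range(100)]
ice_cream_sales = [10 * w + random.gauss(0, 0.5) for w in weather]
drownings = [3 * w + random.gauss(0, 0.5) for w in weather]

def corr(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strongly positive -- but ice cream does not cause drowning.
print(round(corr(ice_cream_sales, drownings), 2))
```

A model fed these two series alone would happily "find" a linkage; only knowledge from outside the model reveals the common cause.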
And even if the model gets the influencers right, that doesn't guarantee it's a useful predictor. Although the value of Y may be a function of the value of X, if you don't know what X is going to be, then it's of limited use for predicting Y. At best, it can give you a range of scenarios.
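The point can be made concrete with a minimal sketch of the growth-and-revenue example from earlier (the function and every number in it are hypothetical): even if the model linking the two is exactly right, an unknown future growth rate leaves us with scenarios, not a prediction.

```python
def tax_revenue(growth_rate, base_revenue=100.0, elasticity=1.2):
    """Predicted revenue given a growth rate (all figures invented)."""
    return base_revenue * (1 + elasticity * growth_rate)

# Next year's growth (X) is unknown, so the best the model can offer
# for revenue (Y) is a range of scenarios.
scenarios = {"pessimistic": -0.01, "baseline": 0.02, "optimistic": 0.04}
for name, g in scenarios.items():
    print(f"{name}: revenue = {tax_revenue(g):.1f}")
```

The arithmetic inside `tax_revenue` could be flawless and the forecast would still only be as good as the guess about X fed into it.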
Built-in linkage assumptions also matter a lot. For instance, if a model assumes strong linkages between marginal tax rates, the number of people looking for work, and the number of jobs created, it will inevitably “find” that a policy of cutting marginal rates increases employment. But the validity of the analysis depends entirely on the assumptions made, and the model itself isn’t proof that those assumptions are correct.
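Denniss's point about baked-in linkages can be shown in miniature (the linkage constant below is invented, which is exactly the point): when the model hard-codes a positive relationship between rate cuts and jobs, the "finding" is predetermined.

```python
# Assumed linkage, not an empirical result: each percentage point of
# tax cut is simply declared to create this many jobs.
JOBS_PER_POINT_CUT = 50_000

def predicted_jobs_created(rate_cut_points):
    """The model's 'finding' for a given cut in marginal rates."""
    return JOBS_PER_POINT_CUT * rate_cut_points

# Whatever cut we feed in, the model "finds" employment gains --
# the conclusion was assumed on line one, not discovered.
print(predicted_jobs_created(2))
```

Running the model proves nothing about the real world; it only echoes back the assumption encoded in `JOBS_PER_POINT_CUT`.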
Unfortunately, this distinction can get lost. To quote Denniss: “economic models are frequently used to support conclusions that have, in fact, already been assumed.” It’s like traveling in a circle.
Denniss notes another limitation. Current unknowns – even unknowables – make long-run predictions dubious. He puts it colourfully: “anyone interested in knowing what will happen to the Australian economy in 20 years time should probably ask an astrologer, but only if the astrological advice comes for free.”
As noted earlier, the problems that modellers encounter when they come into contact with the complexities of the real world aren’t confined to Australian economics. In The Ascent of Money, Niall Ferguson tells the story of the disaster that befell Long-Term Capital Management (LTCM) in the 1990s.
It’s a story of super-smart people and hubris. A pair of Nobel Prize-winning economists got together with some financial backers to launch a business, the essence of which was that their ability to build financial models would allow them to successfully play the more exotic aspects of the markets.
It worked spectacularly for a while, so much so that LTCM borrowed immense amounts of money for further speculation. The mathematical models said there was virtually no risk. They had it sussed.
Except they didn’t. When the August 1998 Russian default sparked a panic, the models fell apart, wiping out more than 90 per cent of LTCM’s value.
But surely, you say, these flaws don’t apply to something that’s been subjected to peer review. Don’t reviewers always aggressively interrogate the underlying assumptions, data, calculations and analysis?
Not much scrutiny in peer review
Not necessarily. When questioned about publishing the South Korean stem cell experiments which were subsequently unmasked as fabricated, the editor of the journal Science noted that their peer review doesn’t involve such scrutiny: “What we can’t do is ask our peer reviewers to go into the laboratories of the submitting authors and demand their lab notebooks.”
Given the use of expert studies in setting public policy, this could be unsettling. For instance, a 2009 paper by McCullough & McKitrick reports that “some government staff are surprised to find out that peer review does not involve checking data and calculations, while some academics are surprised that anyone thought it did.” Oh my, say it isn’t so!
Troy Media columnist Pat Murphy worked in the Canadian financial services industry for over 30 years. Originally from Ireland, he has a degree in history and economics.