Monday, March 25, 2013

On the importance of Excel

Four years ago, when global markets were going completely haywire, one of the more important events that helped "turn things around" was FASB's relaxation of mark-to-market accounting standards, a decision that allowed banks to value many of their "distressed" assets based on, basically, whatever their internal models said they were worth. We can argue all day about the long-term costs and benefits of this decision (as you might imagine, I'm pretty aggressively negative on the decision), but ultimately the short-term impact was to place a significant amount of the world's financial stability on the shoulders of one computer program—Microsoft Excel.

We'll turn things over to Baseline Scenario's James Kwak for some more color on the topic (all emphasis mine):
I spent the past two days at a financial regulation conference in Washington... In his remarks on the final panel, Frank Partnoy mentioned something I missed when it came out a few weeks ago: the role of Microsoft Excel in the “London Whale” trading debacle (note: read more about it here)...
To summarize: JPMorgan’s Chief Investment Office needed a new value-at-risk (VaR) model for the synthetic credit portfolio (the one that blew up) and assigned a quantitative whiz (“a London-based quantitative expert, mathematician and model developer” who previously worked at a company that built analytical models) to create it. The new model “operated through a series of Excel spreadsheets, which had to be completed manually, by a process of copying and pasting data from one spreadsheet to another.”
The internal Model Review Group identified this problem as well as a few others, but approved the model, while saying that it should be automated and another significant flaw should be fixed. After the London Whale trade blew up, the Model Review Group discovered that the model had not been automated and found several other errors. Most spectacularly,
“After subtracting the old rate from the new rate, the spreadsheet divided by their sum instead of their average, as the modeler had intended. This error likely had the effect of muting volatility by a factor of two and of lowering the VaR ...”
Microsoft Excel is one of the greatest, most powerful, most important software applications of all time... it provides enormous capacity to do quantitative analysis, letting you do anything from statistical analyses of databases with hundreds of thousands of records to complex estimation tools with user-friendly front ends. And unlike traditional statistical programs, it provides an intuitive interface that lets you see what happens to the data as you manipulate them.
As a consequence, Excel is everywhere you look in the business world—especially in areas where people are adding up numbers a lot, like marketing, business development, sales, and, yes, finance...
But while Excel the program is reasonably robust, the spreadsheets that people create with Excel are incredibly fragile. There is no way to trace where your data come from, there’s no audit trail (so you can overtype numbers and not know it), and there’s no easy way to test spreadsheets, for starters. The biggest problem is that anyone can create Excel spreadsheets—badly. Because it’s so easy to use, the creation of even important spreadsheets is not restricted to people who understand programming and do it in a methodical, well-documented way.
This is why the JPMorgan VaR model is the rule, not the exception: manual data entry, manual copy-and-paste, and formula errors. This is another important reason why you should pause whenever you hear that banks’ quantitative experts are smarter than Einstein, or that sophisticated risk management technology can protect banks from blowing up. At the end of the day, it’s all software. While all software breaks occasionally, Excel spreadsheets break all the time. But they don’t tell you when they break: they just give you the wrong number.
Yikes. As Kwak later points out, this is likely a systemic problem, not just an unfortunate one-time mistake. If the modeler's error had served to overstate risk, it no doubt would have been caught quickly, since it would have affected the bank's bottom line. But because senior executives and traders were explicitly hoping for a model that understated the risk profile of their portfolios, the "mistake" here went unnoticed and uncorrected, an outcome so absurd that it's almost comical.
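To make the quoted error concrete, here's a tiny sketch (with made-up rate numbers; the report doesn't give the actual figures) of what dividing a rate change by the sum of the old and new rates does, versus dividing by their average as the modeler intended:

```python
# Hypothetical illustration of the spreadsheet bug described above:
# the rate change was divided by the SUM of the old and new rates
# instead of their AVERAGE.

old_rate = 0.04
new_rate = 0.06

diff = new_rate - old_rate

intended = diff / ((old_rate + new_rate) / 2)  # divide by the average
buggy = diff / (old_rate + new_rate)           # the bug: divide by the sum

print(intended)  # 0.4
print(buggy)     # 0.2 -- exactly half the intended value
```

Since the sum of two numbers is always twice their average, the buggy formula shrinks every measured rate change by a factor of two, which is precisely the "muting volatility by a factor of two" effect the report describes.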


If a mortgage officer at a small regional bank made a similar mistake—say, inadvertently doubling the annual income number for a loan applicant, and then approving said applicant for a number of low-interest loans—that officer would undoubtedly be fired. But here, at JPMorgan, we have a guy who made a similar error on a much larger scale, with much riskier assets and a whole lot more money on the line, and the whole world shrugs its shoulders and goes on about its business. That's scary.

What are some of the other bank models out there telling us about banks' risk profiles and the strength of their capital bases? Should we expect those models to be any better than this one? I suspect not, and I think the (over)reliance on Excel will likely lead us to some very negative outcomes down the line. Of course, as I've said before, this doesn't mean that we should blame the model if and when things go horribly wrong—models are only as good as the people who write (and monitor) them. Instead, we need to start blaming the people who write and implement these models, and holding them accountable for their errors.

[Baseline Scenario]
