More thoughts on MBA rankings
In my last post, I mentioned a sense among some commentators that we already know all the answers when it comes to course rankings. A good example is a recent post on Poets and Quants addressing reported alumni salaries. Financial Times (FT) data tends to deflate reported salaries in the US and inflate those in certain emerging markets. Rather than addressing the pros and cons of these differences in a measured way, the article carries a heavy helping of cynicism and a clear bias for the status quo.
More important than MBA rankings are the general themes: how to select and interpret data, how to overcome your own biases, and so on. These are issues that affect work life, political life and personal life. A typical example from my work in M&A is the way everyone is a top-tier dealmaker. Slice and dice the numbers enough and you can find a sector, timeframe, geography and deal-size threshold within which you're the number one advisor (or at least top three). The same is evident in private equity: with enough creative thinking, every fund can claim to deliver a top-quartile IRR.
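To make the point concrete, here is a minimal sketch of the league-table game, using entirely made-up banks, deals and values; it simply shows that once you are free to choose the slice, almost any advisor can top some table.

```python
# Toy illustration only: invented advisors, sectors, years and deal values.
# Narrow the filters enough and each bank ends up "number one" somewhere.
deals = [
    {"advisor": "Bank A", "sector": "Tech",   "year": 2011, "value": 900},
    {"advisor": "Bank A", "sector": "Energy", "year": 2012, "value": 150},
    {"advisor": "Bank B", "sector": "Tech",   "year": 2012, "value": 700},
    {"advisor": "Bank B", "sector": "Energy", "year": 2011, "value": 1200},
    {"advisor": "Bank C", "sector": "Tech",   "year": 2012, "value": 400},
    {"advisor": "Bank C", "sector": "Energy", "year": 2012, "value": 300},
]

def league_table(deals, sector=None, year=None, min_value=0):
    """Rank advisors by total deal value within the chosen slice."""
    totals = {}
    for d in deals:
        if sector and d["sector"] != sector:
            continue
        if year and d["year"] != year:
            continue
        if d["value"] < min_value:
            continue
        totals[d["advisor"]] = totals.get(d["advisor"], 0) + d["value"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(league_table(deals))                               # Bank B tops the overall table
print(league_table(deals, sector="Tech", year=2011))     # Bank A tops "Tech, 2011"
print(league_table(deals, sector="Energy", year=2012))   # Bank C tops "Energy, 2012"
```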
The above examples are benign in that the readers of a typical M&A pitch book are well informed and know the data has been massaged somewhat. And whoever prepared it is under no real delusions about what is and isn’t true.
Problems occur when the recipient of information takes it as gospel, either because the potential problems aren't immediately apparent, or because it fits conveniently with existing beliefs. It is difficult to provide data and information without some level of bias. We make decisions about what to show, how to show it, and what not to show. This isn't an excuse for rejecting all information as useless, because we need to take a stance and make decisions at the end of the day. We just need to realise the limitations of the data we latch onto.
But back to MBA rankings. An extract from the P&Q article:
So given the importance of these numbers, you would think they would pretty much match up in both surveys, right? Turns out there can often be a significant difference between what Forbes and The Financial Times report. In fact, the differences are often so great that they should give a reader of these rankings great pause. It’s yet another reason why the rankings out today–widely read around the world–should be worth not much more than a grain of salt…..
…If you’re noticing a pattern here, you’re right. The Financial Times’ averages consistently underestimate the income of MBA alumni [In US schools]–except when the reverse occurs at a different set of schools that happen to fare quite well in the Financial Times’ rankings….
Read the full post here
After spending a whole page implying error and/or numerical wizardry on a scale not seen since the advent of mortgage-backed securities, the author eventually gets around to why the differences arise. They're driven principally by the FT's use of purchasing-power-parity (PPP) adjusted data, along with various other adjustments.
This clearly makes it harder for the reader to do simple dollar-based mental math about earning potential. Yet no attempt is made to address the possible advantages of such adjustments. Instead, we're told that clean and simple data is always better. However:
- Might it be useful for a candidate intending to work in India or China to see data that takes into account differences in cost of living across countries? For such a candidate, is the 'clean' absolute dollar value of average post-graduation earnings a good indication of purchasing power? (A rough sketch after this list illustrates the difference.)
- Isn't it actually better to have data that isn't skewed by the one lucky person each year who walks away earning three times the average? Or by those earning particularly low salaries?
- Might presenting adjusted and unadjusted numbers be a much better solution for all?
- How else can rankings be adjusted (if at all) to reflect a growing shift of activity towards emerging markets?
- Is any form of global ranking fundamentally useless? Are separate tables for Asia, America, Europe etc. the only way to go?
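On the first two questions, here is a minimal sketch with invented salary figures and invented PPP conversion factors; it is not the FT's actual data or methodology, just an illustration of how a PPP adjustment and a median (rather than a mean) can tell quite different stories.

```python
# Illustrative only: salaries and conversion rates below are made up.
from statistics import mean, median

# Hypothetical reported post-MBA salaries in local currency
us_salaries_usd = [110_000, 115_000, 120_000, 125_000, 400_000]  # one outlier
india_salaries_inr = [2_500_000, 3_000_000, 3_200_000, 3_500_000]

MARKET_RATE_INR_PER_USD = 83   # assumed market exchange rate
PPP_RATE_INR_PER_USD = 24      # assumed PPP conversion factor

# Raw dollar conversion vs a PPP-adjusted view of the Indian salaries
india_market_usd = [s / MARKET_RATE_INR_PER_USD for s in india_salaries_inr]
india_ppp_usd = [s / PPP_RATE_INR_PER_USD for s in india_salaries_inr]

# The outlier drags the US mean well above the median
print(f"US mean: {mean(us_salaries_usd):,.0f}  US median: {median(us_salaries_usd):,.0f}")

# The same Indian salaries look very different depending on the conversion used
print(f"India, market-rate mean: {mean(india_market_usd):,.0f}")
print(f"India, PPP-adjusted mean: {mean(india_ppp_usd):,.0f}")
```

The point is not that one view is right, but that each answers a different question: the market-rate figure tells you what the salary buys internationally, the PPP figure what it buys locally, and the median what a typical graduate actually earns.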
Such questions are ignored, which is surprising. The reader is simply supposed to accept that a methodology which produces unexpected results is not credible: we already know what the ranking should be, so the job of any new ranking is to confirm those expectations.
Sure, we can assume that the FT rankings have some underlying agenda. But it's no less plausible to claim that established US rankings decline to use a similar methodology because they know it would be to the detriment of US schools. We achieve little by assuming only one party is dedicated to objective truth. As discussed in the opening paragraphs, it's hard for information not to carry some bias.
Whether we're talking about MBA rankings, M&A statistics or private equity returns, it is important to have an opinion, but also to keep an open mind.