The way we talk about super needs to change

It’s August: we’re finishing our taxes, getting our finances in shape, and reviewing letters from our financial advisers and super funds. We don’t pay much attention to these until we read a piece in the weekend paper about how super funds overall performed over fiscal 2017.

We ask ourselves, “How did my super fund perform relative to its peers?” And it’s then that we fall into one of the greatest investment traps: comparing how we performed against others. We feel frustrated by the should have/could have: “Had I moved my money to ‘The Fund Formally Known as Growth,’ I would have made an additional 1%, damn it!”


The great con

While we in the business may smile at this, the honest truth is that we perpetuate these often random and misleading rankings. If we rank in the top decile, we milk such rankings through our PR and social media. If we fall in the bottom quartile, we justify why our rankings were low relative to our peers. All along, though, we know that we’d never choose any investment based on how well it performed over 12 months, but we play the agents’ game and let others perpetuate this great con.

Given a choice, of course, we would all love more. But “wanting more” is more a question of wealth management than pension immunisation. If we remember why the superannuation levy was introduced, it was to lessen the burden on future generations. While admittedly social security is funded individually, the demographic time bomb, coupled with high health-care inflation, places some of this fiscal burden on future generations – our children, for example.

Pension management, on the other hand, is about immunising our future retirement outlays with our accumulated savings. It’s for this reason that we express long-term targets of CPI+, rather than a pension fund obtaining top-quartile results. The latter is an “agency risk” (e.g. selling our fund), not a principal one (achieving long-term results of CPI+). Annual return rankings are a game we all play, however flawed and unwarranted.

 

The true metrics of success

It’s fair to want to understand how a fund’s performance matches its intended long-term objectives, but I’ve never seen a “Balanced Fund” described as “one that beats its competitors”. Its objectives are expressed through risk and return metrics such as CPI+, downside risk of “X%”, and a negative return in one of every seven years.

Any actuary understands this logic, but we belittle members by feeding them misleading one-year rankings, with next to no indication of how much risk was taken to deliver that outcome. A fund with greater exposure to risky assets should outperform one with a more defensive exposure, no? But does that mean the more defensive fund deserves a less appealing ranking solely on this basis? What happens when markets turn negative? Is the “underperformer” then considered a “performer”?

It’s true that many of these tables do present five- and ten-year returns, but oddly enough, these returns are rarely ranked alongside the one-year figures. All else being equal, ten-year numbers would span one to two market cycles, so they would be more indicative of how a fund performed through risk-on/risk-off periods. Even here, however, five- and ten-year rankings are still inconsistent with how balanced funds are principally marketed – through risk metrics (e.g. maximum downside risk, or a negative return in one of every seven years).

 

Comparing to DB funds

Our goal shouldn’t be to avoid rankings, of course, but to disclose one that’s closer to how pension savings are forced on employees in the first place – through risk metrics. Divide five-to-ten-year returns into two groups: those that delivered their own CPI+ target, and those that didn’t. Then rank the five- and ten-year figures within each group, first on returns and then on Sharpe ratios (matching returns to the unit of risk taken to achieve them).
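The two-step ranking described above can be sketched in a few lines of code. This is an illustrative sketch only: the fund names, returns, volatilities, CPI figure and CPI+ margin below are all hypothetical, and the Sharpe ratio is used in its simplest annualised form.

```python
# Illustrative sketch: group funds by whether they met a CPI+ target,
# then rank within each group by Sharpe ratio rather than raw return.
# All fund names and figures below are hypothetical.

def sharpe_ratio(annual_return, volatility, risk_free_rate=0.02):
    """Excess return per unit of volatility (annualised)."""
    return (annual_return - risk_free_rate) / volatility

# Hypothetical 10-year annualised figures: (name, return, volatility)
funds = [
    ("Fund A", 0.078, 0.09),
    ("Fund B", 0.085, 0.14),   # highest raw return, but much more risk
    ("Fund C", 0.061, 0.06),
    ("Fund D", 0.052, 0.08),
]

cpi = 0.025            # assumed average CPI over the period
target_margin = 0.035  # e.g. a "CPI + 3.5%" objective

met, missed = [], []
for name, ret, vol in funds:
    (met if ret >= cpi + target_margin else missed).append((name, ret, vol))

# Within each group, rank by risk-adjusted return, not raw return alone.
for label, group in (("Met CPI+ target", met), ("Missed CPI+ target", missed)):
    ranked = sorted(group, key=lambda f: sharpe_ratio(f[1], f[2]), reverse=True)
    print(label)
    for name, ret, vol in ranked:
        print(f"  {name}: return {ret:.1%}, Sharpe {sharpe_ratio(ret, vol):.2f}")
```

Note how the hypothetical Fund B posts the highest raw return yet ranks last on a risk-adjusted basis among the funds that met their target – exactly the distinction a one-year league table obscures.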

A colleague of mine once said that supermarket apples aren’t priced on Sharpe ratios – nor, for that matter, on league tables. But pension/superannuation objectives are. Defined benefit (DB) funds are not rated purely on rankings, but on how far the fund sits in deficit or surplus. If Keith Ambachtsheer is right when he says that at the end of the day it’s someone’s pension, then we should rank our defined contribution (DC) schemes on more realistic risk/return metrics rather than on this ridiculously short-term beauty parade.

If we as investment professionals don’t stand up against this marketing game, who else will?


The opinions expressed in this content are those of the author shown, and do not necessarily represent those of No More Practice Education Pty Ltd or its related entities. All content is intended for a professional financial adviser audience only and does not constitute financial advice.