How to build a better YFYS test
Many of those working to fix the Your Future Your Super (YFYS) reforms have been dismayed at the scale of the problem. It turns out it’s quite difficult to build a test that comprehensively measures super fund performance while retaining the ‘bright line’ aspect that’s supposed to motivate members of underperforming funds to move.
But two schools of thought are now emerging: throw the test out and start from scratch, or make some minor tweaks and add a second, qualitative test to be administered by the regulator.
There are three options for a ‘bright line’ performance test, Frontier says in its submission to Treasury’s review of the reforms (penned by David Carruthers): elect for a simple solution, ignoring the complexity; elect for a complex solution and attempt to present it simply; or elect for a two-tier solution, providing a simple solution for members and a complex solution for regulators.
“The YFYS test is a variation of the first and third options – it has a simple comparison tool for members and a slightly more complex test,” the submission says. “However, it satisfies neither criterion. It neither passes the ‘pub test’ of matching with member outcomes nor is it a sophisticated assessment without unintended consequences.”
Frontier’s idea for the test is an “integrated two-tier solution”, which would entail ranking fund investment performance over 10 years compared to relevant peers in the YourSuper comparison tool, with underperforming funds then subjected to a second, more detailed test. That test would include multiple metrics and the regulator’s prudential investigations to “better assess whether a fund has the ‘right to remain’”. If a fund fails, it would either be subject to a 12-month remediation period or, if that isn’t possible, the regulator would ensure its orderly exit. Frontier’s solution is similar to those proposed by others, and is emerging as a likely front runner for the future of the reforms.
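To make that proposed flow a little more concrete, the sketch below sets out the decision logic roughly as Frontier describes it. The specifics – the Fund structure, the bottom-quartile cut-off and the function names – are illustrative assumptions, not details specified in the submission.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Fund:
    name: str
    ten_year_return: float  # annualised 10-year return, net of fees


def two_tier_assessment(fund: Fund, peer_returns: List[float],
                        passes_detailed_test: Callable[[Fund], bool]) -> str:
    """Illustrative sketch only: the 'integrated two-tier' idea in outline.

    Tier 1: rank 10-year performance against relevant peers (as the
    YourSuper comparison tool would). Tier 2: only flagged underperformers
    face the detailed multi-metric test and prudential review.
    The bottom-quartile cut-off is an assumed threshold for illustration.
    """
    # Tier 1: what share of peers outperformed this fund over 10 years?
    share_beaten = sum(r > fund.ten_year_return for r in peer_returns) / len(peer_returns)
    if share_beaten < 0.75:  # not in the bottom quartile (assumption)
        return "pass"

    # Tier 2: detailed, multi-metric test plus the regulator's assessment
    if passes_detailed_test(fund):
        return "pass"

    # Failed both tiers: 12-month remediation, or orderly exit if not feasible
    return "remediate or exit"
```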
“The YFYS performance test has been a success if the desired outcome was to reduce the number of superannuation funds,” the submission says. “However, it negatively impacted outcomes for many members who chose to switch away from those funds. A number of funds which failed the initial test produced market-leading investment returns in the subsequent year.”
“In the future, with the test now built into funds’ investment objectives, we expect fewer funds to fail. However, because the test is now an additional constraint on funds’ investment strategies, we do not expect this to improve long-term member outcomes, and it will reduce outcomes if funds respond to the test by investing more short-term to limit the risk of underperforming the test in future years.”
Frontier’s proposed multiple metrics would include: return versus investment objective (the “most logical assessment”, albeit one that is less effective at differentiating between good and bad funds, since short-term performance will likely be driven more by markets than by skill); actual returns, unadjusted for risk; comparison against a simple reference portfolio, which could be easily understood by consumers; and an implementation benchmark, which is the current performance test approach.
“A well-designed collection of metrics measured across multiple time periods and measures of investment risk, all else equal, is superior to an individual metric,” the submission says. “Any individual metric will have shortcomings, and these can be reduced through the judicious use of additional metrics.”
“Frontier’s recommendation would be to use a test based on multiple metrics and multiple time periods, similar to the APRA heatmaps. However, we would suggest an overall result be determined from these metrics – this could be as simple as a ‘pass’ if more than half of the metrics individually show an above-threshold result.”
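As a rough illustration of how that aggregation rule might work, the sketch below applies the “more than half of metrics” pass rule to the four metrics Frontier lists. The metric names and the example outcomes are hypothetical, and the thresholds behind each True/False result are assumed rather than drawn from the submission.

```python
def overall_result(metric_results: dict) -> str:
    """Aggregate per-metric outcomes into a single result using the simple
    rule floated in the submission: 'pass' if more than half of the
    individual metrics show an above-threshold result."""
    passed = sum(metric_results.values())
    return "pass" if passed > len(metric_results) / 2 else "fail"


# Hypothetical outcomes for the four metrics Frontier lists
example = {
    "return_vs_investment_objective": True,
    "actual_return_unadjusted_for_risk": True,
    "vs_simple_reference_portfolio": False,
    "vs_implementation_benchmark": True,  # the current YFYS-style test
}
print(overall_result(example))  # -> pass (3 of 4 metrics above threshold)
```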