A critique of APRA’s ‘Heatmap’: the good and the bad


APRA this week released its first ‘Heatmap’, designed to give super fund members a clearer picture of the performance of their own, and other, MySuper products. While questions have already been raised, Frontier Advisors today publishes a comprehensive analysis as a preview.

The Frontier paper is careful to emphasise the firm’s overall support for APRA’s aims in publishing “at least once a year”, as the regulator has said, a colour-coded guide to help members understand what’s going on with their super. As simplistic as this sounds, cutting through technical jargon at the same time as having experts examine the finer points is undoubtedly a good addition to other available analyses of MySuper products and strategies. But it should not be the only source of information, the paper points out. There are dangers too.

Frontier has necessarily based its analysis on what APRA has published to date in its information paper, married with its own information sources and experience working with institutional investors. It has made several assumptions in order to replicate APRA’s precise methodology. APRA pointed out the information paper was not a “discussion paper”, such as it usually produces prior to issuing a regulatory guideline, but that it welcomed feedback. One suspects it’s going to get it. The Heatmap will initially concentrate on three key areas: (1) investment performance (over three- and five-year periods); (2) fees; and (3) sustainability. It will have a concise view (showing eight key metrics) and an expanded view (showing 21 metrics). It is designed to emphasise underperformance without giving a “pat on the back to better-performing funds”.

Products that are performing below the outlined benchmark will be presented on a continuous coloured gradient from pale yellow to dark red. Any product that is performing above the outlined benchmark will be coloured white. APRA has highlighted that receiving the colour white does not mean the product/fund is “blemish‐free”, or that there is no room for improvement for the MySuper product.
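The colour scale described above can be sketched as a simple mapping. This is a minimal illustration only: APRA has not published the exact scaling or colour values in the detail reproduced here, so the `worst_shortfall` floor and the RGB endpoints below are assumptions.

```python
def heatmap_colour(performance, benchmark, worst_shortfall=-0.05):
    """Map a product's performance relative to benchmark to an RGB colour.

    Above (or at) benchmark -> white, which APRA stresses is not an
    endorsement. Below benchmark -> a continuous gradient from pale
    yellow to dark red. `worst_shortfall` is an assumed floor at which
    the gradient saturates at dark red.
    """
    shortfall = performance - benchmark
    if shortfall >= 0:
        return (255, 255, 255)  # white: at or above benchmark
    # Fraction of the way from pale yellow to dark red, clamped to [0, 1].
    t = min(shortfall / worst_shortfall, 1.0)
    pale_yellow = (255, 255, 200)
    dark_red = (139, 0, 0)
    return tuple(round(p + t * (d - p)) for p, d in zip(pale_yellow, dark_red))
```

For example, a product 1 per cent above benchmark maps to white, while one 6 per cent below saturates at dark red under the assumed floor.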

Investment metrics

The range of investment risk targeted (and taken) in MySuper products is varied. We think it is positive that the assessment incorporates a risk adjustment. This aims to resolve a key flaw in simple, return-based peer rankings, and the resulting risk that funds may feel compelled to “move up the risk curve” to compete on peer-related metrics. The measure used for the risk adjustment is the growth/defensive classification, which the paper regards as a limitation of the Heatmap, given investment risk is multi-faceted.
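As a rough sketch of how a growth/defensive-based risk adjustment operates: a fund is compared against a benchmark built from its own growth weight, so a higher-risk strategy faces a higher hurdle. The weights and index returns below are hypothetical; this is not APRA's actual benchmark construction.

```python
def risk_adjusted_excess_return(fund_return, growth_weight,
                                growth_index_return, defensive_index_return):
    """Excess return over a benchmark built from the fund's own
    growth/defensive split.

    A single growth weight is a crude summary of a multi-faceted risk
    profile, which is the limitation the Frontier paper flags.
    """
    benchmark = (growth_weight * growth_index_return
                 + (1 - growth_weight) * defensive_index_return)
    return fund_return - benchmark
```

With a growth index at 8 per cent and a defensive index at 3 per cent, two funds both returning 7 per cent land on opposite sides of the adjustment: a 70 per cent growth fund beats its 6.5 per cent benchmark, while a 90 per cent growth fund trails its 7.5 per cent benchmark.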

“Growth/defensive, in particular, is not a good selection for a single portfolio risk measure as it is a simplistic perspective of risk used primarily for reporting. We encourage the inclusion of additional measures of investment risk in the Heatmap metrics and see this as a key area for future Heatmap enhancements.

“While we understand the metrics are focused on the identification of underperformers rather than the full spectrum of performance, we expected at least one metric involving absolute returns and/or returns versus CPI+ objectives, given this represents the eventual member outcome and reflects the primary MySuper objective required by APRA in SPS 530. This is also what has been communicated to members via product disclosure statements and product dashboards.

“The degree of risk taken is in part an active investment choice and should not be totally excluded from the assessment. While it is positive to incorporate risk-adjustment, we think it would be appropriate to include return-only measures in the suite of metrics. All three investment performance metrics utilise fund SAAs to set the risk level or as the benchmark. We may see an increased focus on SAA settings in the context of how they are used in these metrics. For example, funds may choose to reduce any large long-term deviations between the SAA and the actual asset allocation.”


The concise section of the Heatmap uses five-year returns and the expanded section uses both five-year and three-year returns. Another restriction is that the start date for data is the inception of MySuper in 2014.

The paper says: “Relative to the three‐year timeframe used in prior member outcome measures, the use of five years for measurement is a positive development. However, it is shorter than the timeframe considered in setting investment objectives and strategies, and, also, does not cover a full economic cycle. This limits the metrics’ efficacy as an assessment of fund strategies.

“We believe investment performance should be measured over the long‐term, defined as a 10‐year (plus) time horizon. Shorter periods make it difficult to differentiate between persistent underperformers and cyclical investment markets. We support APRA’s intention to extend the timeframe as data becomes available.

“Relative performance over any five-year period will be influenced by the interaction of investment strategy and market returns. As a result, the metric will likely identify funds with robust long-term investment strategies as “underperformers” because their strategy has not ‘paid off’ relative to other strategies within the period assessed. The risk of “false positives” will be even greater over the three-year measurement period. Timeframes that are too short may also encourage shorter-term and peer-aware investment strategies.

“Once underperforming funds are identified by the set metrics, we think there are multiple areas of further investigation APRA should undertake to ensure its assessment is robust. This should include longer-term performance, which is readily available for many funds that converted their existing defaults to MySuper products.”

Fees and costs

The Heatmap methodology is not explicit on investment fee expectations as they are incorporated into the net performance measures. A net of fees performance assessment has merit, as some investment strategies with higher fees can produce superior net of fee performance.

As the fee metrics embed an investment fee assumption of 0.80 per cent annually, this creates an effective benchmark whereby lower fees are beneficial for both the investment performance and total fee metrics. “Based on our calculations, a significant proportion of funds currently fail the administration fee test. A smaller proportion of funds fail the overall test, indicating that the high administration fee funds typically have lower investment fees,” the paper says.
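The interaction the paper describes can be illustrated with hypothetical figures. The 0.80 per cent investment fee assumption comes from the source; the administration and total fee thresholds below are invented for illustration, not APRA's actual benchmarks.

```python
ASSUMED_INVESTMENT_FEE = 0.0080  # 0.80% p.a. embedded assumption (per the paper)

def fee_tests(admin_fee, investment_fee,
              admin_benchmark=0.0030, total_benchmark=0.0110):
    """Return pass/fail for hypothetical admin-fee and total-fee tests.

    Because the total test sums both components, a fund with a high
    administration fee can still pass overall if its investment fee
    sits well below the 0.80% assumption.
    """
    total_fee = admin_fee + investment_fee
    return {
        "passes_admin_test": admin_fee <= admin_benchmark,
        "passes_total_test": total_fee <= total_benchmark,
    }
```

For instance, a fund charging 0.40 per cent administration and 0.50 per cent investment fees fails the admin test but passes the total test under these assumed thresholds, mirroring the pattern Frontier observes.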


Sustainability

APRA is using three metrics: total accounts growth rate; net cashflow ratio; and net rollover ratio.
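The three metrics can be sketched as simple ratios. The formulas below are plausible constructions from the metric names only; they are not APRA's published definitions.

```python
def sustainability_metrics(accounts_start, accounts_end,
                           contributions, benefit_payments,
                           rollovers_in, rollovers_out,
                           average_net_assets):
    """Sketch of the three sustainability metrics named by APRA.

    All formulas are illustrative assumptions inferred from the
    metric names, not APRA's actual methodology.
    """
    return {
        # Growth (or decline) in member accounts over the period.
        "total_accounts_growth_rate":
            (accounts_end - accounts_start) / accounts_start,
        # Net money in vs out, scaled by fund size.
        "net_cashflow_ratio":
            (contributions + rollovers_in
             - benefit_payments - rollovers_out) / average_net_assets,
        # Rollovers in vs out, scaled by fund size.
        "net_rollover_ratio":
            (rollovers_in - rollovers_out) / average_net_assets,
    }
```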

Frontier says: “The approach taken by APRA for these measures is clearly focused on sub-$5 billion funds. Notwithstanding the clear benefits of reduced fixed costs, positive net cash flows and other benefits which can come with scale, we also believe there are benefits that only small-to-mid-sized funds can provide to members. These include engagement with members, an ability to target the offering to the membership and, from an investment perspective, an ability to access certain niche asset classes and capacity-constrained strategies.”