Black box to gold rush: Alphinity teams with CSIRO to explore AI risks
Alphinity has partnered with Data61, the CSIRO’s digital research unit, to explore responsible AI: the use of AI to deliver benefits to society while minimising negative consequences. Through interviews with domestic and global businesses, the year-long research program will identify current best practice and create a framework to assess, manage and report on responsible AI risks.
“(Two years ago) there was very little written about the topic and companies did not have much to say about AI and responsible investment,” said Alphinity portfolio manager Mary Manning. “All that changed with the launch of ChatGPT. AI and RI are now front of mind for corporates, investors and regulators.”
“There is an AI gold rush going on right now in financial markets. Corporates are under immense pressure to commercialise AI products, which can mean that ethical considerations take a back seat.”
Alphinity has been working on responsible AI for about two years, starting with trying to understand how it was reshaping the businesses of the big tech names in its global equity portfolios, such as Google, Amazon, Microsoft and Apple.
“Those businesses started pursuing this opportunity, and it’s an exciting opportunity, but we’ve started talking about the different implications that could occur if that’s not rolled out in a responsible way – human capital impacts, issues around equity and bias, and data and cyber-security,” said Alphinity head of ESG and sustainability Jessica Cairns.
“That’s where it started, but since then we’ve seen the conversation move beyond tech into all sectors – you talk to mining companies, consumer companies, they’re all really actively thinking about this.”
Alphinity has so far identified six ESG considerations within AI: trust and security; data privacy; bias, equity and inclusion; human capital; sentience; and environment. One example of the problems that can arise comes from healthcare companies looking at using AI for early diagnostics, where there are “really great opportunities” to free doctors for higher-quality work, but also risks around trust, accountability and data security for the wider healthcare system.
“For healthcare, the system is built around the concept that your physician is primarily responsible for your care,” Cairns said. “They are the final decision maker in terms of giving you a positive or negative diagnosis on whatever the issue might be. What happens when you bring a machine into that equation? … There are implications there in terms of who is ultimately responsible.
“AI at the moment is very much considered a black box. Even some of the global leaders in AI can’t always describe how some of these algorithms work and how decisions are made within these systems. If you can’t identify an error properly or track a decision back to where it was made, there are implications for how those systems are managed… It’s a legal or liability risk in terms of the specific business.”