Artificial intelligence (AI) is often touted as a cure-all that will help financial services firms deal with the looming data onslaught stemming from environmental, social & governance (ESG) regulation. Yet ESG also poses an existential threat to the financial services industry's use of AI.
The European Union's Sustainable Finance Disclosure Regulation has required asset management firms to begin collecting millions of data points from the companies in which they invest, and the forthcoming Corporate Sustainable Reporting Directive will only add to the volume of data points. Further, there is the data being collected under the Task Force on Climate-Related Financial Disclosures (TCFD) initiative and the International Sustainability Standards Board's plans to create a baseline for ESG reporting.
Taken all together, it becomes clear that AI-enabled systems will be essential to firms' efforts to make sense of, and profit from, all these requirements.
Potential problems for financial services firms using AI lurk beneath all three pillars of E, S and G, however. The carbon footprint from storing and processing data is enormous and growing, algorithms have already been shown to discriminate against certain groups in the population, and a lack of technology skills in both senior management ranks and the general workforce leaves firms vulnerable to mistakes.
Environmental: Carbon footprint of energy use
According to the International Energy Agency, electricity consumption from cooling data centers could be as much as 15% to 30% of a country's entire usage by 2030. Running algorithms to process data also requires energy consumption.
Training AI for firms' use has a big environmental impact, according to Tanya Goodin, a tech ethicist and fellow of the Royal Society of Arts in London. "Training artificial intelligence is a highly energy-intensive process," Goodin says. "AI are trained via deep learning, which involves processing vast amounts of data."
Recent estimates from academics suggest that the carbon footprint from training a single AI is 284 tons, equivalent to five times the lifetime emissions of the average car. Separate calculations put the energy usage of one super-computer as the same as that of 10,000 households. Yet this huge electricity use is often hidden from view. Where an organization owns its data centers, the carbon emissions will be captured and reported in its TCFD scope 1 and 2 emissions. If, however, data centers are outsourced to a cloud provider, as happens at an increasing number of financial firms, emissions drop down to scope 3 in terms of TCFD reporting, which tends to take place on a voluntary basis.
"I think it's a classic misdirection, almost like a magician's misdirection trick," Goodin explains. "AI is being sold as a solution to climate change, and if you talk to any of the tech companies, they will say there's huge potential for AI to be used to solve climate problems, but actually it's a big part of the problem."
Social: Discriminating algorithms & data labelling
Algorithms are only as good as the people designing them and the data on which they are trained, according to a report published earlier this year by the Bank for International Settlements (BIS). "AI/ML [machine learning] models (as with traditional models) can reflect biases and inaccuracies in the data they are trained on, and potentially result in unethical outcomes if not properly managed," BIS stated.
Kate Crawford, co-founder of the AI Now Institute at New York University, has gone further in warning of the ethical and social risks embedded in many AI systems in her book Atlas of AI. "[The] separation of ethical questions away from the technical reflects a wider problem in the field [of AI], where responsibility for harm is either not recognized or seen as beyond the scope," Crawford says.
It is perhaps unsurprising, therefore, that mortgage, loan, and insurance firms have already found themselves on the wrong side of regulators when the AI they used to make lending and insurance pricing decisions turned out to have absorbed and perpetuated certain biases.
In 2018, for example, researchers at the University of California-Berkeley found that AI used in lending decisions was charging minority borrowers more. On average, Latino and African American borrowers were paying 5.3 basis points more in interest on their mortgages than white borrowers. In the UK, research by the Institute and Faculty of Actuaries and the charity Fair By Design found that individuals in lower-income neighborhoods were being charged £300 more than those with identical vehicles living in more affluent areas.
The UK Financial Conduct Authority (FCA)聽has repeatedly warned firms that it is watching the way they treat their customers. In 2021, the FCA revised pricing rules for insurers after research showed that pricing algorithms were generating lower rates for new customers than those given to existing customers. Likewise, the EU’s AI legislative package looks set to label algorithms used in credit scoring as high-risk and impose strict obligations on firms’ use of them.
Financial firms also need to be mindful of how data has been labeled, Goodin agrees. "When you build an AI, one of the elements that is still quite manual is that data has to be labeled. Data labelling is being outsourced by all these big tech companies, largely to Third World countries paying [poorly]," she notes, adding that these situations are akin to "the disposable fashion industry and their sweatshops."
Governance: Management does not understand the technology
Turning to governance, the biggest issue for financial services firms is a lack of technologically skilled staff, and that includes those at the senior management level.
“There is a fundamental lack of expertise and experience in the investment industry about data,” says Dr. Rory Sullivan, co-founder and director of Chronos Sustainability and a visiting professor at the Grantham Research Institute on Climate Change at the London School of Economics.
Investment firms are blindly taking data and using it to create products without understanding any of the uncertainties or limitations that might be in the data, Sullivan says. “So, we have a problem of capacity and expertise, and it’s a very technical capacity issue around data and data interpretation,” he adds.
Goodin agrees, noting that all boards at financial firms should be employing ethicists to advise on the use of AI. "Quite a big area in the future is going to be around AI ethicists working with corporations to determine the ethical stance of the AI that they're using," she says. "So, I think bank boards need to think about how they'll access that."