Chartered Accountants & Business Growth Specialists

01432 370 572

AI in Accounting: Can we trust what we don't understand?


As ICAEW (Institute of Chartered Accountants in England and Wales) launches a new ethics and tech resources hub, its experts talk about the ramifications of using artificial intelligence in business and finance – exploring evolving responsibility, accountability, ethics and more.


Public mistrust around artificial intelligence (AI) has tended to prey on emotion: visions of robots “stealing” jobs from their human counterparts. For those working in the corporate field, fears are more specific. While generating huge volumes of insight that might benefit businesses, the algorithms being developed are so sophisticated they verge on the opaque. 

As Chartered Accountants, we are compelled by professional scepticism to ask: if we cannot understand these systems, should we trust their outputs? 

To address this issue and dispel the fears emerging around AI, ICAEW has created a new ethics and tech hub. The site brings together expertise from the ethics team and the IT Faculty, as well as the Financial Services and Audit & Assurance Faculties, where data analytics driven by AI is more advanced than in other sectors. One example is the “black boxes” used to offer cheaper insurance to people in otherwise high-risk categories in return for having their driving performance tracked. 

There have, however, been examples of algorithms making decisions with unintended consequences (for example, accusations of racial bias) because the algorithm acted on patterns contained in historic data sets.  

The techies are taking the lead

IT Faculty technical manager Kirstin Gillon says the hub’s mix of practical guidance and knowhow will, it is hoped, help members become more comfortable with AI over time. Gillon says that accountants’ questions are most often framed around AI’s impact on society and the wish to do public good.

In effect, practitioners want to know how the adoption of AI will affect their ability to stay true to the five core principles of their ethical code: integrity, objectivity, professional competence and due care, confidentiality and professional behaviour. 

Tech giants are already beginning to create this environment of open collaboration. The Partnership on AI consortium includes Amazon, Apple, Facebook, Google, IBM and Microsoft. OpenAI, a non-profit focused on sharing AI research, was co-founded by Tesla’s Elon Musk.

“There are plenty of forums for discussion, particularly in the UK where there is a lot of research going on,” Gillon agrees. “But because the tech firms are the ones leading on innovation, the debates are heavily driven by their sector. Maybe accountants should have a stronger voice, given our ethical focus and experience.”

Who’s responsible for AI systems?

Another reason for accountants to be involved in framing the AI ethical debate is their level of accountability, particularly where so-called black box systems are being adopted. Gillon asks: “How do you make sure you’re making decisions that are morally correct and error-free, as well as put right mistakes?” The roundtable suggested the profession would need to be involved in creating assurance frameworks that determine “whether firms/systems are operating in accordance with ethical principles”. 

Machines cannot fear being made redundant, but they can be programmed to have particular reactions and learn from them. Falcon says: “Part of professional ethics is that there are consequences if you breach them and you can be disciplined. One strand of thought is that AI systems could be considered similar to staff, with the programmer taking on the role of the line manager. If you’re responsible for programming a piece of software, surely you are responsible for reviewing what it produces. Otherwise, with whom does the buck stop?” 

So, do accountants need to understand AI?

A discussion panel at the WCOA (World Congress of Accountants), held in Sydney in 2018, indicated that adding AI-specific elements to ethical accounting codes is still at an early, theoretical stage. But Falcon agreed the profession would not be able to “absolve itself of the responsibility” for a machine’s actions. It could become a real challenge for accountants who want to learn the ins and outs of AI algorithms before they will trust them. 

Gillon agrees: “It comes down to a trade-off between accuracy and understandability. There will be times when accountants don’t need to understand the AI. And there’ll be other times when you really do need to understand how the program has come to this recommendation that you’re going to rely upon.” 

Machines “that understand the context and environment in which they operate, and build underlying explanatory models that allow them to characterise real-world phenomena” are expected to be realised in what David Gunning, of DARPA’s Explainable AI (XAI) programme, calls third-wave AI systems.

Adapting the XAI concept for accounting, Falcon says: “You wouldn’t be checking the technology: you’d effectively check its thought process and whether this was in line with the principles you required.” 

Is using AI in financial services ethical?

Those looking to regulate or at least advise accountancy on ethics in future will surely examine how things have played out so far in financial services. 

“Technology can help design and distribute better products, and widen access to financial services on sustainable terms by giving a better view of risk,” says Philippa Kelly, head of ICAEW’s Financial Services Faculty. Tech firms have long been disrupting traditional banking and insurance providers by meeting the demand for cheaper, targeted products through apps and other platforms that rely on customers providing personal data. But the AI that helps to deliver these improvements has been found wanting enough to warrant greater oversight.  

There are numerous ethical challenges to face in the financial services sector, for example around offers of credit. Card companies receive a swipe fee in addition to the interest charged on purchases, which means it is in the company’s interest for customers to rack up transactions. “Reward credit cards” operate in a similar fashion, giving bonus rewards or discounts if the spend adds up to a certain monthly level. But the ethics of encouraging higher spending are questionable when one in six borrowers is in financial distress. 

However, financial service providers that in the past have been blamed for irresponsible and damaging practices may find that this new use of AI helps to improve trust between the industry and the public. Who would you trust more with your money: a computer you don’t understand but that is constrained to make decisions only within certain parameters of risk, or the “greedy banker who’s just out to make money at any cost”?

A no-win scenario?

Financial service providers, and by extension our clients, stand to benefit as AI becomes more widely adopted. Investment managers had the same sceptical and cautious mindset as accountants before applying AI models to their portfolios. Kelly says: “One leading investment manager found that an AI liquidity risk model was significantly outperforming traditional methods. However, the type of AI used (neural networks) meant that the reason for the outperformance couldn’t be explained.” 

This gap in understanding is the kind of AI problem that ICAEW’s AuditFutures programme is concerned with. Kelly adds: “By not taking the action that makes the best return on their investments, they’re not acting in the best interests of their clients. But their duty of care also means that if they were to use technology that couldn’t be explained, even if it got a better result, they also wouldn’t be doing the right thing.” A no-win scenario, really.

Findings from the ICAEW indicated that there is a low tolerance of failure in AI, and that these systems are expected to make “better than human” judgements. Most automation has so far occurred at the entry level of analysis. AI is not yet able to take over from humans in areas where wisdom, experience, professional judgement, selectivity, instinct and general knowledge must be applied. It looks like we’re not out of our jobs just yet. 

The Ethics Standards Committee will continue to feed into the International Ethics Standards Board for Accountants, along with ICAEW staff who met with the Board in January, in anticipation of any long-term project to address ethical updates to the code. We will be sure to update you when we hear any more information. 

Accountants won’t be the only professionals grappling with the philosophical debates around AI as its use continues to expand; similar technologies are being implemented in industries from manufacturing to healthcare. Starting the hub now, while discussions about updating ethical codes are still young, means the profession will have the means it needs to prepare.

 

Worried about how AI could impact your business? Help future-proof your business by contacting us to find out more about our risk analysis services.