Regulating AI in reporting? Here’s how

Post #67

May 3, 2023

Claire Bodanis

Claire has drafted a proposal for how to regulate the use of AI in reporting, which she’s sharing with the FCA and FRC. You can read it in full below.

Dear FW blog readers,

Forgive me for writing yet again on this subject. But I hope you’ll feel it’s worth it, since, rather than deliver another rant about the dangers of AI in reporting, I’ve moved into practical mode. With the help of some splendid folks, I’ve put together a draft proposal for regulating the use of large language model AI in corporate reporting, which I have shared with, and hope to discuss further with, two of our UK regulatory bodies: the Financial Conduct Authority (FCA) and the Financial Reporting Council (FRC). Please comment, share and, if you agree, support! And if you don’t support it, please comment anyway, so that we can address this critical issue in a way that will protect the truthfulness and accuracy of our reporting.

Thank you,
Claire

PROPOSAL FOR REGULATING THE USE OF LARGE LANGUAGE MODEL AI IN CORPORATE REPORTING
Truth and accountability are the bedrock of corporate reporting, and thus the bedrock of our system of capital markets. In this system, investors rely on the information – data and narrative – that companies produce in order to assess them as investment prospects. It is therefore essential that our reporting regulators act immediately to put safeguards around the use in corporate reporting of large language model AI, which invents, synthesises and presents information in highly plausible but false narratives. Why immediately? Because AI is moving so quickly that any action must be swift; and because we have a window of opportunity, since companies have not yet started to use AI in this way.

The case for regulation

  • AI systems frequently create false narratives. There are two types of information in reporting: data and narrative. Data alone is not truth; it is simply a fact that is (or is not) true. The truth or otherwise about a company, as told through reporting and results announcements, lies in the interpretation of data – the story that is told around it. We already know that large language model AI systems invent ‘truth’ and create false attributions and sources. It’s essential that we keep such systems away from the creation of these narratives, and ensure that the interpretation of data remains the preserve of human beings.

  • AI systems lack accountability; any narrative created by such a system belongs to the system itself, not to the directors of the company, who should be telling their own story. It’s impossible to determine who is responsible for the narratives that AI systems create. Is it the person who put the question to the system? Is it the company that created the system? Accountability for reporting must remain with the company itself and its individual directors. By allowing an AI system to create the company narrative, directors would be handing it responsibility for telling the truth.

  • AI itself is unregulated and changing rapidly – its use needs to be curbed. It’s irresponsible to allow any system that is in a rapid state of flux, with consequences as yet unforeseen, to be involved in the important business of creating information on which people rely to make investment decisions.

  • Everyone, including its creators, is calling for AI to be regulated. All over the world, people are clamouring for AI to be regulated – even its architects. While governments grapple with the bigger question of its general use, those who can act to curb AI’s ability to put false but plausible information into the public domain should do so. Reporting regulators have that power, and that responsibility.

  • We have a window of opportunity in which regulation will be easy to implement – if we act immediately. New reporting regulation always needs to consider the practicalities of implementation, and whether it will place additional burdens on companies. Right now, the practicalities are easily surmountable because, in general, companies don’t yet use AI to produce reporting and results announcements. We therefore have a window of opportunity to introduce such safeguards, in a way that adds no burden to companies, before work begins on the next round of December year-end reporting.

What might regulation look like?
In line with the principles of the ‘Better Regulation Framework’, regulation needs to be practical, proportionate and enforceable. Prohibiting the use of AI altogether in the creation of reporting would be both disproportionate and unenforceable; prohibiting its use in the creation of narrative, however, and requiring full disclosure of its use in any part of the information on which that narrative relies, could be both tested and enforced. It would also support the important principles of transparency, consistency and accountability. With that in mind, regulation could be drafted as follows:

  • Narrative reporting – whether in annual reporting or other statements to the market – must be written by human beings. Large language model AI systems must not be used in any form to create narrative reporting, including first drafts.

  • Any use of AI systems in the gathering or preparation of any form of source material that is used in annual reporting or other statements to the market must be disclosed in full.

  • Accountability – the existing requirement for directors to ensure their annual report is ‘fair, balanced and understandable’ should be expanded, under the fairness principle, to include a statement confirming that:

      o the narrative has been written by a human being, not an AI system

      o either:

          - AI has not been used at all in the creation of source material; or

          - source material that has been generated by AI has been listed, along with the system or systems used in each case

      o directors are confident in the veracity of all the information that has been included, from whatever source.

It’s important that this issue is the responsibility of the Executive Directors, particularly the Chief Executive, and not just the non-executives, since the practicalities of creating reports sit within the company, and any use of AI systems will be a management issue that permeates the organisation.

A direct appeal – act now and safeguard reporting
The large-scale adoption of AI is coming fast, and the technology is untested and unregulated. Through the proposals described above, you, our regulators, could help ensure that such new technologies are a benefit and not a threat. By acting quickly, and introducing regulation before AI is adopted wholesale by companies, you will make it easy for them to comply – thereby safeguarding the truthfulness and reliability of the information companies provide, and their accountability for it.