Post #89
June 4, 2025
Claire Bodanis
The Falcon Windsor/Insig AI team are very grateful to London Standard columnist Chris Blackhurst, who interviewed Claire and wrote the following piece on Your Precocious Intern. For print geeks, it was particularly exciting since it ran as a double-page spread in the 29 May edition! The online version is available here, but you can read the full text below. The issues Chris raises will certainly be on the agenda at our webinar on 12 June, with sustainability guru John Elkington, and Sam Mudd, CEO of FTSE 250 company Bytes Technology Group. Please sign up for that here. We do hope you will join us!
Not OK, computer: firms using AI to cut corners are playing with fire
A CEO sent shockwaves through the business world by admitting he asked a bot to help draft his annual results statement – where will it end, asks Chris Blackhurst
The corporate world is agog. Ever since Eben Upton, the chief executive of Raspberry Pi, said he ran his annual results statement through AI before its publication, the talk has been of machines taking over the boardroom.
The reaction to Upton’s admission was astonishment. Raspberry Pi is stock market listed, and this was its first full set of figures since flotation. They were eagerly awaited and, as with any quoted company’s results, a closely guarded secret.
Upton asked Claude, the AI bot designed by Amazon-funded Anthropic, to conduct a “tone analysis” of the document, to say how it felt the microcomputer business was doing, on a scale of one to 100.
Getting a so-so score, he set the computer to work. As the bot dialled up the language, the score improved. Too much, though: the inflated language made the statement seem breathlessly over the top. He made some edits of his own, taking out descriptions like “exceptional”, and reached an acceptable level.
Eyebrows shot up on two counts. First, AI is a third party: it is mechanical and susceptible to intrusion. It is to be hoped Upton used a secure internal system, though it was not clear whether he did. Second, there is the issue of the statement being entirely his – it is supposed to be his thoughts on the company’s performance. Yet here he was, asking AI to look at what he planned to say.
To be fair to Upton, he said in public what others may well be doing in private. Still, it was the most glaring instance yet of AI doing a boss’s bidding. Others include a senior executive at a multinational freely saying he uses AI to draft his emails. An avatar of a CEO recently “spoke” in a short video accompanying a stock exchange results announcement. Another corporate head told a tech conference how he uses AI to help prepare his speeches.
While the software advances, the authorities stall. No regulation or guidance on AI’s expansion and use is forthcoming. It is up to companies to make their own policies, not only to reap the benefits of AI but also to prevent a scandal and shareholder disaster. That is a worrying state of affairs. Specialist financial reporting and advisory consultancy Falcon Windsor teamed up with Insig AI [our bold!], which delivers data infrastructure and AI-powered environmental, social and governance research tools, to look at the FTSE 350 companies. Their study, based on engagement with 40 firms and analysis of all FTSE 350 reports published from 2020 to 2024, revealed that generative AI use is multiplying across UK companies, often without any training, policy or oversight.
They titled their report Your Precocious Intern, using the term to describe AI as useful but also a liability, the equivalent of someone who requires careful handling. While investors see the adoption of AI as inevitable and look forward to the advantages and efficiencies it could bring, they are increasingly alarmed about its implications for the truthfulness and authorship of corporate reporting. Everyone agrees that company reports and statements must remain the direct expression of management’s thinking. Without rules and a common code, AI risks undermining the accuracy, authenticity and accountability that underpin trust in the stock markets.
AI is moving so fast that there is only “a short window of opportunity” to upskill and mitigate the risks it represents to the financial system. Their conclusion? “Treat generative AI like a precocious intern: useful, quick, capable, but inexperienced, prone to overconfidence and should never be left unsupervised.” Claire Bodanis, a leading authority on UK corporate reporting and founder and director at Falcon Windsor, told The London Standard: “If people use it unthinkingly, without proper training or guidelines, it could fatally undermine the accuracy and truthfulness of reporting.”
Comments like these from two FTSE company secretaries should also be a warning. “I think there are some real benefits in using generative AI as a summarising tool, and I’m quite keen to utilise it a bit more for efficiency if we can get comfortable with the accuracy of it,” said one. Another said: “Would I be able, hand on heart, to say that none of my contributors had used gen AI to provide the bit they’ve sent in? I have no idea.” Institutional investors are understandably afraid. As one told the researchers: “I would be very wary about AI being used in forward-looking statements, or anything that is based around an opinion or a judgment.” Another said: “I see generative AI as a flawed subordinate who’s learning the ropes.” A third said: “I feel very strongly that there should be a notification in the annual report if there’s anything that has not been written by a human — there’s no accountability through generative AI.”
According to Bodanis, the Raspberry Pi episode ought to act as a wake-up call. She asked: “If a director gets AI to decide what is his or her opinion of their results based on what people are likely to think, then how is that honestly and truthfully their opinion?” History tells us, she said, what can happen. “You think back to those stock market bubbles. Companies have to account to investors what they’ve done with their money and what they are going to do with it.” There must, said Bodanis, be “a building of trust between a company and its shareholders”.
One issue is the amount of material companies are obliged to produce. Annual reports that once ran to 80 pages, which felt huge at the time, can now reach 300. That is because of the amount of non-financial reporting they must provide, on issues such as climate change. “They are expected to use detail and opinion to create the truth of the state of the company,” said Bodanis. “But if they are using AI, it is very difficult to decide what is true and what is not.” [Clarification point: ‘if they are using AI to write their opinion’!]
Just when corporate reporting is becoming “ever more onerous and important”, supplying all manner of information by law, along comes AI to make it easier. “We should be using AI to do things humans can’t do like crunch the numbers, not using it to do the things humans can do, like express opinions,” says Bodanis. [Another clarification – I do of course mean that what AI can do better is crunch vast quantities of numbers – and other info for that matter – at speed!]
A company report, she said, “should be like looking the chairman in the eye and hearing it from them direct”.
The slippery slope, too, is that distinctiveness is lost. All company communications end up resembling each other – with the same wording and descriptions – when they are meant to be unique, coming straight from the top.
The Financial Reporting Council, which regulates financial reporting and accounting, is dragging its heels: it is considering what to do about generative AI but has so far done nothing to police its rise. The FRC last got in touch with company boards about where it thought AI was heading in relation to results and reports some 18 months ago. That feels like a lifetime, such is AI’s acceleration.
As for companies uploading their sensitive figures to AI, Bodanis’s point is succinct: “AI has not signed an NDA.”