By Tamara O’Brien, TMIL’s roving reporter
It’s probably bad form to begin a blog with a list of stats. Nevertheless, can we take a moment to appreciate the scope of Your Precocious Intern – a multi-faceted report on the use of AI in corporate reporting?
Quantitative research analysed:
21,350 corporate documents published in the calendar years 2021-2024, including:
The annual reports of all FTSE 350 companies
A range of their other corporate publications.
Qualitative research elicited the views of:
60 participants in 12 focus groups held throughout 2024, comprising:
People from 40 companies of various sizes from various sectors (including 20 FTSE 100s)
5 institutional investors
1 proxy agency.
As a Falcon Windsor associate I’ve been aware of this project – driven by our founder/director Claire and Insig AI’s Diana Rose, and researched in collaboration with Imperial College London – since it began in early 2024. But even I’m blown away by its ambition. This is no agency window-dressing. This report means business.
And well it might, because the corporate world is crying out for insight and guidance on how to use AI in its communications. Even if not all of us realise it yet.
Today’s panel brought together three people working at the forefront of AI, each with their own very human take on its risks and benefits:
Sam Mudd, CEO of Bytes Technology Group (BTG), a value-added IT reseller which focuses on cloud and security software developed by leading vendors, and which sells and helps implement AI tools responsibly in both public and private sectors. (And as Claire put it: ‘Sam’s a great role model, with 35 years working in IT, including her promotion last year to CEO, a rare feat for a woman in the IT industry!’)
Diana Rose, Head of ESG Solutions at Insig AI, a tech company which develops AI tools to make sense of corporate sustainability information. As well as contributing her thoughts to Your Precocious Intern, Diana crunched those impressive numbers.
John Elkington, world authority on sustainable development; sustainability disclosure probably being the biggest headache for report producers, writers and readers.
And so to the meat of the matter. Claire began by summarising the report’s findings and recommendations, which I won’t go into here but you know where they are. Instead, I’ll focus on what our panel made of them and their own hopes, fears and musings on AI. Claire reminded us of two questions underpinning the research.
First, in what ways can generative (gen) AI support the ultimate purpose of reporting? Which is, according to her widely accepted definition: To build a relationship of trust with investors and other stakeholders, through truthful, accurate, clear reporting that people believe because it tells an honest, engaging story.
And second: how might gen AI banjax all of that?
Oh, the advocate and sceptic should be friends…
Diana of Insig AI began by explaining how, for her, Claire’s passion for protecting integrity in reporting dovetails perfectly with what she does (uses different technologies to analyse corporate reports, and solve the challenges of preparing and making sense of them). Because, although quantitative data is Insig AI’s stock-in-trade, Diana recognises that it’s only part of the wider corporate reporting ecosystem. And so she jumped at the chance to work with Claire on her proposed research – to pause amid all the hype around gen AI, and consider how it should be deployed thoughtfully, responsibly and intentionally. It’s proved a fruitful partnership, not least, says Diana, because Claire ‘was, and is, relatively and healthily tech-sceptical!’
She gave two snippets of data from her research that stood out, interpreted by me as follows:
There looks to be a promising trend in terms of governance. In the whole of 2024, only three Codes of Conduct had been updated to mention use of AI. But when Insig AI ran the query again, just in Q1 of 2025, that jumped to 17 Codes.
When it comes to the reporting process, not so promising. Despite all last year’s hoo-hah about gen AI, it only got two reporting-related mentions in those 21,350 corporate reports. Yes, all that mooted drafting and editing assisted by those precocious interns ChatGPT, CoPilot et al yielded just two mentions – about graphics, not even text! Will this alarming detachment from reality change any time soon? The data cannot tell us.
But this is where the human factor comes in. Diana went on to say that, in their separate focus group discussions – the ‘qualitative’ research – reporters and investors alike were vocal about disclosure of the use of gen AI; its role in creating trusted content; and its impact on both writer and reader.
So: a runaway technology that’s part bubble, part godsend, part harbinger of chaos. A groundswell of concerned, informed opinion. What to do?
Well, I was born in Knutsford, which I only recently learned is named after the eleventh-century King Canute, he of doomed-attempt-to-turn-back-the-tide fame (which is perhaps what ChatGPT would tell you – but in truth, he staged the demonstration deliberately, to prove he could not turn back the tide). So don’t ask me.
State-of-the-heart technology
Fortunately, humanity has more aces up its sleeve than mere scribes. Next to speak was Sam, CEO of BTG. She has a particularly interesting perspective, since her business is both a consumer of AI and, as a major UK partner of Microsoft, vendor of same to companies. And BTG is a FTSE 250 reporter.
Sam too had some interesting stats to share. According to IBM’s Global AI Adoption Index:
35% of companies reported using AI in their organisations
42% reported that they were exploring AI.
Massive opportunity for BTG, then; but it’s not as simple as you might think. The newest kid on the block is agentic AI, which enables you to set up processes whereby AI can do a lot of thinking for you. And like any sensible business, BTG is looking at how they might use this themselves, to become more efficient.
But big change has big consequences, and Sam shared a couple of them. Microsoft, one of the biggest growth companies on earth, has stated that they’ll embed AI into all their departments to ensure they keep growing. And that means they will not hire another employee over and above the ones they have now. Microsoft has also said that 30% of their coding is now done through AI. (OK, possibly less of a surprise, but I’ll leave that there.)
Sam’s point was – AI technology is moving incredibly fast, and businesses are having to make quick decisions about it if they’re not to be left behind. That’s why she believes a key part of BTG’s role is to act as an ethical guide through the maelstrom, helping customers stay true to their values in this new way of working. When a company comes to them for help in using AI, BTG takes time in initial discussions to understand their values, then helps map a bespoke ethical/governance policy around them.
Taking her own company values of integrity and passion as an example, Sam explained that for her, any use of AI must not come at their expense. When it comes to reporting and communications, this means putting great human effort – and passion – into articulating her company’s vision and growth ambitions. In music to Claire’s ears, she wouldn’t even consider using AI to express her opinion as CEO. ‘[It’s one of] my duties to have that clarity of thinking, my tone and personality, coming through in what we put out to our investor community.’ The only thing Sam would use it for is to ‘improve on my writing, a little bit. In places.’
AI needs us to help solve the problems it creates
Sam’s point about the speed and scale of change that AI ignites in companies reminded us of a core tenet of Your Precocious Intern: the need for a wider appreciation of what corporate reporting’s for, and companies’ need for principles-based guidance. Having shaped the likes of the Dow Jones Sustainability Indices and the Global Reporting Initiative, John was the perfect person to give us an external perspective.
In a sentence sweeping from Babylonian clay tablets to spreadsheets to Zoom, John reminded us that developments in comms technology have always prompted frenzies of excitement that peak, trough and level out (see the Gartner Hype Cycle).
But, no Luddites on this panel! John, in common with all our guests, declared himself very much pro-technology. Obsessed and fascinated by it, in fact. He went on to express this inner conflict in what felt to me like a very entertaining and enlightening one-man game of Fortunately/Unfortunately:
Unfortunately: John is deeply sceptical about how AI is playing out in our societies and the wider biosphere, with concerns, for example, around the energy and water footprint of big data farms.
Fortunately: Technological advances are part of the solution – for example, by shrinking compute times, thereby reducing the environmental footprint.
Unfortunately: While the current frenzy around AI will burn itself out, we’ll use it much more and become dependent on it. With chief sustainability officers saying their reporting burden is now unmanageable, the question on John’s mind is: is AI adding to this, by spewing out more and more information that we just don’t know what to do with? Or –
Fortunately: Will AI, with its ability to cut through complexity, be the solution to the reporting pile-on?
Unfortunately: Perhaps not, because for the last 40 years reporting has focused on the supply side: how corporate reports are produced, and what information they should contain. Instead, we should look at it from the demand side: how you create market intelligence from information, and how the whole system could work better with better information.
Fortunately: In its capacity to operate at levels of complexity far beyond the human brain, next-generation AI will be essential to achieving sustainability. John is also encouraged by the intelligence and commitment he’s observed in young people working in this field. ‘We have allies there,’ he says. ‘And I think, hope, that “Your Precocious Intern” will provide a bridge into that expanded set of conversations.’
There followed a tide of questions from the audience, prompting some fascinating discussion. Here are just some responses that stuck in my mind:
Could AI take over the preparation of financial statements, or an entire annual report? My overarching feedback is yes – but I would never allow it. (Sam)
If there is a move towards declaring the use of AI in annual reports, it won’t last very long... it will be a basic assumption that this is how [it’s done]. Almost all financial accounting and reporting will be done mechanically. But we’ll still call on human judgement for hard-to-measure things like impact assessment. (John*)
We are seeing excitement, scepticism, even fantasy about what's possible with AI. We have to bring it right back down to the really boring bits: data integrity, auditability, and those golden threads of transparency. There’s a very genuine risk that this can be forgotten amid the excitement. (Diana)
There will be scandals… where people suddenly realise they’ve been taken on a complete adventure by AI. (John)
With chatbots taking over Google searches, people are being misled into believing that generative AI has a concept of truth. It doesn’t. LLMs are not deterministic systems that look up and tell you what an answer is; they are probabilistic, statistical systems that give you a likely answer based on patterns of language. (Claire)
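Claire’s distinction between deterministic look-up and probabilistic generation can be made concrete with a toy sketch. The vocabulary and probabilities below are entirely invented for illustration – real models work over vast token vocabularies and learned weights – but the principle is the same: the system samples a likely continuation, it does not consult a store of facts.

```python
import random

# Toy next-word 'model': probabilities standing in for patterns learned
# from language. These numbers are invented purely for illustration.
next_word = {
    "the capital of France is": {"Paris": 0.90, "Lyon": 0.06, "beautiful": 0.04},
}

def complete(prompt: str) -> str:
    """Sample a continuation: likely, but not guaranteed, to be 'right'."""
    dist = next_word[prompt]
    words, probs = zip(*dist.items())
    # random.choices draws according to the weights - no truth involved.
    return random.choices(words, weights=probs)[0]

# Run it repeatedly: usually "Paris", occasionally something else entirely.
samples = [complete("the capital of France is") for _ in range(20)]
```

Most of the time the toy model says ‘Paris’ – not because it knows any geography, but because that word is the most statistically likely continuation. The occasional ‘Lyon’ or ‘beautiful’ is the hallucination problem in miniature.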
Finally, the panel’s top tips and parting thoughts for company reporters:
Sam: A good place to start [to understand more about AI] is the Alan Turing Institute, which has copious amounts of good, free guidance for organisations on how to develop an ethical policy around the use of AI.
John: I think it’s Jevons’ Law which states that, every time something becomes cheaper and more efficient, more of it is used. And because AI is going to disintermediate functions like auditing and accounting and make them cheaper, we'll start auditing things we never thought of auditing in the past. It's a recipe for bureaucracy!
Diana: Coming at this from a solutions point of view:
Start by testing things out in a secure environment, through trial and error within guardrails
Work out what AI’s good at, what it's bad at
Incentivise everyone! Try and bring everyone along on this, whether they're gung-ho or a total AI sceptic. It's a skill we all need to master, and it’s best to learn from experience
This tech is going to keep changing, so start the journey with a policy already in place.
And a final note from Claire: if you’re not sure where to start with using gen AI tools in reporting, Your Precocious Intern is the place to begin!
* Although Claire disagrees when it comes to stating whether a matter of opinion has been written by people or bots – she hopes this will always be a feature of disclosure, particularly if her plan to rethink reporting from first principles as a set of disclosures and a requirement for opinion gets off the ground – more on this in FW’s July blog!