By Tamara O’Brien, TMIL’s roving reporter
Who doesn’t love an intern? In the popular imagination, they’re as shameless as wolf-of-Wall-Street Jordan Belfort, as risky as a prototype Robocop, or as useless as likeable know-nothing Will, in the BBC comedy Twenty Twelve (soon to be reprised, hurrah!).
I’m sure Claire and Diana had such cultural archetypes to tap into – albeit more soberly – when they described gen AI as Your Precocious Intern, in their report of the same name last year. So, is it Jordan-Robo-Will that’s compressed into the little Copilot or chatbot key at your fingertips… or something vastly more beneficial to business and humanity, which must nevertheless be properly understood if we’re to accentuate its positives and eliminate its negatives?
In tune with the changing season, the buds of more thoughtful approaches to gen AI are appearing. And I’m delighted to say that in our own small(ish) world of corporate reporting, the ‘thoughtful approach’ movement is spearheaded by my good friend, and founder of small-but-mighty comms agency Falcon Windsor, Claire Bodanis. (Fitting, then, that this webinar was livestreamed from Anthropy, a conference to inspire a more sustainable, equitable and successful Britain, which Claire and Diana were attending.)
It hasn’t got a catchy name, this movement. I’m sure Claire would agree that her ‘Future of Reporting’ campaign is no headline-grabber; nor was it intended to be. Neither did Claire herself expect to be consulted as an expert, as the implications of AI muddying the waters of financial probity and decision-making begin to dawn.
Yet here we are. Because, as well as an unwavering commitment to truth-telling, Claire has a gift for finding and bringing the right people together to make things happen. Which is also what these webinars are about; and today, our panel shared their views and experiences of getting the best out of this unstoppable, expensive technology in their own fields.
Claire set the scene by rattling off some stats. In a recent McKinsey survey, nearly 90% of respondents said their companies used AI in some form. Copilot and company chatbots are now a standard feature of the business desktop. So AI is coming for corporate reporting, ready or not. And for most companies, that would be ‘not’ – because despite their considerable investment in AI tools, few are training their people in how to use them properly. A scary thought, which Claire backed up with some (source-checked) online research she shared with us. She also took an enlightening straw poll of the webinar audience (see panel).
So, suitably primed and ready to shake our fists at a world out of control – we turned to our guests for reassurance that at least someone’s got a foot on the brake.
Poll: Do you use gen AI at work, and if so, how much training have you had?
92% of the webinar audience use generative AI at work (as one of the 8%, I took no further part!)
73% used Copilot, 55% ChatGPT, 32% Claude, and 18% used an internal chatbot (obviously people used more than one tool)
42% had had no training in the use of these tools; 37% had received comprehensive training; and 21% had had basic training
89% would like more or better training, mainly to become more productive (65%) and to improve their professional skills (59%), but also because of concerns about ethics, governance and data security.
Claire comments: ‘These statistics bear out what we're hearing, that people want more training. Interesting that a lot of people have said they use ChatGPT and Claude. I just want to make the point that companies don't generally allow external chatbots like ChatGPT to be used at work. Confidential information should only be put into a company chatbot.’
Gen AI: brainless, despite appearances
Diana began with an overview of the nature, benefits and risks of generative AI, which can be summarised as It Is Big But It’s Not Clever. In her role as Head of ESG solutions at Insig AI, she and her team immediately saw how powerful LLMs such as ChatGPT would be in searching documents for sustainability disclosures, accurately and traceably. But producing results fit for the research platform they offer their clients requires – among other things – ongoing testing and refining of prompts, with input from database experts, language model specialists, and Insig AI themselves as ESG subject matter experts.
This matters because we all know by now that gen AI’s outputs can be misleading, to the point of outright garbage. Like a well-drilled politician, they sound good: plausible, balanced, on top of the brief. But when you’ve condensed the verbiage, does it contain insights that will be of practical use in completing a task or advancing a project? Or is it, in the FT’s phrase, just so much ‘work slop’ that must be laboriously checked, interpreted and re-done by humans?
‘I think that raises a lot of points that are relevant to training,’ Diana concluded. ‘What is good work? What tasks are we trying to advance with AI… and how do we measure AI’s success in achieving those aims?’
The board must be tech-savvy
For Peter, the emergence of gen AI is one of the biggest changes he’s seen in 40 years of working in governance and shareholder matters. And while the opportunities are certainly there, he finds many corporate users perilously unaware of its risks.
Peter’s position was that company directors must know enough about gen AI to be satisfied that it’s being appropriately used, and its risks well managed, within their organisation. On an individual level, he continued, we need to ask ourselves, how does AI make my life easier – and what are the downsides of that? It might mean you can dispense with some less obviously useful tasks, for example. But in doing so, you risk losing insight into why such tasks may be necessary… and with it, an understanding of how to do more complex things.
Developing these new skills and awareness will require training at all levels, but especially for boards and directors. And while modesty made it hard for Peter to plug his own Institute’s training, it nevertheless is excellent, and comprehensive, and available via the CGI website.
Humans in the loop
Our final speaker, Jaime, is not only an experienced company secretary, most recently with Kier Group, but also an early adopter of AI, and rather more of a fan than you might expect. When Kier trialled Copilot a couple of years ago, Jaime volunteered and received some basic training.
Finding it difficult to compose prompts that gave her the output she wanted, Jaime consulted the designated ‘super user’ for help. And to turbo-charge your AI skills, she recommends a hackathon. ‘Going to Microsoft for a day and brainstorming with colleagues on different projects was hugely beneficial. You’re learning from people with advanced skills, who make you realise just what the technology is capable of.’ Jaime made the point there should always be a human in the loop to check sources and outputs.
Two years on, Kier Group has a cross-functional AI steering committee to assess opportunity and risk before approving any investment in AI, and has greatly expanded its training. Jaime added: ‘If you’re stuck, AI itself can actually be very helpful. We had some issues when we were creating Copilot agents, and asked Copilot to help us.’ Now there’s a hall of mirrors…
Having heard from all our speakers, Claire threw out a question for everyone. What does good training in gen AI for reporting look like?
Peter: Probably the single most important thing for training to cover is prompts. We've all heard the horror stories about what gen AI can produce – hallucinations and the like. Making sure you ask the right questions is really important, because that's what people need in their day-to-day job.
Diana: It’s not only a technical issue, it’s about people, culture, process and governance. Anyone designing training needs to take a consultative approach to bringing everyone along. It has to be equitable as well – there's already a gender gap appearing in the uptake of AI tools within the workforce.
Jaime: For me the best way to learn is peer-to-peer, because ‘you don't know what you don't know’. So that’s colleagues, hackathons, super users – and Copilot can be your friend too.
Ross Hawley (ZIGUP plc – via the chat): Good training helps people understand how the model works, how to check its sources, and how to use it within a defined set of documents. In this way you mitigate the risks of gen AI hallucination, while leveraging its benefits and its ability to assimilate large information sets.
When it was time for the audience to have their say, comments and questions abounded. Ross Hawley made some practical observations about training (see above). He also shared that he’d just built a Copilot agent, with the company’s annual report style guide as its sole source of info. So now ZIGUP have a fast, efficient way of ensuring their drafts are compliant, ‘without the risks of generic GenAI drafting that we all have concerns about’. In further ZIGUP-related news – and with reference to January’s ‘AI disclosure’ webinar – Claire added that the company’s latest annual report specified that AI was not used in its creation. ‘Which was pioneering!’ she approved. Look out for Ross on a FW webinar panel soon.
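For the technically curious: the principle behind an agent like Ross’s is simply that the model may only answer from one trusted document, and must say so when that document is silent. Here’s a toy sketch of that idea – the style-guide entries, function names and keyword-matching logic are all invented for illustration, and bear no relation to ZIGUP’s actual Copilot configuration:

```python
# Toy illustration of a single-source ("grounded") assistant: answers come
# only from the style guide below, never from the model's general knowledge.
# The entries here are invented examples, not ZIGUP's real style guide.

STYLE_GUIDE = {
    "headings": "Use sentence case for all headings.",
    "numbers": "Spell out numbers one to nine; use figures from 10.",
    "abbreviations": "Expand abbreviations on first use.",
}

def retrieve(query: str) -> list[str]:
    """Return only the style-guide rules whose topic appears in the query."""
    q = query.lower()
    return [rule for topic, rule in STYLE_GUIDE.items() if topic in q]

def answer(query: str) -> str:
    """Answer strictly from retrieved rules; refuse rather than guess."""
    rules = retrieve(query)
    if not rules:
        return "Not covered by the style guide."
    return " ".join(rules)

print(answer("How should we write numbers?"))
# A question outside the source gets a refusal, not a hallucination:
print(answer("Can we use Oxford commas?"))
```

The refusal branch is the whole point: constrained to one source, the tool can’t produce the ‘generic GenAI drafting’ risks Ross mentions, because anything the style guide doesn’t cover is declined rather than improvised.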
As ever when the 45-minute deadline looms, the discussion found an extra gear. How do you measure the success of training (would KPIs help?); is there an AI bubble, and if there is and it bursts, what happens when the tool you've been using is no longer available; why are 50% of company AI projects scrapped between proof of concept and adoption; the pros and cons of free training… all of which and more can be found on the webinar recording.
I left the webinar bedazzled by the many urgent issues we’re facing, and new ones I hadn’t even thought of. In the context of current world affairs, one could be forgiven a momentary indulgence of despair. But at the darkest hour comes the dawn, which for me was the result of the Hungarian election on Sunday. A reminder that systems can work as planned, and change can be for the better. All it takes is the will of the people – and as I’ve learned from this webinar series, there’s clearly a real will that gen AI should serve us, the people, not the other way round, at least when it comes to reporting!
