01/30/2026 | Press release
2026 looks set to be the year when regulators and industry bodies decided to put out guidance on how people should use AI in their work. Some could argue that horse has well and truly bolted, but at least we now have something where previously we had nothing. On the 16th of January, ICAEW published its latest Professional Conduct in Relation to Tax (PCRT) release covering AI and how practitioners should use it. And just yesterday, on the 28th of January, HMRC published its guide to using generative AI if you are a software developer.
Without wanting to put too fine a point on it, there isn't really anything here we didn't know before. ICAEW's take, quite rightly, is that people using AI in their work need to review all of its outputs and make sure the risk still sits with the practitioner before anything is sent to a customer. It's useful to have that written down in black and white, but I'm not sure anyone in the industry was expecting otherwise. Since 2023 we've seen plenty of news stories about AI-generated material that turned out to be false; at least once a week, for example, there is some report of a legal case that an AI has simply invented. From speaking to our customers, it's very clear that nobody has assumed AI would replace their professional judgement, or that it would absolve them of any risk. So whilst it's nice of ICAEW to put something out there, it hasn't moved any goalposts whatsoever.
ICAEW also suggested that practitioners may wish to disclose to their customers when they have used AI to generate part, or all, of a piece of work. This doesn't make a lot of sense, and the guidance gives us no help on what does, or does not, constitute "AI". For example, if someone runs a Google search and reads the AI-generated summary, is that "using AI"? Clearly, if a practitioner were to Google something, get an incorrect result and give incorrect advice on the back of it, the risk would sit with the practitioner, as it always has done. That hasn't changed, AI or not. To my mind, disclosing the use of AI in a client deliverable would be akin to having to disclose which team members worked on it. Clearly that would be neither practical nor reasonable.
At least the HMRC guidance starts with a relatively upbeat statement that says: "We recognise the opportunities that AI offers software developers and encourage the innovative use of AI in tech software products." Positive indeed.
The guidance then covers much what you would expect from this sort of publication, and is very similar to what the ICAEW PCRT says.
Whilst it is again genuinely good to see these points written down, for the avoidance of doubt, I'm not sure many people in the industry have been operating under different principles since they started building out these applications a number of years ago.
To be fair to both organisations, they can't exactly go out on a limb and propose something too radical. However, Pandora's box is well and truly open. This sort of guidance would have been genuinely useful 2.5 years ago, for people who hadn't yet started their AI journey. At this point, pretty much every customer I've spoken to, and every firm of any size, will have looked at the technology, understood how it works, and incorporated it to a greater or lesser extent within their business.
What would be fascinating would be their guidance on the use of agentic AI. It's a fundamentally different approach to using AI within a business, and it would be very interesting to see their take on it. Agentic AI can make decisions autonomously, without a human in the loop, which sits awkwardly with the current guidance. Yet some tasks in a tax process genuinely don't need a human. For example, obtaining a fresh copy of an invoice that was missing some information doesn't require a human to do the "getting of the invoice" - that part can be handled by an agent. A human should still be checking that the invoice is indeed correct.
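To make that division of labour concrete, here is a minimal sketch of the pattern being described: the agent performs the mechanical retrieval step on its own, while anything that touches professional judgement is only flagged for a practitioner, never decided by the software. Everything in it is hypothetical - the function names, the Invoice fields and the checks are invented for illustration and don't correspond to any real product, API or regulatory requirement.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch only: names and checks are invented for illustration.

@dataclass
class Invoice:
    supplier: str
    vat_number: Optional[str]
    total: float

def fetch_replacement_invoice(supplier: str) -> Invoice:
    """The autonomous 'agent' step: re-requesting a copy of an invoice.
    No human input is needed for this part of the process."""
    # A real system would call out to a supplier portal or mailbox; here we
    # return a dummy invoice so the sketch runs end to end.
    return Invoice(supplier=supplier, vat_number="GB123456789", total=1200.00)

def issues_for_human_review(invoice: Invoice) -> List[str]:
    """The agent only surfaces what needs attention; the practitioner,
    not the software, decides whether the invoice is acceptable."""
    issues = []
    if invoice.vat_number is None:
        issues.append("missing VAT number")
    if invoice.total <= 0:
        issues.append("total is not positive")
    return issues

if __name__ == "__main__":
    invoice = fetch_replacement_invoice("Acme Supplies Ltd")  # agent acts alone
    issues = issues_for_human_review(invoice)                 # judgement stays human
    if issues:
        print("Flag for practitioner review:", ", ".join(issues))
    else:
        print("Invoice retrieved; queued for practitioner sign-off.")
```

The point of the split is that the risk position the current guidance describes is unchanged: the agent saves the practitioner the legwork, but the sign-off, and therefore the responsibility, never leaves the human.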
The guidance is useful for businesses to point to, to check that what they're doing passes muster. However, I'm not sure it has really told us anything we didn't already know. The pace at which this technology moves continues to be entirely different to any technology we've known before, and as such I'm hopeful that future regulation and guidance will rear its head much sooner, so that we can benefit from it earlier in the process.