techUK Ltd.

11/06/2024 | News release | Distributed by Public on 11/06/2024 07:28

DSIT Secretary of State Announces RTA AI Assurance Initiative: £6.5bn Market Growth Potential and New Public Consultation

06 Nov 2024


This morning, the Department for Science, Innovation and Technology's (DSIT) Secretary of State, the Rt Hon Peter Kyle, announced the publication of two new Responsible Technology Adoption Unit (RTA) products at the Financial Times Future of AI Summit.

This includes the launch of the 'Assuring a Responsible Future for AI' report, which assesses the current state of the UK AI assurance market, identifies opportunities for future growth, and sets out the targeted interventions government is taking to drive the future growth of this market.

As noted in the report, the UK's AI assurance market already employs more than 12,000 people and generates more than £1 billion in Gross Value Added (GVA), and could grow six-fold over the next decade to more than £6.5 billion GVA if market barriers are addressed.

The UK currently has 524 firms supplying AI assurance services. This includes 84 specialised AI assurance companies, 225 AI developers, 182 diversified firms, and 33 in-house adopters. Most suppliers are concentrated in London (47-69%), with smaller hubs in the South East, Scotland, and North West. Notably, the UK's AI assurance market is proportionally larger than those in the US, Germany, and France. Some of these firms in techUK membership include Advai and Holistic AI.

To realise this potential, DSIT intends to drive demand for AI assurance goods and services by developing an 'AI Governance Essentials toolkit', to improve the quality of AI assurance supply by working with industry to develop a 'Roadmap to trusted third-party AI assurance', and to support the international interoperability of the UK's AI assurance regime by developing a 'Terminology Tool for Responsible AI'.

However, the report notes that this assurance market faces several key challenges. On the demand side, there is limited understanding of AI risks among firms and the public, lack of awareness about AI assurance benefits, and uncertainty about regulatory requirements. Supply-side challenges include a lack of quality infrastructure to assess assurance tools, limited access to AI model information for third-party providers, and concentration of supply among AI developers rather than independent providers. The market also struggles with interoperability issues, including fragmented terminology across sectors and countries, lack of common understanding of AI assurance concepts, and different governance frameworks internationally.

To address these challenges, the UK government has outlined several key actions. To drive demand, they are creating an AI Assurance Platform as a one-stop-shop for information and developing an AI Essentials toolkit to help startups and SMEs engage with good practices. To increase supply, they are developing a "Roadmap to trusted third-party AI assurance," collaborating with the AI Safety Institute to advance research and development, and exploring capital investment and grant mechanisms. For improved interoperability, they are creating a Terminology Tool for Responsible AI to define key terms, working with US NIST and UK NPL to promote common understanding, and developing sector-specific guidance.

The report notes that these measures are part of the UK's broader AI governance framework, which includes plans to introduce binding requirements for companies developing the most powerful AI systems, while maintaining a proportionate, sector-specific regulatory approach.

The report emphasises that collective action across the AI assurance ecosystem is necessary to realise the market's potential. The UK government aims to position itself as a leader in AI assurance while ensuring AI is developed and deployed safely and responsibly. The report concludes by inviting organisations to collaborate on these initiatives by contacting [email protected].

Secondly, DSIT has launched a public consultation on the AI Management Essentials (AIME) Tool. AIME is a self-assessment tool that helps businesses follow responsible AI practices in their organisations, and is the first product on DSIT's AI assurance platform. DSIT intends AIME to be a straightforward, easy-to-access tool that sets out in simple and clear terms what is required of businesses to ensure the development and use of AI systems is safe and responsible.

The public consultation will be open for 12 weeks, after which DSIT will decide on next steps. This could include helping public sector organisations make better and more informed decisions on purchasing AI systems.

Next Steps

The Digital Ethics Working Group's final meeting of 2024 will take place on 12 November, where techUK members will discuss these releases and techUK's intention to submit a formal consultation response. Those interested in contributing to this consultation are encouraged to contact [email protected]. Please note that further conversation on Digital Ethics and AI Safety will continue at techUK's eighth annual Digital Ethics Summit on 4 December; you can register to attend here.

We would encourage you to contribute your thoughts to the public consultation on AIME and thank you for your continued engagement with RTA as we take forward our work to support and strengthen the UK's AI assurance ecosystem.

Sue Daley

Director, Technology and Innovation


Sue leads techUK's Technology and Innovation work.

This includes work programmes on cloud, data protection, data analytics, AI, digital ethics, digital identity and the Internet of Things, as well as emerging and transformative technologies and innovation policy. She has been recognised as one of the most influential people in UK tech by Computer Weekly's UKtech50 Longlist and in 2021 was inducted into the Computer Weekly Most Influential Women in UK Tech Hall of Fame. A key influencer in driving forward the data agenda in the UK, Sue is co-chair of the UK government's National Data Strategy Forum. As well as being recognised in the UK's Big Data 100 and the Global Top 100 Data Visionaries for 2020, Sue has also been shortlisted for the Milton Keynes Women Leaders Awards and was a judge for the Loebner Prize in AI. In addition to being a regular industry speaker on issues including AI ethics, data protection and cyber security, Sue was recently a judge for the UK Tech 50 and is a regular judge of the annual UK Cloud Awards.

Prior to joining techUK in January 2015, Sue was responsible for Symantec's Government Relations in the UK and Ireland. She has spoken at events including the UK-China Internet Forum in Beijing, UN IGF and European RSA on issues ranging from data usage and privacy to cloud computing and online child safety. Before joining Symantec, Sue was a senior policy advisor at the Confederation of British Industry (CBI). Sue has a BA in History and American Studies from Leeds University and a Master's degree in International Relations and Diplomacy from the University of Birmingham. Sue is a keen sportswoman and in 2016 achieved a lifelong ambition to swim the English Channel.

Email: [email protected] Phone: 020 7331 2055 Twitter: @ChannelSwimSue

Tess Buckley

Programme Manager - Digital Ethics and AI Safety, techUK


Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.

Prior to techUK, Tess worked as an AI Ethics Analyst. Her work centred on the first dataset on Corporate Digital Responsibility (CDR), and later on the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks of their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics such as CDR, AI ethics, and tech governance, and leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace.

Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University where she joint-majored in International Development and Philosophy, with a minor in communications. Tess's primary research interests include AI literacy, AI music systems, the impact of AI on disability rights, and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical.

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music.

Email: [email protected] Website: tessbuckley.me LinkedIn: https://www.linkedin.com/in/tesssbuckley/
