techUK Ltd.


Workshop Round up: Mapping the Responsible AI Profession

16 Sep 2025


On Wednesday 10 September 2025, techUK and the Ada Lovelace Institute convened more than 50 experts from across the AI assurance and ethics ecosystem, including responsible AI leads, assurance firms, civil society and professional bodies, to continue the work of mapping the responsible AI profession.

The event opened with a series of lightning talks from DSIT, techUK and the Ada Lovelace Institute, sharing insights on what is needed to support a flourishing AI assurance profession in the UK. These were followed by breakout discussions centred on what it means to be a responsible AI practitioner, what to say to those looking to enter the space, and the current lack of standardisation across responsible AI professionals.

Discussions examined the benefits (and hindrances) of professionalisation, certification and accreditation, highlighting the absence of clearly defined roles, organisational responsibilities and reporting lines, structured career pathways, standardised skills, competency frameworks and training. Insights also addressed the multifaceted nature of the role, and the market barriers created by variable quality, skills, information access and innovation. Discussion further explored the skills commonly found among practitioners, including risk management, legal and ethical fluency, and the ability to communicate with a range of stakeholders and to operate in uncertain environments.

Attendees then worked towards consensus on what is needed to clarify the role and organisational position of a responsible AI practitioner, map current career pathways, and standardise skills and training frameworks for those currently in, or wanting to enter, the AI assurance ecosystem.

Priority areas for further work

For Organisations

Organisations should establish responsible AI roles with clear mandates and sufficient authority to influence AI development proactively. They should invest equally in technical capabilities and governance skills when developing AI talent, recognising that each area brings distinct value and that no one person can shoulder the responsibility alone. Organisations must also ensure that RAI practitioners have direct reporting lines to senior leadership and access to the information they need, and that leadership understands the collective benefit of responsible practice.

For Professional Bodies

Professional bodies should develop flexible certification frameworks that recognise the many different pathways to expertise and the value of intellectual diversity. They should centre current practitioners in discussions of professionalisation, encouraging them to build on and develop existing best practice. Creating accessible professional development opportunities that maintain diversity while establishing standards supports these efforts, and these opportunities, certifications and training should demonstrate how AI assurance can serve both business priorities and accountability goals. Bodies should also recognise and validate both formal and experiential learning, especially in ethics, social impact and interdisciplinary practice, and provide clarity on which competencies are core and which are required for specialised contexts. Certification of assurance professionals should be modular, with distinct tracks, and should define clear boundaries between the ethical, auditorial and compliance functions of RAI practice, recognising the specialist areas needed to responsibly integrate a complex, developing technology. Accreditation and certification will support legitimacy and may raise the bar, but they are not a silver bullet.

For Policymakers

Policymakers should take action that recognises RAI practitioners as essential internal and external human infrastructure for effective AI governance and adoption across the economy. Further developing the AI assurance ecosystem would help address common challenges, and fostering industry collaboration is essential to this. Investment in educational pathways and talent pipelines that develop both technical and ethical competencies would ensure future practitioners, decision makers and developers possess the core skills to navigate responsible and ethical AI. The profession, much like AI itself, will continue to evolve, and policymakers must work to identify areas requiring additional support and to promote the skills development that underpins AI assurance.

For Practitioners

Practitioners should work to cultivate strong professional norms and an ethical culture that complements the increasingly formal standards and accountability structures across the profession, and above all, be ready with a solution when push comes to shove. They should also contribute to competency and skills frameworks informed by their experience to date.

If you found this insight useful, or would like to know more, please contact [email protected].

Tess Buckley

Programme Manager - Digital Ethics and AI Safety, techUK

A digital ethicist and musician, Tess holds an MA in AI and Philosophy, specialising in ableism in biotechnologies. Their professional journey includes working as an AI Ethics Analyst with a dataset on corporate digital responsibility, followed by supporting the development of a specialised model for sustainability disclosure requests. Currently at techUK as programme manager in digital ethics and AI safety, Tess focuses on demystifying and operationalising ethics through assurance mechanisms and standards. Their primary research interests encompass AI music systems, AI fluency, and technology created by and for differently abled individuals. Their overarching goal is to apply philosophical principles to make emerging technologies both explainable and ethical.

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music. Email: [email protected] Website: tessbuckley.me LinkedIn: https://www.linkedin.com/in/tesssbuckley/


Nuala Polo

UK Public Policy Lead, Ada Lovelace Institute

Lara Groves

Senior Researcher, Ada Lovelace Institute

Michael Birtwistle

Associate Director, Data & AI Law, Policy, Ada Lovelace Institute

techUK Ltd. published this content on September 16, 2025, and is solely responsible for the information contained herein.