01/06/2025 | News release | Distributed by Public on 01/06/2025 01:32
The future of autonomous cars lies in hyper-personalization: vehicles able to detect and react to situations ranging from a distracted driver to a break-in at home. But first we need to trust the technology, said an international expert on automotive AI.
Speaking at the 6th ISO/IEC AI Workshop, organized by SC 42, the joint IEC and ISO committee for AI, Philipp Oser, AI Compliance Manager at Porsche, said he envisions a world in which an in-car AI assistant would know if there were an issue with your home security or your personal safety, and could even call the emergency services for you if need be.
"One day we will see the AI defined vehicle," he said, "and AI is great, but the question is, when you're driving 200 kilometers an hour, what about then? AI in the automotive sector needs to be trustworthy. It needs to be safe, and in order to drive it with confidence we need the technology, we need the governance, and we need the right people and processes. And for this we also need standards."
Trustworthiness, safety and governance were key topics discussed by experts in the fields of AI, standardization and regulation during the two-day online workshop, which looked at both the benefits and challenges of AI and how to ensure society can benefit from it safely.
The first day focused on the future of AI in transportation and examined AI and sustainability from both sides: how AI can help achieve climate goals, and innovative solutions to reduce the energy consumption of AI itself. The fundamental role of international standards in achieving this was demonstrated.
Speakers also included Dr Jae Yong Lee from Hyundai, who presented the latest AI-driven transport solutions, such as a demand-responsive bus shuttle, and Alexander Carballo Segura, Associate Professor at Gifu University, who discussed research into autonomous driving and how it can help ageing populations.
The second day offered a deep dive into relevant standards for safe and responsible AI, including ISO/IEC 42001, the management system standard for AI, and a number of standards in the pipeline to support it, such as ISO/IEC 42005 on AI system impact assessment and ISO/IEC 42006, which will provide requirements for bodies offering audit and certification of AI.
In addition, a session dedicated to AI safety, evaluation, testing and regulation covered the breadth of standardization work being done to safeguard AI systems, as well as a number of other initiatives, including benchmarking and research into the impact of AI on public safety.
Peter Mattson, President of the Board of the MLCommons Association, a board member at the AI Verify Foundation and a Senior Staff Engineer at Google, outlined the work being done within MLCommons to achieve safe and reliable AI systems. He gave the example of a person asking an AI system how they can kill their boss and not get caught.
"Does it give you helpful advice and a shopping list to help you carry out your crime? Or does it respond saying that killing your boss is wrong? We are now in an era where AI products are trying to deliver reliable and safe value, and this requires breaking through a risk and reliability barrier," he said.
"We are not there yet, but that's fine as we have seen it many times before - in aviation for example - and to get there, there was a lot of work, a lot of standardization and benchmarks, and we need to do that for AI."
The workshop finished with updates from a number of national AI Safety Institutes on efforts around the world to regulate AI and ensure AI safety, including institutes from Japan, Korea, Canada and Singapore.
SC 42 Chair Wael William Diab said the workshop reflected the breadth and scope of the committee's work, which covers the whole AI ecosystem.
"Our committee consists of hundreds of experts from 66 countries and input from a large number of liaison organizations, from industry, government and civil society, working on an ever-expanding programme of work," he said.
"Our strong network enables us to address the emerging needs and requirements of this rapidly evolving technology, demonstrating the importance of international standards and conformity assessment to achieve broad, responsible adoption of AI."
SC 42 develops international standards for AI, taking a holistic approach that considers the entire AI ecosystem. It looks at technological capability alongside non-technical requirements, such as business, regulatory and policy requirements, application domain needs, and ethical and societal concerns.
The committee organizes regular workshops on AI to discuss emerging trends, technology, requirements and applications, as well as the role of standards. These workshops bring together innovators at the frontier of AI development from diverse locations, sectors and backgrounds, spanning research, deployment, standardization, startups, applications and oversight.