Driving is the controlled operation and movement of a vehicle, an act that requires making continuous decisions, many of them instantaneously.
As we enter the age of autonomous vehicles, the question is not whether an AI brain can make many critical decisions instantaneously, but rather, who will define AI's sense of right and wrong?
Who is responsible for AI's soul?
Programming autonomous vehicles for ethical decision-making is a modern real-world challenge. "Dilemma situations," in which some harm is unavoidable, cannot be ruled out, and automotive programmers must prepare for them. However, no universal moral code for machine ethics and self-driving cars exists.
"We are now dealing with problems that become not only internal problems for engineering, but are also now becoming problems that society recognizes as critical questions they would like answers to," said Wolf Schäfer, professor emeritus in Stony Brook's Department of Technology and Society.
Although ethical theories such as libertarianism, utilitarianism and Kantianism are available, the algorithmic implementation of any one of them would seem arbitrary. The differing moral preferences found in Western, Eastern and other cultural clusters would further impede the design of a morally sound and globally valid vehicle control system.
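To see concretely why any single implementation would seem arbitrary, consider a minimal sketch, purely illustrative and drawn from no real vehicle's software, in which two textbook policies, one utilitarian and one loosely deontological, face the same unavoidable dilemma. All names, numbers and outcomes below are hypothetical.

    # Hypothetical illustration: two ethical policies, one dilemma, two answers.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        action: str
        expected_harm: float   # e.g., expected number of people injured
        harms_bystander: bool  # does the action actively redirect harm?

    def utilitarian_choice(outcomes):
        # Minimize total expected harm, no matter who bears it.
        return min(outcomes, key=lambda o: o.expected_harm)

    def deontological_choice(outcomes):
        # Never actively redirect harm onto a bystander, even at higher total cost.
        permitted = [o for o in outcomes if not o.harms_bystander] or outcomes
        return min(permitted, key=lambda o: o.expected_harm)

    dilemma = [
        Outcome("stay_in_lane", expected_harm=2.0, harms_bystander=False),
        Outcome("swerve", expected_harm=1.0, harms_bystander=True),
    ]
    print(utilitarian_choice(dilemma).action)    # swerve
    print(deontological_choice(dilemma).action)  # stay_in_lane

The same car, in the same situation, acts differently depending on which theory its programmers encoded; neither choice is a bug, and nothing in engineering alone says which is right.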
Schäfer, who has been leading an Automotive Ethics VIP (Vertically Integrated Projects) team since 2020, said that addressing these problems is now critical and requires changes in engineering education, especially in how AI design is taught. Researchers predict that the shift to autonomous vehicles will take more than a decade.
"We should use this time to plan the rapidly expanding AI sphere," he said, noting that in the U.S. there are about 40,000 fatal motor vehicle crashes per year. "That's more than homicide, plane crashes and natural disasters combined. Some estimates say that almost 10,000 crash victims come to the emergency rooms every day. That's what cars driven by humans do. The social burden of injuries and fatalities in car crashes is just much too high."
Schäfer said his VIP project provides a unique opportunity to integrate subjects like moral philosophy into typical engineering classes, a need that is only growing. He began building the lab in 2022, running model cars equipped with cameras and sensors on a racetrack.
"We installed little signs with bar codes that instructed the cars what they were encountering, so when the cars approached a person or multiple people, they had to read the bar code," he said. "The students wrote the code in a way that the car was instructed to do certain things under certain conditions."
Those conditions included moral dilemma situations.
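The lab's code itself is not published; the following is only a sketch of the kind of sign-triggered, condition-to-action logic Schäfer describes, with bar-code labels and actions that are entirely hypothetical.

    # Hypothetical sketch of sign-triggered control logic on the model cars.
    SIGN_ACTIONS = {
        "PED_ONE": "stop",        # sign encodes a single pedestrian ahead
        "PED_GROUP": "stop",      # sign encodes multiple pedestrians
        "CLEAR": "proceed",       # sign encodes an empty stretch of track
        "DILEMMA": "deliberate",  # sign encodes a moral dilemma scenario
    }

    def handle_sign(barcode: str) -> str:
        """Map a decoded bar code to a driving action, defaulting to a safe stop."""
        return SIGN_ACTIONS.get(barcode, "stop")

The interesting case is the last one: "proceed" and "stop" are ordinary engineering, but what "deliberate" should do in a dilemma is precisely the open question the lab studies.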
"We'll need to distinguish between moral, immoral and rightful machines," said Schäfer. "'Immoral' machines would be machines that do things that we consider immoral. 'Moral' machines are machines that follow certain ethical rules or conventions. And 'rightful' machines would be created by the societal certification of new technology to be allowed on public roads."
The challenge is that people will have to engage in discussions about ethical questions.
"Engineering education is not there yet," he said. "It's not like we have Newtonian or other physical laws, where we know that the value of gravity is not up for debate. It's a universally recognized number. It doesn't matter in which country or under which president. But with ethical rules, there's variation."
Schäfer said that engineering programs are very good at teaching technical skills, but lack the critical evaluative skills that the humanities and social sciences offer.
"We're running into problems that cannot be solved with just technical skills," he said. "And this is the new model for our department. AI is too important to be left to computer science alone."
Engineering students, said Schäfer, will need to collaborate with humanities and social science scholars.
"The VIP program and our particular project point to a solution of engineering plus applied humanities and social sciences," he said. "That's not only a great opportunity for engineering, but also a great opportunity for the humanities and social sciences."
"I originally joined the Automotive Ethics Lab because I've been fascinated by autonomous vehicles for years, specifically with how companies like Tesla approach self-driving," said Ammar Ali '26. "That interest became personal when I bought my own Tesla and started regularly using its self-driving features, which compelled me to directly contribute to the cause rather than simply observing it."
Though the technical side of the research lab appealed to him as a computer science major, Ali said its focus on the ethical and humanistic constraints of technology drew him in, aligning with his passion for integrating AI into everyday life in a responsible manner.
"I'm currently leading the development of a machine learning model that will serve as the ethical attention head within our simulation," he said. "Besides being able to use the most advanced computing resources of Stony Brook, this research is truly multidisciplinary, and it's something that I view as far beyond the typical undergraduate level."
Ali added that the professional environment established within the team mimics real-world industry practices, giving students a head start and setting expectations as they move toward their professional careers. "In addition to expanding my technical skills, particularly in AI/ML, I also hope to build on my leadership experience. Ultimately, I aim to influence how intelligent systems are built, and the Automotive Ethics Lab is preparing me to do so."
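Ali's "ethical attention head" is described only by name here, so the following is an assumption-laden illustration of what such a module could look like, not the lab's implementation; every class, dimension and feature choice is hypothetical.

    # Hypothetical sketch: an attention module that weights candidate maneuvers
    # by ethically relevant scene features. Not the lab's actual model.
    import torch
    import torch.nn as nn

    class EthicalAttentionHead(nn.Module):
        def __init__(self, feature_dim: int = 16):
            super().__init__()
            self.attn = nn.MultiheadAttention(
                embed_dim=feature_dim, num_heads=1, batch_first=True
            )
            self.score = nn.Linear(feature_dim, 1)  # one ethical-risk score per maneuver

        def forward(self, maneuvers: torch.Tensor, scene: torch.Tensor) -> torch.Tensor:
            # maneuvers: (batch, n_actions, feature_dim) candidate trajectories
            # scene:     (batch, n_entities, feature_dim) pedestrians, vehicles, etc.
            attended, _ = self.attn(query=maneuvers, key=scene, value=scene)
            return self.score(attended).squeeze(-1)  # (batch, n_actions)

In a simulation, scores like these could be combined with ordinary driving objectives, which is one plausible reading of an ethics component sitting "within" a larger simulated driving stack.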
Schäfer describes himself as a historian of science and technology who would be classified as a social scientist. That background has given him a keen sense of change and an understanding of both the technical and the societal.
"To contemplate AI does not necessarily require the singular disciplinary skills that were the requirement for so long," said Schäfer, who noted that this year, the Department of Technology and Society will become the Department of Technology, AI and Society. "Cars today are computers on wheels. But these computers on wheels have great computing capabilities, and they can do a lot more in a short time. Humans couldn't react that quickly. But the AI in a car can recognize that it has options in a dilemma situation. And if we have options, those are all either political or moral choices."
It's those choices that present AI with what Schäfer describes as "no clear-cut way to go," situations that require ethical deliberation to determine the best course of action.
"2000 years of philosophy have not produced a universal moral theory," he said. "They've produced a lot of very good things, but we do not have the equivalent of Newtonian laws. We have theories. Depending on how it's programmed, not every AI-driven car would make the same decision. We are the ones who will try to get as close as we can in terms of translating philosophical theories into algorithms. Engineering has become too important to be left to the engineers."
- Robert Emproto