04/02/2026 | Press release
You can design a self-driving car that follows traffic laws to a T, but if it isn't trained to account for how other people drive in the area, it may do more harm than good.
Cyber-physical systems designed to work well for and alongside their human users will define the next iteration of state-of-the-art autonomous technology. A research team led by The University of New Mexico recently wrapped up a six-year project aimed at creating a new framework for human-centered technology.
The $5.5 million multi-university project was led by Meeko Oishi, principal investigator and professor of electrical and computer engineering. The project, titled "Cognitive Autonomy for Human CPS: Turning Novices into Experts," was supported by the highly competitive National Science Foundation Cyber-Physical Systems Frontier program. It also marked the first time UNM was selected to lead such a grant.
"Frontier projects are special, because they involve larger teams of researchers collaborating over a longer duration than typical awards. They facilitate meaningful interdisciplinary collaboration and enable technical contributions at a scope and depth that simply aren't possible in smaller projects. Our award involved nine faculty across five institutions, with three industry partners," Oishi said.
The team included experts in formal methods, aerospace sciences, control theory, human factors, autonomous systems, and human-centered design, with faculty from UNM, Purdue University, University of Colorado Boulder, University of Texas at Austin, and Penn State University. Industry and government lab partners included Sandia National Laboratories, Raytheon Technologies, and the Air Force Research Laboratory.
Cyber-physical systems (CPS) are complex engineered systems in which physical and digital elements work in tandem, typically involving extensive computation, communication, and control. With recent advances in machine learning, many of these systems have been designed to work more autonomously, but even autonomous systems work with humans either through supervision or collaboration.
Oishi's team was interested in developing the algorithms and theories that allow autonomous systems to be responsive to humans while remaining amenable to guarantees. The main topics of exploration were computable cognitive models, algorithms for predictive monitoring and probabilistic verification, cognitively aware controllers, and algorithms and devices to facilitate transparent communication. Essentially, the team aimed to develop models that could characterize the human state, and algorithms that could ensure correctness, respond to the state of both the human and the autonomous system, and constructively communicate their reasoning.
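One way to picture this kind of cognitively aware control is a loop that estimates the human's state from behavioral cues and then adjusts how much authority the autonomy takes. The sketch below is purely illustrative: the function names, cues, and weights are invented for this example and are not the project's actual algorithms.

```python
# Illustrative sketch of cognitively aware control: infer a workload
# estimate from simple behavioral cues, then blend human and autonomy
# commands accordingly. All cues, weights, and thresholds are invented.

def infer_workload(input_variability: float, reaction_time_s: float) -> float:
    """Map behavioral cues to a workload estimate in [0, 1]."""
    raw = 0.6 * input_variability + 0.4 * min(reaction_time_s / 2.0, 1.0)
    return max(0.0, min(raw, 1.0))

def blend_command(human_cmd: float, auto_cmd: float, workload: float) -> float:
    """Give the autonomy more control authority as inferred workload rises."""
    alpha = workload  # 0 = fully manual, 1 = fully autonomous
    return (1.0 - alpha) * human_cmd + alpha * auto_cmd

# A moderately loaded operator: the autonomy shares authority 50/50.
cmd = blend_command(human_cmd=0.8, auto_cmd=0.2,
                    workload=infer_workload(0.5, 1.0))
```

The key idea the sketch captures is that the controller responds to an *inferred* human state rather than to the raw inputs alone, which is what distinguishes this framework from conventional shared control.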
"Our research focused on creating a new framework for cognitively aware autonomy," Oishi said. "Our goal was not simply to replicate human behavior or to mimic it, but rather to use inferred human cognitive state to be responsive to what the human actually needs."
The team also developed and led an outreach program called the Summer Intensive Research Institute, aimed at improving pipelines to careers in cyber-physical systems. Participating undergraduate students came to UNM for orientation before traveling to Purdue, where they were matched with existing cyber-physical systems research programs designed to support students who may not have prior experience in academia. Students were also exposed to a wide variety of professional development activities, lightning talks, and more.
Every year, the research team held NSF site visits to summarize findings. At the final visit, held in summer 2025 at Purdue University, researchers presented three demos of their work, each demonstrating a new state-of-the-art approach to designing human-centered cyber-physical systems.
The demos showcased live integrations of the team's algorithms onto physical platforms, addressing problems that go "well beyond the state of the art," according to Oishi.
The demos included:
Landing a quadrotor
Quadrotors are unmanned aerial vehicles (UAVs) that are typically remotely operated with a joystick. Operating the vehicles, and especially landing them gently, can be challenging for users. The research team developed algorithms that observed how users of different experience levels approached the task of landing a simulated quadrotor, along with a large language model (LLM) that provided live feedback based on their actions, perceived self-confidence, and other cues. During the demo, the LLM provided customized assistance and constructive feedback to three volunteers trying to land the simulated quadrotors.
UAV Rescue Scenario
The second demo involved two humans using a team of UAVs to perform a rescue scenario. Algorithms the team developed used psychophysiological sensing to infer the workload of each human operator, then customized the operators' next tasks to maximize work accomplished while respecting workload limits. The technology is designed to delegate tasks based on workload, stress management, and perceived affinity for certain tasks.
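Workload-aware delegation of this kind can be sketched as a simple constrained assignment: give each new task to the operator with the most spare capacity, and never push anyone past a workload limit. This is a minimal illustration, not the team's actual algorithm; the task names, loads, and the greedy strategy are all assumptions made for the example.

```python
# Illustrative sketch (not the project's actual algorithm): greedily
# assign tasks to the least-loaded operator, skipping any task that
# would push an operator past the workload limit.

def allocate(tasks, operators, limit=1.0):
    """tasks: list of (name, load); operators: dict name -> current load.
    Returns {operator: [task names]}; unassignable tasks are dropped."""
    assignment = {op: [] for op in operators}
    load = dict(operators)
    for name, cost in sorted(tasks, key=lambda t: -t[1]):  # big tasks first
        op = min(load, key=load.get)  # operator with the most spare capacity
        if load[op] + cost <= limit:
            load[op] += cost
            assignment[op].append(name)
    return assignment

# Hypothetical rescue tasks; op1 starts the round already partly loaded.
teams = allocate(
    tasks=[("search_sector_A", 0.5), ("relay_comms", 0.3), ("triage", 0.4)],
    operators={"op1": 0.2, "op2": 0.0},
)
```

In the project's setting the current loads would come from psychophysiological sensing rather than being given, and the objective would also weigh stress and task affinity, but the balance-work-against-limits structure is the same.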
Haptic Driving Simulator
The third demo modeled an advanced driving simulator that uses haptic feedback to help drivers detect hazards in their blind spots, such as deer jumping into the road. Researchers embedded vibrating elements in the car-seat fabric to quickly cue the driver. Research has shown that haptic technology could be more useful to older populations than audio or visual signaling, because hearing and vision are more likely to degrade with age. Haptic sensors in cars could thereby improve an older adult's ability to drive in complex scenarios.
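The cueing logic behind such a system can be outlined without any hardware: vibrate the seat on the side of the hazard, with intensity growing as the object gets closer to crossing the vehicle's path. The function below is a hypothetical sketch; the three-second horizon and the intensity scaling are invented, not taken from the demo.

```python
# Hypothetical haptic-cue logic (no hardware): vibrate on the hazard's
# side, scaling intensity by time-to-contact. Thresholds are invented.

def haptic_cue(side: str, distance_m: float, closing_speed_mps: float):
    """Return (seat side, vibration intensity 0-1), or None if no hazard."""
    if closing_speed_mps <= 0:           # object is not approaching
        return None
    time_to_contact = distance_m / closing_speed_mps
    if time_to_contact > 3.0:            # far enough away: stay quiet
        return None
    intensity = max(0.0, min(1.0, 1.0 - time_to_contact / 3.0))
    return (side, round(intensity, 2))

# A deer 10 m away on the left, closing at 5 m/s, triggers a left-side cue.
cue = haptic_cue("left", 10.0, 5.0)
```

Encoding urgency in intensity, rather than as an on/off buzz, is one plausible way to convey "how soon" through touch alone.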
As the project wraps up, the team is proud of all it has accomplished, Oishi said. Its members have published 84 journal articles and refereed conference papers, graduated 12 Ph.D. and four master's degree students, and supported more than 35 graduate students. Two postdoctoral scholars completed fellowships supported by the project.
The ultimate success, though, was working together with experts from many disciplines to lay the groundwork for autonomous tools that will work better for the people using them.
"With the growing ubiquity of AI, there is a need for cyber-physical systems to be able to work with everyone, not just highly trained users," she said. "What we have provided as a team is a foundation for future work across fields to design autonomous systems that can explicitly take the human into consideration."