Oklahoma State University


OSU research uses brain signals to improve robot decision-making


Wednesday, March 18, 2026

Media Contact: Desa James | Communications Coordinator | 405-744-2669 | [email protected]

In some of the world's most dangerous environments, the difference between success and disaster can be a split-second human instinct, and at Oklahoma State University, researchers are teaching robots how to recognize it.

Dr. Hemanth Manjunatha, assistant professor in the School of Mechanical and Aerospace Engineering, is developing a new way for robots to work more safely alongside people by allowing machines to respond directly to the human brain in real time.

The research introduces a neuroadaptive control framework that integrates brain-computer interfaces with formal safety constraints, enabling robots to recognize when a human operator perceives a mistake and adjust behavior before an accident occurs.

Manjunatha notes that current teleoperation, the process by which humans remotely control robots, is exhausting for operators and lacks a safety net. This research focuses on improving teleoperation in unpredictable environments.

"In high-stakes environments, like decommissioning a nuclear site, performing deep-sea inspections, we can't yet turn the keys over entirely to a robot," Manjunatha said. "The world is too unpredictable.

"When a robot encounters something it hasn't seen before, like a shifting pile of space debris or a complex surgical complication, it lacks the 'common sense' and intuition of a human. We still need the human 'in the loop' to provide high-level judgement and adaptability that AI hasn't mastered yet."

At the heart of the research is a signal produced by the human brain known as the error-related potential (ErrP). These signals occur almost instantly when a person recognizes a mistake.

"ErrPs are specific electrical patterns generated by your brain, specifically the anterior cingulate cortex, the moment you recognize a mistake," Manjunatha said. "The fascinating part is that your brain reacts to an error faster than you can physically move your hand to fix it. By detecting these ErrPs, we aren't just reading the brain activity; we are capturing the human's instinctive 'Oh no!' moment. This tells the robot, 'Whatever you just did, don't do it again or stop doing whatever you are doing."'

Using a wearable electroencephalography (EEG) cap, the research team detects these signals and feeds them into a shared-control robotic system. When the system detects an ErrP, it can slow down, stop or shift control back to the human operator within milliseconds, providing an early warning that improves safety.
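In rough Python terms, that detect-and-react loop might look like the sketch below. Everything here, the classifier, the robot interface, the window length and the 0.8 detection threshold, is an illustrative assumption, not the OSU team's actual code.

```python
# Hypothetical sketch of an ErrP-triggered shared-control loop.
# `eeg`, `classifier`, and `robot` are placeholder objects invented
# for illustration; their methods are assumptions, not a real API.

ERRP_THRESHOLD = 0.8  # assumed probability cutoff for declaring an ErrP

def shared_control_loop(eeg, classifier, robot):
    """Run the robot while watching the operator's EEG for error signals."""
    while robot.task_active():
        window = eeg.read_window(duration_s=0.5)    # one short EEG epoch
        p_error = classifier.predict_proba(window)  # P(operator saw an error)

        if p_error > ERRP_THRESHOLD:
            # Early warning: react within milliseconds, before a collision.
            robot.slow_down()
            robot.yield_control_to_human()
        else:
            robot.execute_next_autonomous_step()
```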

"Normally, a robot only knows it has failed when it hits something," Manjunatha said. "By the time a human corrects it, it might be too late. With brain signals, the robot gets an early warning."

To ensure the system works reliably for different people, the team created an adaptive decoding approach that learns general brain signal patterns and then fine-tunes itself to each person. This approach reduces the need for extensive calibration.

"Everyone's brain signals are as unique as fingerprints," Manjunatha said. "If a system only works for one person after hours of setup, it's not practical. We use "self-supervised learning" to create a foundational model that learns general brain patterns, which we can then quickly 'fine-tune' for a new user, much like how a new phone learns to recognize your specific face."

Safety is reinforced through formal constraints expressed in Signal Temporal Logic (STL), which define rules governing the robot's behavior. Manjunatha compares STL to strict mathematical "shalls" and "shall nots" for the robot to follow.

"Safety is the cornerstone of this project," Manjunatha said. "The brain signals tell us when something is wrong, but Signal Temporal Logic provides the rulebook. By combining human intent with mathematical guarantees, we create a system users can trust."

The project is implemented using NVIDIA Isaac Lab and Isaac ROS to simulate thousands of robot interactions and enable real-time communication with physical hardware. High-performance NVIDIA RTX PRO 6000 GPUs support the computational demands of processing brain signals while training complex robotic control policies.

"These platforms are our digital playground," Manjunatha said. "In robotics, every millisecond counts, and this ecosystem allows us to move fast and minimize lag."

The potential for this research extends into health care and rehabilitation. Future applications could include prosthetics or exoskeletons that adjust movement based on a user's comfort and intent.

"Imagine a prosthetic limb that senses when the user feels it's moving incorrectly and adjusts itself," Manjunatha said. "It's about making technology feel like an extension of the human body."

All datasets, models and code produced through the project will be released publicly, allowing other researchers to build on the work.

"If someone can take our brain-to-robot pipeline and apply it to helping people with mobility impairments, then the impact of this grant multiplies far beyond our lab," Manjunatha said.

Students play a central role in the work through OSU's iHuman Lab, where graduate and undergraduate researchers contribute to development and testing. The findings will also be incorporated into graduate-level coursework.

"We want our students to graduate not just knowing how to build a robot," Manjunatha said. "But knowing how to build a robot that understands the human it's working with."
