04/16/2026 | News release
Rob Moore is a recognized leader in the development of autonomous science and self-driving laboratories at the Department of Energy's (DOE) Oak Ridge National Laboratory (ORNL). A Tennessee native who spent five years as a U.S. Navy submarine officer, Moore joined ORNL in 2019 to perform research in the synthesis and characterization of quantum materials.
Moore's background as a mechanical engineer led him to the Navy, where he developed a deep interest in science, eventually focusing on next-generation quantum applications. Looking to develop quantum materials with tailored properties, Moore realized a natural next step was to use artificial intelligence to accelerate the discovery of those materials, knowledge he then applied as director of the lab's INTERSECT initiative. INTERSECT was designed to develop a scalable ecosystem to enable interdisciplinary "self-driving" processes for research across ORNL. Building on INTERSECT's success, Moore is turning his attention to a new initiative called Labs of the Future (LOTF). This new initiative will build on the foundational work of INTERSECT to move ORNL toward true autonomous research, positioning the lab to make key contributions to DOE's Genesis Mission.
The Genesis Mission is a national initiative to build the world's most powerful scientific platform to accelerate discovery science, strengthen national security, and drive energy innovation.
A. We've entered a new era, if you will, with a lot of these large language models (LLMs) that are coming online. It's amazing how fast they've developed to the point where they can give us reliable information. As we start training these models on scientific information, these bots can give us a lot of information that helps us in our science.
A. We see these AI agents can deliver information instantly, but there's the question of reliability. A lot of these tools can hallucinate. They can deliver bad information in a way that makes it seem good. We see potential acceleration for these tools, but the question is how can we deploy them to ensure that the information they're providing is reliable?
A. Science is a little different. We can't produce bad information for society. That's not our job. The information we put out there, we have to do our due diligence to ensure that it is accurate, reliable and reproducible. This was one of the biggest issues with some of these models before.
A. A lot of these tools can find correlations in information and data much faster than we can. There are a lot of ways that they can help us make decisions, create hypotheses, and steer experiments, and in doing so really help us accelerate science.
A. When this was first happening, I don't think people had it on the radar that these LLMs were going to be directly applied to science or have an impact on science. I think what surprised us was how quickly they were able to develop intelligence to the point where they can help us find information and work with scientists to help make decisions. Because of that, I think what it really brought to light is that we can use AI to move faster in science.
We have a lot of these grand challenge problems that we've been working on for decades that we just can't solve or at least solve fast enough. But because of the rapid progress of AI, we are starting to understand how to offload (human) cognitive tasks to help free up our bandwidth to do things more holistically and use these tools to kind of help us on a lot of our scientific endeavors, especially these huge, complex grand challenges.
A. I would say you could have an automated lab without autonomy. But for autonomous operation, we need to have a decision-maker in there. Now we're starting to get to the point where we can have AI help make decisions while the humans provide oversight and are in an advisory role. If it's just automation, you can have the instrumentation set up to do tasks over and over again without decision-making.
An example of automation is our 4-D STEM (four-dimensional scanning transmission electron microscopy) that can look at an image and use a neural network to tell where the atoms are, then look at a defect and classify it. But it stops there. It doesn't make a decision and decide what we're going to do next with this new information. Full autonomy is seeing something interesting and being willing to pull the thread to dive deeper. Having decision-making in there to drive the next set of experiments, this is how we can think about the difference between autonomy and automation.
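The distinction Moore draws can be illustrated with a minimal sketch. This is a toy example, not ORNL software: every function here is a hypothetical stand-in (a fake instrument scan, a fake classifier, a fake decision step), chosen only to show that automation repeats a fixed task while autonomy lets each result drive the choice of the next experiment.

```python
# Toy illustration of automation vs. autonomy in a lab workflow.
# All names and values are hypothetical placeholders, not real instruments or models.

def acquire_image(sample):
    # Stand-in for an instrument scan (e.g., a 4-D STEM image).
    return {"sample": sample, "defect_density": sample * 0.1}

def classify_defects(image):
    # Stand-in for the neural-network classifier mentioned above.
    return "defect" if image["defect_density"] > 0.25 else "clean"

def automated_loop(samples):
    """Automation: the same task repeated for every sample, no decision-making."""
    return [classify_defects(acquire_image(s)) for s in samples]

def choose_next_experiment(history):
    """Toy decision step: follow up only when the last result was interesting."""
    return "zoom_in" if history[-1] == "defect" else "stop"

def autonomous_loop(sample, max_steps=5):
    """Autonomy: each result drives the choice of the next experiment."""
    history = []
    for _ in range(max_steps):
        history.append(classify_defects(acquire_image(sample)))
        if choose_next_experiment(history) == "stop":
            break
        sample += 1  # "pull the thread": move to a follow-up region
    return history
```

The automated loop classifies every sample and stops there; the autonomous loop inserts a decision step between measurements, which is where an AI agent (with a human in an oversight role) would sit in a real self-driving lab.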
UT-Battelle manages ORNL for DOE's Office of Science, the single largest supporter of basic research in the physical sciences in the United States. The Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science. - Greg Cunningham