UCSD - University of California - San Diego

03/17/2026 | Press release | Distributed by Public on 03/17/2026 03:00

From Dog Soundboards to Smarter AI: What Animal Communication Reveals

Published Date

March 17, 2026

Article Content

Key Takeaways

Changing science culture to go beyond the lab: Real-home studies and citizen science can capture behavior that lab tests miss.

Understanding animal communication can help clarify AI "ground truth": AI finds patterns, but meaning requires knowing what a signal refers to.

Rossano is building a "Primate-GPT" model to track primate behavior and strengthen context-aware research.

Decoding animals' senses could help keep humans safer - by flagging early warning signs such as smoke or other wildfire-related cues.

Why does studying animal communication matter - and what might it teach us about building better AI? What can humans learn by mapping the cognitive abilities of other species, including their own pets? Federico Rossano, associate professor of cognitive science at the University of California San Diego and director of the Comparative Cognition Lab, is pursuing those questions with groundbreaking work to understand animal intelligence - most famously through his research on how dogs use soundboard "word buttons" to communicate with humans.

What began as viral videos of dogs "speaking" through soundboards - pressing buttons to play recorded words and combining them in context - has become a rigorous scientific effort. Rossano was the first researcher to pair controlled experiments with one of the largest citizen-science datasets in animal communication, gathered from dogs and their owners in real homes around the world. Rossano's research doesn't claim that dogs can talk, but his published studies suggest that button-trained dogs aren't simply copying their owners - they're learning a shared communication system, responding to particular words as cues with consistent meaning and, for some dogs, stringing words together in recurring two-button pairings that appear purposeful rather than accidental.

The work has resonated far beyond academia, drawing global media attention and recently featuring in a NOVA documentary that has been viewed by roughly four million people on PBS and YouTube. Rossano recently sat down with university communications to discuss what secrets his research unlocks about intelligence in other species, why it matters, and what its bigger-picture implications could be - from how we interpret animal minds, to how we evaluate whether AI systems genuinely understand meaning or are simply producing convincing behavior.

From viral videos to rigorous science: what's the real impact of your work so far? How has your research changed - scientifically and culturally - how we investigate animal communication?

There is a general belief that science requires highly controlled lab testing and that anything else is necessarily problematic. But that is a very 20th-century way of thinking about what science is. Historically, if we wanted to study animal minds or animal communication, the researcher would bring animals to the lab and control every aspect of their lives - which also meant that many animals were kept separate from their conspecifics, living alone or with a single other member of their species. For social animals, you can only imagine the effect such social deprivation would have on their minds and behavior. The same has been true for dogs: bring them to the lab to be tested in an empty room, in an unfamiliar environment, by a stranger, and see what they can do. In other words: put study participants in very stressful and uncomfortable situations, then act as if we are observing their everyday behavior.

My hope with our project on dogs and cats using soundboards is to show that science can also be done outside the lab - that there are ways to control the training that do not require bringing the animal to the lab. Experimenters can travel to test animals in more familiar and comfortable environments. And we always compare performance with a stranger against performance with the pet owner, because motivation to engage and attention would likely differ. In the modern world, problems come in a messy form, but we now have tools to analyze messy data, and to do it at scale. We can look at data that better reflects the day-to-day experiences of the animals, rather than some simplified scenario that is completely disconnected from anything they would ever experience.

I am also proud of the citizen science part of the study. Many scientists refrain from it, again because the claim is that we cannot trust what people do at home, or when scientists are not controlling every step of the process. In some cases that is true. But I have found that engaging with the community leads to exciting insights that would be completely lost if we refrained from engaging with the people who actually live with pets every day. And I am truly amazed by how motivated people are to help scientists and do scientific research. As scientists, I think we need to get out of "the ivory tower" and try to engage, connect and explain what we do and why we care about it. Because if you are asking interesting questions, people will listen and try to help find some reliable answers.

The documentary follows Rossano's research - the largest animal communication study in history - analyzing millions of button presses from thousands of dogs worldwide.

Your work suggests dogs respond to the recorded words from the buttons - not just subtle owner prompting - and that they can use buttons in consistent, context-appropriate ways across situations. What are the implications for AI - especially for building systems that communicate with humans and for testing whether an AI system understands meaning rather than just performing well?

At a very basic level, current LLMs are doing the same thing these dogs are doing, at least in the beginning: learning to associate patterns with specific outcomes. If dogs see that pushing "treat" gets them a treat, that is highly rewarding; and if they can push sequences and get rewarded enough times, or see those exact sequences produced many times, they learn how to structure things to get the desired outcome. LLMs are learning statistical regularities - regularities that, in humans, exist because that is the appropriate, normative way to do things. The issue with current AI is that we do not quite know to what degree it understands "meaning," because it does not have "world knowledge." Or better: when an AI uses words in the correct sequence, that does not necessarily mean it understands what each word means. My two-year-old daughter told me "I need a credit card," but that was in response to spending the day with me going to different shops and watching me buy toys. She clearly has no idea what a credit card is… but she realized that if she kept asking me for it, once she got it, she would get toys and spend the day with dad.

The challenge for current AI research on animal communication is that finding patterns is what AI excels at, but without "ground truth" - that is, without knowing what a communicative signal actually means - all we would know is that AI can identify several distinct signals and predict what follows them, without knowing what those signals are actually trying to communicate, content-wise.

With regard to other species, we can easily figure out whether whales, for example, are singing, and even identify how many songs they have in their repertoire… but figuring out what they are singing about, and why they are singing that song at that particular moment to that particular addressee, is a very different question.

What strides have scientists made in understanding animal communication by leveraging AI - advances that, in turn, could help build better AI - and are you part of this effort?

In some areas of animal communication we have made humongous progress. I am thinking of primate communication, from the early work of Jane Goodall to the dozens of scholars now studying gestures, vocalizations and facial expressions in great apes and other primates. That is why, a couple of years ago, my lab started a project aimed at using AI technology (especially computer vision and multi-modal models) to build a foundational model for studying primate behavior - not just communication, but behavior in general. By understanding what they are doing in terms of activities, who they are doing it with and when, we can learn a lot about context and improve our animal communication models. We call it "Primate-GPT," and it is part of a large international collaboration with several outstanding researchers worldwide. It will take a while, but we truly believe the payoff will be extremely valuable.


Why is it important to map different species' cognitive skills - not just "IQ rankings" - and what do you think that kind of mapping could unlock for science or society?

Understanding animal minds and their cognitive abilities goes hand in hand with understanding what kind of welfare a species requires. It is not just important to see how they live in the wild. It is also important to know whether they notice when things change in their environment - when an individual goes missing, or when the surroundings themselves change. How long do they remember? How aware are they of their own experiences and perceptions - in other words, how sentient are they? Can they communicate about their experiences to others, and do they have any control over their communications, or are they displaying pure outbursts of physiological experience? When they are upset, are they conscious that they are upset? Can they communicate that they are in pain?

The other important benefit of studying animal communication is that it might allow us to discover an entire universe of things happening around us that we are currently unaware of. Some animals can see and hear better than us. Dogs' olfaction is unbeatable. And some animals can detect early changes in the environment that would help us predict what might happen next (think, for example, of wildfires and earthquakes, and the viral video of the elephants at the San Diego Zoo). To put it simply, if we can figure out what they know and what they are communicating to each other, we would likely benefit from it as a species. After all, we built airplanes because humans could not stop being mesmerized by birds in flight.

Looking ahead, what are some of your upcoming research projects and what important question are you looking to answer?

With the button project I am trying to understand to what degree dogs and cats can reliably communicate with humans about their pain, their emotional states and their need for help. I believe this would dramatically change our relationship with them if confirmed. I am also very interested in assessing to what degree they can combine buttons to communicate about concepts they do not have a word/button for. We call it "productivity" and it is a skill that as of now has only been shown in humans. If the dogs can grow their communicative signals to match a changing environment, then they are experiencing the world in ways that are more similar to how humans represent things than we ever considered. And if they can communicate about a changing world, then it means that they are aware of such changes and new stimuli in the environment and that would address the open questions of their sentience and consciousness.

Concerning AI and animal behavior, I am working on two fronts: 1) developing a foundational model that would allow us to reliably track and decode primate behavior, hoping this could have important implications both for conservation and for animal welfare in zoos and sanctuaries; and 2) developing non-invasive monitoring systems for animals in the wild - systems that do not require darting the animals, and that recognize that, in socially living animals, identities matter. If we understand who they are to each other, that is an important predictor of who they will follow and what they will do next. For example, if I see my wife and children going one way and three other people going the other way, who do you think I am going to follow? And if any one of them disappeared, which would affect me the most - my kin, a close friend, or some random individual who happened to be around at some point? The point is that, just as for humans, relationships matter for social animals. Tracking relationships and behavior over time is a new frontier for AI in conservation - a challenge we need to meet if we truly want to care for nature and what surrounds us in a meaningful way. Not humans vs. other animals, but humans with non-human animals, sharing the same environment.

To support Rossano's research and the Comparative Cognition Lab, go to https://giveto.ucsd.edu/giving/home/?linkId=8c85577f-6ee3-49d5-abe0-8f746656d63a

Learn more about research and education at UC San Diego in: Artificial Intelligence
