02/17/2026 | News release | Distributed by Public on 02/17/2026 16:03
In his office lined with hand-drawn diagrams and alphabet-like symbols, Stony Brook researcher Jeffrey Heinz is trying to answer a deceptively simple question: How well, exactly, can today's neural networks learn, and where do they fail?
Heinz, a professor with a joint appointment in the Department of Linguistics and the Institute of Advanced Computational Science, usually studies the sound patterns of human language. In his latest project, he and his collaborators have built something that looks less like a traditional linguistics study and more like a stress test for modern AI.
Their work, called MLRegTest, is a carefully designed benchmark for neural networks (and other machine-learning techniques), built not to ask a model to write articles or poems, but to pose thousands upon thousands of tiny yes-no questions about simple symbol patterns and watch very closely what happens.
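To give a flavor of what such a yes-no question looks like, here is a minimal sketch, not drawn from MLRegTest itself: a hypothetical pattern over the symbols "a" and "b" where the answer is "yes" exactly when a string contains an even number of "a" symbols. A model under test would see labeled strings like these and have to learn the rule.

```python
def label(s: str) -> str:
    # Hypothetical illustrative task (not an actual MLRegTest pattern):
    # answer "yes" iff the string contains an even number of 'a' symbols,
    # a simple formal pattern of the kind such benchmarks probe.
    return "yes" if s.count("a") % 2 == 0 else "no"

# A few labeled examples a model might be trained or tested on:
for s in ["bb", "ab", "aab", "aabb"]:
    print(s, label(s))
```

Each string paired with its "yes" or "no" answer is one tiny question; the benchmark's scale comes from asking enormous numbers of them across many different patterns.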
"We're trying to understand the learning capacities of neural networks from a controlled experimental point of view," Heinz said. "It's an endeavor to map their performance on kind of a big scale."
Read the full story by Ankita Nagpal on the AI Innovation Institute website.