UCSD - University of California - San Diego

04/14/2026 | Press release | Distributed by Public on 04/14/2026 03:24

What Skills Do Humans Need to Become Robot-Proof in the Age of AI?


Key Takeaways

In "Robot-Proof," Ming argues:

  • The AI industry is chasing autonomy - and neglecting the human side of the equation.
  • The biggest benefits come when people challenge AI - using it to explore, push back and refine their thinking, not accept outputs at face value.
  • In addition to economic risk, AI poses a real cognitive risk: passive use and treating it as a substitute for thinking can erode critical reasoning over time.
  • Humans working actively with AI can outperform humans alone and AI alone.

When OpenAI launched ChatGPT in 2022, it was widely treated as a watershed moment. The tool reached 100 million users within two months - at the time, the fastest-growing consumer application in history. As AI tools have become part of daily life, more Americans say they're wary - more concerned than excited. Vivienne Ming, who graduated from the University of California San Diego in 2000 with a bachelor's degree in Cognitive Science from the School of Social Sciences, agrees there are plenty of reasons why humans should be cautious about AI.

Ming, a theoretical neuroscientist, inventor and entrepreneur, explores why in her recent book "Robot-Proof: When Machines Have All the Answers, Build Better People." It is meant as an alarm bell - for the public and for the people building AI. "We should be careful that what we're building doesn't automate away the very capacities that make us human," Ming said.

She also knows it can bring immense benefits. She has created AI to reunite orphaned refugees with extended family, and hacked her own son's diabetes equipment to invent the first AI for Type 1 diabetes.

She acknowledges there will be economic consequences as more work is automated. But one of her biggest concerns is that the machine learning industry is moving in the wrong direction - pouring resources into making AI smarter and more autonomous, while neglecting the human side of the equation. Ming has found evidence that the best outcomes come when AI is paired with humans who have skills education too often undervalues: curiosity, creativity, ethical judgment, perspective-taking and the capacity to ask better questions.

She recently sat down with University Communications to discuss the book and how people can work with AI in a way that best serves humanity.

How is the AI revolution different from earlier technology-driven transformations - such as the industrial revolution, the microprocessor/personal computing revolution and the rise of the internet - that rapidly reshaped how people live and work?

I actually have a whole chapter titled, "This is not the Industrial Revolution." There's this lazy analogy people reach for: "Oh, people complained about calculators too," and therefore this is all just the same cycle repeating. But it's not a clean equivalence. Calculators didn't stop you from thinking - you stopped doing the low-level computation and then you did other things with those computations. That kind of tool still leaves your cognition engaged.

What's different now is that modern agentic systems will happily do all of it. And the danger is that people start disengaging in a way we can actually measure. If you look at these technologies over time - printing, computers, the internet - we do see subtle changes in cognition.

But more recently, with GPS and algorithmic feeds, those changes are becoming measurable and, frankly, more concerning. People are changing how they think when they use these tools in ways that scare me. So one big difference is that AI is hitting us right in our cognitive core. It's not automating physical activity. It's not even just automating a low-level cognitive task that's deeply boring. It can automate the whole process - and that's historically new. And that means we have to be far more thoughtful about what we automate versus what we augment.

There are absolutely things we should automate - no one should be bent over in a field 16 hours a day. But we should be careful that what we're building doesn't automate away the very capacities that make us human, especially when we have the opportunity to use it as an augmented companion instead.

You've said you wrote this book because today's AI policy debates often miss what's best for people. What are we getting wrong?

On one side you have the AI utopianists - the "wave the AI wand and everything gets perfect" crowd. You'll never have to work again. There'll never be cancer. It's absurd. I call it the imagination disease: "I can imagine a world in which everything's perfect, therefore it will be perfect." And when you add trillions of dollars of investment pressure on top of that, it gets even worse, because humans can't deal with that kind of pressure.

Then on the other side you have the dystopian story: AI is going to destroy us all, it's going to take all the jobs, it's going to ruin everything. And I've been building this stuff for 30 years. I've used it for my son's diabetes. I've used it for refugees. Bipolar disorder. Postpartum depression. Perimenopausal depression. I built literal cyborgs early on - using AI to help improve cochlear implants so people could hear speech in noise. So I don't buy the simplistic dystopia either.

The problem is: almost nobody is talking about what I think is the most important frame. If you look at AI as an astonishing cognitive tool, the question becomes: how does it make human beings better? And then: what does that imply for education, workforce policy, infrastructure - everything?

Vivienne Ming served as the keynote speaker at Convocation in 2016. Read UC San Diego's story on her advice to students. Credit: Erik Jepsen/UC San Diego.

In the book, you describe experiments you ran that were designed to find which type of people use AI most effectively; what did you discover?

We ran this experiment where small groups of students from UC Berkeley had an hour to make 10 predictions about the future. For example: what will the price of oil be in six months? Humans are terrible at that - unsurprisingly. We're no good at making predictions about things we don't know anything about. The smallest open-source model we used was better than the best human by a lot. And the bigger, more sophisticated the model, the better it did.

Then we looked at what I call hybrid intelligence - what happens when you put people and machines together. And we got two very different patterns. One group - what we called the "automators" - would basically say: Gemini, GPT, give me the answer, and then submit it. They're not participating. I put an electroencephalogram (EEG) on a couple of them, and compared with people reasoning on their own - or even just using Google - there was dramatically less cognitive activity.

But then there was another group - about 10% of the Berkeley students - who became what we called "cyborgs." They would push back: "What about this?" The AI would say, "But the data…" and they'd say, "Okay, not that - what about this instead?" There's a back-and-forth where they actively explore why they might be wrong. They don't just accept the answer. Those cyborg teams did better than the best people and they did better than the best models. In fact, three students with no prior knowledge performed comparably to prediction markets - like the kind where tens of thousands of people have money on the line. That's genuinely exciting.

The catch is: it was a small percentage. Which means it's not enough to say, "AI makes people better." We have to ask: what makes the cyborg pattern happen and how do we pull more people into it?

You've said it doesn't matter much which AI model people use - only how they use it. What does that imply for the AI industry, which is spending tons of capital to build better models?

It was one of the most surprising results: it didn't matter whether people were using the cutting-edge model or a smaller open-source model. What mattered was how humans used AI.

The AI benchmarks - the stuff companies optimize for - stopped predicting much of anything about hybrid intelligence outcomes. It became mostly about human capital: the skills and habits the person brought to the interaction. That's a huge deal, because right now, nearly every major company is optimizing for autonomy.

Read the model cards, read the benchmarks: it's all about what the system can do by itself. AI optimized only for autonomy is a dead end for humanity. If the goal is to make people better, then we should be building systems designed around productive friction - systems that challenge you, that help you explore, that don't just hand you the answer. But those systems would score worse on autonomy benchmarks by definition, because they're not doing the work alone.

So from an industry standpoint: we're measuring the wrong thing. We're building toward the wrong end state. And we're leaving the most valuable use-case - the one that actually improves human capability - underdeveloped.

AI optimized only for autonomy is a dead end for humanity. If the goal is to make people better, then we should be building systems designed around productive friction - systems that challenge you, that help you explore, that don't just hand you the answer. Vivienne Ming, Muir, '00.
Credit: Wiley.

You've said your biggest fear isn't a sci-fi takeover. It's a future where people rely on AI too passively - never challenging it, using it as a substitute for critical thinking - leading to long-term cognitive decline. Can you explain?

Cognitive decline is a long-term phenomenon. It's not like, "Oh my god, my child asked AI for an answer and now they're doomed." This is more like a lifestyle issue. And it's not wrong that people use tools in shallow ways sometimes - we didn't evolve to be deep all the time. That would be exhausting. The concern is what happens when shallow use becomes the default and there's very little cognitive engagement. In our experiment, the automators were basically using AI as a substitute. They'd get an answer and submit it.

And you see it outside the lab too - people scrolling, people consuming outputs, never really asking: Why do I believe this? What's missing? What's the alternative? So what does cognitive decline look like? It can look like disengagement. It can look like losing the habit of wrestling with uncertainty. It can look like becoming less able - or less willing - to check your own thinking. Over time, that's a real loss.

What does it look like to use AI constructively - to become a "cyborg," an AI-powered human, instead of an "automator," someone who depends on AI so heavily that it leads to cognitive decline?

The key is that it's only when humans and machines are fundamentally working together - where the human challenges the AI and the AI challenges the human - that you get the dynamic that produces better outcomes than either alone.

We tried a simple intervention: we fine-tuned a small open-source model to not give answers. It would ask questions and push students instead. The students hated it. They were like, "Stop being Socrates - just tell me the price of oil!" But twice as many of them switched into cyborg mode and achieved superhuman performance. That's the hint: the goal isn't comfort. The goal is productive friction. Use AI to challenge you, not just to reward your first thought.

A practical example is what I call the "Nemesis prompt." I used it while writing: I didn't let the AI write chapters. I wrote the chapter, then I'd say something like: "You are my nemesis - my lifelong enemy. You've found every mistake I've ever made. Here's the draft. Tell me why I'm wrong, in detail, and how to make it better." Then you can flip it: "Now you're a bored reader. Tell me why this doesn't matter to you and how to make it connect without dumbing it down." That's a very different relationship with the tool than "give me the answer."
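For readers who work with chat-style AI tools programmatically, the "Nemesis prompt" pattern above can be sketched as a pair of critic personas attached to a draft. This is an illustrative sketch only, not Ming's implementation: the helper function and the role/content message format (a common chat-model convention) are assumptions, and the actual model call is deliberately left out since any chat-capable model would do.

```python
# Illustrative sketch of the "Nemesis prompt" pattern described above.
# Persona texts paraphrase the prompts quoted in the interview; the
# message structure follows the common chat-completion convention of
# role/content dictionaries. All names here are hypothetical.

NEMESIS_PERSONA = (
    "You are my nemesis - my lifelong enemy. You've found every mistake "
    "I've ever made. Here's the draft. Tell me why I'm wrong, in detail, "
    "and how to make it better."
)

BORED_READER_PERSONA = (
    "Now you're a bored reader. Tell me why this doesn't matter to you "
    "and how to make it connect without dumbing it down."
)

def critique_messages(draft: str, persona: str) -> list[dict]:
    """Pair a critic persona (system message) with the draft (user message)."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": draft},
    ]

# Usage: run the draft past the harsh critic first, then flip personas.
draft = "Draft chapter text goes here..."
nemesis_pass = critique_messages(draft, NEMESIS_PERSONA)
reader_pass = critique_messages(draft, BORED_READER_PERSONA)
```

The point of the design is the flip: the same draft goes through two adversarial readings, so the human stays in the loop evaluating both critiques rather than accepting a single answer.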

The sales pitch right now is: "AI will do all the boring work so you can do the fun stuff." But that's not what we see in cyborgs. The human is in the exploration too. If you're not in the boring part, you don't understand the fun part.

What advice do you have for parents and teachers who want to prepare kids to thrive in the age of AI?

One thing I say in the book is: our education system has largely been built around well-posed problems - problems where we already understand the question and we already know the answers, or the formula that gets you to the answer. Then we grade kids on how well they reproduce the "right" answers.

I don't need that anymore. I have all those answers for free in my pocket - better, cheaper, faster than a human can give them. That doesn't mean kids shouldn't learn fundamentals; they're still important. But the entire endeavor changes. What's left is our ability to explore the unknown - the ill-posed problems. To do that, kids need to be willing to be wrong sometimes. They need curiosity. They need intellectual humility - the ability to hear "you're wrong" and respond with interest instead of collapse. They need perspective-taking - understanding what other people think and what other people think about what you think.

Some of this is early-life development: rich conversation, reading, enriched environments, diverse experiences - these support working memory and the foundations of fluid intelligence. But after that, a lot of it becomes maintenance and practice. And you can do very concrete things. Reward questions, not just answers. Build a culture where asking is valued. Encourage productive failure.

Try a "failure diary" - not to glorify failure, but to link mistakes to learning and progress. Help kids see errors as information. Then reinforce it daily: use GPS to get around, but don't surrender to it. Check the route and ask, "Do I know better?" "Why this way?" Keep the convenience and keep your brain online.

