December 11, 2025
Reflections on a decade of breakthroughs, learnings, and the path toward AGI that benefits all of humanity.
OpenAI has achieved more than I dared to dream possible; we set out to do something crazy, unlikely, and unprecedented. From a deeply uncertain start and against all reasonable odds, with continued hard work it now looks like we have a shot to succeed at our mission.
We announced our effort to the world ten years ago today, though we didn't officially get started for another few weeks, in early January of 2016.
Ten years is a very long time in some sense, but in terms of how long it usually takes the arc of society to bend, it is not very long at all. Although daily life doesn't feel all that different from how it did a decade ago, the possibility space in front of us all today feels very different from how it felt when we were 15 nerds sitting around trying to figure out how to make progress.
When I look back at the photos from the early days, I am first struck by how young everyone looks. But then I'm struck by how unreasonably optimistic everyone looks, and how happy. It was a crazy fun time: although we were extremely misunderstood, we had a deeply held conviction, a sense that it mattered so much it was worth working very hard even with a small chance of success, very talented people, and a sharp focus.
Little by little, we built an understanding of what was going on as we had a few wins (and many losses). In those days it was difficult to figure out what specifically to work on, but we built an incredible culture for enabling discovery. Deep learning was clearly a great technology, but developing it without gaining experience operating it in the real world didn't seem quite right. I'll skip the stories of all the things we did (I hope someone will write a history of them someday) but we had a great spirit of always just figuring out the next obstacle in front of us: where the research could take us next, or how to get money for bigger computers, or whatever else. We pioneered technical work for making AI safe and robust in a practical way, and that DNA carries on to this day.
In 2017, we had several foundational results: our Dota 1v1 result, where we pushed reinforcement learning to new levels of scale; the unsupervised sentiment neuron, where we saw a language model undeniably learn semantics rather than just syntax; and our reinforcement learning from human preferences result, showing a rudimentary path to aligning an AI with human values. At this point, the innovation was far from done, but we knew we needed to scale up each of these results with massive computational power.
We pressed on and made the technology better, and we launched ChatGPT three years ago. The world took notice, and then took much more notice when we launched GPT-4; all of a sudden, AGI was no longer a crazy thing to consider. These last three years have been extremely intense and full of stress and heavy responsibility; this technology has been integrated into the world at a scale and speed that no technology ever has been before. This required extremely difficult execution that we had to immediately build a new muscle for. Going from nothing to a massive company in this period of time was not easy and required that we make hundreds of decisions a week. I'm proud of how many of those the team has gotten right, and the ones we've gotten wrong are mostly my fault.
We have had to make new kinds of decisions; for example, as we wrestled with the question of how to make AI maximally beneficial to the world, we developed a strategy of iterative deployment, where we successively put early versions of the technology into the world, so that people can form intuitions and society and the technology can co-evolve. This was quite controversial at the time, but I think it has been one of our best decisions ever and has become the industry standard.
Ten years into OpenAI, we have an AI that can do better than most of our smartest people at our most difficult intellectual competitions.
The world has been able to use this technology to do extraordinary things, and we expect much more extraordinary things in even the next year. The world has also done a good job so far of mitigating the potential downsides, and we need to work to keep doing that.
I have never felt more optimistic about our research and product roadmaps, and overall line of sight towards our mission. In ten more years, I believe we are almost certain to build superintelligence. I expect the future to feel weird; in some sense, daily life and the things we care most about will change very little, and I'm sure we will continue to be much more focused on what other people do than we will be on what machines do. In some other sense, the people of 2035 will be capable of doing things that I just don't think we can easily imagine right now.
I am grateful to the people and companies who put their trust in us and use our products to do great things. Without that, we would just be a technology in a lab; our users and customers have taken what is in many cases an early and unreasonably high-conviction bet on us, and our work wouldn't have gotten to this level without them.
Our mission is to ensure that AGI benefits all of humanity. We still have a lot of work in front of us, but I'm really proud of the trajectory the team has us on. We are seeing tremendous benefits in what people are doing with the technology already today, and we know there is much more coming over the next couple of years.