OpenAI Inc.

04/26/2026 | Press release | Distributed by Public on 04/26/2026 18:05

Our principles

April 26, 2026

By Sam Altman


AI has the potential to significantly improve many aspects of society.

This technology, like others before it, will give people more capability and agency; what people will be able to do with AI will dwarf what people could do with steam engines or electricity.

We envision a world with widespread flourishing at a level that is currently difficult to imagine, and a world in which individual potential, agency, and fulfillment significantly increase. A lot of the things we've only let ourselves dream about in sci-fi could become reality, and most people could live more meaningful lives than most are able to today.

But this outcome is not guaranteed. Power in the future can either be held by a small handful of companies using and controlling superintelligence, or it can be held in a decentralized way by people. We believe the latter is much better, and our goal is to put truly general AI in the hands of as many people as possible. Like the present, the future won't be all bad or all good, but the decisions we make now can help maximize the good.

Our mission is to ensure that AGI benefits all of humanity. Here are the principles that guide our work.

1. Democratization. We will resist the potential of this technology to consolidate power in the hands of the few.

This means that in addition to giving everyone access to AI, we need to ensure that key decisions about AI are made via democratic processes and with egalitarian principles, not just by AI labs.

2. Empowerment. We believe AI can empower everyone to achieve their goals, learn more, be happier and more fulfilled, and pursue their dreams, and that society as a whole will benefit from this.

Achieving this requires letting people explore the enormous potential in front of us, and we need to build products that enable this. Users should reliably be able to accomplish increasingly valuable tasks with our services.

The world is diverse and people have different needs. We want to give our users the autonomy they need and allow as much as we reasonably can.

Although we want to give our users very broad latitude in how they use our services and strongly believe that AI will be hugely beneficial on the whole, we have a responsibility to build and deploy it in a way that minimizes harm. This includes of course preventing catastrophic harm, but also minimizing local harms and avoiding potential corrosive societal effects. This will mean erring on the side of caution in the face of uncertainty, and relaxing constraints with more evidence.

3. Universal prosperity. We want a future where everyone can have an excellent life.

By putting easy-to-use AI systems with a lot of compute power into the hands of everyone, we believe people will find new ways to generate value and massively improve quality of life for everyone, especially through the discovery of new science.

For prosperity to be fully realized and widely shared, we believe that 1) our governments may need to consider new economic models to ensure that everyone can participate in the value creation in front of us and 2) we need to build huge amounts of AI infrastructure and develop new technology to drive costs of AI infrastructure way down.

A lot of the things we do that look weird (buying huge amounts of compute while our revenue is relatively small, vertically integrating to lower costs and make our technology easier to use, pushing to build datacenters all around the world, and much more) are driven by our fundamental belief in a future of universal prosperity.

4. Resilience. AI will introduce new risks, and we will work with other companies, ecosystems, governments, and society to solve them. We will make significant use of our Foundation's resources to support this work.

No AI lab can ensure a good future alone. For an obvious example, there may be extremely capable models that make it easier to create a new pathogen, and we need a society-wide approach to defend against this with pathogen-agnostic countermeasures. For another example, as the cybersecurity capabilities of models increase, we need to rapidly use these models to help secure open-source software and critical infrastructure, while training the models to help everyone create more secure software.

This is an expansion of our long-held strategy of iterative deployment; we believe society needs to contend with each successive level of AI capability, understand it, integrate it, and figure out the best path forward together. This cannot be done in a vacuum; society and technology co-evolve, and that requires time.

We do not mean this as our only safety strategy; we also need to make safe systems and continue to do great work on technical alignment.

We expect there will be periods where we need to collaborate with governments, international agencies, and other AGI efforts to ensure that we have sufficiently solved serious alignment, safety, or societal problems before proceeding further with our work.

5. Adaptability. We continue to believe the only way to meet the challenges of a very unpredictable future is to be prepared to update our positions as we learn more. We also acknowledge that OpenAI is a much larger force in the world than it was a few years ago, and we will be transparent about when, how, and why our operating principles change. As a concrete example, while we are quite confident that universal prosperity will remain really important, we can imagine periods in the future where we have to trade off some empowerment for more resilience.

AI development has brought many surprises, and more are still to come. As the technology advances, its emergent behaviors will become increasingly difficult to predict. We embrace that uncertainty by advancing capabilities carefully, deploying systems iteratively, and learning from their interactions with the world.

It wasn't that long ago that we were nervous about releasing the weights of GPT-2 because we weren't sure what the impacts on society would be. Obviously in retrospect that was a misplaced worry, but it led us to discover the strategy of iterative deployment, which has been one of the most important things we've figured out.


We are heading into a very impactful phase as the technology continues to improve. It's very fair to critique us on every decision; we deserve an enormous amount of scrutiny given the weight of what we are doing. We will not get everything right, but we will learn quickly and course-correct.

We are committed to doing our part to make the future better than the past; we feel lucky to get to take on such important work.
