03/19/2026 | Press release
Today, we're launching new AI tools for support and content enforcement on our apps to make them work better for you. As technology advances, we're applying AI in more ways so you can get reliable, action-oriented help when you need it, and we can catch more severe violations like scams faster and more accurately, with fewer over-enforcement mistakes.
Launching the Meta AI Support Assistant on Facebook and Instagram
In December, we previewed the Meta AI support assistant - a tool designed to provide reliable, 24/7 support for nearly any support issue at any time. Now, we're rolling it out globally on the Facebook and Instagram apps for iOS and Android, and within Help Center on Facebook and Instagram on desktop, with even more capabilities and ways to help.
When you have an account issue, you need a solution - not just a suggestion. The new Meta AI support assistant is designed to help resolve account problems for you from start to finish. It offers answers to any question - like about notification settings or new features - and, if you'd like, it can also take action for you on a growing set of requests directly within Facebook and, in the future, on Instagram, including:
Getting support should be simple. The Meta AI support assistant is built into Facebook and Instagram, so help is always just a tap away. It can respond to requests typically in under five seconds, dramatically reducing wait times compared to traditional help center searches or seeking answers on external websites.
The Meta AI support assistant is a major step in our work to deliver stronger support on our apps. In fact, among people who have provided feedback, the majority report a positive experience with the Meta AI support assistant. It's rolling out now for support topics in all languages supported by Facebook and Instagram.
We've also started rolling out the support assistant to people who need help logging into their Facebook and Instagram accounts, starting with select cases in the US and Canada, and we'll be expanding to more countries and other types of account access situations soon. We're continuing to invest in AI-powered tools to make support more accessible, reliable, and effective - and we'll keep evolving the Meta AI support assistant as more people use it and as the technology advances, so it continues to improve over time. Learn more about the Meta AI support assistant here.
Improving Content Enforcement With More Advanced AI
Last year, we shared the positive results of the changes we made to cut down on mistakes and focus our proactive enforcement on illegal and the most severe content on our platforms, such as terrorism, child exploitation, drugs, fraud, and scams. We also shared that we've been experimenting with more advanced AI systems for content enforcement to build on this progress - systems we believe can catch more of these violations more accurately while also stopping more scams and responding faster to real-world events, with fewer over-enforcement mistakes.
Early tests of these systems have been promising, as they can:
These more advanced AI systems can do all of this in languages spoken by 98% of people online - far beyond our previous coverage of around 80 languages. Importantly, they can increase capacity in any language based on need and adapt to understand cultural nuance - including niche subcultures - rapidly changing and regionally specific code words, emoji meanings, and slang.
A Smarter Approach
Over the next few years, we'll be deploying these more advanced AI systems across our apps once we've seen them consistently perform better than our current methods of content enforcement, transforming our approach. As we do this, we'll reduce our reliance on third-party vendors for content enforcement and focus on strengthening our internal systems and workforce. While we'll still have people who review content, these systems will be able to take on work that's better suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as illicit drug sales or scams.
Even as we use new technology to scale what's possible, people will remain at the center of our approach. AI can help us move faster and operate at scale, but it doesn't replace human judgment - it helps us apply it more consistently across billions of pieces of content on our platforms. Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high-impact decisions. For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement.
We're rigorously testing each of these AI systems, building in safeguards and evaluating their performance to protect against bias and ensure consistency and accuracy. Our Community Standards aren't changing as part of this shift, and with new tools like the Meta AI support assistant, we'll be improving our methods for reporting violating content and for appealing mistakes. Ultimately, this approach will also help ensure people do what people are best at and technology does what technology is best at - combining the scale and capabilities of advanced AI with the expertise and judgment of people, each strengthening the other.