Google LLC

03/25/2026 | Press release | Distributed by Public on 03/25/2026 10:38

Build with Lyria 3, our newest music generation model


Lyria 3 and Lyria 3 Pro, our music generation models, are rolling out now to developers in public preview through the Gemini API and a new audio experience in Google AI Studio.

Lyria 3 is designed to combine deep musical awareness with structural coherence. This allows developers to build apps that offer high-fidelity compositions, including vocals, verses and choruses, that maintain musical consistency from the first note to the last.

Studio quality and speed

Developers can now choose between two distinct model variants designed to meet specific production and latency requirements:

  • Lyria 3 Pro (lyria-3-pro-preview): Our premier model for full-length song generation, creating tracks up to approximately three minutes long. Its professional-grade structural awareness makes it the standard for studio-quality, premium output.
  • Lyria 3 Clip (lyria-3-clip-preview): Optimized for speed and high-volume requests, this variant generates high-quality 30-second clips. It is the ideal choice for rapid prototyping, background loops and social media assets.
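With two variants exposed under distinct preview IDs, the choice reduces to a trade-off between track length and latency. As a minimal sketch (the model IDs come from this announcement; the helper function and its selection criteria are illustrative, not part of the API):

```python
# Hypothetical helper: choose a Lyria 3 model ID for a given use case.
# Only the model ID strings come from the announcement; the selection
# logic below is an illustrative assumption.

FULL_SONG_MODEL = "lyria-3-pro-preview"  # ~3-minute, studio-quality tracks
CLIP_MODEL = "lyria-3-clip-preview"      # fast, high-volume 30-second clips

def pick_model(need_full_song: bool, latency_sensitive: bool) -> str:
    """Return the preview model ID that best fits the request."""
    if need_full_song and not latency_sensitive:
        return FULL_SONG_MODEL
    # Prototyping, background loops and social assets favor the clip model.
    return CLIP_MODEL

print(pick_model(need_full_song=True, latency_sensitive=False))  # lyria-3-pro-preview
print(pick_model(need_full_song=False, latency_sensitive=True))  # lyria-3-clip-preview
```

In practice you would pass the returned ID as the model name in your Gemini API request; see the Music Generation Guide for the actual call.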

Both models support realistic vocals that convey expressive nuance, plus improved clarity for more natural sounds. Developers can also explore global languages and genres. Generate vocals in different languages, and create music spanning genres from pop to funk to Motown.

Precision control and multimodal input

Lyria 3 introduces granular controls that allow you to direct the model with precision through natural language prompts:

  • Tempo conditioning: Set a specific tempo (e.g., fast or slow) with high accuracy to ensure the music fits your application's rhythm.
  • Time-aligned lyrics: You can outline the progression of a song in your prompt and control when lyrics start and end within a track.
  • Multimodal image-to-music input: Beyond text, Lyria 3 supports multimodal inputs. You can provide an image to influence the mood, style and atmosphere of the audio.
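Because these controls are driven through natural language rather than a fixed schema, an application can assemble them into a single prompt string. The sketch below is one illustrative way to do that; the phrasing and function are our own, not an official format:

```python
# Illustrative prompt assembly for Lyria 3's natural-language controls:
# tempo conditioning plus time-aligned lyrics. The wording is an assumption;
# the model accepts free-form text, so there is no required template.

def build_prompt(style: str, tempo: str,
                 timed_lyrics: list[tuple[str, str]]) -> str:
    """Compose a prompt with a tempo directive and time-aligned lyric cues."""
    lines = [f"A {tempo}-tempo {style} track."]
    for timing, lyric in timed_lyrics:
        # Each cue pins a lyric to a window within the track.
        lines.append(f'At {timing}, sing: "{lyric}"')
    return " ".join(lines)

prompt = build_prompt(
    style="funk",
    tempo="fast",
    timed_lyrics=[
        ("0:10-0:30", "City lights are calling"),
        ("0:45-1:05", "Dancing till the morning"),
    ],
)
print(prompt)
```

An image input would then be attached alongside this text as a separate multimodal part of the request, rather than embedded in the prompt string.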

Try Lyria 3 in Google AI Studio

To help you start experimenting immediately, we are also launching a new music generation experience in AI Studio. With a paid API key, this dedicated workspace gives you a first-class environment to create with Lyria 3 and explore its advanced features like image-to-music.

Inside the playground, you can explore two powerful creation modes for music:

  1. Text mode: Describe the music you want to hear using natural language including parameters like Tempo or Key.
  2. Composer mode: Build your song section by section, from intro to verses to bridges and more. This mode gives you granular control to set timing, intensity and descriptions for each part individually.
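Composer mode's per-section controls map naturally onto structured data in an application. As a rough sketch (the data structure and the flattening into one prompt are illustrative assumptions; in AI Studio these fields are set through the UI):

```python
# Sketch of composer-mode-style data: each section carries its own timing,
# intensity and description, then everything is flattened into one
# natural-language prompt. The structure and phrasing are illustrative.
from dataclasses import dataclass

@dataclass
class Section:
    name: str         # e.g. "intro", "verse", "bridge"
    start: str        # timestamp where the section begins, e.g. "0:00"
    intensity: str    # e.g. "soft", "building", "peak"
    description: str  # free-form description of the section's sound

def sections_to_prompt(sections: list[Section]) -> str:
    """Render the section list as a single structured prompt."""
    parts = [f"{s.name} at {s.start} ({s.intensity}): {s.description}"
             for s in sections]
    return "Song structure: " + "; ".join(parts)

song = [
    Section("intro", "0:00", "soft", "warm Rhodes chords"),
    Section("verse", "0:20", "building", "breathy vocals over a funk bassline"),
    Section("bridge", "1:30", "peak", "horn stabs and layered harmonies"),
]
print(sections_to_prompt(song))
```

Keeping sections as data rather than raw text makes it easy to reorder parts, adjust one section's intensity, and regenerate without rewriting the whole prompt.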

Start composing today

Lyria 3 Clip and Lyria 3 Pro are now available in public preview for developers globally.

We have been developing our music generation tools in close partnership with industry experts to ensure AI serves as an additive force for human creativity. Additionally, every track generated by Lyria 3 includes a SynthID digital watermark. This technology maintains transparency and trust by allowing anyone to identify and verify audio generated by Google AI, even after the audio has been modified.

  • Try it in Google AI Studio: Use the model selection dropdown to select Lyria 3 (30s) or Lyria 3 Pro (Full Song) and start experimenting in the playground.
  • Explore the documentation: Visit the Music Generation Guide for prompt guides, API references and code snippets to jumpstart your integration.
  • Start coding with the cookbook: Check the cookbook guide to get started with the API.