Luma AI Launches Ray3, the First Video Model Designed to Reason

Key Takeaway:

Luma AI introduced Ray3, the world’s first video-generation model with reasoning ability, enabling consistent characters, natural scene progression, and film-grade HDR output.

Positioned as a creative partner for filmmakers, advertisers, and game developers, Ray3 marks a significant leap for creative industries. With Adobe integrating Ray3 directly into Firefly, Creative Cloud gains its first third-party AI video system, signaling a strategic pivot toward an open ecosystem. The model launched globally on September 18, 2025, via Luma’s Dream Machine platform and Adobe Firefly, with a 14-day unlimited-use period for paid Firefly and Creative Cloud Pro subscribers.

Luma AI Launches Ray3 - Credit - Luma AI

Luma AI Launches Ray3 – Key Points

  • First Video Model with Reasoning Capability

    Ray3 is the first generative video model able to critique its own work, add revision notes, and generate outputs with consistent characters, natural scene progression, and realistic physics. Unlike with prior models, projects can be paused and resumed. Built on a multimodal reasoning system, Ray3 produces text and visual tokens much as a director sketches a storyboard, enabling it to plan complex multi-step scenes and refine outputs on the fly. At more than twice the size of Ray2, Ray3 offers higher fidelity, stronger instruction-following, and greater temporal coherence, significantly reducing hallucinations.

  • Adoption by Adobe and Major Creative Agencies

    Adobe became the first external platform to launch Ray3 outside Luma’s Dream Machine, integrating it directly into the Firefly Video module and Firefly Boards, with seamless export into Premiere Pro for professional editing. Early agency adopters include Monks (S4), Galeria, Strawberry Frog, HUMAIN Create, and Dentsu Digital. Dentsu is introducing Ray3 into advertising production in Japan, while HUMAIN Create is expanding its use across the MENA region with a focus on culturally grounded content. Adobe’s launch campaign included 14 days of unlimited free Ray3 generations for paid Firefly and Creative Cloud Pro subscribers, underscoring its push to accelerate adoption at scale.

  • Draft Mode, Keyframe Control, and High-End Rendering

    Ray3 introduces a Draft Mode that accelerates iteration by up to 20×, allowing creators to explore dozens of variations quickly before selecting final sequences. The model generates in native 1080p (with 4K upscaling via a neural upscaler) and is the first to deliver video in 10-, 12-, and 16-bit HDR ACES2065-1 EXR format. This enables cinema-grade footage with richer dynamic range, deeper shadows, and brighter highlights, while also allowing SDR footage to be converted to HDR. Additional creative controls include Image-to-Video (animating stills), Keyframes (timing and transitions), Extend (lengthening shots), and Loop (seamless repeats). These features align with the technical standards of professional production pipelines.

  • Backed by Major Investors

    Luma AI is backed by Nvidia, Andreessen Horowitz, AWS, AMD, Amplify Partners, and Matrix Partners, alongside angel investors from the tech and entertainment industries. The company recently opened a Los Angeles studio (July 2025), is establishing a New York presence, and has grown its Dream Machine platform to 30+ million users, reinforcing its ambition to dominate the generative video space.

  • CEO Vision for Creative Intelligence

    Amit Jain, Luma AI’s co-founder and CEO, framed Ray3 as a “first step toward building intelligence for creative work.” He emphasized that prior generative models functioned like “slot machines”—powerful but unintelligent—whereas Ray3 can understand intent, evaluate its outputs, and refine results. By reasoning across words, images, and motion, Ray3 promises fidelity and coherence alongside guardrails for ethics, compliance, and cultural context, marking what Jain described as a long-awaited leap for creative industries.

  • Adobe’s Strategic Pivot

    Adobe’s rollout of Ray3 reflects its broader shift away from relying solely on proprietary Firefly models. In 2025, Firefly added Google’s Gemini 2.5 Flash (Nano Banana) alongside models from OpenAI, Ideogram, Pika, Black Forest Labs, and Runway, with Moonvalley and Topaz Labs integrations forthcoming. By embedding Ray3, Adobe is positioning Firefly as an AI ecosystem hub, aiming to centralize access to trusted models within its suite. Content Credentials are automatically attached to AI outputs, but responsibility for commercial use remains with users.

Why This Matters:

Ray3 establishes a new benchmark in generative video, marrying reasoning capability with industry-grade output standards. Its integration into Adobe Firefly accelerates mainstream access, transforming Firefly into a marketplace of leading AI models and reshaping Adobe’s long-term strategy. With 20× faster ideation, HDR EXR production-ready output, and adoption by major agencies across the US, Japan, and MENA, Ray3 signals a future where AI is embedded across film, advertising, and gaming pipelines. If successful, it will intensify competition between emerging AI startups and legacy creative software providers, driving the next era of AI-powered production.


This article was drafted with the assistance of generative AI. All facts and details were reviewed and confirmed by an editor prior to publication.
