A native 4K audio-video model with open training code, designed for on-device deployment and real-world production workflows.
NEW YORK, Jan. 06, 2026 (GLOBE NEWSWIRE) -- Lightricks today announced the open-source release of LTX-2, the first production-ready model to combine truly open audio and video generation with native 4K output and synchronized, expressive sound. The release provides full access to model weights, inference, and training code, along with extensive options for fine-tuning and deeper customization for builders and creative professionals.
“LTX-2 is the first truly open audio-video model, released with open weights and training code, and designed to run locally on consumer GPUs,” said Zeev Farbman, Co-founder and CEO of Lightricks. “It delivers the kind of quality and performance teams usually associate with closed systems, without giving up control, transparency, or the ability to customize. We believe this release marks a meaningful shift for both research and real-world production pipelines, expanding what teams can build and who gets to build it.”
LTX-2 is capable of generating synchronized video and audio up to 20 seconds long, rendered at native 4K resolution and 50 frames per second, while maintaining expressive lip sync and audio fidelity that exceed existing open-source systems. The release includes both the full model and a distilled variant designed for significantly faster inference with minimal quality trade-offs, giving teams direct control over the balance between performance and fidelity based on their hardware and use case. Providing a production-grade distilled model out of the box removes a costly and complex step for developers and enables broader deployment across a wider range of systems. Support for ComfyUI pipelines is included to accelerate integration and experimentation.
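The full-versus-distilled tradeoff described above can be sketched as a simple variant selector. This is only an illustration: the variant names and the 24 GB VRAM cutoff are assumptions for the sketch, not published hardware requirements.

```python
def choose_variant(vram_gb: float, need_max_fidelity: bool) -> str:
    """Pick an LTX-2 variant for a given GPU.

    The 24 GB cutoff and variant names are hypothetical
    illustrations, not official requirements.
    """
    if need_max_fidelity and vram_gb >= 24:
        return "ltx-2-full"       # full model: highest fidelity, slower
    return "ltx-2-distilled"      # distilled: faster inference, minimal quality loss

print(choose_variant(16, need_max_fidelity=True))   # smaller GPU -> distilled
print(choose_variant(48, need_max_fidelity=True))   # workstation GPU -> full
```

The point of shipping the distilled model out of the box is precisely that this decision becomes a deployment-time switch rather than a distillation project.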
Optimized for the NVIDIA Ecosystem
LTX-2 is optimized to run efficiently across the NVIDIA ecosystem, from GeForce RTX GPUs and NVIDIA DGX Spark to full enterprise-grade data center systems. This enables creators to generate production-quality content on local PCs, while giving enterprises a clear path to scale deployments in more demanding environments.
LTX-2 has been quantized to NVFP8, reducing the model size by ~30% and improving performance by up to 2x. ComfyUI has also been optimized to run LTX-2 models, further improving performance. These optimizations allow LTX-2 to deliver comparable or better results with substantially lower compute requirements than existing open-source audio-video models, enabling faster iteration and more accessible high-quality generation.
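The quoted savings translate into a simple back-of-envelope estimate. Only the ~30% size reduction and up-to-2x speedup come from the release; the baseline footprint and latency below are hypothetical placeholders.

```python
def nvfp8_estimate(baseline_gb: float, baseline_s_per_clip: float):
    """Estimate NVFP8 footprint and best-case latency from a
    higher-precision baseline, using the figures in the release:
    ~30% smaller model, up to 2x faster inference."""
    size_gb = baseline_gb * 0.70          # ~30% size reduction
    latency_s = baseline_s_per_clip / 2   # up-to-2x speedup (best case)
    return size_gb, latency_s

# Hypothetical 20 GB checkpoint that takes 60 s per clip at baseline:
size, latency = nvfp8_estimate(20.0, 60.0)
print(f"{size:.1f} GB, {latency:.0f} s")  # 14.0 GB, 30 s
```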
On-Device Performance and Privacy
Unlike cloud-only solutions, LTX-2’s on-device capabilities keep sensitive projects private and secure. Because the model runs locally on NVIDIA hardware, studios can iterate on unreleased IP, enterprises can maintain strict regulatory compliance, and creators can work without bandwidth limitations or usage restrictions. Keeping all data and generation fully local gives organizations complete control over their intellectual property and ensures that sensitive creative workflows never leave secure environments.
Customization, Training, and Creative Control
One of the most powerful aspects of open-sourcing LTX-2 is the ability for teams to customize it to their own creative or production needs. For the first time, an open model supports native audio and video generation together, allowing creators and studios to fine-tune the system directly on their own IP and their distinctive visual or sonic language.
The release includes:
- Full Model Architecture and Weights: available for immediate download.
- Training Framework: enabling efficient fine-tuning and LoRA creation for specific styles.
- ComfyUI Support: included out of the box for streamlined workflow integration.
- Research Transparency: benchmarks, test suites, and training insights to support reproducible multimodal research.
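The training framework's LoRA support might be driven by a configuration along these lines. Every field name here is a hypothetical illustration of the idea, not the framework's actual schema; the parameter-count helper shows why low-rank adapters make fine-tuning efficient.

```python
# Hypothetical LoRA fine-tuning config for a style adapter.
# Field names are illustrative; consult the released training
# framework for the real schema.
lora_config = {
    "base_model": "ltx-2",    # open-weight checkpoint from the release
    "rank": 16,               # LoRA rank: adapter capacity vs. size
    "alpha": 32,              # scaling factor applied to the adapter
    "target": "attention",    # which layers receive low-rank adapters
    "learning_rate": 1e-4,
    "train_steps": 2000,
}

# A rank-r adapter on a d_out x d_in weight matrix stores
# r * (d_in + d_out) parameters instead of d_in * d_out --
# the source of "efficient fine-tuning".
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    return rank * (d_in + d_out)

print(lora_params(4096, 4096, lora_config["rank"]))  # 131072, vs ~16.8M full
```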
Ecosystem and Availability
LTX-2 is released with full model weights, training code, and benchmarks available to all users. The model is free to use for academic research and for commercial use for companies with less than $10M in annual recurring revenue (ARR). Organizations above this threshold are required to obtain a commercial license that enables continued use of the same open-weight model in production, along with options for enterprise-grade support, deployment flexibility, and customization.
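The licensing rule above is simple enough to express directly. The function below only restates the $10M ARR threshold from this release; treatment of revenue exactly at the threshold is an assumption, and the actual license text governs.

```python
def needs_commercial_license(annual_recurring_revenue_usd: float) -> bool:
    """True if the stated $10M ARR threshold requires a commercial license.

    The release says use is free for companies with less than $10M ARR;
    treating exactly $10M as over the line is this sketch's assumption.
    """
    return annual_recurring_revenue_usd >= 10_000_000

print(needs_commercial_license(2_500_000))   # False: free commercial use
print(needs_commercial_license(12_000_000))  # True: commercial license required
```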
LTX-2 is also available through a self-serve API on the website and accessible directly within the LTX platform, with integrations through Fal, Replicate, ComfyUI, OpenArt, and others. These partners are adopting LTX-2 to bring high-fidelity, synchronized generative video into their products, enabling end-to-end workflows from prototyping and editing to production-ready output.
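A request to the self-serve API might carry a body along the lines below. The field names and structure are assumptions for illustration, not the documented schema; only the capabilities (synchronized audio-video, native 4K, 50 fps, clips up to 20 seconds) come from the announcement.

```python
import json

# Hypothetical request body for the self-serve API; field names are
# illustrative assumptions, not the documented schema.
payload = {
    "model": "ltx-2-distilled",   # or the full model, per the release
    "prompt": "a rainy neon street, handheld camera, ambient city sound",
    "resolution": "3840x2160",    # native 4K per the release
    "fps": 50,                    # release states 50 frames per second
    "duration_seconds": 20,       # up to 20 s of synchronized A/V
    "audio": True,                # generate synchronized sound
}

print(json.dumps(payload, indent=2))
```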
About Lightricks
Lightricks is an AI-first company creating next-generation content creation technology for businesses, enterprises, and studios. Its proprietary foundation models, infrastructure, and creative platforms power every stage of production - from concept to final render - enabling high-quality, efficient, and scalable creation across industries.
At the center of this innovation is LTX-2, its open-weights foundation model. The company is also known globally for pioneering consumer creativity through products like Facetune, one of the world’s most recognized creative brands, which helped introduce AI-powered visual expression to hundreds of millions of users.
Media Contact:
Marguerite Pinheiro
Mpinheiro@thisisoutcast.com
973-557-5974