This AI video generator could be almost as fast as shooting a real video
AI video company Runway now offers Gen-3 Alpha Turbo, a faster upgrade to the recently released Gen-3 Alpha model, itself the successor to Gen-2. The latest version is claimed to be seven times faster and half the price of Gen-3 Alpha, which is likely to generate a lot of interest among professional and amateur filmmakers interested in AI.
As the name suggests, Gen-3 Alpha Turbo is all about speed. The time between sending a prompt and seeing a video is reduced to near real-time production, according to Runway. The idea is to offer something for industries where that kind of speed is critical, such as social media content and timely advertising. The trade-off is quality. While Runway insists the Turbo model’s videos are essentially as good as the standard Gen-3 Alpha, the non-Turbo variant can produce higher-quality images throughout the video.
Still, the Turbo model is fast enough that Runway CEO Cristóbal Valenzuela boasted on X: “It now takes me longer to type a sentence than it does to create a video.”
Creatives who prefer to focus on plotting and producing videos rather than waiting for rendering will likely find Gen-3 Alpha Turbo appealing, doubly so now that the price is halved. A second of Turbo video costs five credits, as opposed to ten credits per second for a standard Gen-3 Alpha video. Credits on Runway are available in packages starting at $10 for 1,000 credits, so it’s the difference between 100 seconds of footage for $10 and 200 seconds for $10. Those interested can also try out the new model in a free trial.
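The pricing arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is purely illustrative based on the figures quoted in the article; the function and constant names are assumptions, not part of any Runway API:

```python
# Pricing figures from the article: credit packs start at $10 for 1,000
# credits; standard Gen-3 Alpha costs 10 credits per second of video,
# while Gen-3 Alpha Turbo costs 5 credits per second.

PACK_PRICE_USD = 10
PACK_CREDITS = 1_000

def seconds_per_pack(credits_per_second: int) -> float:
    """Seconds of generated video one $10 credit pack buys."""
    return PACK_CREDITS / credits_per_second

standard_seconds = seconds_per_pack(10)  # Gen-3 Alpha
turbo_seconds = seconds_per_pack(5)      # Gen-3 Alpha Turbo

print(f"Standard: {standard_seconds:.0f} s per ${PACK_PRICE_USD}")
print(f"Turbo:    {turbo_seconds:.0f} s per ${PACK_PRICE_USD}")
```

Running this confirms the article's figures: 100 seconds per pack for the standard model versus 200 seconds for Turbo.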
Film AI boom
Runway ML’s aggressive pricing and performance improvements come as the company faces stiff competition from other AI video generation models. OpenAI is most famous for its Sora model, but it’s far from the only one. Stability AI, Pika, Luma Labs’ Dream Machine and others are racing to bring AI video models to the public. Even TikTok’s parent company, ByteDance, has an AI video maker called Jimeng, though it’s limited to China for now.
Runway’s focus on speed and accessibility with the Turbo model could help it stand out from the crowd. Next, Runway plans to enhance its models with better controls and possibly even real-time interactivity. The Gen-3 Alpha Turbo model includes much of what video makers experimenting with AI want. But it will need to deliver consistently to truly beat the competition in turning words and images into video.
Providing reliable consistency in character and environment design is no small feat, but using a source image as a reference point to maintain coherence between different shots can help. In Gen-3, Runway’s AI can create a 10-second video guided by additional motion or text prompts in the platform. You can see how this works in the video below.
“Gen-3 Alpha Turbo Image to Video is now available and can generate 7x faster at half the cost of the original Gen-3 Alpha, while maintaining the same performance in many use cases. Turbo is available on all plans, including a free trial.”
Not only does Runway’s image-to-video feature help keep people and backgrounds consistent from shot to shot, but Gen-3 also includes Runway’s lip-sync feature, so a speaking character’s mouth moves to match the words being said. A user can tell the AI model what they want their character to say, and the movement will be animated accordingly. The combination of synchronized dialogue and realistic character movement will interest many marketing and advertising teams looking for new and ideally cheaper ways to produce video.
Next
Runway isn’t done expanding the Gen-3 platform either. The next step is to bring the same improvements to the video-to-video option: keep the same motion, but render it in a different style. A human running down a street, for example, becomes an animated anthropomorphic fox running through a forest. Runway will also bring its control features to Gen-3, such as Motion Brush, Advanced Camera Controls, and Director Mode.
AI video tools are still in the early stages of development. Most models excel at creating short-form content but struggle with longer narratives. This puts Runway in a strong position from a market perspective with its new features, but it is far from alone. Midjourney, Ideogram, Leonardo (now owned by Canva) and others are racing to be the ultimate AI video generator. Naturally, they are all watching OpenAI and its Sora video generator with caution. OpenAI has advantages in terms of exposure, among other things. In fact, Toys”R”Us has already produced a short commercial made with Sora and premiered it at the Cannes Lions Festival. Still, the AI video generation story is only in its first act, and which contender will be cheering in slow motion at the end is far from decided. As competition intensifies, Runway’s release of Gen-3 Alpha Turbo is a strategic move to maintain a leading position in the market.