Runway just changed filmmaking forever: Act-One lets you control AI characters

Runway, one of the leading artificial intelligence video platforms, has just announced a new feature that will completely change the game for character consistency and filmmaking in general.

Act-One is a new approach to AI video generation. It is a form of modern-day puppeteering, letting you film yourself or an actor performing a part and then use AI to completely change the way they look. This tackles one of the biggest problems with AI video: character consistency.

Access to Act-One will begin gradually rolling out over the coming weeks, and Runway says it will soon be available to everyone.

AI video tools are getting much better at human motion, lip-syncing and character development, but they still have a way to go before they can bridge the 'obviously AI' gap. Runway's new tool may have finally closed it.

Instead of leaving the AI to work out how the character should move or react, Act-One lets you upload a video of a performance along with a control image (to set the character's look) and essentially maps that image onto your performance.

What is Runway Act-One?

Introducing Act-One | Runway – YouTube



For me, the true benefit of AI video will come from merging real footage with generative AI rather than relying on AI alone. The best films already combine visual effects with model shots and live-action footage, and artificial intelligence is just an extension of that.

Runway's Act-One puts human performance front and center, using AI as an overlay. You're essentially turning the human into the puppet master, a bit like Andy Serkis and his performance of Gollum in "The Lord of the Rings", only without the need for motion-capture suits and expensive cameras.


I haven't had the chance to try it yet, but judging by some of the examples shared by Runway, it's as simple as sitting in front of a camera and moving your head around. Parts of this have been available for some time, including from Adobe, but without the generative AI component.

But Act-One goes much further than any tool we've seen so far. According to Runway: "With Act-1, eye-lines, micro-expressions, pacing and delivery are all faithfully represented in the final generated output."

It also goes beyond simple puppeteering: Act-One can create complex scenes using Runway's existing Gen-3 AI video technology and integrate the human performance into them.

The company explained on X: “One of the model’s strengths is producing cinematic and realistic outputs across a robust number of camera angles and focal lengths, allowing you to generate emotional performances with previously impossible character depth, opening new avenues for creative expression.”


More from Tom’s Guide

