Runway Video Editor
Not long ago, RunwayML pushed the boundaries of generative AI with Gen-1, a video-to-video model that lets you use words and images to generate new videos from existing ones. Since launch, the model has steadily improved: better temporal consistency, better fidelity, and better results. And as more and more people gained access, entirely new use cases and displays of creativity were unlocked.

Runway Video Editor Gen-2

And today the Runway team is excited to announce its biggest unlock yet: text-to-video with Gen-2. Now you can generate a video with nothing but words: no driving video, no input image. This represents yet another major research milestone and another monumental step forward for generative AI. Anyone, anywhere can suddenly realize entire worlds, animations, stories, and anything they can imagine!

Gen-2 enables you to create new videos with realism and consistency. You can either use an image or text prompt to adjust the composition and style of an existing video (Video to Video) or generate content using only words (Text to Video). It’s akin to producing a new clip without ever recording anything.

Runway Gen-2 Capabilities

Building on the foundation laid by Gen-1, Gen-2 offers a diverse range of modes to cater to your creative needs. Dive into the array of functionalities it presents:

  • Mode 01: Text to Video
    Craft videos using solely a text prompt, bringing any imagined style to life. Essentially, if you can articulate it, you can visualize it.
  • Mode 02: Text + Image to Video
    Combine the essence of a chosen image with a text prompt to produce a fresh video. This mode fuses visual cues with textual inspiration for a unique output.
  • Mode 03: Image to Video (Variations Mode)
    Transform a static image into a dynamic video. This mode breathes life into your chosen image, showcasing it in diverse variations.
  • Mode 04: Stylization
    Embed the distinct style of any chosen image or prompt onto your video’s entirety. It’s about giving a consistent visual tone to every moment of your clip.
  • Mode 05: Storyboard
    Transform mere mockups into animated, fully-stylized visuals. Watch static designs spring into animated existence.
  • Mode 06: Mask
    Pinpoint and isolate subjects in your footage, and reshape them using textual descriptions. It’s a powerful tool for refining and redefining video content.
  • Mode 07: Render
    Enhance raw, untextured renders by imposing the characteristics of an input image or text. It’s about refining the rough edges into polished visuals.
  • Mode 08: Customization
    Harness Gen-2’s full capabilities, tweaking and tuning the model for even sharper, more lifelike results. Your imagination is the only limit.

Setting a new benchmark in video generation, user studies indicate that Gen-1 is preferred over existing methods: 73.53% of participants favored it over Stable Diffusion 1.5, and 88.24% favored it over Text2Live. As we enter a transformative period for moving pictures, Runway Research remains committed to developing advanced multimodal AI systems, and Gen-2 underscores its significant strides in fostering novel forms of creativity.
