Tatiana Tsiguleva tested Runway Gen-3 Alpha Image to Video: Conclusions.
- Obviously, it’s better than Gen-2.
- It’s different from Luma AI. I don’t know exactly how either of them was developed, but after several tests I get the feeling that Luma often builds a 3D representation from the image and then renders a video, while Runway applies motion patterns learned directly from the videos it was trained on. Just my assumption.
- The proportion of good results is much higher than in Gen-2, but adding a prompt often degrades the aesthetic of the initial image.
- Simple prompts work better.
- If you don’t add a prompt, in most cases it will generate a simple effect like a zoom-in.
- A single object in the scene often produces better results than many objects.
- Very simple scenes look pretty good, but more complex ones are still far from perfect.
- And finally, as with all other tools, the initial image is crucial for getting great results.
- And yeah, people blink if you specify it in the prompt. 🙂
Overall, it’s nice to see the progress.
Have you tried it already?