Nice! Check out this wonderful SD animation and breakdown by Jim Derks.

So how was this done? Jim explains in just 5 steps:

1. Rotoscoped both characters with DaVinci Resolve Fusion's AI-powered Magic Mask, leaving two separate greenscreen videos, one per character.
2. Ran the videos through Stable Diffusion + ControlNet. The Stable Diffusion model was one based on the Spider-Verse movie, made by a user on GitHub. ControlNet told Stable Diffusion the exact posing and facial movement of the characters in every frame, effectively removing the need to train a Dreambooth model of the characters.
3. To stop the videos flickering as much as you usually see in AI videos, used Fusion's Deflicker node multiple times.
4. Blended the final character videos with an export from EbSynth (a piece of software where you put in one stylized still frame and an input video, and it generates a whole video in that style) for an even smoother result.
5. Put everything together with standard animation skills and compositing in Fusion to get to the end result.

Follow me for more AI-related discoveries and visual experiments.

Source: Jim Derks

#video #ai #gen1 #generatedvideo #filter #effect #stablediffusion #controlnet #multicontrolnet #corridordigital #aftereffects #davinciresolve #linkedai #artificialintelligence #beauty #animation #origami #videoproduction #ebsynth
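Fusion's Deflicker node itself is closed, but the core idea behind deflickering, pulling each frame's overall brightness toward a moving average of its neighbours, can be sketched in a few lines of NumPy. The `deflicker` helper below is a hypothetical illustration of that idea, not Resolve's actual implementation:

```python
import numpy as np

def deflicker(frames, window=3):
    """Reduce frame-to-frame brightness flicker (illustrative sketch only).

    frames: sequence of float arrays in [0, 1], all the same shape.
    window: number of neighbouring frames on each side to average over.
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Per-frame mean luminance.
    means = frames.reshape(len(frames), -1).mean(axis=1)
    out = []
    for i, frame in enumerate(frames):
        lo, hi = max(0, i - window), min(len(frames), i + window + 1)
        target = means[lo:hi].mean()           # local brightness average
        gain = target / max(means[i], 1e-8)    # rescale frame toward it
        out.append(np.clip(frame * gain, 0.0, 1.0))
    return np.stack(out)
```

Running such a pass multiple times, as the breakdown describes, smooths the brightness curve further each time at the cost of flattening intentional lighting changes.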
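The Magic Mask rotoscoping step produces a matte per character, and compositing those plates back over a background is a standard "over" operation. A minimal NumPy sketch, assuming straight (non-premultiplied) alpha; `alpha_over` is a hypothetical helper, not Fusion's node:

```python
import numpy as np

def alpha_over(fg, alpha, bg):
    """Composite foreground over background: fg where alpha=1, bg where alpha=0.

    fg, bg: float images in [0, 1] with matching shapes (H, W, C).
    alpha:  float matte in [0, 1], shape (H, W) or (H, W, 1).
    """
    if alpha.ndim == fg.ndim - 1:
        alpha = alpha[..., None]  # broadcast the matte across channels
    return fg * alpha + bg * (1.0 - alpha)
```

Each rotoscoped character plate is layered this way, then the deflickered and EbSynth-blended passes are mixed on top.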