"Moving images that shouldn't exist."
Animation made the hard way. Every frame rendered by a diffusion model, stitched together, and told — through prompts, keyframes, and strength schedules — to please just be consistent from one frame to the next. Sometimes it listens. Most times it doesn't. That's the whole point.
Built with Deforum (2D/3D camera-driven diffusion), AnimateDiff (motion modules layered on top of SD), and ComfyUI workflows wiring ControlNet, IP-Adapter, and video post-processing into something that resembles coherent footage.
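The "strength schedules" doing the consistency work are plain keyframe strings: frame numbers mapped to values, interpolated between keys. A sketch of the relevant Deforum settings with hypothetical values (field names follow Deforum's settings format, but verify against your version):

```json
{
  "animation_mode": "3D",
  "strength_schedule": "0: (0.65), 120: (0.45), 240: (0.65)",
  "translation_z": "0: (1.5)",
  "rotation_3d_y": "0: (0.4)",
  "animation_prompts": {
    "0": "a city dissolving into static, film grain",
    "120": "the same city rebuilt from memory, softer light"
  }
}
```

Roughly: higher strength keeps more of the previous frame in each img2img pass, so dropping it mid-sequence is how you let the model drift and hallucinate between keyframes.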
Some of these are finished pieces. Some are experiments I couldn't let go of. All of them are what happens when you let a diffusion model try to remember what the last frame looked like.
The specific tools behind these pieces, each with its own quirks: Deforum for camera-driven diffusion, AnimateDiff for motion modules, ComfyUI for custom pipelines, and a custom model I call CommandA for the look.