Modern text-to-video synthesis models demonstrate coherent, photorealistic generation of complex videos from a text description. However, most existing models lack fine-grained control over camera movement, which is critical for downstream applications in content creation, visual effects, and 3D vision. Recently, new methods have demonstrated the ability to generate videos with controllable camera poses; these techniques leverage pre-trained U-Net-based diffusion models that explicitly disentangle spatial and temporal generation. Still, no existing approach enables camera control for new, transformer-based video diffusion models that process spatial and temporal information jointly. Here, we propose to tame video transformers for 3D camera control using a ControlNet-like conditioning mechanism that incorporates spatiotemporal camera embeddings based on Plücker coordinates. Our approach demonstrates state-of-the-art performance for controllable video generation after fine-tuning on the RealEstate10K dataset. To the best of our knowledge, our work is the first to enable camera control for transformer-based video diffusion models.
We adapt a FIT-based architecture to incorporate camera control. The model takes as input the noisy video together with the camera extrinsics and intrinsics for each video frame. From these camera parameters, we compute the Plücker coordinates of every pixel in the video frames. Both the input video and the Plücker coordinate frames are converted to patch tokens, and the video patch tokens are conditioned on the camera tokens using a mechanism similar to ControlNet. The model then estimates the denoised video by recurrent application of FIT blocks: each block reads information from the patch tokens into a small set of latent tokens, performs computation on these latents, and writes the results back to the patch tokens. The full model is applied repeatedly as part of the iterative denoising diffusion process.
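The sketch below (not the released implementation) illustrates the two conditioning steps described above: computing per-pixel Plücker coordinates from the camera intrinsics and extrinsics, and injecting the resulting camera tokens into the video patch tokens through a zero-initialized, ControlNet-style projection. Tensor layouts, the patch size, and module names such as `PluckerConditioner` are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch, assuming PyTorch tensors and an additive ControlNet-style
# injection into the video patch tokens. Shapes and names are illustrative.
import torch
import torch.nn as nn


def plucker_embedding(K, c2w, H, W):
    """Per-pixel Plücker coordinates (o x d, d) for one frame.

    K:   (3, 3) camera intrinsics.
    c2w: (4, 4) camera-to-world extrinsics.
    Returns a (6, H, W) tensor.
    """
    # Pixel grid sampled at pixel centers.
    v, u = torch.meshgrid(
        torch.arange(H, dtype=torch.float32) + 0.5,
        torch.arange(W, dtype=torch.float32) + 0.5,
        indexing="ij",
    )
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(3, -1)  # (3, H*W)

    # Ray directions in camera space, rotated into world space and normalized.
    dirs = c2w[:3, :3] @ (torch.linalg.inv(K) @ pix)
    dirs = dirs / dirs.norm(dim=0, keepdim=True)

    # Camera origin in world space, broadcast to every pixel.
    origin = c2w[:3, 3:4].expand_as(dirs)

    # Plücker coordinates: moment (o x d) concatenated with direction d.
    moment = torch.cross(origin, dirs, dim=0)
    return torch.cat([moment, dirs], dim=0).reshape(6, H, W)


class PluckerConditioner(nn.Module):
    """ControlNet-like injection of camera tokens into video patch tokens."""

    def __init__(self, dim, patch=2):
        super().__init__()
        # Patchify the 6-channel Plücker frames to match the video patch tokens.
        self.patchify = nn.Conv2d(6, dim, kernel_size=patch, stride=patch)
        # Zero-initialized projection so training starts from the behavior of
        # the unconditioned pre-trained model (ControlNet-style).
        self.zero_proj = nn.Linear(dim, dim)
        nn.init.zeros_(self.zero_proj.weight)
        nn.init.zeros_(self.zero_proj.bias)

    def forward(self, video_tokens, plucker_frames):
        # plucker_frames: (T, 6, H, W) -> (T * H/p * W/p, dim) camera tokens.
        cam = self.patchify(plucker_frames)
        cam = cam.flatten(2).transpose(1, 2).flatten(0, 1)
        # Additive conditioning, assuming video_tokens uses the same ordering.
        return video_tokens + self.zero_proj(cam)
```

A hypothetical usage would call `plucker_embedding` once per frame, stack the results into `(T, 6, H, W)`, and apply `PluckerConditioner` to the patch tokens before they enter the FIT blocks; the zero-initialized projection keeps the pre-trained generator unchanged at the start of fine-tuning.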
Figure: Camera-controlled generation. Given a camera input and a reference trajectory video, we compare generations from our method, CameraCtrl, and MotionCtrl.