RealCam-I2V: Real-World Image-to-Video Generation with Interactive Complex Camera Control

*Under Review

Abstract

Recent advancements in camera-trajectory-guided image-to-video generation offer higher precision and better support for complex camera control than text-based approaches. However, they also introduce significant usability challenges, as users often struggle to provide precise camera parameters for arbitrary real-world images without knowledge of their depth or scene scale. To address these real-world application issues, we propose RealCam-I2V, a novel diffusion-based video generation framework that integrates monocular metric depth estimation to establish a 3D scene reconstruction in a preprocessing step. During training, the reconstructed 3D scene enables scaling camera parameters from relative to absolute (metric) scale, ensuring compatibility and scale consistency across diverse real-world images. At inference, RealCam-I2V offers an intuitive interface where users can precisely draw camera trajectories by dragging within the 3D scene. To further enhance precise camera control and scene consistency, we propose scene-constrained noise shaping, which constrains the latents at high noise levels while leaving lower noise levels unconstrained, allowing the framework to maintain dynamic and coherent video generation. RealCam-I2V achieves significant improvements in controllability and video quality on RealEstate10K and out-of-domain images, and further enables applications such as camera-controlled looping video generation and generative frame interpolation.

Demo (CogVideoX-5B-I2V 720x480)


Complex Trajectories & Scene Dynamics


Demo (DynamiCrafter 512x320)

Landscape


Cartoon

Food


Human

Pets


Product Demo

Chinese Antique


Transition & Loop

Method

Step 1 (Training & Inference): Construct a 3D point cloud via monocular metric depth estimation.
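A minimal sketch of this step, assuming a generic metric depth estimator and known (or default) pinhole intrinsics; the function name and the unprojection details are illustrative, not the exact implementation:

```python
import numpy as np

def unproject_to_point_cloud(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Back-project an (H, W) metric depth map (in meters) into an
    (H*W, 3) point cloud in camera coordinates, given 3x3 intrinsics K."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pixels @ np.linalg.inv(K).T       # ray directions with z = 1
    points = rays * depth.reshape(-1, 1)     # scale each ray by its metric depth
    return points

# Hypothetical usage: `depth` from any monocular metric depth model,
# `K` from EXIF metadata or a default field-of-view assumption.
# points = unproject_to_point_cloud(depth, K)
```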


Step 2 (Training): Align camera parameters from relative scale to absolute (metric) scale.
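A sketch of one way this alignment could be done, assuming a closed-form least-squares scale fit between the relative-scale (e.g. SfM-derived) depth of the first frame and its metric depth; the helper names are hypothetical:

```python
import numpy as np

def align_scale(rel_depth: np.ndarray, metric_depth: np.ndarray,
                valid: np.ndarray) -> float:
    """Least-squares scale s minimizing ||s * rel - metric||^2
    over the pixels marked valid."""
    r, m = rel_depth[valid], metric_depth[valid]
    return float((r * m).sum() / (r * r).sum())

def rescale_trajectory(extrinsics: np.ndarray, s: float) -> np.ndarray:
    """Apply the scale to the translation part of (N, 4, 4) world-to-camera
    extrinsics; rotations are scale-invariant, so only t changes."""
    out = extrinsics.copy()
    out[:, :3, 3] *= s
    return out
```

With this, relative-scale camera trajectories from different sources are brought into a shared metric frame, which is what makes training scale-consistent across diverse images.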


Step 3 (Inference): Render a preview video along the user-specified camera trajectory on the reconstructed 3D scene.
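An illustrative z-buffer point-splat renderer for the preview video; a real implementation would use a proper point or mesh renderer, so this loop is only a sketch under those assumptions:

```python
import numpy as np

def render_preview(points, colors, K, extrinsics, H, W):
    """points: (P, 3) metric point cloud; colors: (P, 3) uint8;
    extrinsics: (N, 4, 4) world-to-camera poses along the trajectory.
    Returns (N, H, W, 3) preview frames."""
    frames = np.zeros((len(extrinsics), H, W, 3), dtype=np.uint8)
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    for i, E in enumerate(extrinsics):
        cam = pts_h @ E.T                        # world -> camera
        in_front = cam[:, 2] > 1e-6
        proj = cam[in_front, :3] @ K.T
        uv = (proj[:, :2] / proj[:, 2:3]).astype(int)
        z, col = cam[in_front, 2], colors[in_front]
        order = np.argsort(-z)                   # far first; near overwrites
        u, v = uv[order, 0], uv[order, 1]
        keep = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        frames[i, v[keep], u[keep]] = col[order][keep]
    return frames
```

Pixels with no projected point stay black; these unfilled regions are exactly where the next step leaves the diffusion model free to hallucinate content.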


Step 4 (Inference): Scene-constrained noise shaping.

During generation, we paste the latents of the regions visible in the preview video into the predicted latents. Pasting is applied only at high noise levels, leaving lower noise levels free to introduce dynamics; we therefore call the technique "noise shaping", as it shapes the noise only during the initial high-noise stage.
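A sketch of how such noise shaping could sit in a standard denoising loop, assuming a diffusers-style scheduler/UNet API; `shaping_ratio`, the mask semantics, and the variable names are assumptions, not the released implementation:

```python
import torch

@torch.no_grad()
def denoise_with_noise_shaping(unet, scheduler, latents, cond,
                               preview_latents, mask, shaping_ratio=0.4):
    """Paste noised preview latents into the sample for the first
    `shaping_ratio` fraction of denoising steps (the high-noise stage).
    `preview_latents` is the VAE-encoded preview video; `mask` is 1 where
    the rendered 3D scene is visible and broadcasts over the latents."""
    timesteps = scheduler.timesteps
    n_shaping = int(len(timesteps) * shaping_ratio)
    for i, t in enumerate(timesteps):
        if i < n_shaping:
            # Noise the preview latents to the current level and paste
            # them where the 3D scene is visible; unseen regions stay free.
            noise = torch.randn_like(preview_latents)
            noised_preview = scheduler.add_noise(
                preview_latents, noise, t.expand(latents.shape[0]))
            latents = mask * noised_preview + (1 - mask) * latents
        eps = unet(latents, t, encoder_hidden_states=cond).sample
        latents = scheduler.step(eps, t, latents).prev_sample
    return latents
```

After step `n_shaping`, the loop becomes ordinary denoising, which is what allows the generated video to stay dynamic and coherent rather than rigidly copying the static preview.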

Preview Video

w/ Scene-Constrained Noise Shaping

w/o Scene-Constrained Noise Shaping