AniClipart: Clipart Animation with
Text-to-Video Priors

1City University of Hong Kong, 2Monash University

International Journal of Computer Vision (IJCV)

Abstract

Clipart, a pre-made art form, offers a convenient and efficient way of creating visual content. However, traditional workflows for animating static clipart are laborious and time-consuming, involving steps like rigging, keyframing, and inbetweening. Recent advancements in text-to-video generation hold great potential for addressing this challenge. Nevertheless, directly applying text-to-video models often fails to preserve the visual identity of clipart or to generate cartoon-style motion, resulting in subpar animation outcomes.
In this paper, we introduce AniClipart, a computational system that converts static clipart into high-quality animations guided by text-to-video priors. To generate natural, smooth, and coherent motion, we first parameterize the motion trajectories of the keypoints defined over the initial clipart image by cubic Bézier curves. We then align these motion trajectories with a given text prompt by optimizing a video Score Distillation Sampling (SDS) loss and a skeleton fidelity loss. By incorporating differentiable As-Rigid-As-Possible (ARAP) shape deformation and differentiable rendering, AniClipart can be end-to-end optimized while maintaining deformation rigidity. Extensive experimental results show that the proposed AniClipart consistently outperforms the competing methods, in terms of text-video alignment, visual identity preservation, and temporal consistency. Additionally, we showcase the versatility of AniClipart by adapting it to generate layered animations, which allow for topological changes.

How does it work?



Given an initial clipart image with M keypoints, we initialize M corresponding cubic Bézier motion trajectories, parameterized by control points {c^(i)} for i = 0, …, M−1. For a sequence of N frames, keypoints are updated at each frame by sampling along these trajectories. The displaced keypoints drive the ARAP shape deformation algorithm, which warps the object, represented by a triangle mesh, into new poses. This gives rise to a clipart animation, which is (optionally rasterized and) passed to a T2V model to compute the video SDS loss. To ensure motion coherence across all keypoints, a skeleton fidelity loss is also applied, penalizing changes in bone lengths over time.
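The per-frame keypoint update and the skeleton fidelity loss described above can be sketched as follows. This is an illustrative numpy sketch, not the authors' implementation: the function names (`sample_bezier`, `keypoints_over_time`, `skeleton_loss`) and array shapes are our own assumptions, and the real system evaluates these quantities differentiably (e.g. in PyTorch) so gradients can flow back to the Bézier control points.

```python
import numpy as np

def sample_bezier(ctrl, t):
    """Evaluate one cubic Bezier trajectory at parameter t in [0, 1].

    ctrl: (4, 2) array of control points for a single keypoint.
    """
    p0, p1, p2, p3 = ctrl
    u = 1.0 - t
    return (u**3) * p0 + 3 * (u**2) * t * p1 + 3 * u * (t**2) * p2 + (t**3) * p3

def keypoints_over_time(trajectories, num_frames):
    """Displace all M keypoints along their trajectories over N frames.

    trajectories: (M, 4, 2) control points. Returns (N, M, 2) positions,
    which would then drive ARAP deformation of the triangle mesh.
    """
    ts = np.linspace(0.0, 1.0, num_frames)
    return np.stack([[sample_bezier(c, t) for c in trajectories] for t in ts])

def skeleton_loss(keypoints, bones, rest_lengths):
    """Skeleton fidelity: penalize bone-length changes over time.

    keypoints: (N, M, 2); bones: list of (i, j) keypoint index pairs;
    rest_lengths: bone lengths measured on the initial clipart.
    """
    loss = 0.0
    for (i, j), rest in zip(bones, rest_lengths):
        lengths = np.linalg.norm(keypoints[:, i] - keypoints[:, j], axis=-1)
        loss += np.mean((lengths - rest) ** 2)
    return loss / len(bones)
```

Because a cubic Bézier curve interpolates its first and last control points, the first frame can be pinned to the original keypoint layout by fixing c^(i) at frame 0, while the remaining control points are the free variables optimized by the video SDS and skeleton losses.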

Varying the Prompts


woman_dance
We can alter the text prompt to generate different movements from the same clipart.

Multi-Layer Animation


High-Order Bézier Trajectory


Comparisons to T2V Models & Prior Work


man_fencing
crab
We compare our method to five baselines: four Text-to-Video (T2V) diffusion models (ModelScope, VideoCrafter, DynamiCrafter, and I2VGen-XL) and LiveSketch.

Ablation Study