AnimateDiff fine-tunes a general pretrained text-to-image Stable Diffusion model with three add-on modules, turning image generation into animation generation at low cost.
Paper: AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
The three modules:
Module 1, the domain adapter: a LoRA fine-tune of the U-Net's self-/cross-attention layers.
Excerpt from the paper:
We implement the domain adapter layers with LoRA (Hu et al., 2021) and insert them into the self-/cross-attention layers in the base T2I, as shown in Fig. 3. We then optimize only the parameters of the domain adapter on static frames randomly sampled from video datasets with the same objective in Eq. (2).
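A minimal PyTorch sketch of what inserting LoRA into an attention projection looks like (illustrative only, not the paper's code; the `LoRALinear` name, rank, and scaling are assumptions):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # only the adapter is optimized
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)       # adapter contributes nothing at init
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

# Wrap a (hypothetical) attention projection of an SD block.
proj = LoRALinear(nn.Linear(320, 320), rank=4)
x = torch.randn(2, 77, 320)
print(proj(x).shape)  # torch.Size([2, 77, 320])
```

Because the up-projection starts at zero, the wrapped layer initially behaves exactly like the frozen base layer; only the small `down`/`up` matrices are trained.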
Module 2, the motion module.
Structure: sinusoidal position embedding + temporal self-attention blocks, inserted into every block of the U-Net.
Dimension handling:
An image tensor has shape [batch_size, channel, height, width], while video adds a temporal dimension, the number of frames: [batch_size, frames, channel, height, width].
SD: since SD itself processes images and has no temporal (frames) dimension, the frames axis is merged into batch_size, so SD processes the frames as ordinary images.
Motion module: this new part only needs to learn temporal features, so it merges the spatial axes h, w into batch_size and takes features of shape [batch_size × h × w, frames, channel] as input; on output, h and w are restored from the batch axis.
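The two reshapes described above can be sketched in PyTorch (shapes are illustrative):

```python
import torch

b, f, c, h, w = 2, 16, 320, 32, 32
video = torch.randn(b, f, c, h, w)

# Spatial (SD) layers: fold frames into the batch axis, treat each frame as an image.
spatial_in = video.reshape(b * f, c, h, w)           # [b*f, c, h, w]

# Motion module: fold the spatial axes into the batch axis, attend over frames.
temporal_in = (video.permute(0, 3, 4, 1, 2)          # [b, h, w, f, c]
                    .reshape(b * h * w, f, c))       # [b*h*w, f, c]

# After the temporal self-attention, restore the original layout.
restored = (temporal_in.reshape(b, h, w, f, c)
                       .permute(0, 3, 4, 1, 2))      # [b, f, c, h, w]
print(restored.shape)  # torch.Size([2, 16, 320, 32, 32])
```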
Initialization & residual:
To avoid harming the pretrained model, the output projection layers at the end of the temporal Transformer are zero-initialized, as in ControlNet, and the motion module is wrapped in a residual connection, so it starts out as an identity mapping.
Excerpt from the paper:
the temporal Transformer consists of several self-attention blocks along the temporal axis, with sinusoidal position encoding to encode the location of each frame in the animation. As mentioned above, the input of the motion module is the reshaped feature map whose spatial dimensions are merged into the batch axis. Note that sinusoidal position encoding added before the self-attention is essential; otherwise, the module is not aware of the frame order in the animation. To avoid any harmful effects that the additional module might introduce, we zero initialize (Zhang et al., 2023) the output projection layers of the temporal Transformer and add a residual connection so that the motion module is an identity mapping at the beginning of training.
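A minimal sketch of this design (a hypothetical `MotionModule`; the head count and frame limit are assumptions, and the real module stacks several such blocks), showing that zero-init plus the residual makes the module an identity map at the start of training:

```python
import math
import torch
import torch.nn as nn

class MotionModule(nn.Module):
    """Temporal self-attention over the frame axis; input shape [b*h*w, frames, channels].

    Illustrative sketch: sinusoidal position encoding + self-attention +
    zero-initialized output projection + residual connection.
    """
    def __init__(self, channels: int, heads: int = 8, max_frames: int = 64):
        super().__init__()
        # Standard sinusoidal position encoding over the frame index.
        pe = torch.zeros(max_frames, channels)
        pos = torch.arange(max_frames).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, channels, 2).float()
                        * (-math.log(10000.0) / channels))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.proj_out = nn.Linear(channels, channels)
        nn.init.zeros_(self.proj_out.weight)   # ControlNet-style zero init
        nn.init.zeros_(self.proj_out.bias)

    def forward(self, x):                      # x: [b*h*w, frames, channels]
        hid = x + self.pe[: x.shape[1]]        # inject frame order
        hid, _ = self.attn(hid, hid, hid)
        return x + self.proj_out(hid)          # residual: identity at init

m = MotionModule(320)
x = torch.randn(4, 16, 320)
print(torch.allclose(m(x), x))  # True: zero-init + residual = identity mapping
```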
Module 3, MotionLoRA: add LoRA low-rank trainable matrices to the motion module's self-attention layers, then learn one specific motion pattern (e.g. zoom-in, zoom-out) from reference videos.
This step needs only 20-50 reference videos and about 2,000 training iterations (roughly 1-2 hours), producing about 30 MB of low-rank weights.
Excerpt from the paper:
we add LoRA layers to the self-attention layers of the motion module in the inflated model described in Sec. 4.2, then train these LoRA layers on the reference videos of new motion patterns. [...] to get videos with zooming effects, we augment the videos by gradually reducing (zoom-in) or enlarging (zoom-out) the cropping area of video frames along the temporal axis. We demonstrate that our MotionLoRA can achieve promising results even with as few as 20 ∼ 50 reference videos, 2,000 training iterations (around 1 ∼ 2 hours) as well as about 30M storage space, enabling efficient model tuning and sharing among users.
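The zoom augmentation described in the excerpt can be sketched as follows (a hypothetical `zoom_augment` helper; the linear schedule and center crop are assumptions, the paper only states that the crop area shrinks or grows along the temporal axis):

```python
import torch
import torch.nn.functional as F

def zoom_augment(video: torch.Tensor, zoom_in: bool = True,
                 min_scale: float = 0.5) -> torch.Tensor:
    """Synthesize zoom motion from a clip [frames, c, h, w]: shrink (zoom-in)
    or grow (zoom-out) a center crop over time, then resize back to full size."""
    f, c, h, w = video.shape
    out = []
    for i in range(f):
        t = i / max(f - 1, 1)
        # crop scale goes 1.0 -> min_scale (zoom-in) or min_scale -> 1.0 (zoom-out)
        scale = 1.0 - (1.0 - min_scale) * t if zoom_in else min_scale + (1.0 - min_scale) * t
        ch, cw = max(int(h * scale), 1), max(int(w * scale), 1)
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = video[i:i + 1, :, top:top + ch, left:left + cw]
        out.append(F.interpolate(crop, size=(h, w), mode="bilinear",
                                 align_corners=False))
    return torch.cat(out, dim=0)

clip = torch.rand(16, 3, 64, 64)
print(zoom_augment(clip).shape)  # torch.Size([16, 3, 64, 64])
```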
Training: the loss at every stage is an MSE computed on video samples (the standard diffusion noise-prediction objective).
The core is the second part, the motion module, trained on top of SD 1.5 with the WebVid dataset; this cost is still very high.
On consumer GPUs, only module 3, the LoRA fine-tune of the motion module, is practical.
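The per-stage objective, an MSE between predicted and actual noise on (video) samples, can be sketched as (a hypothetical helper; shapes and the noise schedule are illustrative):

```python
import torch
import torch.nn.functional as F

def diffusion_loss(model, z0, alphas_cumprod):
    """Epsilon-prediction MSE: noise a clean latent z0 to a random timestep t,
    then penalize the model for mispredicting the added noise."""
    b = z0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=z0.device)
    noise = torch.randn_like(z0)
    a = alphas_cumprod[t].view(b, *([1] * (z0.dim() - 1)))
    zt = a.sqrt() * z0 + (1 - a).sqrt() * noise   # forward diffusion q(z_t | z_0)
    return F.mse_loss(model(zt, t), noise)

# Toy check with a stand-in model that just echoes its input.
toy = lambda zt, t: zt
z0 = torch.randn(2, 16, 4, 8, 8)                  # [b, frames, c, h, w] latents
acp = torch.linspace(0.999, 0.01, 1000)           # illustrative schedule
loss = diffusion_loss(toy, z0, acp)
print(loss.item() >= 0)  # True
```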
Ablation: the paper compares two feasible layer types for the motion module, a temporal Transformer and a 1D temporal convolution.
Experiments show the Transformer builds temporal relations, i.e. captures global temporal dependencies, and is better suited to video generation, whereas the 1D temporal convolution produces nearly identical frames, i.e. no real motion.
This part delivers value to individual users: with a limited number of videos (around 50) and low training cost, it enables generation of a specific motion.
Controllability: it can be combined with ControlNet, using conditions such as depth maps to precisely control the output.
Independence: no complex inversion procedure (e.g. DDIM inversion) is needed; generation starts directly from noise, which simplifies the pipeline.
Quality and detail: the results are strong in both dynamic detail and visual quality, faithfully reproducing motion features such as flowing hair and changing facial expressions.
Tune-a-Video
Text2Video-Zero