AnimateDiff LCM Video2Video
vid2vid
controlnet
animatediff
ipadapter
face
lora
upscale
video
60 nodes

Description

A video-to-video workflow built with AnimateDiff + LCM.

Generating one second of video at 30 fps takes about 10 minutes. If your quality requirements are not as high, you can remove the SD upscale step, which saves a large amount of time (roughly 70%), but faces will degrade noticeably.

My setup is a 4070 Ti with 12 GB of VRAM, which can generate at most 5 seconds of video per run. If you have a more capable GPU, you can try increasing the length of video generated in one pass.
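As a rough planning aid, the numbers above can be turned into a small estimator. This is a sketch based only on the figures reported in this description (10 min per second of 30 fps output with upscaling, ~70% time saved without it); the function name and constants are illustrative, not part of the workflow itself.

```python
# Rough render-time estimate for this AnimateDiff + LCM vid2vid workflow.
# Assumptions (from the description, not measured here): ~10 minutes per
# second of 30 fps output with SD upscaling on a 4070 Ti 12 GB, and
# roughly 70% time saved when the SD upscale step is removed.

FPS = 30
MINUTES_PER_SECOND_WITH_UPSCALE = 10  # reported figure, hardware-dependent
UPSCALE_TIME_SAVING = 0.70            # approximate, per the description

def estimate(clip_seconds: float, use_sd_upscale: bool = True) -> dict:
    """Return frame count and estimated render time in minutes."""
    frames = int(clip_seconds * FPS)
    minutes = clip_seconds * MINUTES_PER_SECOND_WITH_UPSCALE
    if not use_sd_upscale:
        minutes *= (1 - UPSCALE_TIME_SAVING)
    return {"frames": frames, "minutes": round(minutes, 1)}

print(estimate(5))                        # {'frames': 150, 'minutes': 50}
print(estimate(5, use_sd_upscale=False))  # {'frames': 150, 'minutes': 15.0}
```

A 5-second clip is 150 frames, which is also roughly the batch size limit on 12 GB of VRAM here; actual times will vary with resolution, ControlNet count, and step count.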

Built-in nodes
ControlNetLoader
PreviewImage
InvertMask
LoadImage
CLIPVisionLoader
UpscaleModelLoader
VAEEncode
Custom nodes
CR Integer To String
ACN_AdvancedControlNetApply
AIO_Preprocessor
VHS_SplitImages
Yoloworld_ModelLoader_Zho
ESAM_ModelLoader_Zho
ADE_LoopedUniformContextOptions
ShowText|pysssss
ScaledSoftControlNetWeights
ADE_AnimateDiffLoaderWithContext
Yoloworld_ESAM_Zho
IPAdapterModelLoader
Efficient Loader
SAMLoader
UltralyticsDetectorProvider
ImageGenResolutionFromImage
FaceDetailer
CR LoRA Stack
KSampler (Efficient)
WD14Tagger|pysssss
StringFunction|pysssss
VHS_PruneOutputs
RIFE VFI
UltimateSDUpscaleNoUpscale
VHS_VideoCombine
UltimateSDUpscale
VHS_LoadVideo
DeepTranslatorTextNode