AnimateDiff LCM Video2Video
Tags: vid2vid, controlnet, animatediff, ipadapter, face, lora, upscale, video
60 nodes
Description
A video-to-video workflow built with AnimateDiff + LCM.

Generating one second of video at 30 fps takes about 10 minutes. If your quality requirements are not strict, you can remove the SD upscaling step, which saves roughly 70% of the time, but faces will visibly degrade.

My setup is an RTX 4070 Ti with 12 GB of VRAM, which can generate at most 5 seconds of video per run. If you have a more capable GPU, you can try increasing the clip length generated in one pass.
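As a rough budget, the timings above can be turned into a small helper. This is a sketch: the 10 min/sec figure and the ~70% savings are the author's estimates on a 4070 Ti, not measured constants, and the function names are made up for illustration.

```python
FPS = 30              # workflow output frame rate
MIN_PER_SECOND = 10   # ~10 minutes of render time per second of video (with SD upscale)
UPSCALE_SHARE = 0.7   # dropping the SD upscale step reportedly saves ~70% of the time

def frames_needed(clip_seconds: int, fps: int = FPS) -> int:
    """Number of frames AnimateDiff must generate for a clip of this length."""
    return clip_seconds * fps

def estimated_minutes(clip_seconds: int, sd_upscale: bool = True) -> float:
    """Rough render-time estimate based on the figures quoted above."""
    base = clip_seconds * MIN_PER_SECOND
    return base if sd_upscale else base * (1 - UPSCALE_SHARE)

print(frames_needed(5))            # 150 frames for a 5-second clip
print(estimated_minutes(5))        # ~50 minutes with upscaling
print(estimated_minutes(5, False)) # ~15 minutes without (faces will suffer)
```

So a full 5-second run at the quoted rate ties up the GPU for most of an hour, which is why trimming the upscale pass matters for quick iteration.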
Built-in nodes
ControlNetLoader
PreviewImage
InvertMask
LoadImage
CLIPVisionLoader
UpscaleModelLoader
VAEEncode
Custom nodes
CR Integer To String
ACN_AdvancedControlNetApply
AIO_Preprocessor
VHS_SplitImages
Yoloworld_ModelLoader_Zho
ESAM_ModelLoader_Zho
ADE_LoopedUniformContextOptions
ShowText|pysssss
ScaledSoftControlNetWeights
ADE_AnimateDiffLoaderWithContext
Yoloworld_ESAM_Zho
IPAdapterModelLoader
Efficient Loader
SAMLoader
UltralyticsDetectorProvider
ImageGenResolutionFromImage
FaceDetailer
CR LoRA Stack
KSampler (Efficient)
WD14Tagger|pysssss
StringFunction|pysssss
VHS_PruneOutputs
RIFE VFI
UltimateSDUpscaleNoUpscale
VHS_VideoCombine
UltimateSDUpscale
VHS_LoadVideo
DeepTranslatorTextNode
Models
control_v11p_sd15_openpose.pth
AnimateLCM_sd15_t2v.ckpt
ip-adapter-plus_sd15.safetensors
realcartoon3d_v15.safetensors
vae-ft-mse-840000-ema-pruned.safetensors
AnimateLCM_sd15_t2v_lora.safetensors
sam_vit_b_01ec64.pth
face_yolov8m.pt
control_v11f1p_sd15_depth_fp16.safetensors
add_detail.safetensors
ARWSweetLolita.safetensors
rife47.pth
4xUltrasharp_4xUltrasharpV10.pt
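For reference, these files typically live in the following ComfyUI folders. This is a sketch assuming a default ComfyUI install; the exact locations for the AnimateDiff motion model and the RIFE weights depend on which custom-node packs you installed, so check each pack's README.

```shell
# Standard ComfyUI model folders this workflow draws from
# (assumed default layout; motion-model and RIFE paths vary by node pack).
mkdir -p ComfyUI/models/checkpoints        # realcartoon3d_v15.safetensors
mkdir -p ComfyUI/models/loras              # AnimateLCM_sd15_t2v_lora, add_detail, ARWSweetLolita
mkdir -p ComfyUI/models/controlnet         # openpose and depth ControlNets
mkdir -p ComfyUI/models/vae                # vae-ft-mse-840000-ema-pruned.safetensors
mkdir -p ComfyUI/models/ipadapter          # ip-adapter-plus_sd15.safetensors
mkdir -p ComfyUI/models/clip_vision        # CLIP vision encoder used by IPAdapter
mkdir -p ComfyUI/models/upscale_models     # 4xUltrasharp
mkdir -p ComfyUI/models/sams               # sam_vit_b_01ec64.pth
mkdir -p ComfyUI/models/ultralytics/bbox   # face_yolov8m.pt
mkdir -p ComfyUI/models/animatediff_models # AnimateLCM_sd15_t2v.ckpt (if your AnimateDiff pack reads this path)
```

If a loader node cannot find a file, open the node's dropdown in ComfyUI: it lists the folder it actually scans.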