ADiff_v3 + AnimateLCM in one workflow
Tags: character, img2img, vid2vid, animatediff, video, lora, controlnet, ipadapter, upscale
159 nodes

Description

Tutorial: https://youtu.be/XO5eNJ1X2rI

What does this workflow do?

The background animation is created with AnimateDiff version 3 and Juggernaut; the foreground character animation (vid2vid) is created with AnimateLCM and DreamShaper.

Seamless blending of the two animations is done with the TwoSamplersForMask nodes.

This method lets you combine two different models/samplers in a single video. The example uses two different checkpoints, LoRAs, and AnimateDiff models, but the method can also be used for image compositions where you want, for example, a realistic model for the foreground and an artistic drawing model for the background.
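
Conceptually, TwoSamplersForMask composites the results of the two samplers using the mask. A minimal PyTorch sketch of that masked blend (the actual Impact Pack node applies the mask during denoising rather than blending once at the end):

```python
import torch

def masked_latent_blend(bg_latent: torch.Tensor,
                        fg_latent: torch.Tensor,
                        mask: torch.Tensor) -> torch.Tensor:
    """Composite two sampler outputs in latent space.

    bg_latent, fg_latent: [frames, C, H, W] latents from the two samplers.
    mask: [frames, 1, H, W] foreground mask in [0, 1] at latent resolution.
    """
    return fg_latent * mask + bg_latent * (1.0 - mask)
```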

The workflow is tested with SD1.5. SDXL or other SD models could be used, but the ControlNet models, LoRAs, etc. would have to be swapped for the corresponding versions.

How the workflow works

1- Background: resize a background of your choice to the frame size of the original video.
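
A minimal Pillow sketch of this step (the workflow itself does it with the GetImageSize+, ImageScaleBy, and ImageCrop+ nodes; the 512x768 frame size and output file name are assumptions):

```python
from PIL import Image, ImageOps

frame_w, frame_h = 512, 768  # assumed frame size of the source video
bg = Image.open("pexels-jeremy-bishop-2765869.jpeg")
# Scale and center-crop so the background exactly covers the frame size
bg = ImageOps.fit(bg, (frame_w, frame_h), Image.Resampling.LANCZOS)
bg.save("background_resized.png")
```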

2- Background animation: create an animation from the background picture. The example uses AnimateDiff version 3, and the resulting latent is used later by the TwoSamplersForMask node. LooseControlNet and Tile ControlNets are recommended; LoRAs are optional.
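
Long clips are fed to the motion module in overlapping context windows (the ADE_AnimateDiffUniformContextOptions node). Purely as an illustration of that windowing idea, with assumed window length and overlap:

```python
def uniform_context_windows(num_frames: int,
                            context_length: int = 16,
                            overlap: int = 4) -> list[list[int]]:
    """Overlapping frame windows, as in AnimateDiff's sliding context."""
    if num_frames <= context_length:
        return [list(range(num_frames))]
    stride = context_length - overlap
    starts = list(range(0, num_frames - context_length + 1, stride))
    if starts[-1] + context_length < num_frames:  # cover the tail frames
        starts.append(num_frames - context_length)
    return [list(range(s, s + context_length)) for s in starts]

print(uniform_context_windows(32))  # three overlapping 16-frame windows
```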

3- Create masks: masks for the foreground character are created from the starting video.
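
In the workflow, the masks come from the Ultralytics/SAM detector nodes. As a rough standalone stand-in, a per-frame mask could be sketched with the rembg library (an illustrative substitute, not the node graph's method; frame file names are assumptions):

```python
from rembg import remove
from PIL import Image

frame = Image.open("frame_0001.png")  # one extracted video frame
cutout = remove(frame)                # RGBA image with background removed
mask = cutout.split()[-1]             # alpha channel = foreground mask
mask.save("mask_0001.png")
```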

4- Foreground animation: create an animation of the foreground using the frames from the video. The example uses AnimateLCM, together with IPAdapter. Recommended ControlNets are ControlGif, Depth, and OpenPose; the foreground mask is applied in the ControlNets.

5- TwoSamplers: provides the two samplers used for rendering in step 6. Sampler 1 (background) uses AnimateDiff version 3, while sampler 2 (foreground) uses AnimateLCM, so the two samplers need different settings.
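
The two KSamplerProvider configurations might look roughly like this (illustrative values only, not the workflow's exact settings): AnimateLCM wants few steps, a low CFG, and the lcm sampler, while AnimateDiff v3 runs with ordinary SD1.5 settings.

```python
# Illustrative sampler settings; tune for your checkpoints.
background_sampler = dict(steps=25, cfg=7.0, sampler_name="euler",
                          scheduler="normal")       # AnimateDiff v3 + Juggernaut
foreground_sampler = dict(steps=6, cfg=1.5, sampler_name="lcm",
                          scheduler="sgm_uniform")  # AnimateLCM + DreamShaper
```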

6- Rendering: the first option is to upscale in latent space (1.5x): higher resolution, but time-consuming. The second option is to use the TwoSamplersForMask node directly; with the foreground mask, the background and foreground are integrated seamlessly.
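
The 1.5x option boils down to enlarging the latent before an extra sampler pass. A minimal sketch of plain latent interpolation (the workflow actually uses IterativeLatentUpscale with an upscale model, which gives better detail):

```python
import torch
import torch.nn.functional as F

latent = torch.randn(16, 4, 96, 64)  # placeholder: 16 frames of 768x512 latents
upscaled = F.interpolate(latent, scale_factor=1.5, mode="bicubic")
print(upscaled.shape)  # torch.Size([16, 4, 144, 96])
```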

7- Face Detailer and frame interpolation: a Face Detailer pass and frame interpolation are added to correct face distortion and improve the smoothness of the video.
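
FILM VFI synthesizes learned in-between frames. Purely to show where those frames land in the sequence, here is a naive cross-fade stand-in (FILM's learned output is far better than a plain average):

```python
import numpy as np

def naive_midframes(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Insert one averaged frame between each pair (roughly 2x frame rate)."""
    out = []
    for a, b in zip(frames, frames[1:]):
        mid = ((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(a.dtype)
        out.extend([a, mid])
    out.append(frames[-1])
    return out
```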

Additional information and tips

• Check out the assets used in the example

• Check the notes in the workflow for additional instructions

Built-in and custom nodes
CLIPSetLastLayer
ADE_ApplyAnimateDiffModelSimple
ADE_UseEvolvedSampling
ADE_AnimateDiffUniformContextOptions
VHS_VideoCombine
VAEDecode
PrimitiveNode
GetImageSize+
ImageScaleBy
ImageCrop+
CheckpointLoaderSimple
VAELoader
CLIPTextEncode
ControlNetLoaderAdvanced
ACN_AdvancedControlNetApply
AIO_Preprocessor
FreeU_V2
ADE_LoadAnimateDiffModel
KSampler
IPAdapterModelLoader
CLIPVisionLoader
LoadImage
PrepImageForClipVision
DepthAnythingPreprocessor
DWPreprocessor
ToBasicPipe
KSamplerProvider
TwoSamplersForMask
UpscaleModelLoader
TwoSamplersForMaskUpscalerProvider
IterativeLatentUpscale
UltralyticsDetectorProvider
SAMLoader
ImpactSimpleDetectorSEGS_for_AD
FILM VFI
ImpactImageBatchToImageList
SegmDetectorSEGS
SegsToCombinedMask
MaskListToMaskBatch
GrowMaskWithBlur
DetailerForEachPipeForAnimateDiff
VHS_LoadVideo
EmptyLatentImage
PreviewImage
MaskPreview+
Custom files

These are separate files that the creator has uploaded for this workflow.

Friday vibes got us dancing shorts lillyk_1080p.mp4

sASGNlsG_1710422665018.webp

C4-WIp0U_1710422672297.webp

pexels-jeremy-bishop-2765869.jpeg
