AnimateDiff LCM+SD WF to perfectly blend foreground and background (with IPAdapter)
created a year ago
img2img
vid2vid
animatediff
video
lora
controlnet
ipadapter
upscale
159 nodes

Credits
Many...
Outputs
Example video output: optimized_comfy_workflows_user_uploads/c9bea1e1-297f-45c5-a00d-0c388dc206ea/assets/gif_FzHXcFMj_1701669275421_raw.webp
Description

What this workflow does

Creates a vid2vid animation where your hero (foreground) blends perfectly with the background (anything you want). Foreground and background are combined by compositing a scene and then applying conditional masking with separate ControlNet streams for the foreground and the background. A combination of an LCM and a regular SD1.5 KSampler speeds up generation while still producing well-detailed frames.
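
For intuition, the masked combination amounts to a per-pixel blend of the two sampler streams. A minimal sketch (numpy only, illustrative names; this shows the idea behind TwoSamplersForMask, not the node's actual internals):

```python
import numpy as np

def blend_by_mask(fg_frames, bg_frames, mask):
    """Combine two sampler outputs: where mask is 1 the foreground
    stream wins, where it is 0 the background stream wins."""
    # fg_frames, bg_frames: (frames, H, W, C) float arrays in [0, 1]
    # mask: (H, W) float array in [0, 1], broadcast over frames/channels
    m = mask[None, :, :, None]
    return fg_frames * m + bg_frames * (1.0 - m)
```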

The resulting animation can be upscaled further (here I use FaceDetailer and video frame interpolation); additional upscaling/refining is also possible.
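
Frame interpolation then raises the frame count of the finished clip. FILM VFI does this in a motion-aware way; the naive crossfade below is only a sketch of the frame-count arithmetic (n frames in, 2n − 1 out):

```python
import numpy as np

def naive_interpolate(frames):
    """Double the frame count by inserting a 50/50 blend between each
    pair of neighbors. FILM VFI synthesizes true in-between motion;
    this crossfade only illustrates how the frame count grows."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        out.append(0.5 * a + 0.5 * b)  # midpoint frame
    out.append(frames[-1])
    return np.stack(out)
```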

Video tutorials 🎥

👉  LCM + AD https://youtu.be/QdQANF3YLuI

👉  AD only https://youtu.be/gDUeqCErjt4

How to use this workflow

👉 Details can be found here: https://tinyurl.com/34wvyzbs

  • Load all the corresponding models to be used (AnimateDiff, IPAdapter, clipvision, LCM, etc.)

If you are using OpenArt's runnable workflow, you can download the example assets by clicking here 👉  https://civitai.com/api/download/attachments/12274

  • Load your foreground (hero) and background images in the Load Images node in the Image Blending Group.

  • Adjust the images so the foreground image sits in the right position.

  • Load the images used by the ControlNets (foreground: OpenPose; background: Zoe depth, MLSD lines). In this workflow they are loaded directly, but they can also be generated in the workflow via preprocessors. The foreground requires OpenPose/DWPose; for the background, other preprocessors can be used.

  • Make sure the right models are set in the different nodes. In OpenArt's runnable workflow they are all available, but some of them have different names.

  • Write a prompt that describes the animation.

  • Adjust the different parameters of the workflow. The most critical ones are the foreground mask and the segmentation for the scene.

  • Run the workflow. Start with a small number of frames, e.g. 12, and adjust the parameters of the different nodes; once everything looks good, run all the frames (or the whole video). In OpenArt's runnable workflow you may want to limit the run to about 32 frames (more or less, depending on complexity).
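
If you prefer to launch these test runs from a script, a locally running ComfyUI accepts the workflow in API format on its /prompt endpoint (export via "Save (API Format)" in the UI). A minimal sketch, assuming a default install on 127.0.0.1:8188; the node id "12" is hypothetical and must match the frame-cap node in your exported JSON:

```python
import json
import requests

with open("workflow_api.json") as f:
    workflow = json.load(f)

# Hypothetical node id: point this at whichever node caps your frame
# count (e.g. the Load Video node's frame_load_cap) in your own JSON.
workflow["12"]["inputs"]["frame_load_cap"] = 12  # small test run first

resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow})
print(resp.json())  # returns a prompt_id you can look up under /history
```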

Tips about this workflow

  • OpenPose/DWPose creates the masks, so these images are required.

  • SDXL would need some adjustments

  • The mm-Stabilized-mid motion module has given the best results for movement.

  • Masks and automatic segmentation are always tricky, so some trial and error may be needed.
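
For the foreground mask specifically, growing it a few pixels and feathering the edge usually hides seams at the blend boundary. This is roughly what GrowMaskWithBlur does, approximated here with OpenCV (the sizes are starting points to tune, not the node's defaults):

```python
import cv2
import numpy as np

def grow_and_blur(mask, grow_px=8, blur_px=15):
    """Expand a binary mask and feather its edge.
    mask: uint8 array with values 0 or 255."""
    kernel = np.ones((grow_px, grow_px), np.uint8)
    grown = cv2.dilate(mask, kernel, iterations=1)
    # Gaussian kernel size must be odd
    return cv2.GaussianBlur(grown, (blur_px, blur_px), 0)
```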

Built-in and custom nodes
CLIPSetLastLayer
ADE_ApplyAnimateDiffModelSimple
ADE_UseEvolvedSampling
ADE_AnimateDiffUniformContextOptions
VHS_VideoCombine
VAEDecode
PrimitiveNode
GetImageSize+
ImageScaleBy
ImageCrop+
CheckpointLoaderSimple
VAELoader
CLIPTextEncode
ControlNetLoaderAdvanced
ACN_AdvancedControlNetApply
AIO_Preprocessor
FreeU_V2
ADE_LoadAnimateDiffModel
KSampler
IPAdapterModelLoader
CLIPVisionLoader
LoadImage
PrepImageForClipVision
DepthAnythingPreprocessor
DWPreprocessor
ToBasicPipe
KSamplerProvider
TwoSamplersForMask
UpscaleModelLoader
TwoSamplersForMaskUpscalerProvider
IterativeLatentUpscale
UltralyticsDetectorProvider
SAMLoader
ImpactSimpleDetectorSEGS_for_AD
FILM VFI
ImpactImageBatchToImageList
SegmDetectorSEGS
SegsToCombinedMask
MaskListToMaskBatch
GrowMaskWithBlur
DetailerForEachPipeForAnimateDiff
VHS_LoadVideo
EmptyLatentImage
PreviewImage
MaskPreview+