animatediff pose control and facedetailer
created a year ago
Tags: inpainting, animatediff, controlnet, video, lora
38 nodes
Credits
prompting pixels, purni, goshnii AI, andiamo
Description
A convoluted workflow that takes an input video (512×512, 32 frames),
creates Canny, OpenPose, and Depth ControlNets,
pushes them through the AnimateDiff pipeline,
and finally runs a face detailer to unmuck faces.
Removed batch prompt travel.
Working on replacing the face detailer with a consistent face-mask detect and inpaint, driven by an extreme close-up face shot generated first from a single frame.
In the Load Video Path node, I suggest setting select_every_nth to a divisor that returns 32 frames from the source video.
You can check how many frames are actually being loaded in the Pose Control preview images.
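Since the loader simply keeps every nth frame, a workable stride is just the source frame count divided by 32. A minimal sketch of that arithmetic (the function name is illustrative, not part of the workflow or any ComfyUI node):

```python
def stride_for_target(total_frames: int, target: int = 32) -> int:
    """Largest select_every_nth stride that still yields at least
    `target` frames; pair it with a frame_load_cap of `target` in the
    loader so it stops at exactly 32 frames."""
    return max(1, total_frames // target)

# e.g. a 4-second clip at 30 fps has 120 frames:
stride_for_target(120)  # -> 3 (120 / 3 = 40 frames, capped to 32)
```

If the stride does not divide the frame count evenly, the loader will pull a few extra frames, which is why combining it with a frame cap of 32 is the safer setup.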
Built-in nodes
Custom nodes
ADE_ApplyAnimateDiffModelSimple
PreviewImage
ControlNetLoader
ADE_LoadAnimateDiffModel
ACN_AdvancedControlNetApply
UltralyticsDetectorProvider
SAMLoader
ADE_StandardUniformContextOptions
CannyEdgePreprocessor
DepthAnythingPreprocessor
DWPreprocessor
VHS_LoadVideoPath
VHS_VideoCombine
CheckpointLoaderSimple
ADE_UseEvolvedSampling
SEGSPaste
ImpactSimpleDetectorSEGS_for_AD
VAEDecode
KSampler
PrimitiveNode
EmptyLatentImage
ADE_AnimateDiffLoRALoader
ToBasicPipe
VAELoader
SEGSDetailerForAnimateDiff
CLIPTextEncode
Custom files
These are separate files that the creator has uploaded for this workflow.
jumpingjacks.mp4
Models