Anime vid2vid - extreme
created 7 months ago
vid2vid
lora
controlnet
face
upscale
animatediff
video
180 nodes

Credits
Kijai
Description

This workflow is even messier than the last one. I don't really have the energy to clean it up and add a tutorial, sorry. If you try this workflow, it will likely ask for a LOT of missing nodes, and installing them all may well break your ComfyUI.

I HIGHLY recommend using Stability Matrix to manage multiple installations of ComfyUI for troubleshooting. It also manages the packages for each installation and shares model folders across all of them. 10/10

This is a proof of concept for turning live action into anime. The workflow has been cleared of prompts and LoRAs; I use different ones for every video. Look for LoRAs with a strong anime style and/or heavy color saturation, like colorize. I also run an add-detail LoRA at negative strength to smooth out details during diffusion.

The first part of the workflow (left) turns the video into anime style; this is the most important stage. The face will be fixed later. The first video output you see filled in is the first benchmark: kill the generation here and make sure the body and overall style are to your liking. The next stage upscales with an anime upscaler to get better color and lines; don't worry about this one.

Next is face correction (bottom). It regenerates the face, then fixes the mouth and eyes, repeats, repeats, and ends on a final regeneration. Experiment with the eye and mouth retargeting to get better results here. If LivePortrait throws errors, you're either missing a face in the first frame or need to change the cropper model.

The third step is post-processing on a single preview frame (right, bottom row). Disconnect the top row, then find the select image node that feeds into the bottom row and select the frame you want to use for tuning the overall video adjustments. Adjust the nodes along this row until you like the post-processing. Right now it's set up based on a YouTube video about recreating an Evangelion screenshot in Photoshop.
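The idea behind this step is the classic tune-on-one-frame, apply-to-all pattern: dial in the grade on a single frame, then reuse the exact same parameters for the whole batch. As a rough illustration of what one of these nodes (LayerColor: Levels) is doing under the hood, here is a minimal NumPy sketch of a Photoshop-style levels adjustment; the function name and parameters are my own illustrative choices, not the node's actual internals.

```python
import numpy as np

def levels(frame, black=0.0, white=1.0, gamma=1.0):
    """Photoshop-style levels: remap [black, white] to [0, 1], then apply gamma."""
    x = np.clip((frame.astype(np.float32) - black) / (white - black), 0.0, 1.0)
    return x ** (1.0 / gamma)

# Tune on one preview frame, then reuse the same parameters for every frame.
preview = np.full((4, 4, 3), 0.5, dtype=np.float32)  # stand-in for the selected frame
graded = levels(preview, black=0.1, white=0.9, gamma=1.2)
```

Once the numbers look right on the preview frame, the same call runs unchanged over the full frame batch, which is exactly what mirroring the settings onto the top row accomplishes in the graph.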

The fourth step is the top row. Set all of its nodes to mirror the settings on the bottom row, then let it run. The HSV node was removed because it was taking an absurd amount of time to run.

Built-in nodes
Custom nodes
CLIPSetLastLayer
ADE_UseEvolvedSampling
FreeU_V2
LoraLoaderModelOnly
ACN_AdvancedControlNetApply
LineArtPreprocessor
ControlNetLoaderAdvanced
MediaPipeFaceMeshToSEGS
MaskToImage
SegsToCombinedMask
ImageListToImageBatch
FL_UpscaleModel
ToBasicPipe
ImpactControlNetApplySEGS
MiDaS_DepthMap_Preprocessor_Provider_for_SEGS //Inspire
DetailerForEachPipeForAnimateDiff
Skimmed CFG
PreviewImage
LivePortraitProcess
ImageSelector
MaskPreview+
CreateShapeMask
ImageScale
VHS_VideoCombine
ImpactImageBatchToImageList
MediaPipe-FaceMeshPreprocessor
ImageCASharpening+
VAELoader
VHS_SelectImages
ImageScaleDownBy
UpscaleModelLoader
ImpactSimpleDetectorSEGS_for_AD
LineArt_Preprocessor_Provider_for_SEGS //Inspire
MediaPipe_FaceMesh_Preprocessor_Provider_for_SEGS //Inspire
LivePortraitComposite
VHS_VideoInfoLoaded
GrowMaskWithBlur
DownloadAndLoadLivePortraitModels
UltralyticsDetectorProvider
ADE_StandardUniformContextOptions
ADE_AnimateDiffSamplingSettings
ADE_ApplyAnimateDiffModelSimple
LivePortraitCropper
LayerUtility: ImageBlend V2
LayerFilter: MotionBlur
LayerFilter: GaussianBlurV2
LayerStyle: Gradient Map
LayerColor: ColorofShadowHighlightV2
LayerColor: HSV
LayerColor: Levels
ImageRepeat
Image Blank
ImageScaleBy
LivePortraitRetargeting
ImageCrop+
VHS_LoadVideo
VAEEncode
ADE_LoadAnimateDiffModel
ProPostFilmGrain
KSampler (Efficient)
CLIPTextEncode
GetImageSizeAndCount
LivePortraitLoadFaceAlignmentCropper
CheckpointLoaderSimpleWithNoiseSelect