https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace
You must read that page: you need more than just this model to make the face preprocessor work, as there are other required files. I am uploading it here only as an alternative mirror to the Hugging Face repository.
Note: I cannot provide technical support for this file. Also note that this is an SD 2.1 ControlNet model.
My original pruned ControlNet models are here, here (difference versions), here (Tencent Adapters), and here (Furusu's SD 2.1 models).
Please consider joining my Patreon! I am happy to share my SD knowledge through advanced tutorials, plus news, settings explanations, and adult art, from a female content creator (me!): patreon.com/theally
Prototype ControlNet models based on MediaPipe pose and hand estimation. A higher "e" number means better estimation but also a greater impact on the image. You will need the external preprocessor, available here!
Usage guidelines:
1. Download the preprocessor and txt file from Gumroad.
2. Install the requirements: run the pip install -r requirements.txt command in the folder where you downloaded the files.
3. Prepare a folder with the images you want to preprocess.
4. Run the command python preprocess.py -mh -mp -s C:\path\to\your\folder
- -mh enables hand detection, -mp pose detection (you can try pose alone, which works great!); a rough sketch of what this step does under the hood appears after this list.
5. A detection folder will appear inside the selected folder (C:\path\to\your\folder\detection).
6. Copy the downloaded models (.ckpt files) to ...\stable-diffusion-webui\extensions\sd-webui-controlnet\models
7. In the Automatic1111 GUI, enable ControlNet, set the preprocessor to None, select one of the downloaded models, and load an image from the detection folder from step 5.
8. Generate!
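For the curious, here is a minimal sketch of what a MediaPipe pose-and-hands pass looks like. This is illustrative only and is not the actual preprocess.py sold on Gumroad; the file names here are hypothetical examples, and the real script's output format may differ.

```python
# Illustrative only: a minimal MediaPipe pose + hand detection pass.
# Draws the detected landmarks on a black canvas, which is roughly the
# kind of control image the downloaded models expect.
import os
import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

image = cv2.imread("input.png")               # hypothetical input file
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB
canvas = np.zeros_like(image)                 # black background for the control image

# Pose landmarks (the -mp flag)
with mp_pose.Pose(static_image_mode=True) as pose:
    result = pose.process(rgb)
    if result.pose_landmarks:
        mp_draw.draw_landmarks(canvas, result.pose_landmarks,
                               mp_pose.POSE_CONNECTIONS)

# Hand landmarks (the -mh flag)
with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
    result = hands.process(rgb)
    for hand in result.multi_hand_landmarks or []:
        mp_draw.draw_landmarks(canvas, hand, mp_hands.HAND_CONNECTIONS)

os.makedirs("detection", exist_ok=True)       # mirrors the detection folder
cv2.imwrite(os.path.join("detection", "input.png"), canvas)
```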
These are the models required for the ControlNet extension, converted to Safetensor and "pruned" to extract the ControlNet neural network. I have tested them, and they work.
These models are embedded with the neural network data required to make ControlNet function; they will not produce good images unless they are used with ControlNet.
These models were extracted using the extract_controlnet_diff.py script, and produce a slightly different result from the models extracted using the extract_controlnet.py script.
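To give a rough intuition for what a "difference" extraction means, here is a hypothetical sketch, assuming the difference checkpoint stores each ControlNet weight as a delta against the corresponding base-model weight (the idea Kohya's diagram further down tries to illustrate). The function name and key handling are my own illustration, not the actual script.

```python
# Hypothetical illustration of the "difference" concept, not the actual
# extract_controlnet_diff.py. A difference checkpoint stores
# (controlnet_weight - base_weight), so the full ControlNet can be
# rebuilt against whichever base model you load.
import torch

def apply_difference(base_state: dict[str, torch.Tensor],
                     diff_state: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
    """Rebuild full ControlNet weights from a difference checkpoint."""
    merged = {}
    for key, delta in diff_state.items():
        if key in base_state:
            merged[key] = base_state[key] + delta  # add the stored delta back
        else:
            merged[key] = delta  # ControlNet-only layers are stored whole
    return merged
```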
The original version of these models in .pth format can be found here. BUT YOU DO NOT NEED THESE .pth FILES! The files I have uploaded here are direct replacements for these .pth files!
control_sd15_canny
control_sd15_depth
control_sd15_hed
control_sd15_scribble
control_sd15_normal
control_sd15_openpose
control_sd15_seg
control_sd15_mlsd
Download these models and place them in the \stable-diffusion-webui\extensions\sd-webui-controlnet\models directory.
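If you want to sanity-check a download before using it, the safetensors package can open the file directly. A minimal sketch, assuming you have safetensors and torch installed; the file name is just an example:

```python
# Optional sanity check: confirm a downloaded model loads cleanly.
from safetensors.torch import load_file

state = load_file("control_sd15_canny.safetensors")  # example file name
print(f"{len(state)} tensors")
for name in list(state)[:5]:                         # peek at a few entries
    print(name, tuple(state[name].shape))
```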
Note: these models were extracted from the original .pth using the extract_controlnet_diff.py script contained within the extension's GitHub repo. Kohya-ss has them uploaded to HF here.
Please consider joining my Patreon! Advanced SD tutorials, settings explanations, and adult art, from a female content creator (me!): patreon.com/theally. I also have a write-up of ControlNet and will be updating it with the latest news and developments!
Original ControlNet Difference Models HuggingFace Repository
ControlNets allow you to guide the structure of a generated image, either by preprocessing an existing image or by supplying an image that already matches the expected input format for a given ControlNet. More information about ControlNet and the different types can be found at https://github.com/lllyasviel/ControlNet.
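As an example of what "preprocessing" means here, the canny model expects a plain edge map, which you can also produce yourself with OpenCV and feed in with the preprocessor set to None. The file names and thresholds below are just common example values:

```python
# Producing a canny control image by hand with OpenCV. Feed the result
# to the canny ControlNet model with the preprocessor set to None.
import cv2

image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # example file name
edges = cv2.Canny(image, 100, 200)                     # common default thresholds
cv2.imwrite("input_canny.png", edges)
```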
All examples were generated using preprocessors with the AbyssOrangeMix2 - Hardcore model, using the same prompt as the original input image (but with different seeds). All settings are default except where mentioned for a specific ControlNet type; the canvas size was set to 512 x 768 to match the input image.
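If you prefer to script generations rather than use the GUI, the same setup can be driven through the Automatic1111 API. This is a hedged sketch: the field names follow the sd-webui-controlnet API as I understand it, so check your installed version's /docs page before relying on them.

```python
# Sketch of a txt2img call with a ControlNet unit via the A1111 API.
# Field names are per the sd-webui-controlnet API docs; verify against
# your installed version. Requires the webui to be running with --api.
import base64
import requests

with open("detection/input.png", "rb") as f:           # pre-made control image
    control_image = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "same prompt as the original input image",
    "width": 512,
    "height": 768,                    # matches the 512 x 768 canvas above
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": control_image,
                "module": "none",     # preprocessor set to None
                "model": "control_sd15_openpose",  # example model choice
            }]
        }
    },
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()
```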
You only need to download the models you want to use; you do not need to download all of them.
These are the models required for the ControlNet extension, converted to Safetensor and "pruned" to extract the ControlNet neural network. I have tested them with AOM2, and they work. Download these models and place them in the \stable-diffusion-webui\extensions\sd-webui-controlnet\models directory.
This diagram was shared by Kohya and attempts to visually explain the difference between the original ControlNet models and the difference ones.
Note: a second set of these models, extracted from the original .pth files using the extract_controlnet.py script (rather than the _diff variant) contained within the extension's GitHub repo, is also available; the file names and installation directory are the same as listed above.