Others

ComfyUI Manager


This extension provides assistance in installing and managing custom nodes for ComfyUI.

Announcement:

Features:

Please refer to the GitHub page for more detailed information.

https://github.com/ltdrdata/ComfyUI-Manager

Install guide:

Tags: tools, extension, comfyui, tool
SHA256: 0FFEF84BB658458BE5EA955768394233855C399CE423A4602EA5FA08C2D7BAA5

ComfyUI Multiple Subject Workflows


This is a collection of custom workflows for ComfyUI

They can generate multiple subjects. Each subject has its own prompt.

They require some custom nodes to function properly, mostly to automate away or simplify some of the tedium that comes with setting these things up.

Please check the About this version section for each workflow for required custom nodes.

Some notable features

There are four methods for multiple subjects included so far:

Latent Couple

Limits the areas affected by each prompt to just a portion of the image

From my testing, this generally does better than Noisy Latent Composition

Noisy Latent Composition (no longer updated)

Generates each prompt on a separate image for a few steps (e.g. 4/20) so that only rough outlines of major elements get created, then combines them and does the remaining steps with Latent Couple.

Character Interaction (Latent) (no longer updated)

First of all, if you want something that actually works, check Character Interaction (OpenPose). This one doesn't; I'm leaving it up for archival purposes.

This is an "attempt" at generating two characters interacting with each other while retaining a high degree of control over their looks, without using ControlNets. As you may expect, it's quite unreliable.

We do this by generating the first few steps (e.g. 6/30) on a single prompt encompassing the whole image, describing what sort of interaction we want to achieve (plus background and perspective; common features of both characters help too).

Then, for the remaining steps in the second KSampler, we add two more prompts, one for each character, limited to the area where we "expect" (guess) they'll appear, so mostly just the left half/right half of the image with some overlap.

I'm not gonna lie, the results and consistency aren't great. If you want to try it, some settings to fiddle with are the step at which the KSamplers switch, the amount of overlap between the character prompts, and the prompt strengths. From my testing, the closest interaction I've been able to get out of this was a kiss; I tried to go for a hug, but with no luck.

The higher the step that you switch KSamplers at, the more consistently you'll get the desired interaction, but you'll lose out on the character prompts (I've been going between 20-35% of total steps). You may be able to offset this a bit by increasing character prompt strengths.
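
To make the numbers concrete, here is a small Python helper (my own illustration, not part of the workflow) that turns a switch fraction into start/end steps for the two KSamplers:

    # Illustrative helper: split total steps between the whole-image
    # interaction pass and the per-character pass.
    def split_steps(total_steps, switch_fraction):
        switch_step = round(total_steps * switch_fraction)
        first_pass = (0, switch_step)              # single interaction prompt
        second_pass = (switch_step, total_steps)   # adds per-character prompts
        return first_pass, second_pass

    # 30 total steps, switching at 20% -> (0, 6) and (6, 30),
    # matching the 6/30 example above.
    print(split_steps(30, 0.20))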

Character Interaction (OpenPose)

Another method of generating character interaction, except this time it actually works, and very consistently at that. To achieve this we simply run latent composition with ControlNet OpenPose mixed in. To make it more convenient to use, the OpenPose image can be pregenerated (you can still input your own if you want), so there's no need to hassle with inputting premade ones yourself. As a result it's about as simple and straightforward to use as normal generation. You can find instructions in the note to the side of the workflow after importing it into ComfyUI.

From a more technical side of things, implementing it is actually a bit more complicated than just applying OpenPose to the conditioning. Because we're dealing with a total of 3 conditionings (background and both subjects), we run into problems. Applying ControlNet to all three, be it before combining them or after, gives us the background with OpenPose applied correctly (the OpenPose image having the same dimensions as the background conditioning), and subjects with the OpenPose image squeezed to fit their dimensions, for a total of 3 non-aligned ControlNet images. For that reason, we can only apply the unchanged OpenPose to the background. Stopping here, however, results in there being no ControlNet guidance for our subjects, and the result has nothing to do with our OpenPose image. Therefore, we next crop the parts of the OpenPose image that correlate with the subject areas and apply those to the subject conditionings. With all 3 conditionings having OpenPose applied, we can finally combine them and proceed with the proper generation.

The following image demonstrates our resulting conditioning:

btw the workflow will generate similar ones for you :)

Background conditioning covers the entire image and contains the entirety of the pose data.

Subject 1 is represented as the green area and contains a crop of the pose that is inside that area.

Subject 2 is represented as the blue area and contains a crop of the pose that is inside that area.

The image itself is generated first, then the pose data is extracted from it, cropped, applied to the conditionings and used in generating the proper image. This saves you from having to keep applicable OpenPose images on hand, though if you do, care is taken to ensure that you can use them. You can also input an unprocessed image and have it preprocessed for ControlNet inside the workflow.
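
If you want to see the cropping idea in isolation, here is a minimal Python sketch using Pillow. The file names and box coordinates are made-up assumptions for illustration; the workflow does the equivalent on its conditioning internally:

    from PIL import Image

    # Hypothetical input: a full-canvas OpenPose image.
    pose = Image.open("openpose_full.png")

    # Made-up subject areas (left, top, right, bottom): roughly the left
    # and right halves of the canvas, with some overlap in the middle.
    subject_boxes = {
        "subject1": (0, 0, pose.width * 11 // 20, pose.height),
        "subject2": (pose.width * 9 // 20, 0, pose.width, pose.height),
    }

    # The background conditioning keeps the full pose; each subject
    # conditioning only gets the part of the pose inside its own area,
    # so all three ControlNet images stay aligned on the final canvas.
    for name, box in subject_boxes.items():
        pose.crop(box).save(f"openpose_{name}.png")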

And here is the final result:

This includes second pass after upscaling, face restoration and additional upscaling at the end, all of which are included in the workflow.

A handy preview of the conditioning areas (see the first image) is also generated. Ideally it would happen before the proper image generation, but the means to control that are not yet implemented in ComfyUI, so sometimes it's the last thing the workflow does. Sadly, I can't do anything about it for now.

Some more use-related details are explained in the workflow itself.

Tags: multiple people, multi-character, comfyui, workflow, tool
SHA256: A17DF27A6E76A030D64D6CB9CC4B073CB94AE16736D8A497A7E3978C0DD1B990

Convert .bin/.pt files to Safetensors


I wanted an easy way to convert .pt (PyTorch/PickleTensors) and .bin files for Textual Inversions and VAEs to the Safetensors format. DiffusionDalmation on GitHub has a Jupyter/Colab notebook (MIT license) that handled .pt files but not .bin files, because of missing data in the .bin files that I had. Hugging Face has a function in the Safetensors repo (Apache license) that handles .bin, and probably .pt, but I liked the training metadata details from the notebook version. Both are credited in the help text.

I started with pieces of both and worked them into a script that will try to convert both types, as individual files or a directory of files. The .safetensors file gets a new file hash, but from my testing they are functionally identical. I have only tested on PyTorch 2 with SD1.5 TIs and VAEs, though. It works on Windows with CUDA, has theoretical support for the macOS Metal backend, and will fall back to using the CPU. Buy me a Mac and I'll test it there. ;P

WARNING: code within files will be executed when the models are loaded - any malicious code will be executed, too. Do not run this on your own machine with untrusted/unscanned files containing pickle imports.

Assuming that you're in a trusted environment converting models you trust, you can activate an existing venv and run it from there, or set up a new venv with torch, safetensors, and typing packages.

Reusing the automatic1111 web UI venv, you would just run (for example):

V:\stable-diffusion-webui\venv\Scripts\activate
python V:\sd-info\safetensors-converter\bin-pt_to_safetensors.py .
deactivate
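
For reference, the core of such a conversion is roughly this (a simplified sketch assuming a flat dict of tensors; the actual script also unwraps nested embedding data and carries over training metadata):

    import torch
    from safetensors.torch import save_file

    # torch.load executes pickled code in the file - hence the warning above.
    state = torch.load("embedding.pt", map_location="cpu")  # hypothetical path

    # Keep only tensor entries; real TI files may nest tensors under keys
    # such as 'string_to_param', which the full script handles.
    tensors = {key: value.contiguous()
               for key, value in state.items()
               if isinstance(value, torch.Tensor)}

    save_file(tensors, "embedding.safetensors")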

Before/after file size and hashes for two example TIs:

$ ls -lG
total 465
-rw-r--r-- 1 user   3931 May 16 06:21 LulaCipher.bin
-rw-r--r-- 1 user   3192 May 30 01:58 LulaCipher.safetensors
-rw-r--r-- 1 user 231339 Apr 30 19:59 ng_deepnegative_v1_75t.pt
-rw-r--r-- 1 user 230672 May 30 01:58 ng_deepnegative_v1_75t.safetensors

$ for i in *; do sha256sum ${i}; done
433c565251ac13398000595032c436eb361634e80e581497d116f224083eb468 *LulaCipher.bin
907fe94bb9001d6cb1c55b237764a6d31fd94265745a41afaa9eac76dd64075b *LulaCipher.safetensors
54e7e4826d53949a3d0dde40aea023b1e456a618c608a7630e3999fd38f93245 *ng_deepnegative_v1_75t.pt
52cd09551728a502fcbed0087474a483175ec7fa7086635828cbbccece35d0bb *ng_deepnegative_v1_75t.safetensors

Tags: converter, tool, safetensors, pickletensors
SHA256: CF1B9EE5BA551F8113A3DBA10C8EF2624307ADFAEA818914D893D64436B2205A

WAS Node Suite - ComfyUI


WAS Node Suite - ComfyUI - WAS#0263

ComfyUI is an advanced node-based UI utilizing Stable Diffusion. It allows you to create customized workflows such as image post-processing or conversions.


Latest Version Download

A node suite for ComfyUI with many new nodes, such as image processing, text processing, and more.

Share workflows to the workflows wiki, preferably as PNGs with embedded workflows, though JSON is OK too. You can use this tool to add a workflow to a PNG file easily.

Important Updates

Current Nodes:

Extra Nodes

Video Nodes

Codecs

You can use any codecs available to your ffmpeg binaries by adding their fourcc ID (in one string) and the appropriate container extension to was_suite_config.json.

Example H264 Codecs (Defaults)

    "ffmpeg_extra_codecs": {
        "avc1": ".mp4",
        "h264": ".mkv"
    }
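
To register an additional codec you can edit the file by hand, or patch it with a few lines of Python. The vp09/.webm pair below is only an illustration; use a fourcc your ffmpeg build actually supports:

    import json

    # Path is relative to the was-node-suite-comfyui folder.
    with open("was_suite_config.json", "r", encoding="utf-8") as f:
        config = json.load(f)

    # Map a fourcc ID (one string) to its container extension.
    config.setdefault("ffmpeg_extra_codecs", {})["vp09"] = ".webm"

    with open("was_suite_config.json", "w", encoding="utf-8") as f:
        json.dump(config, f, indent=4)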

Notes

Text Tokens

Text tokens can be used in the Save Text File and Save Image nodes. You can also add your own custom tokens with the Text Add Tokens node.

A token name can be anything excluding the : character, and it can also be a simple Regular Expression.
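
Conceptually, token expansion is just pattern substitution over the output text. Here is a rough Python sketch (my own illustration, not the suite's actual code; the token table is hypothetical):

    import re
    import time

    # Hypothetical token table; built-in tokens plus any you add via the
    # Text Add Tokens node would live in a mapping like this.
    tokens = {"time": time.strftime("%Y%m%d%H%M%S"), "project": "my_project"}

    def expand(text):
        # Replace every [name] with its value, leaving unknown tokens alone.
        return re.sub(r"\[([^:\]]+)\]",
                      lambda m: tokens.get(m.group(1), m.group(0)),
                      text)

    print(expand("[project]_[time]_image"))  # e.g. my_project_20240101120000_image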

Built-in Tokens

Other Features

Import AUTOMATIC1111 WebUI Styles

When using the latest builds of WAS Node Suite, a was_suite_config.json file will be generated (if it doesn't already exist). In this file you can set up an A1111 styles import.

You can set webui_styles_persistent_update to true to update the WAS Node Suite styles from the WebUI on every start of ComfyUI.

Recommended Installation:

If you're running on Linux, or a non-admin account on Windows, you'll want to ensure that /ComfyUI/custom_nodes, was-node-suite-comfyui, and WAS_Node_Suite.py have write permissions.

Alternate Installation:

If you're running on Linux, or a non-admin account on Windows, you'll want to ensure that /ComfyUI/custom_nodes and WAS_Node_Suite.py have write permissions.

This method will not install the resources required for the Image Crop Face node, and you'll have to download the ./res/ folder yourself.

Installing on Colab

Create a new cell and add the following code, then run the cell. You may need to edit the path to your custom_nodes folder. You can also use the Colab hosted here.

Github Repository: https://github.com/WASasquatch/was-node-suite-comfyui

❤ Hearts and 🖼️ Reviews let me know you want moarr! :3

Tags: pixel art, animation, mask, depth of field, custom node, comfyui, wildcards, depth map, midas, nodes, custom nodes, image filters, canny, noodle soup prompts, nsp, image combine, edges, edge detection, image styles, image blending, tool, video, masking, perlin noise, power noise, power fractal, voronoi noise
SHA256: D9113D89F8118556B44BD452F107DCD7A1FA965B16A3EAF61DE61041F986C939

Midjourney file name extractor


When you download images in bulk from Midjourney, the file name will contain at least part of your prompt. This can be useful for dataset tagging when training models. This script, with a GUI, automates taking that name, removing the username prefix and the UUID suffix, and replacing the underscores in the text. It then puts the result in a .txt file saved in the same directory as the image, using the exact same name except for the file extension.

For example, an image file would have a name like this: your_username_your_prompt_45987923023409.png

This script then creates a text file containing the text "your prompt", named "your_username_your_prompt_45987923023409.txt".
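
The core transformation is simple enough to sketch in Python (an approximation of what the script does; the repo wraps this in a GUI, and the two-part username here is an assumption):

    import re
    from pathlib import Path

    def name_to_txt(image_path):
        stem = Path(image_path).stem
        trimmed = re.sub(r"_\d+$", "", stem)       # drop the UUID-ish suffix
        prompt = " ".join(trimmed.split("_")[2:])  # drop a two-part username prefix
        # Same directory, same name, only the extension changes.
        Path(image_path).with_suffix(".txt").write_text(prompt)

    name_to_txt("your_username_your_prompt_45987923023409.png")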

https://github.com/nej-dot/imgname2txt/tree/main

Tags: midjourney, tools, tagging, tool, dataset
SHA256: 5264678A8F067D96A55C0173E22FA18EFE0E6B600F6B2EBC7A8D62B5EFD895AA

LoraList


https://github.com/nej-dot/loralist

This is a Python-based toolkit to parse long strings of LoRAs into a format that either the XYZ plot or "prompt from file" script can understand (comma- and line-separated). It has a bunch of other features as well, and I'm always willing to add more or make changes if anyone has suggestions.

Currently it also has a find-and-replace and a parse-list function.

If you leave the input box for these two functions empty, they will default to working on the text in the workspace box.

One feature is a bit tricky: if you want to use several LoRAs in a plot, but also want several values for each, i.e. <dkjsdfk:0.4> and <dkjsdfk:0.8>, first run a find-and-replace from ":1" to ":x.x", then set up all the values for your list (start, stop, etc.), fill in a blank space for your separator, and press the parse-and-generate-list button. A sketch of what this produces follows below.
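
Here is a rough sketch (hypothetical code, not from the repo) of what that parse-and-generate step produces:

    # Expand each "<name:x.x>" placeholder into one entry per weight value,
    # in the line-separated form "prompt from file" understands.
    loras = ["<dkjsdfk:x.x>", "<otherlora:x.x>"]
    start, stop, step = 0.4, 0.8, 0.4

    weights = []
    value = start
    while value <= stop + 1e-9:
        weights.append(round(value, 2))
        value += step

    for lora in loras:
        for weight in weights:
            print(lora.replace("x.x", str(weight)))
    # -> <dkjsdfk:0.4>, <dkjsdfk:0.8>, <otherlora:0.4>, <otherlora:0.8>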

Tags: tools, toolkit, tool, lora, tool lora
SHA256: EBEB5D643F81B28BADAC3CD734C2E2E7901142C309500FD721E948CB0A974032

chicyann

Tags: character, girls
SHA256: 262A2092A6FBCD9982D295DAFFF2DD594172B98277BD5E1438D5A444791BEE02

1


She looks like a mix of Lily-Rose Depp and EmRata, and the picture is at a party.

Tags: character, sexy, blonde hair, green eyes, woman, realistic, lily rose deep
SHA256: D5083312E21B57643D8F51E91B57259141A472C55F1A7CE72E6A7E308DD87336

IF prompt Maker extension for A1111


This is an extension I made that makes prompts for SD using local LLMs via the oobabooga API.

You can use it with my characters that are specially made to produce SD prompts. I also made an NSFW character, which I will release soon.

I also have videos on how to install Oobabooga.

Tags: anime, sexy, female, woman, realistic, tool
SHA256: 730593C5FCD295B23F874709A6D57C3149D717BE7CA0F83D7CED98CBCD52CD77