This extension provides assistance in installing and managing custom nodes for ComfyUI.
Announcement:
A bug was discovered in the patch of ComfyUI-Manager v0.6.x that prevented updates on Windows. If you are using that version on Windows, please go to the custom_nodes/ComfyUI-Manager directory and manually perform a git pull to update it.
Features:
missing nodes: automatic detection and suggested installation
custom nodes: install/uninstall/update/enable/disable
Node suggestion of A1111 alternatives.
ComfyUI: update
models: download
Please refer to the GitHub page for more detailed information.
https://github.com/ltdrdata/ComfyUI-Manager
Install guide:
Method 1:
Download
Uncompress into ComfyUI/custom_nodes
Restart ComfyUI
Method 2:
Go to ComfyUI/custom_nodes in cmd
Run git clone https://github.com/ltdrdata/ComfyUI-Manager
Restart ComfyUI
They can generate multiple subjects. Each subject has its own prompt.
They require some custom nodes to function properly, mostly to automate out or simplify some of the tediousness that comes with setting up these things.
Please check the About this version section for each workflow for required custom nodes.
Some notable features
Includes Face Restoration with custom nodes from Impact Pack (due to the structure of this workflow it can be easily removed if undesired)
Easy to control subject areas where possible, thanks to ComfyUI - Visual Area Conditioning / Latent composition
Tedium is automated where possible, thanks primarily to the nodes from WAS's node suite
There are four methods for multiple subjects included so far:
Limits the areas affected by each prompt to just a portion of the image
From my testing, this generally does better than Noisy Latent Composition
Generates each prompt on a separate image for a few steps (e.g. 4/20) so that only rough outlines of major elements get created, then combines them together and does the remaining steps with Latent Couple.
First of all, if you want something that actually works, check Character Interaction (OpenPose). This one doesn't; I'm leaving it for archival purposes.
This is an """attempt""" at generating 2 characters interacting with each other, while retaining a high degree of control over their looks, without using ControlNets. As you may expect, it's quite unreliable.
We do this by generating the first few steps (e.g. 6/30) on a single prompt encompassing the whole image that describes what sort of interaction we want to achieve (+background and perspective; common features of both characters help too).
Then, for the remaining steps in the second KSampler, we add two more prompts, one for each character, limited to the area where we "expect" (guess) they'll appear, so mostly just the left half/right half of the image with some overlap.
I'm not gonna lie, the results and consistency aren't great. If you want to try it, some settings to fiddle around with would be the step at which the KSampler changes, the amount of overlap between character prompts, and the prompt strengths. From my testing, the closest interaction I've been able to get out of this was a kiss; I've tried to go for a hug but with no luck.
The higher the step that you switch KSamplers at, the more consistently you'll get the desired interaction, but you'll lose out on the character prompts (I've been going between 20-35% of total steps). You may be able to offset this a bit by increasing character prompt strengths.
Another method of generating character interaction, except this time it actually works, and very consistently at that. To achieve this we simply run latent composition with ControlNet OpenPose mixed in. To make it more convenient to use, the OpenPose image can be pregenerated (you can still input your own if you want), so there's no need to hassle with inputting premade ones yourself. As a result it's about as simple and straightforward to use as normal generation. You can find instructions in the note to the side of the workflow after importing it into ComfyUI.
From a more technical side of things, implementing it is actually a bit more complicated than just applying OpenPose to the conditioning. Because we're dealing with a total of 3 conditionings (background and both subjects), we run into problems. Applying ControlNet to all three, be it before combining them or after, gives us the background with OpenPose applied correctly (the OpenPose image having the same dimensions as the background conditioning), and subjects with the OpenPose image squeezed to fit their dimensions, for a total of 3 non-aligned ControlNet images. For that reason, we can only apply unchanged OpenPose to the background. Stopping here, however, results in there being no ControlNet guidance for our subjects, and the result has nothing to do with our OpenPose image. Therefore, we crop the parts of the OpenPose image that correlate with the subject areas and apply those to the subject conditionings. With all 3 conditionings having OpenPose applied, we can finally combine them and proceed with the proper generation.
The following image demonstrates our resulting conditioning:
btw the workflow will generate similar ones for you :)
Background conditioning covers the entire image and contains the entirety of the pose data.
Subject 1 is represented as the green area and contains a crop of the pose that is inside that area.
Subject 2 is represented as the blue area and contains a crop of the pose that is inside that area.
The image itself is generated first, then the pose data is extracted from it, cropped, applied to conditioning and used in generating the proper image. This saves you from having to have applicable openpose images on hand, though if you do, care is taken to ensure that you can use them. You can also input an unprocessed image, and have it preprocessed for ControlNet inside the workflow.
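To make the cropping idea concrete, here is a minimal standalone sketch (plain PIL, not the actual workflow nodes) of cutting the subject regions out of the full-size OpenPose image so each subject's conditioning only receives the pose data inside its area. The region values and file names are purely illustrative; in the workflow the areas come from the area conditioning setup.

# Illustrative only: crop a full-size OpenPose image down to each subject's area.
from PIL import Image

def crop_pose_for_subject(pose_image, area):
    # area = (x, y, width, height) in pixels, matching the subject's conditioning area
    x, y, w, h = area
    return pose_image.crop((x, y, x + w, y + h))

pose = Image.open("openpose_full.png")  # full-image pose, aligned with the background conditioning
subject1_pose = crop_pose_for_subject(pose, (0, 0, 320, 512))    # e.g. left area (green)
subject2_pose = crop_pose_for_subject(pose, (192, 0, 320, 512))  # e.g. right area (blue), with overlap
subject1_pose.save("openpose_subject1.png")
subject2_pose.save("openpose_subject2.png")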
And here is the final result:
This includes a second pass after upscaling, face restoration, and additional upscaling at the end, all of which are included in the workflow.
A handy preview of the conditioning areas (see first image) is also generated. Ideally it would happen before the proper image generation, but the means to control that are not yet implemented in ComfyUI, so sometimes it's the last thing the workflow does. Sadly, I can't do anything about it for now.
Some more use-related details are explained in the workflow itself.
I wanted an easy way to convert .pt (PyTorch/PickleTensors) and .bin files for Textual Inversions and VAEs to the Safetensors format. DiffusionDalmation on GitHub has a Jupyter/Colab notebook (MIT license) that handled .pt files but not .bin files because of missing data in the .bin files that I had. Hugging Face has a function in the Safetensors repo (Apache license) that handles .bin, and probably .pt but I liked the training metadata details from the notebook version. Both are credited in the help text.
I started with pieces of both and worked it into a script that will try to convert both types as individual files or a directory of files. The .safetensors gets a new file hash but they are functionally identical from my testing. I have only tested on PyTorch 2 with SD1.5 TIs and VAEs though. It works on Windows with CUDA, but has theoretical support for the MacOS Metal backend and will fall back to using CPU. Buy me a Mac and I'll test it there. ;P
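For reference, here is a minimal sketch of the device fallback described above (CUDA, then the macOS Metal/MPS backend, then CPU); the actual script may pick devices differently.

# Sketch of the CUDA -> MPS -> CPU fallback; not necessarily the script's exact logic.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # macOS Metal backend
        return torch.device("mps")
    return torch.device("cpu")

print(pick_device())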
WARNING: code within files will be executed when the models are loaded - any malicious code will be executed, too. Do not run this on your own machine with untrusted/unscanned files containing pickle imports.
Assuming that you're in a trusted environment converting models you trust, you can activate an existing venv and run it from there, or set up a new venv with torch, safetensors, and typing packages.
Reusing the automatic1111 web UI venv, you would just run (for example):
V:\stable-diffusion-webui\venv\Scripts\activate
python V:\sd-info\safetensors-converter\bin-pt_to_safetensors.py .
deactivate
You can pass '.' in as the content_path to convert anything in the current directory, or provide the full path to a file or directory of files. It does not currently recurse through subdirectories.
If you get an error on a specific file, it may just have the wrong extension, e.g. try renaming .bin to .pt or the other way around (my .pt VAEs needed to be named .bin)
The .pt conversion will tell you the training model, hash, steps, and vector/dim counts when available, and will embed these in the .safetensors as metadata.
The .bin conversion does not provide these details; the format seems to lack that data.
Post-conversion, the script will check file sizes and verify that the output tensors match the original. It will throw an error if the file has changed too much or there's a mismatch.
The script should halt whenever there's an error, and will overwrite any existing .safetensors files with the same base name as the original file.
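For the curious, here is a minimal sketch of the core conversion idea (not the full script): load the pickled embedding with torch, pull out its tensors, and write them back out with safetensors. The 'string_to_param' key is an assumption about the usual A1111-style .pt textual inversion layout; .bin embeddings are often already a plain {token: tensor} dict. Verify against your own files before trusting it, and remember the pickle warning above.

# Minimal conversion sketch; metadata handling, size checks, etc. are omitted.
import torch
from safetensors.torch import save_file

def convert(path_in: str, path_out: str) -> None:
    data = torch.load(path_in, map_location="cpu")  # WARNING: executes pickled code
    if isinstance(data, dict) and "string_to_param" in data:
        # assumed A1111-style .pt textual inversion layout
        tensors = {k: v.detach().cpu().contiguous() for k, v in data["string_to_param"].items()}
    else:
        # assumed diffusers-style .bin layout: {token: tensor}
        tensors = {k: v.detach().cpu().contiguous() for k, v in data.items() if torch.is_tensor(v)}
    save_file(tensors, path_out)

convert("ng_deepnegative_v1_75t.pt", "ng_deepnegative_v1_75t.safetensors")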
Before/after file size and hashes for two example TIs:
$ls -lG
total 465
-rw-r--r-- 1 user 3931 May 16 06:21 LulaCipher.bin
-rw-r--r-- 1 user 3192 May 30 01:58 LulaCipher.safetensors
-rw-r--r-- 1 user 231339 Apr 30 19:59 ng_deepnegative_v1_75t.pt
-rw-r--r-- 1 user 230672 May 30 01:58 ng_deepnegative_v1_75t.safetensors
$for i in *;do sha256sum ${i};done
433c565251ac13398000595032c436eb361634e80e581497d116f224083eb468 *LulaCipher.bin
907fe94bb9001d6cb1c55b237764a6d31fd94265745a41afaa9eac76dd64075b *LulaCipher.safetensors
54e7e4826d53949a3d0dde40aea023b1e456a618c608a7630e3999fd38f93245 *ng_deepnegative_v1_75t.pt
52cd09551728a502fcbed0087474a483175ec7fa7086635828cbbccece35d0bb *ng_deepnegative_v1_75t.safetensors
ComfyUI is an advanced node-based UI for Stable Diffusion. It allows you to create customized workflows such as image post-processing or conversions.
[Updated 5/29/2023] ASCII is deprecated. The new preferred method of text node output is STRING. This is a change from ASCII so that it is more clear what data is being passed. The was_suite_config.json will automatically set use_legacy_ascii_text to false.
Video Nodes - There are two new video nodes, Write to Video and Create Video from Path. These are experimental nodes.
BLIP Analyze Image: Get a text caption from an image, or interrogate the image with a question.
The model will download automatically from the default URL, but you can point the download to another location/caption model in was_suite_config
Models will be stored in ComfyUI/models/blip/checkpoints/
SAM Model Loader: Load a SAM Segmentation model
SAM Parameters: Define your SAM parameters for segmentation of an image
SAM Parameters Combine: Combine SAM parameters
SAM Image Mask: SAM image masking
Image Bounds: Bounds an image
Inset Image Bounds: Inset an image's bounds
Bounded Image Blend: Blend a bounded image
Bounded Image Blend with Mask: Blend a bounded image by mask
Bounded Image Crop: Crop a bounded image
Bounded Image Crop with Mask: Crop a bounded image by mask
Cache Node: Cache Latent, Tensor Batches (Image), and Conditioning to disk to use later.
CLIPTextEncode (NSP): Parse noodle soups from the NSP pantry, or parse wildcards from a directory containing A1111 style wildcards.
Wildcards are in the style of __filename__, which also includes subdirectories like __appearance/haircolour__ (if your noodle_key is set to __).
You can set a custom wildcards path in the was_suite_config.json file with the key: "wildcards_path": "E:\\python\\automatic\\webui3\\stable-diffusion-webui\\extensions\\sd-dynamic-prompts\\wildcards"
If no path is set, the wildcards dir is located at the root of WAS Node Suite as /wildcards
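As a rough illustration (not WAS's actual implementation), __filename__ wildcard resolution could look something like this, assuming the default __ key and one option per line in each wildcard file:

# Rough sketch of __filename__ wildcard resolution; paths and regex are assumptions.
import random
import re
from pathlib import Path

WILDCARDS_DIR = Path("wildcards")  # e.g. the /wildcards folder at the suite root

def resolve_wildcards(prompt: str, seed: int = 0) -> str:
    rng = random.Random(seed)
    def repl(match):
        name = match.group(1)  # e.g. "appearance/haircolour"
        lines = (WILDCARDS_DIR / f"{name}.txt").read_text().splitlines()
        return rng.choice([line for line in lines if line.strip()])
    return re.sub(r"__([\w/-]+)__", repl, prompt)

print(resolve_wildcards("a portrait with __appearance/haircolour__ hair", seed=42))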
Conditioning Input Switch: Switch between two conditioning inputs.
Constant Number
Create Grid Image: Create an image grid from images at a destination with customizable glob pattern. Optional border size and color.
Create Morph Image: Create a GIF/APNG animation from two images, fading between them.
Create Morph Image by Path: Create a GIF/APNG animation from a path to a directory containing images, with optional pattern.
Create Video from Path: Create video from images from a specified path.
CLIPSeg Masking: Mask an image with CLIPSeg and return a raw mask
CLIPSeg Masking Batch: Create a batch image (from image inputs) and batch mask with CLIPSeg
Dictionary to Console: Print a dictionary input to the console
Image Analyze
Black White Levels
RGB Levels
Depends on matplotlib, will attempt to install on first run
Diffusers Hub Down-Loader: Download a diffusers model from the HuggingFace Hub and load it
Image Batch: Create one batch out of multiple batched tensors.
Image Blank: Create a blank image in any color
Image Blend by Mask: Blend two images by a mask
Image Blend: Blend two images by opacity
Image Blending Mode: Blend two images by various blending modes
Image Bloom Filter: Apply a high-pass based bloom filter
Image Canny Filter: Apply a canny filter to an image
Image Chromatic Aberration: Apply a chromatic aberration lens effect to an image, like in sci-fi films, movie theaters, and video games
Image Color Palette
Generate a color palette based on the input image.
Depends on scikit-learn, will attempt to install on first run.
Supports color range of 8-256
Utilizes font in ./res/ unless unavailable, then it will utilize an internal better-than-nothing font.
Image Crop Face: Crop a face out of an image
Limitations:
Sometimes no faces are found in badly generated images, or when faces are at angles
Sometimes face crop is black, this is because the padding is too large and intersected with the image edge. Use a smaller padding size.
face_recognition mode sometimes finds random things as faces. It also requires a [CUDA] GPU.
Only detects one face. This is a design choice to make its use easy.
Notes:
Detection runs in succession. If nothing is found with the selected detection cascades, it will try the next available cascades file.
Image Crop Location: Crop an image to a specified location using top, left, right, and bottom values relating to the pixel dimensions of the image in X and Y coordinates.
Image Crop Square Location: Crop a location by X/Y center, creating a square crop around that point.
Image Displacement Warp: Warp an image by a displacement map image at a given amplitude.
Image Dragan Photography Filter: Apply an Andrzej Dragan photography style to an image
Image Edge Detection Filter: Detect edges in an image
Image Film Grain: Apply film grain to an image
Image Filter Adjustments: Apply various image adjustments to an image
Image Flip: Flip an image horizontally or vertically
Image Gradient Map: Apply a gradient map to an image
Image Generate Gradient: Generate a gradient map with desired stops and colors
Image High Pass Filter: Apply a high frequency pass to the image returning the details
Image History Loader: Load images from history based on the Load Image Batch node. Can define max history in config file. (requires restart to show last sessions files at this time)
Image Input Switch: Switch between two image inputs
Image Levels Adjustment: Adjust the levels of an image
Image Load: Load an image from any path on the system, or a URL starting with http
Image Median Filter: Apply a median filter to an image, such as to smooth out details in surfaces
Image Mix RGB Channels: Mix together RGB channels into a single image
Image Monitor Effects Filter: Apply various monitor effects to an image
Digital Distortion
A digital breakup distortion effect
Signal Distortion
An analog signal distortion effect on vertical bands like a CRT monitor
TV Distortion
A TV scanline and bleed distortion effect
Image Nova Filter: A filter that uses a sinus frequency to break an image apart into RGB frequencies
Image Perlin Noise: Generate perlin noise
Image Perlin Power Fractal: Generate a perlin power fractal
Image Paste Face Crop: Paste a face crop back onto an image at its original location and size
Features a better blending function than GFPGAN/CodeFormer so there shouldn't be visible seams, and coupled with Diffusion Result, looks better than GFPGAN/CodeFormer.
Image Paste Crop: Paste a crop (such as from Image Crop Location) at its original location and size utilizing the crop_data node input. This uses a different blending algorithm than Image Paste Face Crop, which may be desired in certain instances.
Image Power Noise: Generate power-law noise
frequency: The frequency parameter controls the distribution of the noise across different frequencies. In the context of Fourier analysis, higher frequencies represent fine details or high-frequency components, while lower frequencies represent coarse details or low-frequency components. Adjusting the frequency parameter can result in different textures and levels of detail in the generated noise. The specific range and meaning of the frequency parameter may vary depending on the noise type.
attenuation: The attenuation parameter determines the strength or intensity of the noise. It controls how much the noise values deviate from the mean or central value. Higher values of attenuation lead to more significant variations and a stronger presence of noise, while lower values result in a smoother and less noticeable noise. The specific range and interpretation of the attenuation parameter may vary depending on the noise type.
noise_type: The type of Power-Law noise to generate (white, grey, pink, green, blue)
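To give a feel for what the frequency/attenuation knobs are doing, here is a minimal numpy sketch of power-law (1/f^alpha) noise shaped in the frequency domain; this is only an illustration, not the node's exact math or parameter names.

# Illustrative 1/f^alpha noise: alpha ~ 0 is white, ~1 pink, ~2 brown/red.
import numpy as np

def power_law_noise(width: int, height: int, alpha: float = 1.0, seed: int = 0):
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((height, width))
    fy = np.fft.fftfreq(height)[:, None]
    fx = np.fft.fftfreq(width)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0  # avoid division by zero at the DC component
    spectrum = np.fft.fft2(white) / (f ** (alpha / 2.0))  # attenuate high frequencies
    noise = np.real(np.fft.ifft2(spectrum))
    noise -= noise.min()
    return noise / noise.max()  # normalized to 0..1

img = power_law_noise(512, 512, alpha=1.0)  # pink-ish noise field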
Image Paste Crop by Location: Paste a crop to a custom location. This uses the same blending algorithm as Image Paste Crop.
Image Pixelate: Turn an image into pixel art! Define the max number of colors, the pixelation mode, the random state, and max iterations, and make those sprites shine.
Image Remove Background (Alpha): Remove the background from an image by threshold and tolerance.
Image Remove Color: Remove a color from an image and replace it with another
Image Resize
Image Rotate: Rotate an image
Image Save: A save image node with format support and path support. (Bug: Doesn't display image)
Image Seamless Texture: Create a seamless texture out of an image with optional tiling
Image Select Channel: Select a single channel of an RGB image
Image Select Color: Return only the selected color of an image on a black canvas
Image Shadows and Highlights: Adjust the shadows and highlights of an image
Image Size to Number: Get the width and height of an input image to use with Number nodes.
Image Stitch: Stitch images together on different sides with optional feathering blending between them.
Image Style Filter: Style an image with Pilgram Instagram-like filters
Depends on pilgram module
Image Threshold: Return the desired threshold range of an image
Image Tile: Split an image up into an image batch of tiles. Can be used with Tensor Batch to Image to select an individual tile from the batch.
Image Transpose
Image fDOF Filter: Apply a fake depth of field effect to an image
Image to Latent Mask: Convert an image into a latent mask
Image to Noise: Convert an image into noise, useful for init blending or init input to theme a diffusion.
Image to Seed: Convert an image to a reproducible seed
Image Voronoi Noise Filter
A custom implementation of the Worley (Voronoi) noise diagram
Input Switch (Disabled until * wildcard fix)
KSampler (WAS): A sampler that accepts a seed as a node input
Load Cache: Load cached Latent, Tensor Batch (image), and Conditioning files.
Load Text File
Now supports outputting a dictionary named after the file, or custom input.
The dictionary contains a list of all lines in the file.
Load Batch Images
Increment images in a folder, or fetch a single image out of a batch.
Will reset its place if the path or pattern is changed.
pattern is a glob that allows you to do things like **/* to get all files in the directory and subdirectories, or things like *.jpg to select only JPEG images in the directory specified.
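A quick illustration of those patterns with Python's own glob module (the node's matching may differ in details):

from glob import glob

all_files = glob("path/to/images/**/*", recursive=True)  # everything, including subdirectories
jpegs = glob("path/to/images/*.jpg")                      # only JPEGs in that directory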
Mask to Image: Convert MASK to IMAGE
Mask Batch to Mask: Return a single mask from a batch of masks
Mask Invert: Invert a mask.
Mask Add: Add masks together.
Mask Subtract: Subtract from a mask by another.
Mask Dominant Region: Return the dominant region in a mask (the largest area)
Mask Minority Region: Return the smallest region in a mask (the smallest area)
Mask Arbitrary Region: Return a region that most closely matches the size input (size is not a direct representation of pixels, but approximate)
Mask Smooth Region: Smooth the boundaries of a mask
Mask Erode Region: Erode the boundaries of a mask
Mask Dilate Region: Dilate the boundaries of a mask
Mask Fill Region: Fill holes within the masks regions
Mask Ceiling Region: Return only white pixels within an offset range.
Mask Floor Region: Return the lowermost pixel values as white (255)
Mask Threshold Region: Apply a thresholded image between a black value and white value
Mask Gaussian Region: Apply a Gaussian blur to the mask
Masks Combine Masks: Combine 2 or more masks into one mask.
Masks Combine Batch: Combine batched masks into one mask.
ComfyUI Loaders: A set of ComfyUI loaders that also output a string that contains the name of the model being loaded.
Latent Noise Injection: Inject latent noise into a latent image
Latent Size to Number: Latent sizes in tensor width/height
Latent Upscale by Factor: Upscale a latent image by a factor
Latent Input Switch: Switch between two latent inputs
Logic Boolean: A simple 1 or 0 output to use with logic
MiDaS Depth Approximation: Produce a depth approximation of a single image input
MiDaS Mask Image: Mask an input image using MiDaS with a desired color
Number Operation
Number to Seed
Number to Float
Number Input Switch: Switch between two number inputs
Number Input Condition: Compare between two inputs or against the A input
Number to Int
Number to String
Number to Text
Random Number
Save Text File: Save a text string to a file
Seed: Return a seed
Tensor Batch to Image: Select a single image out of a latent batch for post processing with filters
Text Add Tokens: Add custom tokens to parse in filenames or other text.
Text Add Token by Input: Add a custom token by inputs representing the single-line name and value of the token
Text Compare: Compare two strings. Returns a boolean if they are the same, a score of similarity, and the similarity or difference text.
Text Concatenate: Merge two strings
Text Dictionary Update: Merge two dictionaries
Text File History: Show previously opened text files (requires restart to show last sessions files at this time)
Text Find and Replace: Find and replace a substring in a string
Text Find and Replace by Dictionary: Replace substrings in an ASCII text input with a dictionary.
The dictionary keys are used as the key to replace, and the list of lines it contains chosen at random based on the seed.
Text Input Switch: Switch between two text inputs
Text List: Create a list of text strings
Text Concatenate: Merge lists of strings
Text Multiline: Write a multiline text string
Text Parse A1111 Embeddings: Convert embeddings filenames in your prompts to embedding:[filename] format based on your /ComfyUI/models/embeddings/ files.
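Roughly speaking (this is not WAS's actual implementation), the rewrite amounts to something like the following, assuming a standard embeddings folder:

# Illustrative only: prefix prompt terms that match embedding filenames with "embedding:".
from pathlib import Path

EMBED_DIR = Path("ComfyUI/models/embeddings")  # adjust to your install

def parse_embeddings(prompt: str) -> str:
    names = {p.stem for p in EMBED_DIR.glob("*") if p.is_file()}
    tokens = [t.strip() for t in prompt.split(",")]
    tokens = [("embedding:" + t) if t in names else t for t in tokens]
    return ", ".join(tokens)

print(parse_embeddings("portrait, ng_deepnegative_v1_75t, detailed"))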
Text Parse Noodle Soup Prompts: Parse NSP in a text input
Text Parse Tokens: Parse custom tokens in text.
Text Random Line: Select a random line from a text input string
Text String: Write a single line text string value
Text to Conditioning: Convert a text string to conditioning.
True Random.org Number Generator: Generate a truly random number online from atmospheric noise with Random.org
Write to Morph GIF: Write a new frame to an existing GIF (or create new one) with interpolation between frames.
Write to Video: Write a frame as you generate to a video (Best used with FFV1 for lossless images)
CLIPTextEncode (BlenderNeko Advanced + NSP): Only available if you have BlenderNeko's Advanced CLIP Text Encode. Allows for NSP and Wildcard use with their advanced CLIPTextEncode.
You can use codecs that are available to your ffmpeg binaries by adding their fourcc ID (in one string), and appropriate container extension to the was_suite_config.json
Example H264 Codecs (Defaults)
"ffmpeg_extra_codecs": {
"avc1": ".mp4",
"h264": ".mkv"
}
For now I am only supporting Windows installations for video nodes.
I do not have access to Mac or a stand-alone linux distro. If you get them working and want to PR a patch/directions, feel free.
Video nodes require FFMPEG. You should download the proper FFMPEG binaries for your system and set the FFMPEG path in the config file.
Additionally, if you want to use the H264 codec, you need to download OpenH264 1.8.0 and place it in the root of ComfyUI (Example: C:\ComfyUI_windows_portable).
FFV1 will complain about an invalid container. You can ignore this; the resulting MKV file is readable. I have not figured out what this issue is about. Documentation tells me to use MKV, but it's telling me it's unsupported.
If you know how to resolve this, I'd love a PR
The Write to Video node should use a lossless video codec, or when it copies frames and reapplies compression, it will start exponentially ruining the starting frames run to run.
Text tokens can be used in the Save Text File and Save Image nodes. You can also add your own custom tokens with the Text Add Tokens node.
The token name can be anything excluding the : character to define your token. It can also be simple Regular Expressions.
[time]
The current system microtime
[time(format_code)]
The current system time in human readable format. Utilizing datetime formatting
Example: [hostname]_[time]__[time(%Y-%m-%d__%I-%M%p)] would output: SKYNET-MASTER_1680897261__2023-04-07__07-54PM
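Since the format codes follow datetime formatting, you can preview a format string in plain Python before using it in a token:

from datetime import datetime

print(datetime.now().strftime("%Y-%m-%d__%I-%M%p"))  # e.g. 2023-04-07__07-54PM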
[hostname]
The hostname of the system executing ComfyUI
[user]
The user that is executing ComfyUI
When using the latest builds of WAS Node Suite, a was_suite_config.json file will be generated (if it doesn't exist). In this file you can set up an A1111 styles import.
Run ComfyUI to generate the new /custom-nodes/was-node-suite-comfyui/was_suite_config.json file.
Open the was_suite_config.json file with a text editor.
Replace the webui_styles value from None to the path of your A1111 styles file called styles.csv. Be sure to use double backslashes for Windows paths.
Example: C:\\python\\stable-diffusion-webui\\styles.csv
Restart ComfyUI
Select a style with the Prompt Styles Node.
The first ASCII output is your positive prompt, and the second ASCII output is your negative prompt.
You can set webui_styles_persistent_update to true to update the WAS Node Suite styles from WebUI every start of ComfyUI.
If you're running on Linux, or a non-admin account on Windows, you'll want to ensure /ComfyUI/custom_nodes, was-node-suite-comfyui, and WAS_Node_Suite.py have write permissions.
Navigate to your /ComfyUI/custom_nodes/ folder
Run git clone https://github.com/WASasquatch/was-node-suite-comfyui/
Navigate to your was-node-suite-comfyui folder
Portable/venv:
Run path/to/ComfyUI/python_embeded/python.exe -m pip install -r requirements.txt
With system python
Run pip install -r requirements.txt
Start ComfyUI
WAS Suite should uninstall legacy nodes automatically for you.
Tools will be located in the WAS Suite menu.
If you're running on Linux, or a non-admin account on Windows, you'll want to ensure /ComfyUI/custom_nodes and WAS_Node_Suite.py have write permissions.
Download WAS_Node_Suite.py
Move the file to your /ComfyUI/custom_nodes/ folder
WAS Node Suite will attempt to install dependencies on its own, but you may need to manually do so. The dependencies required are in the requirements.txt on this repo. See installation steps above.
Start, or Restart ComfyUI
WAS Suite should uninstall legacy nodes automatically for you.
Tools will be located in the WAS Suite menu.
This method will not install the resources required for Image Crop Face node, and you'll have to download the ./res/ folder yourself.
Create a new cell and add the following code, then run the cell. You may need to edit the path to your custom_nodes folder. You can also use the colab hosted here
!git clone https://github.com/WASasquatch/was-node-suite-comfyui /content/ComfyUI/custom_nodes/was-node-suite-comfyui
Restart Colab Runtime (don't disconnect)
Tools will be located in the WAS Suite menu.
When you download images in bulk from Midjourney, the name will contain at least part of your prompt. This can be useful for dataset tagging when training models. This script, with a GUI, automates taking that name, removing the username prefix and the UUID suffix, as well as replacing the underscores in the text. Then it puts the result in a .txt and saves it in the same directory as the image, using the exact same name except for the file ending.
For example, the image file would have a name like this: your_username_your_prompt_45987923023409.png
This script creates a text file with the text "your prompt" with the name "your_username_your_prompt_45987923023409.txt"
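As a rough sketch of what the script does under the hood (the actual tool has a GUI and may handle edge cases differently):

# Illustrative only: strip the username prefix and trailing job ID, restore spaces,
# and write the caption next to the image with the same base name.
from pathlib import Path

def caption_from_filename(image_path: Path, username: str) -> None:
    stem = image_path.stem                      # your_username_your_prompt_45987923023409
    prompt = stem.removeprefix(username + "_")  # drop the username prefix
    prompt = prompt.rsplit("_", 1)[0]           # drop the trailing ID
    prompt = prompt.replace("_", " ")           # underscores back to spaces
    image_path.with_suffix(".txt").write_text(prompt)

caption_from_filename(Path("your_username_your_prompt_45987923023409.png"), "your_username")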
https://github.com/nej-dot/loralist
This is a Python-based toolkit to parse long strings of loras into a format that either xyz plot or "prompt from file" can understand (comma and line separated). It has a bunch of other features as well, and I'm always willing to add more or change things if anyone has suggestions.
Currently it also has a find and replace, and a parse list function.
If you leave the input box for these two functions empty, it will default to working on the text in the workspace box.
One feature is a bit tricky: if you want to use several loras in a plot, but also want several values for each, i.e. <dkjsdfk:0.4> and <dkjsdfk:0.8>, first run a find and replace for ":1" to ":x.x", then set up all the values for your list (start, stop, etc.), fill in blank space for your separator, and press the parse and generate list button.
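To illustrate that last step (this is only a sketch of the idea, not the toolkit's code): after the find and replace has turned the weights into an x.x placeholder, each lora gets expanded across the start/stop/step values and written out one entry per line for xyz plot or "prompt from file":

# Illustrative expansion of <name:x.x> placeholders across a weight range.
import re

def expand_lora_weights(text: str, start: float, stop: float, step: float) -> str:
    loras = re.findall(r"<([^:>]+):x\.x>", text)  # lora names carrying the x.x placeholder
    weights = []
    w = start
    while w <= stop + 1e-9:
        weights.append(round(w, 2))
        w += step
    lines = [f"<{name}:{weight}>" for name in loras for weight in weights]
    return "\n".join(lines)

print(expand_lora_weights("<dkjsdfk:x.x>, <otherlora:x.x>", 0.4, 0.8, 0.2))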
she is looking like a mix of lily rose deep and emrata. And the picture is at a party
This is an extension I made that makes prompts for SD using local LLMs with the oobabooga API.
You can use it with my characters that are especially made to make SD prompts. I also made an NSFW character which I will release soon.
I also have videos on how to install Oobabooga.