This is a personal side project I started mainly to improve my testing methods for LoRA collisions and to filter out overfitting epochs more reliably.
The concept is simple:
You can create multiple layers; each one controls a LoRA or an attribute that moves from its start value to its end value, advancing by the step size each frame.
There are two types of layers: Prompt layers (such as LoRA layers), of which you can have several at once, and Attribute layers (such as clip skip), of which you can have only one per parameter type.
Tips:
Install by moving it to the /Scripts folder.
The resulting GIF is not shown after generation; instead, you can find it in Outputs/txt2img-images/txt2gif.
When adding a Lora layer, just input its name: e.g., for "lora:Steampunk:1", enter "Steampunk".
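A rough sketch of the per-frame interpolation described above (the parameter names mirror the layer settings; the script's actual internals may differ):

```python
# Hypothetical sketch of how a layer's value could advance each frame.
# "start", "end", and "step_size" mirror the layer parameters described
# above; this is an illustration, not the script's actual code.

def layer_values(start: float, end: float, step_size: float):
    """Yield the attribute/LoRA weight for each frame, moving start -> end."""
    value = start
    direction = 1 if end >= start else -1
    while (value - end) * direction < 0:
        yield value
        value += step_size * direction
    yield end  # clamp the final frame to the end value

frames = list(layer_values(0.0, 1.0, 0.25))
# frames -> [0.0, 0.25, 0.5, 0.75, 1.0]
```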
Warning: This tutorial is simple and written on the fly; there is no PDF version, and the download is just a zip file full of the same images you see in this model card. The one exception: there ARE poses added to the zip file as a gift for reading this. (It wouldn't let me add more than one zip file, sorry!)
This is an absolutely FREE and EASY way to quickly make your own poses if you're unable to use the ControlNet pose maker tool in A1111 itself. This uses Hugging Face Spaces, which is 1001% FREE if you're using the spaces linked in this tutorial. You don't even need to LOG IN at all.
Your first step is to go to: https://huggingface.co/
Once you're there, you'll have the CHOICE to log in. If you don't have an account, one isn't required, but it is helpful to sign up: it lets you bookmark or duplicate spaces at will.
Again, it's entirely up to you. This is what it looks like when you're NOT logged in; from here on you'll see "DARK MODE," as that's our standard choice for websites.
Your search bar is at the VERY TOP. As you can see, I'm logged in, so it's in dark mode. Search for "POSES" and you'll have a few options show up.
You're looking for the two spaces by "JONIGATA." One of them WILL tell you not to use it when you get to it, which is weird in our opinion, because the older repository/space is a little better at recognizing line art/manga for poses (it's hit or miss, but it works!).
If easier the link to this space is here: https://huggingface.co/spaces/jonigata/PoseMaker
When you go to POSE MAKER (not PoseMaker2), it looks very similar; you'll notice this when you see PoseMaker2 in a bit. You have options for width/height, but we're focusing on the "ESTIMATION" section in both versions. You CAN edit the skeleton in both versions, as the command keys next to Save show in this screenshot.
In this example we spent time back in Second Life filtering through animated poses. We resized each photo ahead of time to 512x768; we didn't remember until it was too late that ControlNet has a maximum input size in A1111. It's best to keep your images smaller; you can use BIRME (birme.net) for fast batch resizing. Drop your resized (or unresized) image in the "ESTIMATION" box on the left-hand side.
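If you'd rather resize locally instead of using BIRME, here is a small sketch using Pillow (the folder names are placeholders, not part of the tutorial):

```python
# Sketch: batch-resize pose source images to 512x768 before estimation,
# similar to what BIRME does. Requires Pillow (pip install pillow).
from pathlib import Path
from PIL import Image

def resize_for_pose(src_dir: str, dst_dir: str, size=(512, 768)):
    """Resize every PNG in src_dir to `size` and save it into dst_dir."""
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        img = Image.open(path).convert("RGB")
        img = img.resize(size, Image.LANCZOS)  # high-quality resample
        img.save(Path(dst_dir) / path.name)
```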
I'll show what your button options are and how to use them:
Once you've put your picture in, it'll tell you the size of the picture and whether there's one individual or more than one in it. Once it loads this information, and before you press any estimation buttons, make sure you've pressed APPLY SIZE. This is important because you may not have the room/screen space to resize it to the box it gives you.
"REPLACE" swaps the default skeleton for the one currently estimated from the picture. Provided you've already APPLIED the size to the canvas, you can replace the skeleton and prepare to edit it or take other actions.
The keyboard actions above are the ones you'll be given to edit the skeleton before you save. Don't forget you can also use your mouse to edit and create a more stylistic pose in case the estimation missed a few details. Remember: this isn't perfect, and you may have to stretch certain limbs to suit your needs.
Once you're entirely happy with your pose, hit save - this will save an image to your downloads folder, and you can then collect these and release your own pack OR HOARD THEM!
BUT WAIT, THERE'S MORE! We'll go through PoseMaker2, just to give you a second pass at this information!
PoseMaker2 is a little more accurate for realistic (or video game) poses, though it can mess up sitting and/or half-body poses. So both pose spaces are REALLY good for different things. Most of the key commands are the same, and the estimation and editing process is the same.
If that's easier than searching, the link to this space is here: https://huggingface.co/spaces/jonigata/PoseMaker2
In case there are MORE commands (and I think there are), these are for PoseMaker2; they allow you to edit your skeleton after you finish your estimation.
So again, this is an estimation, but this time it's in PoseMaker2. You can see that the layout in HF Gradio spaces is quite similar to the one you'd see in A1111. This may not be as simple for SOME, but if you're limited by screen space in your notebook, this is fairly quick and easy.
For example, these are the images used and the poses output from them:
Both of these were based on an avatar we'd pre-made over the years in Second Life. Second Life isn't related to Stable Diffusion, other than that we still use it for our needs, to make poses and pictures to turn into LyCORIS/LoRA models!
Here are example poses included in the pose pack that comes with this tutorial, for fun!
Now that you've seen how to create your own poses, I'll likely cover some more Gradio spaces on Hugging Face for Civitai users in the future!
And yes, the downloads contain the POSES and the screenshots from this tutorial, but NOT the images used to MAKE the poses. You CAN feed it AI-created pictures to make poses, by the way!
This is a TEST to see whether it's possible to generate a wide variety of nationalities and ethnicities using only prompts (no TI, LoRA, etc.) with a base model that has been trained and merged toward increasingly Asian-leaning Japanese women.
prompts:
masterpiece, best quality, ultra high res, (photorealistic:1.8), unreal_engine, photograph, realistic_skin_texture, (__wildcards/nationality__ {woman|men}:1.8), solo, ultra face detailed, outdoors
negative prompts:
paintings, sketches, (worst quality:2), multiple girls, lowres, text, error, missing arms, missing legs, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts, signature, watermark, out of frame, extra fingers, mutated hands, (poorly drawn hands), (poorly drawn face), (mutation), (deformed breasts), (ugly), blurry, (bad anatomy), (bad proportions), (extra limbs), cloned face, flat color, monochrome, limited palette, child faced, octane_render, futa, futanari, hentai, nsfw
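The `__wildcards/nationality__` and `{woman|men}` tokens in the prompt above follow Dynamic Prompts-style syntax. A minimal sketch of how such tokens could be expanded (the wildcard entries here are made-up examples, not the actual wildcard file):

```python
# Toy expander for Dynamic Prompts-style syntax:
#   __name__  -> a random line from the named wildcard list
#   {a|b|c}   -> one randomly chosen variant
# WILDCARDS is a stand-in for the real wildcard files on disk.
import random
import re

WILDCARDS = {"wildcards/nationality": ["Japanese", "Kenyan", "Brazilian", "Finnish"]}

def expand(prompt: str, rng: random.Random) -> str:
    # Replace __name__ tokens with a random wildcard entry
    prompt = re.sub(r"__(.+?)__", lambda m: rng.choice(WILDCARDS[m.group(1)]), prompt)
    # Replace {a|b} variant groups with one chosen option
    prompt = re.sub(r"\{([^{}]*)\}", lambda m: rng.choice(m.group(1).split("|")), prompt)
    return prompt

result = expand("(__wildcards/nationality__ {woman|men}:1.8), solo", random.Random(0))
print(result)  # e.g. "(Japanese woman:1.8), solo"
```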
"Super Easy AI Installer Tool" is a user-friendly application that simplifies the installation of AI-related repositories. It is designed to provide an easy-to-use way to access and install AI repositories with little to no technical hassle: the tool automatically handles the installation process, making it easier for users to access and use AI tools.
For Windows 10+ and Nvidia GPU-based cards
Don't forget to leave a like/star.
For more Info:
https://github.com/diStyApps/seait
Please note that VirusTotal and other antivirus programs may give a false positive when running this app. This is due to the use of PyInstaller to convert the Python file to an EXE, which can sometimes trigger false positives even for simple scripts; it's a known issue.
Unfortunately, I don't have the time to handle these false positives. However, please rest assured that the code is transparent on https://github.com/diStyApps/seait
I would rather add features and more AI tools at this stage of development.
Source: https://github.com/pyinstaller/pyinstaller/issues/6754
Download the "Super Easy AI Installer Tool" at your own discretion.
Multi-language support [x]
More AI-related repos [x]
Set custom project path [x]
Custom arguments [x]
Pre installed auto1111 version [ ]
App updater [ ]
Remembering arguments [ ]
Maybe arguments profiles [ ]
Better event handling [ ]
Fully standalone version no python or git needed [ ]
Support
https://www.patreon.com/distyx
https://coindrop.to/disty
A node template for LoRA stacking with keyword input.
I am still testing this
Mixing LoRAs is sometimes more a game of guessing compatibility, so experiment with it and don't expect the best results right away.
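For illustration, one common approach when stacking LoRAs in an A1111-style prompt is to lower the individual weights so the combined influence stays manageable. A tiny hypothetical helper (the LoRA names are made up):

```python
# Hypothetical helper: build an A1111-style prompt fragment that stacks
# several LoRAs with explicit weights, e.g. <lora:name:0.6>.
def stack_loras(loras: dict) -> str:
    """Join LoRA name/weight pairs into prompt tokens."""
    return " ".join(f"<lora:{name}:{w}>" for name, w in loras.items())

print(stack_loras({"Steampunk": 0.6, "InkStyle": 0.4}))
# <lora:Steampunk:0.6> <lora:InkStyle:0.4>
```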
V1.0 is here 🥳. I have been experimenting with the extension and found a good hires + img2img workflow. This is somewhat different from the original tutorial, so I will be leaving V0.35 up and deleting the older ones. It has some information this one is lacking, and vice versa. I tried to make a more straightforward and easy-to-understand version of the tutorial. I hope I succeeded.
Please leave feedback and images you manage to create with the tutorial :)
This version will be PDF only, so the quality of the tutorial can be higher.
Updates
04/25
V1.1 of Multidiffusion upscaler how to + workflow
Fixed some typos, uncompressed images, wording
04/24
V1.0 of Multidiffusion upscaler how to + workflow
This is information I have gathered while experimenting with the extension. I might have gotten something wrong; if you spot an error in the guide, please leave a comment. Any feedback is welcome.
I am not a native English speaker, and it shows in my writing; I can't do anything about that. :)
I am not the creator of this extension and am not related to them in any way. They can be found on GitHub. Please show some love for them if you have time :).
Github Repo:
https://github.com/receyuki/stable-diffusion-prompt-reader
A simple standalone viewer for reading the prompt from Stable Diffusion-generated images outside the webui.
There are many great prompt reading tools out there now, but for people like me who just want a simple tool, I built this one.
No additional environment, command line, or browser is required; just open the app and drag and drop the image in.
Supports macOS, Windows, and Linux.
Simple drag and drop interaction.
Copy prompt to clipboard.
Remove prompt from image.
Export prompt to text file.
Detect generation tool.
Multiple formats support.
Dark and light mode support.
A1111's webui: PNG, JPEG, WEBP
NovelAI: PNG
ComfyUI*: PNG
Naifu (4chan): PNG
* Limitations apply. See format limitations.
If you are using a tool or format that is not on this list, please help me support your format by uploading the original file generated by your tool as a zip file to the issues. Thanks!
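For context on where these tools look: A1111's webui writes the prompt into a PNG text chunk under the key "parameters", while ComfyUI stores its node graph as JSON under "prompt"/"workflow". A minimal sketch using Pillow (not the viewer's actual code):

```python
# Minimal sketch: read a Stable Diffusion prompt from a PNG's text chunks.
# Requires Pillow (pip install pillow).
from typing import Optional
from PIL import Image

def read_prompt(path: str) -> Optional[str]:
    info = Image.open(path).info  # PNG tEXt chunks land in .info
    if "parameters" in info:      # A1111 webui format
        return info["parameters"]
    if "prompt" in info:          # ComfyUI stores its node graph as JSON here
        return info["prompt"]
    return None
```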
Download executable from above or from the GitHub Releases
Open the executable file (.exe or .app) and drag and drop the image into the window.
OR
Right-click on the image and select "Open with SD Prompt Reader".
OR
Drag and drop the image directly onto the executable (.exe or .app).
Clicking "Export to txt" will generate a txt file alongside the image file.
To save to another location, click the expand arrow and click "select directory".
Clicking "Remove Data" will generate a new image file with the suffix "_data_removed" alongside the original image file.
To save to another location, click the expand arrow and click "select directory".
To overwrite the original image file, click the expand arrow and click "overwrite the original image".
Support for ComfyUI requires more testing. If you believe your image is not being displayed properly, please upload the original file generated by ComfyUI as a zip file to the issues.
1. If there are multiple sets of data (seed, steps, CFG, etc.) in the settings box, it means there are multiple KSampler nodes in the flowchart.
2. Due to the nature of ComfyUI, all nodes and flowcharts in the workflow are stored in the image, including those that are not being used. Also, a flowchart can have multiple branches, inputs and outputs.
(e.g. output hires. fixed image and original image simultaneously in a single flowchart)
SD Prompt Reader will traverse all flowcharts and branches and display the longest branch with complete input and output.
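The traversal described above can be sketched as a depth-first walk over the stored node graph. This toy example uses a simplified form of ComfyUI's prompt JSON, where a linked input is a `[source_node_id, slot]` pair; it illustrates the idea, not the tool's actual code:

```python
# Toy sketch of the flowchart traversal: follow each node's linked inputs
# backwards and keep the longest complete chain feeding into node_id.
def longest_chain(nodes: dict, node_id: str) -> list:
    """Return the longest path of node ids ending at node_id."""
    best = []
    for value in nodes[node_id].get("inputs", {}).values():
        # In ComfyUI's prompt JSON, a linked input is [source_node_id, slot]
        if isinstance(value, list) and value and str(value[0]) in nodes:
            chain = longest_chain(nodes, str(value[0]))
            if len(chain) > len(best):
                best = chain
    return best + [node_id]

workflow = {
    "1": {"class_type": "CheckpointLoader", "inputs": {}},
    "2": {"class_type": "KSampler", "inputs": {"model": ["1", 0]}},
    "3": {"class_type": "SaveImage", "inputs": {"images": ["2", 0]}},
}
print(longest_chain(workflow, "3"))  # ['1', '2', '3']
```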
comfy_assemble_tags_node
2023-04-26 Troubleshooting Instructions:
I am very sorry: when I packaged the release yesterday, I overwrote the code for the search-path display and the selected-path display, and I only found out today. I have re-uploaded the code to GitHub and Civitai. If you downloaded it before and want the path-display function, please download the latest code and overwrite the folder ComfyUI\custom_nodes\comfy_assemble_tags_node, then go to ComfyUI\web\extensions and delete the select_tags folder. If you have modified or added your own content to tags.xlsx in this folder, please back it up first, then restart ComfyUI.
---
It took me over a week to write. Please give it a like if you enjoy it, or give me a star on GitHub.
gitHub: https://github.com/laojingwei/comfy_assemble_tags_node
Model credit: https://civitai.com/models/10415/3-guofeng3
Description
comfy_assemble_tags_node is a keyword selection and assembly plugin that can help you quickly generate all kinds of AI keywords.
It has the following features:
1. It covers most of the keywords used in AI image generation, organized into many categories, including some common presets.
2. Keywords are labeled in Chinese, which is convenient for users who don't speak English.
3. Provides a search function to quickly locate keywords; you can search the current tab or globally, saving time and energy.
4. Selection memory: the home page shows the keywords you selected last time along with their paths, making modification and adjustment convenient.
5. Keywords can be customized and extended to meet different needs and preferences.
6. You can view the currently generated random seed, making it easy to copy and share.
7. Keyword assembly can be split up; if you have many keywords, the assembly node is recommended, since it lets you locate and modify a keyword faster later on.
Download method
git
git clone https://github.com/laojingwei/comfy_assemble_tags_node.git
zip
Installation mode
1. Put the downloaded plugin folder into ComfyUI_windows_portable\ComfyUI\custom_nodes
2. Restart ComfyUI and open the UI
Node introduction
Show Seed: displays the random seed that was just used for generation.
Select Tags: used to select keywords.
Assemble Tags: assembles keywords (recommended when you have many keywords and modify them often).
Show Tags: lets you check the keywords selected by the previous nodes and adjust them in more detail here. (If the flow doesn't pick up a change after you adjust the keywords, disconnect the link to the upstream node and click generate.)
Usage method
1. Double-click the left mouse button and search for Select Tags, Assemble Tags, Show Tags, or Show Seed to add a node.
2. Or right-click Add Node -> xww -> tags -> ... and select the node you want to add.
Select Tags:
1. You can open the corresponding tab to find the keywords you need, or use the search box. Prefixing your search with \** searches the contents of all tabs, and the matching path is displayed on the found value; without \**, only the current tab is searched. Either way, when a match is found the text in the search box turns green (it stays white if nothing is found), and an arrow animation appears in the upper-right corner of the matching keyword option to show you what you searched for.
2. Click OK to echo the selected English keywords. The echoed content cannot be edited; it just shows you what you have selected.
3. You can modify or add tag content yourself in the xlsx table.
4. The input box to the right of an option is for the weight.
5. For more detailed operations, see the screenshots below.
Assemble Tags:
1. Can be used together with Select Tags or on its own. When using it with Select Tags, right-click at the bottom to find the option you want to switch to input mode, then connect Select Tags to that option.
2. The options come with presets and can be used without any input.
Show Tags:
1. Can be used together with Select Tags or Assemble Tags to fine-tune keywords, or on its own.
Show Seed:
1. It can be connected to VAE Decode; there are as many seeds as there are KSampler nodes in the flow, and you can copy the corresponding seed for reuse. Normally, when we generate an image we like and want to tweak it slightly, we can't retrieve the seed that was used, because the seed shown in KSampler advances to the next value immediately after the call. This node solves that.
You can also run it in the WebUI.
https://github.com/todhm/sd_feed
Download it from here!
1. You can browse world-class AI images generated with Stable Diffusion (AUTOMATIC1111).
2. You can easily copy the generation data from the images
3. You can also generate your own images through our custom Extension called Feed.
I found an interesting user loop between the creators' tools (Stable Diffusion webui) and the community:
1. Browse the AI generated images in a certain community.
2. Copy the Generation data of the image.
3. Tweak easily on Stable Diffusion Webui.
4. Post their own AI images on the community.
So I am trying to create a platform that combines the tools used by creators and the communities where they post.
That's it! SD-Feed. I also launched a custom extension that lets you easily upload to my website directly from the Webui.
You can download the extension below.
It's much easier to upload your own images from the webui!
https://civitai.com/models/38445/feed-tab-stable-diffusion-webui-extension
https://civitai.com/models/44143/feed-extension-webui-custom-extension
This is a voice model trained on Mahiro Oyama from "Onii-chan wa Oshimai!" (お兄ちゃんはおしまい!).
It was trained on lines from episodes 1 and 2, with groans, shouts, and strongly reverberant clips removed.
Anyone can speak or sing in Mahiro's voice.
Just like in the sample!