Others

Stable Diffusion Prompt Reader


Stable Diffusion Prompt Reader

Github Repo:

https://github.com/receyuki/stable-diffusion-prompt-reader


A simple standalone viewer for reading prompts from Stable Diffusion generated images outside the webui.

There are many great prompt-reading tools out there now, but I built this one for people like me who just want a simple tool.

No additional environment, command line, or browser is required to run it; just open the app and drag and drop the image in.
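
For context, here is a minimal sketch of the kind of lookup such a viewer automates, assuming an A1111-style PNG that stores its generation parameters in a text chunk named "parameters" (this uses Pillow purely for illustration and is not the app's actual code):

    # Minimal illustration: read the A1111 "parameters" text chunk from a PNG.
    # Other tools on the supported-formats list store their metadata differently.
    from PIL import Image

    def read_parameters(path):
        with Image.open(path) as img:
            # A1111 writes the prompt, negative prompt and settings into one chunk.
            return img.info.get("parameters")

    print(read_parameters("example.png"))  # "example.png" is a placeholder file name

The app wraps this kind of metadata lookup behind a drag-and-drop window, so none of the above is needed to use it.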

Features

Supported Formats

* Limitations apply. See format limitations.

If you are using a tool or format that is not on this list, please help me support it by uploading the original file generated by your tool as a zip file to the issues.

Download

For macOS and Windows users

Download the executable from above or from the GitHub Releases.

For Linux users (not regularly tested)

Usage

Read prompt


Export prompt to text file

Remove prompt from image

Edit image

Format Limitations

TXT

  1. Importing txt file is only allowed in edit mode.

  2. Only A1111 format txt files are supported. You can use txt files generated by the A1111 webui, or use the SD Prompt Reader to export a txt from A1111 images (a rough example of this format is shown below).
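
For reference, an A1111-format txt/infotext typically looks like this (the values here are made up purely for illustration):

    masterpiece, best quality, 1girl, solo, looking at viewer
    Negative prompt: EasyNegative, lowres, bad anatomy
    Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 1234567890, Size: 512x512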

ComfyUI

Support for ComfyUI requires more testing. If you believe your image is not being displayed properly, please upload the original file generated by ComfyUI as a zip file to the issues.

  1. If there are multiple sets of data (seed, steps, CFG, etc.) in the setting box, this means that there are multiple KSampler nodes in the flowchart.

  2. Due to the nature of ComfyUI, all nodes and flowcharts in the workflow are stored in the image, including those that are not being used. A flowchart can also have multiple branches, inputs, and outputs
    (e.g. outputting a hires. fixed image and the original image simultaneously in a single flowchart).
    SD Prompt Reader will traverse all flowcharts and branches and display the longest branch with complete input and output (a simplified sketch of this idea follows below).
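
To illustrate the idea (this is not the tool's actual implementation), here is a simplified sketch assuming a ComfyUI-style node map in which inputs that reference another node are stored as [node_id, output_slot] pairs; it measures the chain of inputs behind every node and keeps the deepest one:

    # Simplified sketch: find the longest input chain in a ComfyUI-style node graph.
    # The node format below is reduced for illustration; real metadata has more fields.
    def chain_length(nodes, node_id, _seen=None):
        _seen = (_seen or set()) | {node_id}
        inputs = nodes[node_id].get("inputs", {})
        # Inputs that reference another node look like [node_id, output_slot].
        refs = [v[0] for v in inputs.values() if isinstance(v, list) and v[0] not in _seen]
        return 1 + max((chain_length(nodes, r, _seen) for r in refs), default=0)

    # Hypothetical workflow with two output branches; the hires branch is deeper.
    workflow = {
        "1": {"class_type": "CheckpointLoaderSimple", "inputs": {}},
        "2": {"class_type": "KSampler", "inputs": {"model": ["1", 0]}},
        "3": {"class_type": "SaveImage", "inputs": {"images": ["2", 0]}},
        "4": {"class_type": "KSampler", "inputs": {"model": ["1", 0], "latent_image": ["2", 0]}},
        "5": {"class_type": "SaveImage", "inputs": {"images": ["4", 0]}},
    }
    deepest = max(workflow, key=lambda nid: chain_length(workflow, nid))
    print(deepest)  # -> "5", the output node at the end of the longer branch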

Easy Diffusion

By default, Easy Diffusion does not write metadata to images. Please change the Metadata format in the settings to "embed" to write the metadata to images.

TODO

Credits

Tags: prompt, utility, gui, tools, toolkit, metadata, tool, linux, macos, windows
SHA256: 9FDCE22CA9DAAEB2BD746956CB1F39648B9CFF659431508E552E7DED36C472B8

Soldier (Rick May) so-vits-svc Model


A so-vits-svc model trained on a few of Soldier's voice lines from Team Fortress 2. The input audio should not be fast-paced, and the speaker also has to speak loudly and clearly.

Here is a link to the software I use: https://github.com/voicepaw/so-vits-svc-fork

You can either build from source or install through pip.

It comes with a GUI where you can select the weights, config, and input audio to transform.
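
As a rough sketch of the workflow (check the linked repository for the current instructions; the commands below reflect my understanding of its README and may change):

    pip install -U so-vits-svc-fork
    svcg   # as far as I know, this launches the GUI where you pick the weights, config, and input audio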

Tags: character, soldier, team fortress 2, tf2, so-vits-svc-fork, voice, so-vits-svc
SHA256: E1656DF1E05844911568F6846A39653A867F6098F699B7666E095F7ADDD0A936

[Commonly used high-quality prompts on Civitai] High quality parameters on CIVITAI


High quality parameters on CIVITAI

If you find better prompts, feel free to post a picture in the comments below!

————————————

Online version:

[Google Docs]

https://docs.google.com/document/d/1xFGVMr_ELDDW9WkKw27wjsWqSDNYlLWtsY93Ost-0QE/edit?usp=sharing

————————————

Update

Description

This is a summary of the more useful prompts from the tag strings commonly used on Civitai between 2023-02-22 and 2023-05-10, with overly simple prompts filtered out. The collection was compiled by me personally and may not be comprehensive.

I don't want to see people who take one or two tag keywords, claim the tags as their own, and forbid others from using them.

I also don't want people taking "noble" AI images to FanBox and similar platforms, claiming they are "worth a fortune", and making others pay for those inferior AI images.

————————————

Usage

Step 1: Copy the prompts from the document.

Step 2: Paste them directly into the positive prompt field.

Step 3: Click the arrow on the right to import the generation info with one click.

Step 4: Adjust the parameters below to suit your situation, and remember to click the random seed button.

————————————

Thanks to Akiba Dessert Shop (Channel) for giving me this idea. It was pointed out earlier that the prompts for models on CIVITAI never change (for example, Sunflower Girl first appeared back in February and was still being used by newly released models as of May 10), which is where the idea of compiling this collection came from.

Have fun, everyone! <3

Tags: anime, tool
SHA256: 7CAB85D22621BADD233E44A3B220D387F0B55071C776A887C8AE9348C7528F4D

Guide: How to prompt for beginners (anime)


(I added all images from this post to a .zip archive. Click download to see them in full resolution.)

Everything I write here is about txt2img, without img2img, hires.fix, inpainting, or extensions like ControlNet. I'm going to focus on creating the prompt, not on how to make a more detailed, upscaled, or edited image. Everything was tested mostly on anime models, so it may not work with realistic models.


Before we start, I'd like you to set up a few things if you haven't done so yet. This will help you get good images.

First of all, you should set up the VAE you will be using with your model. (If the model has a baked-in VAE, you don't need to do anything.) To do so, download the VAE file (if you haven't already) and put it into the "stable-diffusion-webui\models\VAE" folder. Go to Settings -> Stable Diffusion, in SD VAE click the blue arrows and select your VAE. Click Apply settings.

You can also go to Settings -> User Interface -> Quick settings list and add sd_vae there if you want to have this setting in your txt2img tab.

For all examples I was using this VAE: https://civitai.com/models/23906/kl-f8-anime2-vae

I highly recommend using negative embeddings. All examples in this post were made with the EasyNegative embedding and no additional negative tags.

To use one, download it and put it into the "stable-diffusion-webui\embeddings" folder, then add its file name to the negative prompt. (I suggest EasyNegative, which you can download here: https://civitai.com/models/7808/easynegative, but you can use other embeddings.)

I suggest you try Clip skip: 2; it should work better with anime models. Try both 1 and 2 to find what you like more.

To add Clip skip, add CLIP_stop_at_last_layers in Settings -> User Interface -> Quick settings list, click Apply settings, and you should have this option in the txt2img tab.

____

You can find many prompts on the Internet or on this site, but I think you should not just copy them, at least not at the beginning. A better way is to look at them and try every tag separately.

It's important to know how a specific tag works, because you will be creating compositions with it, and adding random words to get the result you want can be really frustrating. Sometimes a tag is different from a normal word, so you can spend a lot of time trying to get something into your image that a simple tag could have given you.

Weights in your tags.

By adding more weight to a tag ((tag:1.x), e.g. 1.1, 1.5, etc.) you can force the AI to add something to your image, but I highly recommend starting every new prompt without weights, because changing them will change your image. If you use many different weights on many tags, it may turn your image into a complete mess, which we don't want. Sure, you can use weights if the AI is not giving you results you like, but first see how the image looks at :1.0 and change this value only if you really have to. Be careful with this and keep the value as low as possible (e.g. (tag:1.1) or (tag:1.2), not (tag:1.6) or (tag:2)); this applies to both the negative and positive prompt.

Personally, in many cases I’m actually decreasing weights rather than increasing them.
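
As a purely illustrative example of the syntax (the tags and values are arbitrary, not a recommendation):

P: 1girl, (blue eyes:1.1), (detailed background:0.9)

N: EasyNegative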

Example (prompt: 1girl in positive and EasyNegative in negative prompt):

How to write prompt?

In my opinion, the best way to start is using a very simple prompt, like:

P: 1girl

N: EasyNegative (this is a negative embedding; if you are using a different one, use its file name)

Then you can add tags to see how they change your image.

Most anime models use Danbooru tags, so you can go to the Danbooru site and search for tags you can use in your image. If you click the "?" next to the tag you want, you can read a short explanation of what it is and see in which situations it's used. If a tag has _ in it, use a space instead; it should work better.

Basically, you need to be precise about what you want in your image. If you want something that is too general, the AI may not be able to generate it. Depending on the model you're using, you may get decent results, but on some models you may end up with weird objects or a deformed image. To prevent this, you should add at least a few pieces of information about what is going on and what the characters (if you want them in your image) look like. Try to think about which elements your image needs. If, for example, you want a person outdoors, you can add details like grass, leaves, and trees, but also blue sky and flying butterflies to get a more natural look.

Don't add too many tags; if you do, it may be harder to get the result you want.

I would think about creating a prompt in this way:

[character description] + [actions the character performs] + [background] + [rest of your prompt]

Example:

1girl, blue eyes, black hair, long hair, upper body, school uniform, standing, hand up, facing viewer, classroom, chalkboard, kanji

or

1girl, white shirt, long sleeves, black jeans, backpack, full body, walking, landscape, mountain, night, from behind, snow

You may need tags for viewing angles and body position/movement to have more control over what is going on in your image. Of course clothes and background are important too, but many models will add decent-looking clothes even if you don't include them in your prompt. For the background you can specify the elements you want in your image; you don't need to write "trees in the background…", adding outdoors, trees, autumn will work instead. Be sure to specify a background, or you may get weird results. If you don't have an idea for it, you can add white background or indoors/outdoors.

Here is a list of tags that may help you:



If the AI is giving you something you don't want in your image, add it to the negative prompt. Some things are obvious: you are getting a cat in your image, so you can add cat to the negative prompt to (hopefully) remove it. Other things may not be so obvious. For example, you want your character to be wearing a skirt and a shirt, but the AI gives you some odd clothes. It may be a good idea to add dress to the negative prompt; the AI may be confused and merge the shirt and skirt together.
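
As a small hypothetical illustration of that skirt/shirt case, using the P/N convention from above:

P: 1girl, white shirt, black skirt, standing, classroom

N: EasyNegative, dress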

Some prompt examples:

Tags: anime, prompt, tutorial, guide
SHA256: D5EA6AB6C0A91AF04A183CCA0CD20454DCE0F59EB22F12C60D617E547938232E

Guide for Images


This is just a basic PDF showing how I make images. Most of it reflects my personal preferences. If you have any other questions, feel free to leave a comment.

Tags: guide
SHA256: C389030866DAB9B3D8E5C89238A1E597C8C6C19F049EC81D42B98F5CF8FB48B3

Cozy Nest


Cozy Nest is a UI extension for Automatic1111's sd-webui.

https://github.com/Nevysha/Cozy-Nest

Installation

Go to the Extensions tab and search for "Cozy-Nest", or install it manually by following these steps:

  1. Open your SD-Webui

  2. Go to the Extensions tab

  3. Add the extension by pasting this URL (this extension is not listed in the public extension index yet): https://github.com/Nevysha/Cozy-Nest

Features

Tags: tool
SHA256: E98D6FC2BC189EC46CE2727B70EE07089554D8E76BEDED8BADC44E9F4FB5B6DA