This page contains all the text embeddings recommended for use with the AnimeIllustDiffusion model [1]. You can view information about the text embeddings from the version description.
You should place the downloaded negative text embedding file in the embeddings folder of your Stable Diffusion directory. After that, you only need to enter "badv4" in the field for negative prompts.
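As a concrete sketch, the installation step can be scripted. This is a minimal example, assuming a default layout where the embeddings folder sits directly inside your Stable Diffusion directory; the function name is my own, not part of any tool:

```python
import shutil
from pathlib import Path

def install_embedding(downloaded_file: str, sd_dir: str) -> Path:
    """Copy a downloaded embedding file (e.g. badv4.pt) into the
    embeddings folder of a Stable Diffusion install."""
    target_dir = Path(sd_dir) / "embeddings"
    target_dir.mkdir(parents=True, exist_ok=True)  # create folder if missing
    dest = target_dir / Path(downloaded_file).name
    shutil.copy2(downloaded_file, dest)  # preserve file metadata
    return dest
```

After copying, typing the embedding's filename (e.g. badv4) in the negative prompt field is all that's needed.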
Each text embedding is described by the following parameters:
Positive/Negative Text Embedding: If a text embedding is a positive text embedding, it should be used in positive prompts. Conversely, if a text embedding is a negative text embedding, it should be used in negative prompts.
Scope: The main content of the image that a text embedding will primarily affect.
Vector Count: The number of tokens that the text embedding occupies. Generally, any part of your prompt beyond 75 tokens will be ignored. Therefore, the fewer vectors a text embedding uses, the more space you will have left for the rest of your prompt.
Vector Strength: The magnitude of a token's vector. The higher the vector strength, the stronger the effect. The vector strength of an ordinary token usually falls within [-0.05, 0.05]. Note that using parentheses to emphasize a prompt is not equivalent to increasing vector strength.
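The vector-count tradeoff above can be made concrete with a small sketch, assuming the common 75-token chunk limit; the token counts passed in are illustrative, not measured:

```python
def remaining_tokens(prompt_tokens: int, embedding_vectors: int,
                     limit: int = 75) -> int:
    """Tokens left for ordinary prompt words after adding an
    embedding that occupies `embedding_vectors` tokens."""
    used = prompt_tokens + embedding_vectors
    return max(limit - used, 0)

# A 16-vector embedding leaves less room than a 4-vector one:
remaining_tokens(40, 16)  # 19
remaining_tokens(40, 4)   # 31
```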
[1] AnimeIllustDiffusion Model Webpage: https://civitai.com/models/16828/animeillustdiffusion
aid29
Textual Inversion embedding I did of actor Florence Pugh. I used about 26 images here.
If there is a celebrity/person you'd like to see, let me know.
florence2
Textual Inversion embedding I did of actress Margot Robbie. This was a test using a low number of images, about 18 here. I think it turned out really well, since the quality of the images was pretty high.
If there is a celebrity/person you'd like to see, let me know.
robbie3
My first go at a realistic textual inversion.
Based on 72 images and trained for 1500 steps.
KokoaAisu
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Update: 26 May 2023
Continued the TI training. Here’s version 2.80 (i.e. 8000 steps).
Depending on where you put mrblng02-8000 in your prompt, it can be over- or underwhelming. Despite the training images all being landscapes, this version has got better at objects/still lifes.
I had to come up with some new prompting to get the kind of looks I wanted. Mostly I’m trying to use the TI to add detail/pattern/texture of a type I like, and at step 8000 I feel it can provide balanced patterns on things in images, as well as provide a patterned structure.
It’s a strange journey with Stable Diffusion. I thought you had to use AND to make TIs and LORAs behave structurally, but I’m seeing it sometimes without the difficulty of AND... in things like the placement of trees or feathers or whatever, as well as the actual patterns on those things.
The journey continues: I’ve trained this TI up to 12,000 steps, but haven’t released it because I’m struggling to get usable prompts now. Either I get pretty marbled-paper-pattern squares, or no apparent effect at all. It may be that 8000 steps is the limit for the 86 landscape training images I’ve used. I still intend to add objects/still lifes to the trainset and then regenerate the TI to see what difference that makes.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
This is a TI embedding that puts marbled paper patterns into your image generations. They tend to be illustrative rather than photorealistic - I assume that’s because the source material is illustrative?
Wikipedia Paper Marbling:
https://en.wikipedia.org/wiki/Paper_marbling
When I was young I used to have great fun making marbled paper out of all kinds of stuff. I’d let it dry and then draw in extra lines to bring out stuff that I could almost see. Like when you can imagine scenes in clouds... stalking tigers or leaping fish... or spaceships!
I wondered if Stable Diffusion could do something similar so I started trying to make this TI months ago... but gave up due to ignorance and frustration.
Then konyconi made the MarblingAI LORA and I got enthusiastic again:
https://civitai.com/models/55080/marblingai
Thanks to @konyconi for all the marvellous LORAs!
This TI is marbling, but different from konyconi’s model.
I’ve uploaded two TIs; from step 1500 and step 4000 of the training:
mrblng02-1500
mrblng02-4000
They’re v2.15 and v2.40. (Version 1 was too inconsistent to release.)
V2.15 is quite simple and punchy. V2.40 is often subtler, but can produce more detailed pictures. I was expecting the marbling to get overwhelming the higher I went with the step-count, but that’s not what happened. Go figure.
The showcases are landscapes, since I was trying to get a particular style, but I’ve added a subsidiary gallery for each version to show that at least some objects are possible.
This TI is a bit niche, but I hope the showcase images will spark interest for somebody.
Training was mostly on the base Stable Diffusion v1-5-pruned.ckpt [e1441589a6], but also with avalonTruvision_v31.safetensors [f17ac2a0b7]. I think. Lack of sleep made me lose track a bit.
Most of the showcase gens are done with Avalon TRUvision. This is my go-to model for just about everything. It’s excellent at photoreal people, but it is so much richer than that. Highly recommended, and thanks to @avalon for producing it.
https://civitai.com/models/13020
Some models need a higher weighting, e.g. for ReV Animated I sometimes had to use (mrblng02-1500:1.1) or even higher if the effect was too subtle. Thanks to @s6yx by the way - ReV Animated is great to play with for all manner of stuff.
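If you build prompts programmatically, the weighting syntax above can be emitted with a tiny helper. This is a sketch of the Automatic1111 `(token:weight)` convention; omitting the parentheses at weight 1.0 is my own choice, not something the author specifies:

```python
def weighted(token: str, weight: float = 1.0) -> str:
    """Format a prompt token with A1111-style attention weight."""
    if weight == 1.0:
        return token  # default weight needs no emphasis markup
    return f"({token}:{weight:.2f})"

prompt = ", ".join([weighted("mrblng02-1500", 1.1), "landscape", "sunrise"])
# "(mrblng02-1500:1.10), landscape, sunrise"
```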
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
The training here is a 2-step process:
(1) Found/made some hi-res scans of pre-20th-century marbled endpapers and sampled 512px squares of the different types of shapes. Trained a few TIs on those. That gave me a TI that mostly wanted to draw more marbled patterns, but they looked authentic.
(2) Used that TI to generate hundreds of images with txt2img and img2img across a wide range of prompts. Discarded all the pure patterns and selected good/interesting pictures from the rest. Trained a new TI on those synthetic images.
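The 512px sampling in step (1) can be partly automated. Here is a hedged sketch that only computes non-overlapping crop boxes for a scan of a given size; the tile size and scan dimensions are assumptions, and no image library is involved (the boxes could be fed to any cropping tool):

```python
def crop_boxes(width: int, height: int, tile: int = 512):
    """Yield (left, upper, right, lower) boxes for non-overlapping
    tile x tile crops that fit fully inside a width x height scan."""
    for top in range(0, height - tile + 1, tile):
        for left in range(0, width - tile + 1, tile):
            yield (left, top, left + tile, top + tile)

boxes = list(crop_boxes(1600, 1100))
# 3 columns x 2 rows = 6 full 512px tiles
```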
I’ve only used landscape/terrain images in step (2). That means that this MarblingTI “likes” to produce landscapes rather than objects. It will often paint landscapes onto objects rather than just use marbling patterns. It’s a feature, not a bug :-)
However, much to my surprise (as is every bleeping thing with Stable Diffusion), it can do some objects quite well. And a lot of objects very badly... in the course of training, it seems to have “forgotten” a bunch of words.
I’m intending to create a set of object-related images using step (1) above, then see what happens in step (2). Might take a while.
Hopefully I can then combine both datasets to train a 3rd TI that is “happy” with both landscapes and objects. We’ll see. Got to find the time to do all this!
mrblng02-8000
Isabella Valentine (イザベラ・バレンタイン, Izabera Barentain), commonly called Ivy (アイヴィー, Aivī), is a fictional character in the Soulcalibur series of video games. Created by Namco's Project Soul division, she first appeared in the original Soulcalibur and its subsequent sequels, later appearing in various merchandise related to the series. She was voiced in Japanese by Yumi Tōma between Soulcalibur and Soulcalibur III, Kanako Tōjō between Soulcalibur Legends and Soulcalibur: Broken Destiny, and Miyuki Sawashiro in Soulcalibur V, and Soulcalibur VI; in English, she was voiced by Renee Hewitt in Soulcalibur II and Lani Minella for the remainder of the series.
Adding "(dark green eyes), (silver hair), (purple outfit), (golden shoulder pad)" to the prompt seems to work fine for getting as close to the character as possible.
If you like my work and you feel like it, you can buy me a Ko-fi!
1vyv4l3nt1n3
Raquel Pomplun is an actress, broadcaster, singer, dancer, Playmate of the Month 04/2012 and Playmate of the Year 2013.
Trained on the 1.5 base model, but the examples are on Analog Madness and Consistent Factor.
Skews NSFW, but there is plenty of training in there to make SFW work as well.
rqpom
A textual inversion embedding for generating 80s style erotic photos of nude women.
Add the listed keyword to your prompt depending on the version you use. Add keyword to start of prompt for strong effect. Add to middle or end of prompt for lessened effect.
vintbux
Bobby Cipher is a person who doesn't exist. You can use him if you need a consistent character. Please have a look at other Nobody models for contributions by other folks including those created with the LastName and NBDY tags.
Follow me if you'd like to be notified of future models!
Turns out making a male character work is a bit more challenging than a female; all of the models I tried with this embedding seem to be really female-centric. Not surprising of course given how these models are mostly used 😊. But it does mean that randomly you'll get a female Bobby Cipher--no kidding--especially if your prompt even hints at a female subject (clothes, setting, etc). Prompting with “BobbyCipher, a man” can also help.
In my hours of trying to get good images with Bobby, it appears that a lot of men trained in the base models are classic squared-jawed (and sometimes muscly) actors or models. Not that you can't prompt away from those obviously. You also might need to prompt for color or put black and white in your negative prompt because Bobby looks a lot like classic actors from the 40s-60s.
I'm still exploring males, and have another embedding I'm working on, so stay tuned as I get better at doing them.
To install, download the .bin file and place it in the stable-diffusion-webui/embeddings/ folder. Then you'll either need to restart WebUI or tap the Refresh button in the Textual Inversions tab in the Extra Networks tab.
The trigger word for Automatic1111's webui is BobbyCipher, which you can use in your prompt. (It should be <bobbycipher> for InvokeAI.) See the gallery below for some examples.
My main go-to photorealistic model is Avalon TRUVision V2 but I also might use RealisticVision 2.0, Protogen Infinity, or HARDBlend
Many of the images in the gallery use TRUVision for the initial txt2img and then RealisticVision/Protogen for a quick img2img pass with denoising set to .10 - .15 to add some additional skin texture. Unfortunately that first generation information isn't embedded into the PNG.
I always use the vae-ft-mse-840000 as my VAE. You can find it at HuggingFace. You can enable the VAE dropdown menu in WebUI by going to Settings -> User Interface -> Quicksettings list and adding "sd_vae"
When I use Hires Fix to upscale I usually use 4x-UltraSharp for the Upscaler and enable Tiled VAE which is part of the Multidiffusion Upscaler package
If I upscale even further in img2img, I'll use a combination of ControlNet 1.1's tile_resample/tile model and Ultimate SD Upscaler.
BobbyCipher
Some clothing styles
An easier way to summon clothing styles
This version will be continuously updated
To avoid interfering too much with the generalization of generated images, the vector count is 1. If you need to strengthen the weight, you can write it like this: (heibai:1.30), 1girl, lace,
heibai: black-and-white color scheme
longpao: embroidery pattern
neiyi: underwear style
nvdi: empress
suijing: crystal
suijing
Claire Sinclair is a model, hotel entrepreneur, Las Vegas headline act, Playmate of the Month October 2010, and Playmate of the Year 2011.
This was trained on the 1.5 base model, but the examples are from Analog Madness and Consistent Factor.
As a Playmate it skews heavily towards nsfw and pin up style, but there is enough training to allow for sfw pictures. This also skews heavily toward retro 50's style, as that is Ms. Sinclair's motif.
clsin
In this first version, I attempted to recreate the mesmerizing effect of placing ✨glass✨ in front of your camera lens. What it does is split the light, resulting in a beautifully blurry image. 📷✨ The lights become ethereal, and certain parts or even the entire subject appear multiplied, adding an element of enchantment. ✨🌟
We've become accustomed to seeing highly detailed images here, but there's a whole different world in film photography, where room is left for delightful "mistakes." 😄🎞️ I'll continue sharing more TI models like this one, so stay tuned and follow me to stay updated! 📸✨🔥
✨ Works best when accompanied by ChilloutMix and with women, and don't forget to add token "perfect body" for better results. Play with weights from 1.0 - 1.3
kaleidcp-3450
A commission of @Zorglub.
Archie Panjabi is a British/Indian actress, mainly known for her roles in various TV shows, such as Life on Mars and The Good Wife.
1020-step TI trained on a dataset of 18 images with my usual settings.
Appreciate my work? My TIs are free, but you can always buy me a coffee. :)
Curious about my work process? I have summarized it here.
You're obviously free to experiment, but bear in mind that my TIs are trained with a more or less fixed phrasing, that normally starts with:
"photo of EMBEDDING_NAME, a woman"
So I recommend always starting your prompt like that and then building the rest of the prompt from there. For instance, "photo of beautiful (arch1ep4njabi:0.99), a woman as a movie star, hair upsweep updo, sweater off-shoulders, trousers, at a movie premiere gala, dark moody ambience (masterpiece:1.2) (photorealistic:1.2) (bokeh) (best quality) (detailed skin:1.2) (intricate details) (nighttime) (8k) (HDR) (cinematic lighting) (sharp focus), (looking at the camera:1.1), (closeup portrait:1.1)"
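The recommended phrasing can be captured in a small template helper. This is a sketch; the function name is my own, and the default weight of 0.99 simply mirrors the example prompt above:

```python
def ti_prompt(embedding: str, extras: str = "", weight: float = 0.99) -> str:
    """Build a prompt in the recommended fixed phrasing:
    'photo of (EMBEDDING_NAME:weight), a woman', plus optional extras."""
    base = f"photo of ({embedding}:{weight}), a woman"
    return f"{base} {extras}".strip()

ti_prompt("arch1ep4njabi", "as a movie star, at a movie premiere gala")
# "photo of (arch1ep4njabi:0.99), a woman as a movie star, at a movie premiere gala"
```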
arch1ep4njabi
Vanessa Kirby is a British actress mostly known for her appearances in Mission Impossible: Fallout and The Crown.
Trigger word is vanessakirby_ti
v1 was trained with 18 images for 4100 steps
vanessakirby_ti
Over the last 6 months, I've created quite the collection of helper and tool embeds for doing things like fixing faces, pushing realism, creating anime scenes, making subjects smile, or cry, or face the camera, whatever. Rather than creating separate listings for the literal dozens of tools I have, I've decided instead to just create a "master" listing for all of my tools that I can then update whenever I have new tools to add.
Unlike some of my more grand embeddings out there, many of these serve simple purposes and are of limited effect by themselves. Some are little more than funny experiments that turned into something useful, while others were created as helpers as part of a larger project.
I will continue to update this listing as I create more tools. I'm not planning on exposing my ENTIRE toolbox, just the ones I use and share with friends most frequently.
Don't expect a lot of demo images or a lot of fanfare on this listing. This is my toolkit. If you want to use my tools, please, be my guest, but I make no guarantee of results - Sometimes they work great, sometimes they don't, the fun part is finding the best mix to get the results you want!
As with all of my other embeds, my tools are designed to be mixed with my other 1.5 compatible embeds available on my profile page. Mix and match for some really interesting outputs, and follow me here on Civit to be notified when I release new embeddings!
--------
To use these embeddings in AUTOMATIC1111, copy them to your embeddings folder in your stable-diffusion-webui directory. No need to restart, just invoke the embedding from your prompt. Make sure you are using a 1.5 version compatible model or you will get errors!
--------
No512
Jennifer Walcott (Archuleta) is an American Actress, Fitness Model and Playmate of the Month 09/2001.
Trained on 1.5 base model but examples are from Analog Madness and Consistent Factor.
As you might suspect, since she is a Playmate this will skew towards NSFW, but there are also clothed pictures in the training set, so it is not difficult to produce SFW images too.
Best results are at weights of 0.8 to 1. Since she is sometimes a redhead and sometimes a strawberry blonde, specifying redhead or blonde should help it lean one way or the other.
Any Feedback or suggestions for a PMOM to do next is greatly welcomed
jnwct
This emb is similar to pureerosface: it gives a thin body shape and an Asian face, and can also improve faces on non-Asian models. Of course, you can also mix it freely with others.
The sample images show several Asian models and two non-Asian models, with examples of the emb alone, the emb mixed with a LoRA, and multiple mixed together.
jinyu
This ti was requested by @dreday28
Jackie Guerrido is a well-known Puerto Rican television presenter and meteorologist. She gained fame through her work on Univision's morning show "Despierta América" and as a weather reporter on "Primer Impacto." Guerrido is recognized for her charismatic on-screen presence and her expertise in delivering weather forecasts with clarity and professionalism. She has become a beloved figure in the Hispanic media landscape and has garnered a significant following throughout her career.
Feel free to leave me a tip/buy me a coffee if you like what I am doing :)
J4cki3Gu3rrid001
ronaldo
Re-upload of the Season helper, because Civitai took it down.
kkw-Autumn
A fast way to trigger a concept.
Just use it at the top of your prompt.
This is a collection, so I upload some related TIs here.
The sample images show how each works as standard with no negative prompt, as photo-real, and with a subject.
I usually use them for backgrounds, but maybe you can find your own way to use them.
Fire--> kkw-el-fire
Water--> kkw-el-water
Grass--> kkw-el-grass
Ice--> kkw-el-ice
Rock--> kkw-el-rock
Sand--> kkw-el-sand
Lightning--> kkw-el-Lightning
kkw-el-Lightning
nessa