Gorillaz-inspired Stable Diffusion model.
Though trained on images of Gorillaz from the old days, it works surprisingly well when asked to generate 3D examples.
Trigger-word: newgorillaz / style newgorillaz / in the style of newgorillaz
For more experiments, project files, and hopefully cool tips/tutorials: https://linktr.ee/uisato
For commercial uses, contact me.
newgorillaz
A photorealistic model I've been using for a little bit. I hate smooth airbrushed skin so I refined this model to be very realistic with great skin texture and details. There are 2 models, a standard and an ultra.
The Ultra model is nearly three times as large, but it's not three times as good. It is better, though, with broader knowledge; you can even generate at 1024x1024 in many cases. The config file is required for the Ultra model, so be sure to download that too.
So I recommend using the normal version unless you have the need, and the VRAM, to run the Ultra model. If you'd like to run the Ultra model with modest VRAM, try --medvram or --lowvram in your auto1111 startup script.
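If you are not sure where those flags go, here is a minimal sketch of the relevant line in the auto1111 launch script (webui-user.bat on Windows, webui-user.sh on Linux); pick whichever flag your VRAM needs:

```
REM webui-user.bat (Windows): add the flag to the command-line arguments
set COMMANDLINE_ARGS=--medvram

# webui-user.sh (Linux/macOS): the equivalent line
export COMMANDLINE_ARGS="--medvram"
```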
A side-by-side comparison. Link to see full-size samples with metadata. Like I said, the Ultra model isn't three times as good, but it is better across many subjects, not all. The pruned version of the Ultra model is debatable; it might be better to use the normal V1 model in some cases. But all of them have been provided so you can make the choice yourself. As for me, I use the Ultra model in every case.
Do you have requests? I've been putting in many more hours lately with this. That's my problem, not yours. But if you'd like to tip me, buy me a beer. Beer encourages me to ignore work and make AI models instead. Tip and make a request. I'll give it a shot if I can. Here at Ko-Fi
Master Ch
I don't speak English, so I'm translating with DeepL. Let me know if the English is weird.
I mixed my favorite NostalgiaMix with the 2.5D model DucHaiten-StyleLikeMe.
The result is a realistic yet painterly model that retains the nostalgic feel and strong backgrounds of NostalgiaMix.
I think it turned out to be a pretty good model.
It's hard to think of a name when uploading.
Good luck.
Example prompt and settings:
best quality, detailed background, girl, euro_street, random_wear,
Negative prompt: EasyNegative,lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name ,verybadimagenegative_v1.2-6400
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8.5, Seed: 3126894931, Size: 576x768, Model hash: 3622db1a31, Model: saiena, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires steps: 37, Hires upscaler: R-ESRGAN 4x+ Anime6B
Introducing my versatile photorealistic model - the result of a rigorous testing process that blends various models to achieve the desired output. While I cannot recall all of the individual components used in its creation, I am immensely satisfied with the end result. This model incorporates several custom elements, adding an extra layer of uniqueness to its output.
One of the model's key strengths lies in its ability to effectively process textual inversions and LORA, providing accurate and detailed outputs. Additionally, the model requires minimal prompts, making it incredibly user-friendly and accessible.
VAE recommended: sd-vae-ft-mse-original.
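If you generate outside the WebUI, here is a minimal diffusers sketch of pairing a checkpoint with the recommended sd-vae-ft-mse VAE; the checkpoint filename and prompts are placeholders, not part of this release:

```python
# Minimal sketch: attach the recommended sd-vae-ft-mse VAE to a local checkpoint.
# "model.safetensors" and the prompts below are placeholders.
import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_single_file("model.safetensors", torch_dtype=torch.float16)
pipe.vae = vae
pipe = pipe.to("cuda")

image = pipe(
    "photorealistic portrait, detailed skin texture",
    negative_prompt="lowres, blurry, jpeg artifacts",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("out.png")
```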
日常产出 (daily output)
I really couldn't think of a name, and the printer happened to run out of ink just now, so that's it. ღ( ´・ᴗ・` ) Finger heart.
Mix of Cartoonish, DosMix, ReV Animated and Level4.
Very versatile, can do all sorts of different generations, not just cute girls. But it does cute girls exceptionally well.
Use vae-ft-ema-560000-ema-pruned as the VAE. Clip skip 1 for v1.0; clip skip 2 for v2.0.
For v2.5: use vae-ft-mse-840000-ema-pruned as the VAE, with clip skip 2.
Negative embeddings used in almost all images EXCEPT v2.5: easynegative, bad-hands-5
Also check out my other mix, CarDos Anime!
Realistic anime girl illustrations inspired by early 20th-century impressionism, with pastel-like colors.
It defaults to generating female characters with slim figures and can generate some nsfw content, with a focus on nude and semi-nude. (Use tags such as "mature" and "big breast" to get less slim female characters.)
A lower CFG scale will give more creative illustrations with oil-paint like depictions.
When you use Hires.Fix, you'll get less saturated color and less texture.
~~~~~~~~~~~~~~~~
Use of ADetailer (After Detailer) is recommended for better faces.
To install ADetailer in automatic1111: Extensions -> Install from URL -> paste the GitHub link.
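Alternatively (my own note, not the author's), A1111 extensions can also be installed from a terminal by cloning into the extensions folder; the ADetailer repository is Bing-su/adetailer on GitHub:

```
cd stable-diffusion-webui/extensions
git clone https://github.com/Bing-su/adetailer
# restart the WebUI afterwards so the extension is picked up
```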
~~~~~~~~~~~~~~~~
Settings I like using:
Sampling Method: DPM++ 2M Karras
Sample Steps: 25
CFG Scale: 3.5
ADetailer Model: face_yolov8m
ADetailer Mask Blur: 10~15
~~~~~~~~~~~~~~~~
Recommended Resolution:
832x1248 (mixed)
768x1152 (ok)
640x960 (good)
512x768 (great)
~~~~~~~~~~~~~~~~
Merged Models include:
BeenYou - B17 | Stable Diffusion Checkpoint | Civitai
ReV Animated - v1.2.2 | Stable Diffusion Checkpoint | Civitai
Hassaku (hentai model) - V1.1 | Stable Diffusion Checkpoint | Civitai
An iteration product, a merge.
《游戏图标研究所》 (Game Icon Research Institute)
This is a large test model for game icons, currently in beta. You are welcome to join QQ group 489141941 to discuss and leave comments, which helps with model development. This model is mainly suited to generating game icons, and there is no need to add extra tags when using it. This version is mass-produced from the Icon Academy model, and everyone is welcome to use it and leave comments so it can be optimized further in the future.
1. The Icon 2.0 (official) version will be released on May 1.
2. It includes French-style, anime, and Western cartoon styles.
3. It fixes the poor quality and lack of controllability of version 1.0 and solves the problem of generating strange images.
4. After the release, testing will start in the QQ group.
game icon
This model was created from 500 pictures of the Japanese idol Mashu Yuino.
You can find her picture on Twitter: https://twitter.com/fl75a5/media
mashu_yuino
This is a 2D-based anime model (not using a 2.5D approach). It aims for both simple, clear drawing and sufficient detail.
Merged models are as follows.
Counterfeit v3
Anything v5
Falkons v1.2 created by Falcons
Contaby v2 created by PlayaEmblem
Koji
Both SFW and NSFW are possible. A VAE is baked in, so none is needed, but you can try others.
It's a merged model that focuses on the person rather than the background.
I tried to simplify the lines as much as possible while maintaining a clear outline.
If you don't add 1girl, you'll only get the background, so please add 1girl.
(I'm also making models for nsfw, so please wait a little bit :)
※ 2023/05/04
Added DefbeaCF3AR2D_mix.
Applied a small amount of the Animelike_2D model to the face.
I think the highlighting of the eyes is slightly better.
It is not known where Animelike_2D is currently located.
model_A | model_B | model_O | base_alpha | weight_name | weight_values
DefbeaCF3AR_mix_v1 | Animelike_2D_V2_Pruned_fp16 | DefbeaCF3AR2D_mix | 0 | 0,0,0,0.2,0.4,0,0,0,0,0,0,0,0,0,0,0,0,0,0.2,0.4,0,0.3,0,0,0
========
This is an anime-style model.
I originally merged it for personal nsfw use, but the non-nsfw results were also good, so I'd like to share it.
I like how it renders facial features and detailed backgrounds.
The following models were merged.
Counterfeit-V3.0 - v3.0 | Stable Diffusion Checkpoint | Civitai
AniReality-Mix - v1 | Stable Diffusion Checkpoint | Civitai
(1~6) Defacta - Defacta 6 | Stable Diffusion Checkpoint | Civitai
BRA(Beautiful Realistic Asians) V4 - v4.0 | Stable Diffusion Checkpoint | Civitai
Merge ratios are based on OrangeMixs.
WarriorMama777/OrangeMixs · Hugging Face
1.
model_A | model_B | model_O | base_alpha | weight_name | weight_values
16Defacta_defacta6 | braBeautifulRealistic_v40 | temp1 | 0 | 1,0.9,0.7,0.5,0.3,0.1,1,1,1,1,1,1,0,0,0,0,0,0,0,0.1,0.3,0.5,0.7,0.9,1
2.
Counterfeit-V3.0 * 0.5 + AniReality-Mix * 0.5 = temp2
3.
model_A | model_B | model_O | base_alpha | weight_name | weight_values
temp2 | temp1 | DefbeaCF3AR_mix_v1 | 0 | 0,0,0,0,0,0,0,0,0,0.3,0.1,0.3,0.3,0.3,0.2,0.1,0,0,0,0.3,0.3,0.2,0.3,0.4,0.5
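If you want to reproduce steps like these outside the WebUI merge extensions, here is a minimal Python sketch of a block-weighted (MBW-style) merge in the recipe format above; the file names are placeholders, and real merge tools handle more edge cases (VAE keys, dtypes, missing tensors):

```python
# Minimal MBW-style merge sketch for recipes of the form
# "model_A | model_B | model_O | base_alpha | weight_values".
# File names below are placeholders.
import re
from safetensors.torch import load_file, save_file

def block_alpha(key, weights, base_alpha):
    """Pick the mix ratio for one tensor: the 25 weights map to IN00-IN11,
    M00, OUT00-OUT11; everything else (text encoder, etc.) uses base_alpha."""
    m = re.search(r"model\.diffusion_model\.input_blocks\.(\d+)\.", key)
    if m:
        return weights[int(m.group(1))]            # IN00..IN11 -> indices 0..11
    if "model.diffusion_model.middle_block." in key:
        return weights[12]                         # M00
    m = re.search(r"model\.diffusion_model\.output_blocks\.(\d+)\.", key)
    if m:
        return weights[13 + int(m.group(1))]       # OUT00..OUT11 -> indices 13..24
    return base_alpha

def mbw_merge(path_a, path_b, out_path, base_alpha, weights):
    a, b = load_file(path_a), load_file(path_b)
    merged = {}
    for key, tensor_a in a.items():
        if key in b and b[key].shape == tensor_a.shape:
            alpha = block_alpha(key, weights, base_alpha)
            merged[key] = ((1.0 - alpha) * tensor_a.float() + alpha * b[key].float()).half()
        else:
            merged[key] = tensor_a                 # keep model_A's tensor if no counterpart
    save_file(merged, out_path)

# Step 1 above, with placeholder file names:
w = [1, 0.9, 0.7, 0.5, 0.3, 0.1, 1, 1, 1, 1, 1, 1, 0,
     0, 0, 0, 0, 0, 0, 0.1, 0.3, 0.5, 0.7, 0.9, 1]
mbw_merge("16Defacta_defacta6.safetensors", "braBeautifulRealistic_v40.safetensors",
          "temp1.safetensors", base_alpha=0.0, weights=w)
```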
This is a merge of Protogen + Analog Madness + OpenJourney for creating photorealistic images. It can definitely get NSFW; if that's your goal, I recommend inpainting over any clothing with inpaint set to fill and a simple prompt like "nude woman". Once the clothes are gone, you can inpaint over any wonky anatomy with either HARDpaint or f2222. If there are any color issues, just take the full body to img2img with a 0.2-0.4 denoising strength, and it should smooth everything out.
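As a rough equivalent of that final img2img smoothing pass outside the WebUI (a sketch only; the checkpoint and image names are placeholders), diffusers' img2img pipeline exposes the denoising strength as strength:

```python
# Sketch of the img2img color-smoothing pass described above
# (strength ~= the 0.2-0.4 denoising range). File names are placeholders.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "merged_model.safetensors", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("full_body.png")
result = pipe(
    "full-body photo of a woman, consistent skin tones",
    image=init_image,
    strength=0.3,          # within the 0.2-0.4 range recommended above
    guidance_scale=7.0,
).images[0]
result.save("smoothed.png")
```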
Custom model merge from some realistic and anime models
This model comes pre-baked with vae-ft-mse-840000-ema-pruned so no need to use VAE's.
Merged using weight diff:
It is highly suggested to use ADetailer with this model.
Join our discord for more content: discord.gg/dreamlabs
SquiggyMix v1 has not yet been explored to its full potential, but it definitely makes fairly realistic, pretty women.
Use these keywords DEPTH OF FIELD, FILM, PORTRAIT, REALISTIC
for multiple girls use https://civitai.com/models/51136/multiple-girls-group?modelVersionId=55644
depth of field
film
portrait
realistic
Please 🧡 this model by reviewing it.
🖼️ Generate online: Babes 1.1, Babes 2.0.
❤️ Support Babes ... 🫶 Discord Server
👀 See also: 💋 Babes Kissable Lips 💋 and 🍒 Sexy Toons feat. Pipa 🍒and ❤️ Babes 1.1 ❤️.
ℹ️ This model was inspired by Babes 1.1.
Babes 2.0 is based on new and improved training and mixing.
Trained on 1600 images from a few styles (see trigger words), with an enhanced realistic style, in 4 cycles of training. Trained at 576px and 960px, with 80+ hours of successful training and countless hours of failed training 🥲.
Mix ratio: 25% Realistic, 10% Spicy, 14% Stylistic, 30% Anime, 9.5% Furry, the rest is core training that was reinforced to 96% of the original training.
📌 Are your results not 100% identical to any specific picture?
Make sure to use Hires-fix with, for example, SwinIR_4x / 4x-UltraSharp / 4x-AnimeSharp / RealESRGAN_x4plus_anime_6B (Upscaler Download) and "Upscale latent space image when doing hires. fix" enabled; this is what I usually use for hires-fix.
Use VAE: vae-ft-mse-840000-ema-pruned for better colors. Download it into "stable-diffusion-webui/models/VAE" folder. Select it in the settings.
I use xformers - it's a small performance improvement that might change the results. It is not a must to have and can be hard to install. Can be enabled with a command argument "--xformers" when launching WebUI.
WebUI is updated constantly with some changes that influence image generation. Many times technological progress is prioritized over backward compatibility.
Hardware differences may influence changes. I've heard that a bunch of people tested the same prompt with the same settings, and the results weren't identical.
I have seen on my own system that running as part of a batch may change the results a little bit.
I suspect there are hidden variables inside modules we can't change that produce slightly different results due to internal state changes.
Any change in image dimension, steps, sampler, prompt, and many other things, can cause small or huge differences in results.
📌 Do you really want to get the exact result from the image? There are a few things that you can do, and possibly get even better results.
Make single-word changes to the prompt/negative prompt, test, and push it slowly in your desired direction.
If the image has too much or not enough of something, try using emphasis. For example, too glossy? Use "(glossy:0.8)", or less, or remove it from the prompt, or add it to the negative. Want more? Use values 1.1-1.4, then add descriptors in the same direction.
Use variations - use the same seed, and to the right of the seed check "Extra". Set "Variation strength" to a low value of 0.05, generate a few images, and watch how big the changes are. Increase if you want more changes, and reduce if you want fewer changes. That way you can generate a huge amount of images that are very similar to the original, but some of them will be even better.
📌 Recommendations to improve your results:
Use a VAE for better colors and details. You can use the VAE that comes with the model, or download "vae-ft-mse-840000-ema-pruned" from https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main (ckpt or safetensors file) into the "stable-diffusion-webui/models/VAE" folder. In the settings find "SD VAE", refresh it, and select "vae-ft-mse-840000-ema-pruned" (or the version included with the model). Click the "Apply settings" button at the top. The VAE that comes with the model is "vae-ft-mse-840000-ema-pruned", so you don't need both; the one you downloaded will work very well with most other models too.
Use hires-fix, SwinIR_4x / 4x-UltraSharp / 4x-AnimeSharp / RealESRGAN_x4plus_anime_6B (Upscaler Download), first pass around 512x512, second above 960x960, and keep the ratio between the two passes the same if possible.
Use negatives, but not too much. Add them when you see something you don't like.
Use CFG 7.5 or lower; with heavy prompts that are long and use a lot of emphasis, you can go as low as 3.5. Generally, try to minimize the use of emphasis; you can just put the more important things at the beginning of the prompt. If everything is important, don't use emphasis at all.
Make changes cautiously: changes made at the beginning of the prompt have more influence, so every concept there can shift your results drastically.
Read and use the manual (https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features).
Learn from others, copy prompts from images that look good, and play with them.
DPM++ 2M Karras is the sampler of choice for many people, including me. 40 steps are plenty, and I usually use 20.
Discord server for help, sharing, show-offs, experiments, and challenges.
verism style
samdoesart style
thepit style
owler style
cherrmous style
arosen style
uodenim style
stanleylau style
puffy lips
hoop earrings
thick eyebrows
by fitCorder
This checkpoint is a fine-tune focused on female fitness and bodybuilding. Keywords are not required. It works well with the LoRAs I've uploaded.
2.5D anime style, very good at generating rich details in human portraits, also quite realistic.
It can be used to generate realistic photos together with chilloutmix (details will be richer).
This model is not only capable of generating NSFW portraits; it is good at generating any kind of content through prompting. I've been using this model for a month and it feels good, so I'm sharing it now.
Thank you for using it. This is my first model. You are welcome to provide evaluations, useful criticism, or feedback.
Hellmix, but flatter!
The same smell of sulfur, just filtered through fine japanese soil instead.
Absolutely, positively use the anything-v4.0 VAE! I'll even include it, in case you don't have it!
Model merging has many costs besides electricity. With your support, we can continue to develop these models.
This merge focuses on detailed rendering of backgrounds and fine details.
The output of this BreakDomain series will be closer to illustration or animation than BreakDro.
No baked VAE.
The recommended VAE is "vae-ft-mse-840000-ema-pruned.ckpt".
If you try it and make something good, I'd be happy if you uploaded it here!
The merge sources are listed in the details of each version.
I have gone back to the drawing board and redone the merge to remove the Creator Credit Required license from BreakDro. This is a new model, finished to the same level as BreakDro.
Future updates will be made to this model.
All rights and credit to: NoCrypt https://huggingface.co/NoCrypt/SomethingV2_2
Welcome to SomethingV2.2 - an improved anime latent diffusion model from SomethingV2
A lot of things have been discovered lately, such as a way to merge models automatically using MBW, offset noise to get much darker results, and even VAE tuning. This model is intended to use all of those features as improvements; here are some of the improvements that have been made:
VAE: None (Baked in model, blessed2)
Clip Skip: 2
Sampler: DPM++ 2M Karras
CFG Scale: 7 ± 5
Recommended Positive Prompt: masterpiece, best quality, negative space, (bioluminescence:1.2), darkness, dark background
Recommended Negative Prompt: EasyNegative
For better results, using hires fix is a must.
Hires upscaler: Latent (any variant, such as nearest-exact)
Due to SD-Silicon's terms of use, I must specify how the model was made:
Model A | Model B | Interpolation Method | Weight | Name
dpepmkmp | silicon29-dark | MBW | Reverse Cosine | dpepsili
somethingV2_1 | dpepsili | MBW | Cosine | SomethingV2_2 raw
SomethingV2_2 raw | Blessed2 VAE | Bake VAE | - | SomethingV2_2
Since this model is based on SomethingV2 and there aren't THAT many improvements in some conditions, calling it V4 just isn't right at the moment 😅
(bioluminescence:1.2)
dark background
darkness
negative space
best quality
masterpiece
This model was trained on 162 images from ddari; the source checkpoint is Midnight Maple (not v2). The model is pretty mid fr fr no cap on god, like some of the gens be straight bussin while others be just ok.
In all seriousness it's a pretty ok model.
If you like you can buy me a coffee https://www.buymeacoffee.com/Norian01
The new version is more consistent with positions!!!
Merged using add-difference instead of weighted sum, and with Hassaku instead of Grapefruit; this way my original model is more present in the mix.
Merged in the BetterHands LoCon: https://civitai.com/models/47085/envybetterhands-locon
It has a stronger style, sometimes cartoonish; in the future I will solve this with regularization images.
Use in positive prompt: perfect hands, nice hands
Use 512x768 most of the time for better results
Works very well with Loras
Specifications of training:
1280 images
10 repeats
8 epoch
2 batch size
base model: AnyHentay_v2.0 https://civitai.com/models/5706/anyhentai
merged at a low ratio with Grapefruit due to a lack of good style (Version 1): https://civitai.com/models/24383/grapefruit-hentai-model
USE CLIP SKIP 2!!!
aftersex: aftersex, ass, cum in pussy, cum in face, fellatio, missionary, buttjob
breast grab guided: guided breast grab, pov, guiding hand, breast grab, holding another's wrist
breast grab pov: breast grab, pov
own breast grab: grabbing own breast
cheek bulge: cheek bulge:1.4, fellatio
cowgirl position: cowgirl position, pov
reverse cowgirl position: reverse cowgirl position, pov
squatting cowgirl position: squatting cowgirl position, pov
doggystyle: doggystyle, pov, from side, standing
doggystyle+fellatio: doggystyle, fellatio
fellatio: fellatio, pov, from side
fingering: fingering, vaginal, anal
masturbation: masturbation
imminent penetration: imminent penetration, missionary, cowgirl position, squatting cowgirl position
Jack'O Challenge: Jack'O Challenge, sex from behind, deep penetration
licking penis: licking penis, tongue, open mouth
lying, leg up: lying, sex, leg lift, leg up, on side
mating press: mating press (not working)
missionary: missionary, pov, legs up
paizuri: paizuri, penis
piledrive: piledrive position
prone bone: prone bone, from side
suspended congress: suspended congress
reverse suspended congress: reverse suspended congress
standing_leg_up: standing, sex, split legs, leg lift
triple fellatio: triple fellatio, 3girls
x-ray: x-ray cervix, cross-section, internal cumshot, cum
x-ray fellatio: x-ray mouth, fellatio, cross-section, from side, cum
Animation sheet (work in progress); I will give specifications in the next few days.
You will probably need (animation sheet) in the negative prompt.
Read Description
After a lot of requests, I decided to share my checkpoint, since why not. Have fun with it and please leave a review if you enjoy it!
Both models are great. One is a lot more anime while the other leans more realistic. Neither is completely anime or realistic; see the pics for comparison.
Use clip skip 1 for better results :D
I don't remember the weights I put into these models, but they include a mix of a lot of models, including but maybe not limited to:
License
For the complete permissions and rights, see the Dreamlike page, as this model has the same permissions and restrictions as the original!
I have been showcasing some of my LyCORIS models using my personal custom SD1.5 model, and some have asked if I could share it. So here it is.
A word of caution: the model has a tendency toward nudity when prompted for people... so be ready to rein it in if that is not what you want in the output.
The model was created by merging a bunch of my published and unpublished LoRAs into SD1.5. I used this tool for the task: s1dlx/sd-webui-bayesian-merger: opinionated bayesian optimisation for stable diffusion models block merge (github.com)
Photo Realistic MIX
https://civitai.com/models/54818/real-plus-x4-diffusion
https://civitai.com/models/49592/fantasy-mix-v15
https://civitai.com/models/52506/photo-and-anime-v5
Keep some hard-drive space free: the most realistic version comes next week, and we'll see it then.
Steps: 20,
Sampler: DDIM,
CFG scale: 9,
Face restoration: CodeFormer
Hello Guys, I hope you like the model. Subscribe to my Channel,
https://www.youtube.com/@world-ai => I will be very grateful ♥.
If you want too you can help: https://ko-fi.com/worldai
scottie style model:
v1.0: preliminary test version
v2.0: complete test version
v3.0&3.5: improved version based on previous iterations
v4.0: official series
v5.0: test experimental version based on v4.0
v6.0: fully optimized version
v7.0 series: Probably the last release
I took down the original Ametrine and I'm not bringing it back. I do not like or want to spread lolicon, and despite the model's ability to make mature-looking women, its tendency to produce loli imagery was too much for me.
This is an NSFW-prone model. It is meant to generate primarily women.
I use Hires Fix with 4x Fatal Anime 50000 G at 15 Hires steps and 0.45 Denoising strength, upscaled by 2 times. I use the Orangemix VAE. Everything else is in the tags.
I also have a Discord. Join if you wanna share results, get advice, or make requests.
Basically a mix of my previous mixes.
majicMIX realistic - v4 | Stable Diffusion Checkpoint | Civitai
majicMIX fantasy - v2.0 | Stable Diffusion Checkpoint | Civitai
TRY THIS: official art, unity 8k wallpaper, ultra detailed, beautiful and aesthetic, masterpiece, best quality, (zentangle, mandala, tangle, entangle), 1girl, extremely detailed, dynamic angle, cowboyshot, the most beautiful form of chaos, elegant, a brutalist designed, vivid colours, romanticism, by james jean, roby dwi antono, ross tran, francis bacon, michal mraz, adrian ghenie, petra cortright, gerhard richter, takato yamamoto, ashley wood, atmospheric, ecstasy of musical notes, streaming musical notes visible
This model is strongly stylized and creative, but long-range facial details require inpainting to achieve the best results.
Please refer to my prompts when using it.
recommended positive prompts: official art, unity 8k wallpaper, ultra detailed, beautiful and aesthetic, beautiful, masterpiece, best quality, (zentangle, mandala, tangle, entangle)
Use ng_deepnegative_v1_75t and badhandv4 in the negative prompt.
To fix the face, inpaint it: inpaint --> only masked --> set to 512x512 --> denoising strength 0.2~0.5
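Outside the WebUI, a roughly comparable face fix can be sketched with diffusers' inpainting pipeline, assuming a recent diffusers version that accepts regular (non-inpaint) checkpoints; the "only masked" crop behaviour is WebUI-specific, and the file names here are placeholders:

```python
# Sketch: low-strength inpaint over a face mask, mirroring the 0.2~0.5 denoise advice.
# "this_model.safetensors", "image.png" and "face_mask.png" are placeholders.
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_single_file(
    "this_model.safetensors", torch_dtype=torch.float16
).to("cuda")

image = load_image("image.png")
mask = load_image("face_mask.png")   # white = area to repaint (the face)
fixed = pipe(
    "beautiful detailed face, masterpiece, best quality",
    image=image,
    mask_image=mask,
    strength=0.4,                    # keep it in the 0.2~0.5 range suggested above
).images[0]
fixed.save("face_fixed.png")
```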
You can see more example images on my Telegram channel: https://t.me/majic_NSFW