A simple but interesting mix I made for use in my own works. V1.1 is a 55:45 weighted-sum mix of MeinaMixV9 and Aurora, followed by a 0.3 weighted sum of NeverEnding Dream with that result (a sketch of the arithmetic follows the source list below). V1.21 is V1.1 mixed with a 0.15 sum of GeminiX_Mix, and that result merged with a 0.15 sum of MeinaMixV9 again. The result is a model that is still stylized, but intricately detailed and able to represent cultures from around the world. I've noticed that when doing characters with darker skin, I don't have to fight Stable Diffusion anywhere near as much as I used to. It's very versatile and finally feels like it's more than the sum of its parts. I use it for concept art, and I feel it fills that role perfectly.
Source Models:
MeinaMixV9
Aurora
NeverEnding Dream
GeminiX_Mix
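For readers curious what these "weighted sum" steps look like in practice, here is a minimal sketch of the arithmetic in Python (assuming local .safetensors copies of the source models; the filenames are placeholders, not the actual release files):

```python
from safetensors.torch import load_file, save_file

def weighted_sum(a: dict, b: dict, alpha: float) -> dict:
    """Return alpha * A + (1 - alpha) * B for every tensor key both models share."""
    return {k: alpha * a[k] + (1.0 - alpha) * b[k] for k in a if k in b}

# Hypothetical local filenames standing in for the source checkpoints listed above.
meina  = load_file("meinamix_v9.safetensors")
aurora = load_file("aurora.safetensors")
ned    = load_file("neverending_dream.safetensors")

# V1.1 as described above: 55:45 MeinaMixV9/Aurora, then 0.3 of NeverEnding Dream.
step1 = weighted_sum(meina, aurora, 0.55)   # 0.55 * MeinaMixV9 + 0.45 * Aurora
v1_1  = weighted_sum(step1, ned, 0.70)      # 0.70 * step1 + 0.30 * NeverEnding Dream
save_file(v1_1, "mix_v1_1.safetensors")
```

V1.21 repeats the same operation twice more, with GeminiX_Mix and then MeinaMixV9 at 0.15 each.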
In the interest of honesty, I will disclose that many of the pictures here have been cherry-picked, hand-edited, and regenerated. That's because the majority are working pieces of concept art for a story I'm working on. I don't want to give the impression that this model can't give good first-generation images - it can - but when using it you should expect to hi-res fix and/or inpaint at least once. None of the pictures posted here use a LoRA for style, but as part of my typical prompt they do use EasyNegative and BadHandV4, usually at reduced weights (0.6 and 0.4 respectively). The ones that do use a LoRA use it only for character-specific features (Moth Girls, Obsidian Skin, Black Sclera, etc.). All images were post-process upscaled with 4x_NMKD-Superscale-SP_178000_G to give a painted look. If you're wondering why all my pictures have extremely high CFG scales, it's because I use Dynamic Thresholding (a rough sketch of the idea follows the parameters below). If you want a very specific result based on prompts alone, I highly recommend it. After testing, though, high CFG + DT didn't show much improvement over low CFG (~7), so YMMV. The parameters I use for all images are:
Mimic CFG Scale - 7 (rarely 3)
Mimic/CFG Scale Scheduler - Half Cosine Up
Min. Value of Mimic/CFG Scale Scheduler - 3
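For anyone unfamiliar with Dynamic Thresholding, here is a rough, illustrative sketch of the core idea as I understand it from the sd-dynamic-thresholding extension's description (not its actual code): guidance is computed at the high CFG scale, then clamped and rescaled into the value range a lower "mimic" scale would have produced, so you get the prompt adherence of high CFG without the usual burn.

```python
import torch

def dynamic_threshold_cfg(uncond, cond, cfg_scale=30.0, mimic_scale=7.0, pct=0.95):
    """Illustrative sketch only: combine CFG at a high scale, then squash the
    result back into the range a lower 'mimic' scale would have produced."""
    high  = uncond + cfg_scale   * (cond - uncond)   # high-CFG prediction
    mimic = uncond + mimic_scale * (cond - uncond)   # reference prediction

    flat_high, flat_mimic = high.flatten(1), mimic.flatten(1)
    high_max  = flat_high.abs().quantile(pct, dim=1, keepdim=True)
    mimic_max = flat_mimic.abs().quantile(pct, dim=1, keepdim=True)

    # clip extreme values, then shrink the whole prediction into the mimic's range
    clipped  = flat_high.clamp(-high_max, high_max)
    rescaled = clipped * (mimic_max / high_max)
    return rescaled.view_as(high)
```

The "Half Cosine Up" scheduler option simply varies the mimic/CFG scales over the sampling steps rather than keeping them fixed.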
Below are recommended generation settings based on what I typically use, combined with the legal terms inherited from NED, Aurora, and MeinaMix, which thankfully all use the same license parameters. This mix uses CreativeML OpenRAIL-M as its parent models do, and since Meina disallows use of her model on generation services or paid services without her consent, this model is likewise banned from use in those contexts.
Recommendations:
Enable Quantization in K samplers.
Hires.fix in some form is needed for high-quality images. You can use the txt2img hires fix, which I'd recommend running at 1.5-2x scale with 0.3 denoising for 15 steps, or do it manually by running the best finished images through img2img, which is what I do (a rough sketch of that workflow follows these recommendations).
Recommended parameters:
Sampler: DPM++ 2M Karras: 25-35 steps.
CFG Scale: 7+. See note on Dynamic Thresholding above.
Resolutions:
-Default: 512x512 txt2img -> 1024x1024 img2img, 0.3-0.65 Denoising
-512x768, 512x1024 for Portrait
-768x432, 1024x576 for Landscape (16:9)
Hires.fix: 4x_NMKD-Superscale-SP_178000_G or 4x_fatal_Anime_500000_G, with 15 steps at ~0.3 denoising.
Clip Skip: 1 or 2.
Negatives: ' (worst quality:2, low quality:2), (zombie, sketch, interlocked fingers, comic), '
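For those working outside the webui, the manual "512 txt2img -> 1024 img2img" pass described above translates roughly to the following diffusers sketch (the checkpoint filename and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "mix_v1_21.safetensors", torch_dtype=torch.float16).to("cuda")

prompt = "concept art of a desert market at dusk, intricate details"
negative = "(worst quality:2, low quality:2), (zombie, sketch, interlocked fingers, comic)"

# 512x512 base generation
base = pipe(prompt, negative_prompt=negative, width=512, height=512,
            num_inference_steps=30, guidance_scale=7).images[0]

# reuse the same components for the 1024x1024 img2img detail pass
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)
detailed = img2img(prompt, negative_prompt=negative,
                   image=base.resize((1024, 1024)),
                   strength=0.3,                  # the "denoising" knob from above
                   num_inference_steps=30, guidance_scale=7).images[0]
detailed.save("upscaled.png")
```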
--------------------------------------------------------------------------------
License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
The use of this learning model is entirely at the discretion of the user, and they have the freedom to choose whether or not to create NSFW content.
It is important to note that the model itself does not contain any explicit or inappropriate imagery that can be accessed with a single click.
The purpose of sharing this model is not to showcase obscene material in a public forum, but rather to provide a tool for users to utilize as they see fit.
The decision of whether to engage with SFW or NSFW content lies with the user and their own personal preferences.
Having learned how to merge models, and inspired by the results from NostalRealMix, I decided to try making my own just for a little fun.
So I merged the BRA (Beautiful Realistic Asians) model with BB-MIX-EVE, and the results are not bad.
Parameters I use:
DPM++SDE Karras
Sampling steps: 20
R-ESRGAN 4x+ Anime6B
Denoising Strength: 0.45
Hires steps: 10
Upscale: 2
Resolution: 512x768
CFG Scale: 7
Clip Skip: 1 or 2
To begin with, English is not my mother tongue, so I apologize in advance if there are any mistakes.
This checkpoint wasn't intended to be shared, but some of you asked for it, so here it is.
I think this checkpoint really likes doing portraits.
Why forgottenMix? Well, it's a mix and I forgot what checkpoints it's made of....
I think 512x512 and 512x768 are better, but please, experiment.
Don't use restore face.
Prompt recommendations? (These are more prompts I tried and liked than real recommendations, as I didn't experiment that much; I'm new to the AI world.)
Positive:
photography, bokeh, highres, ultra detailed, intricate details, absurdres, masterpiece,
beautifully detailed woman, beautifully detailed mouth, extremely detailed eyes and face, beautiful detailed eyes,
matte eyeshadow, eyelashes, eyeliner, ultra glossy lipstick, pouty lips, makeup, blush,
detailed background,
Negative:
(worst quality, low quality, normal quality:2),
And the most important thing: please, show me what my checkpoint is capable of !
PS: Every image is directly from Txt2Img, no Inpainting here.
Some of the images might use embeddings or Loras.
If you want more creative results, go for Clip Skip 1. On the other hand, Clip Skip 2 is your best bet for consistent outcomes. As of version 8.0, Clip Skip 3 is just as reliable as Clip Skip 2 (if not more so), but you'll need to test that out for yourself.
Use This VAE if you have a desaturated image.
Most of the Yandere images use this Lora(made by me): https://civitai.com/models/48579/yameroyandere
Info:
I forgot to keep track of the models used in this process, so I don't know which ones were merged. However, starting with version 2, I'll make sure to keep a record and update it here.
It is a collection of things whose quality is unfortunate but which felt too wasteful to throw away, hence the name "trash can".
This was my first attempt at merging, so there are still many imperfections.
Based on michianyv30_03_AbyssOrangeMix2_sfw made by Larry, merged with various models.
Other ingredients (including but not limited to): hyperbreasts, anylactation, bigger-girls-model, kotosmix, rnqqv1, anyhentai, fake_pvc_style, chilloutmix…
Better with EasyNegative embedding network: https://huggingface.co/datasets/gsdf/EasyNegative
In principle, I do not support the use of this model to generate any adult content. When using this model, users must abide by laws and regulations and must not infringe on other people's reputation, privacy, or portrait rights.
v1.0-Allamanda: First release
v2.0-Bergamot: detailed face textures, better for portrait and upper body images
People who prefer SFW images
People who prefer “Asian” models
Creative prompters
No need for long sampling steps; starting at 15 steps gives good results
compatible with 2.5D and 3D models
compatible with most LoRAs
DPM++ SDE Karras
Use 15-30 sampling steps
CFG: 4.5-7
Less complicated, short negative prompts are better
If using Hires:
Hires step: <20
R-ESRGAN 4x+
Denoising: 0.3-0.5
After downloading this model, please share your findings with us.
Please leave a star review with your generated pictures if you like it.
P.S. It's not necessary, but if you like my creations, please consider supporting me so I can create more projects by donating here:
https://ko-fi.com/gorillamonsooniii
I also accept commissions for LoRA/checkpoint work.
detailed face textures
The model I produced three months ago did not meet my expectations, and I had no intention of continuing to improve it.
I just baked AbyssOrangeMix, with its included VAE, into this model.
Please enable NSFW in Settings to see the NSFW picture examples!
This here is my preferred and time-tested model for general-purpose prompting - a perfect balance between detail and flexibility. From my own experience (and over 25 dreambooth models trained on this model) I have concluded that, at least as far as humans go, this model provides the best balance between quality and flexibility. In particular, it's worth noting that this model tends to lean more towards pornographic scenery. My previous model, SenBlend 1.5, seemed to struggle more with NSFW content. However, after creating this updated model I can attest that not only is this model more flexible in NSFW scenery, it also has not dropped one iota in detail and quality.
The SenBlend 1.6.3 checkpoint is composed of the following:
35% UberRealisticPornMerge 1.3
35% SenBlend 1.5 (Which itself is an unknown percentage blend of older versions of these same models and a few other ones)
30% Deliberate
The usual Realistic Vision prompts do work to varying degrees due to this model having some Realistic Vision in its DNA from SenBlend 1.5. The commands are (from_side:1.3), (from_above:1.3), (from_below:1.3), etc.
Please look at the generation data from some of my examples and save the seed - it is my go-to for testing dreambooth models because it produces such reliable quality portraits. The prompts you will find there produce consistently good images!
I dare you to find a better general purpose model for people. You won't!
Another day another mix, this time a bit more anime and a little more open to working with LORAs
others will work, but I recommend using kl-f8-anime2.vae
shorter, less detailed prompts seem to work better!
Thanks!
Anime+Draw
It can produce drawings without drawing by hand on a PC; you can also paint little by little on a tablet, iPad, or PC.
Draw Things (iOS & MacOS)
In other words, it turns your imagined ideas into paintings.
Hello Guys, I hope you like the model. Subscribe to my Channel,
https://www.youtube.com/@world-ai => I will be very grateful ♥.
If you want too you can help: https://ko-fi.com/worldai
This model is a mix of several different models to create what I personally like. It is a good general-use model, and it excels at characters.
IMPORTANT!!! THIS MUST BE USED WITH THIS NEGATIVE EMBEDDING. DO NOT POST ISSUES WITHOUT USING THIS EMBEDDING. SERIOUSLY, FOR THE LOVE OF GOD, USE THIS NEGATIVE EMBEDDING.
https://civitai.com/models/17083?modelVersionId=20170
To use negative embeddings, place them into the textual-inversions folder of webui, and then type the name of the file (excluding file extension), into the NEGATIVE prompt. If you want to sear your eyes, place into the regular prompt.
DO NOT SELL THIS MODEL OR USE ON GENERATION SERVICES.
You may, however, sell images created with this model without crediting me. This is simply a gift to the community; I just don't want greedy companies to make money off it, even though I'm not that important.
Sorry to the community for this, but I forgot what I put into the model and did not record it ): I can tell you there is some ChilloutMix in it, and it is pretty NSFW-biased.
The prompting is very easy on this, no need for complicated prompts, however they do not hurt in the slightest. JUST MAKE SURE TO HAVE THE NEGATIVE EMBEDDING IN THE NEGATIVE PROMPT!
Finally, if you have any feedback, please give it to me, Thanks ):
"IDENTITY DISORDER" - as IN Dissociative Identity Disorder.
No, the model literally doesn't have it but we do, and we wanted to MEME yet be a little artsy serious about it in prompting.
Boneless Unreality Mix
Maple Syrup @advokat
The model that i can't pronounce but has Samdoes Arts AND Inkpunk @xerminator13
Something 2.2 which i think is a mix by @nocrypt
All the berry mixes I could find
Dark Sushi 2.5d
Hentai, Elysium, NAI/Anything 3
Osenayan 1.5 Beta Illustration
Whatever's in the above model and its LoRAs XD
This is a different take on the 2.5d anime, and will be built upon as a series in future.
As always smack the ko-fi button if you want us to stop making models - I MEAN AHEM DO IT BECAUSE YOU CARE AND WANT US TO BE ABLE TO LIVE AGAIN (I think my models are taking me hostage)
PERSONAL USE A OK - GENERATION AND COMMERCIAL SITES PLEASE ASK - DO NOT RESELL THIS MODEL.
Downstream merges: Idgaf what you do; just take a stand against models getting uplifted without consent, that's all I care about!
Please IF YOU CAN: https://www.buymeacoffee.com/duskfallxcrew
An extremely complicated merge, and I'm sure you'll want to know what the mix is - right now we're still trying to remember.
What we DO KNOW is that it has @DarkAgent's Marvels and Dungeons in it, and it's a mix of a TON of different LoRAs plus Osenayan Mix, Identity Disorder's recent update, and more. There is a "JOE MADUREIRA" (JOE MAD) lora in this model that is currently unreleased; we'll get to making it into a LyCORIS soon!
HUGGING FACE REPOSITORY WITH DIFFUSERS OPTION: https://huggingface.co/Duskfallcrew/Illustration-Mix
If you got requests, or concerns, We're still looking for beta testers: JOIN THE DISCORD AND DEMAND THINGS OF US: https://discord.gg/Da7s8d3KJ7
Listen to the music that we've made that goes with our art:
https://open.spotify.com/playlist/00R8x00YktB4u541imdSSf?si=b60d209385a74b38
We stream a lot of our testing on twitch: https://www.twitch.tv/duskfallcrew
any chance you can spare a coffee or three? https://ko-fi.com/DUSKFALLcrew
Include "sex with two men", see also supported positions. Placing unwanted positions in the negative prompt also helps.
It is based on the venerable Uber Realistic Porn Merge.
This model is designed to merge with other models to "lewd" them. See the generation grid for a preview of this.
Supported positions:
"legs open"
"cowgirl" (difficult)
"doggystyle" (difficult)
Other easy prompts:
latina, blonde, asian, black, stockings, fishnets, tattoos, sucking cock,
Most consistent results with CFG 7 and 50 steps. Restore faces also helps.
If you enjoyed this model, please consider leaving a review. Your feedback helps develop better content. More Jiwenji Creations
Versioning history
major.minor.patch
major
indicates a change in at least one of training data, tagging, or LoRA generation methodology. You can use the prompt from any image with the same major
version and it will operate under similar conditions.
minor
indicates nothing important unless indicated in release notes
patch
indicates the number of training epochs, default is 1
legs open
perfect camera angle
having sex with two men
Hi, I'm a product and car designer, and I'm excited to experiment with AI; I think it is a good tool for designing. It is very useful for the design process (shape and idea generation), but more than that, it helps a lot with aesthetic refinement.
ORIGINAL MODEL: eddiemauro 1.5. Expresses the minimalism concept. Matte finish.
VARIANT MODEL: eddiemauro 1.5b. Prompting is more precise and it adapts better to objects, but the minimalism style is weaker. Object shapes are more realistic, so I consider the original model more "creative". It is also less matte. (Also known as eddiemauro 3.5.)
After a lot of training and testing, I finally arrived at a stable model for generating product design images and iterations, though I'm still trying to enhance it. Recommendations for using this model:
It is mandatory to use captions in the prompt such as "3D product render", "product render", or "3D product render style". I'm testing ways to enhance the model with another "instance token"; I will probably change it, but that depends on the quality of image generation. I use this token mainly to strengthen "product rendering" captions and anchor the output to a certain style, and I think it can also be merged with a LoRA style. That said, I'm not an expert in training, and I will try a different name if results improve.
Clip Skip 2. Go to Settings > Stable Diffusion > Clip Skip and set it to "2". I trained the model with this option.
First generate some images by changing the "Batch size", select one, and go to img2img mode; double or otherwise increase the resolution (image size) with a "denoising strength" of 0.3 to 0.5. This is the same process as "hires.fix" in txt2img mode, but you get more generated images to select from. Try to keep 512x512 (or at least 512 on one side) because the training images are at this resolution; when you change it, the shapes start to get crazy, though you can experiment.
Use the "Extras" upscaler called "4x-UltraSharp" to increase the final resolution.
If you combine the model with a LoRA like "epi_noiseoffset", you can generate a different kind of image with a little extra style, but the trained style will be weaker.
Use ControlNet to get a more controlled shape of what you want; you can even test it with sketches. Some images shown here were created from projects I have done; if you like, you can use one of my creations as a base: https://www.behance.net/eadesign1
You can use this model for whatever you want. If there is another app using my model that you have to pay for, just let me know. If you use it for a merge, I would only ask that you contact me.
You can download “EasyNegative” embedding and use it.
Use the SD 1.5 VAE (560000 or 840000) to enhance the generated image. It sometimes helps. For the 1.5b model it is mandatory.
The sample images here were created with my model with no inpainting or post-production; they were only upscaled with A1111 upscalers. Keep in mind that you have to experiment a lot to arrive at these results, because it is hard to test just once and get an ideal result. I use Automatic1111 with the most common extensions. I also tried LoRA, but I prefer plain Dreambooth because it is more precise.
I trained this model with 100 images in a "minimalistic" style (I will probably train more styles). About 75% were products and 22% transportation; most of them are speakers, smart robots, water bottles, and similar items. I will enhance the model by adding 100 more images with a variety of shapes and types. I noticed that the model works very well for vehicle and car shapes, even though there were not many vehicles in the training dataset.
Example Prompt:
3D product render, futuristic water bottle, finely detailed, purism, ue 5, a computer rendering, minimalism, octane render, 4k
Negative prompt: (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, bad_prompt, bad_prompt2, ((monochrome)), ((grayscale)), cropped, text, jpeg artifacts, signature, watermark, username, EasyNegative ,sketch, cartoon, drawing, anime, duplicate, blurry, semi-realistic, out of frame, worst quality, ugly, low quality, deformed
Steps: 20-40 (20 is enough for Euler a; you can also use DPM++ SDE Karras, but Euler a is usually better)
CFG scale: 6-9 (going up to 10-15 gives less style but a more realistic object shape).
You can follow me on my social networks. I will show my process and also design tips and tools:
https://www.facebook.com/eddiemauro.design
https://www.instagram.com/eddiemauro.design/
https://www.linkedin.com/in/eddiemauro/
https://www.behance.net/eadesign1
3d product render
product render
This is just a recreation of pastelmix, except it uses the new version of closertodeath's detailprojectmodel, aka dpep4, to make dpep4mkmp as a base (as opposed to dpep3mkmp), and swaps out basil-mix for ChilloutMix. The big difference between dpep3 and dpep4 is that dpep4 is a lot less fried. This affects input and output layers 1-5 of the Unet, making large alterations to the common output layers and the detail input layers, as indicated by the Unet merging guide below.
In theory we're using dpep4mkmp's understanding of things like body parts, feet, hair, and clothes and sampling them using tea's understanding, whereas we're using tea's understanding of things like background and picture composition and letting dpep handle upsampling them, except for super granular inputs where it's all ChilloutMix running the show. As dpep's strength is primarily in highly detailed backgrounds and ChilloutMix's strength is in making the most realistic textures, this plays to both their strengths.
In practice these changes mostly make a difference in body parts and other high-level blocks related to anatomy. We get the same pastel-like style but with big changes to the face and body parts, which are more distinct and less blended together (because dpep4 is less fried). This primarily manifests in the skin and eyes, but also in a less squished facial shape.
You can check andite's huggingface repo for the recipe.
mksks style
detailed background
What is RevRealistic?
RevRealistic is a merge between Cyberdelia's Cyberrealistic v2.0 model and s6yx's RevAnimated v1.0 model.
Overview
Expect all of the best from both models that are merged. Examples will be included.
Works with the following:
LoRA
LyCORIS
Resolutions up to 704x704 appear to be usable in my tests.
Disclaimer:
Do not sell this model on any website without permission from the creators (me, Cyberdelia, and s6yx)
Credit me if you use my model in your own merges
I do not authorize this model to be used on generative services
Playing around with models, I wanted one that could take a minimum of prompting and produce acceptable results. The prompt: "award winning photo of a beautiful woman trending on flickr, 50mm f1.2". I had a few different versions and I settled on this one. It's based on all SD 1.5 models and has a little of each of the following, plus some personal photography work baked in: URPM, PornVision, PurePornPlusMerge, Clarity2.0, ProtoGenX3.4 and Photosomnia... so it's a merge of many merges. The model has an affinity for brunettes and certain faces. It's obviously capable of NSFW content, but not unsolicited in my experience so far. Since I have trained some personal photographic work into this model, it's not intended for commercial use.
update
v1.3 leans toward a 2.5D style and v1.2 toward an anime style; choose whichever you prefer. Hires upscaling is recommended, with a denoising strength between 0.55 and 0.6, and I also suggest exploring the parameters yourself.
If the eyes are not detailed enough:
1. Bring the character closer to the camera; prompts like "cowboy shot" or "upper body" increase the face's share of the frame.
2. Or use an upscaling extension to raise the resolution, which clearly improves eye detail (see my example images). I use SD upscale at 2x; tutorials can be found on Bilibili. Note that the effective per-tile steps when upscaling are steps x denoising strength, e.g. 60 steps x 0.35 = about 21 effective steps. Other upscaling extensions are also worth trying.
A simple merged model, built mainly from WolfBoys_2D and NijiV5style plus some other merged models, with several mature-male LoRAs mixed in. "muscular" and "facial hair" are the recommended prompt words.
Updates on V3:
Now it comes in one safetensor file
replaced complexLineart with expMixLine_v2
replaced copeseethemaid with Mistoon_Emerald_v20
replaced mtompunk with Pulp Art Diffusion
Updates on V2:
replaced Anything V3 with pikasNewGenerationV1_v10
replaced HARDBlend with edgeOfRealism_eorV20Fp16NoVae
A safetensors version has been uploaded too; it's in the box down in the details.
Hi guys!
This is my first SD 1.5 mix. I'm trying to give it a little bit of comic, anime, and semi-realism style.
These are the models contained in this mix:
I recommend this VAE for your experiments.
And you can buy me a coffee if you like it ;)
Greetings! :8)
insertNameHere
This is an anime-style model.
I originally merged it for NSFW purposes, but the non-NSFW results turned out well too, so I would like to share it.
I like the facial details of the characters and the detailed backgrounds.
The following models were merged:
・Counterfeit-V3.0
Counterfeit-V3.0 - v3.0 | Stable Diffusion Checkpoint | Civitai
・AniReality-Mix
AniReality-Mix - v1 | Stable Diffusion Checkpoint | Civitai
・(1~6) Defacta
(1~6) Defacta - Defacta 6 | Stable Diffusion Checkpoint | Civitai
・BRA(Beautiful Realistic Asians) V4
BRA(Beautiful Realistic Asians) V4 - v4.0 | Stable Diffusion Checkpoint | Civitai
The merge ratios are based on OrangeMixs (a sketch of how block weights like these are applied follows the recipe below).
WarriorMama777/OrangeMixs · Hugging Face
1.
model_A | model_B | model_O | base_alpha | weight_name | weight_values
16Defacta_defacta6 | braBeautifulRealistic_v40 | temp1 | 0 | 1,0.9,0.7,0.5,0.3,0.1,1,1,1,1,1,1,0,0,0,0,0,0,0,0.1,0.3,0.5,0.7,0.9,1
2.
Counterfeit-V3.0 * 0.5 + AniReality-Mix * 0.5 = temp2
3.
model_A | model_B | model_O | base_alpha | weight_name | weight_values
temp2 | temp1 | DefbeaCF3AR_mix_v1 | 0 | 0,0,0,0,0,0,0,0,0,0.3,0.1,0.3,0.3,0.3,0.2,0.1,0,0,0,0.3,0.3,0.2,0.3,0.4,0.5
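As a rough illustration of how a 25-value recipe like this gets applied (a sketch of the common merge-block-weighted approach; it may not match the exact tool used here), the 25 weights correspond to the Unet's 12 input blocks, the middle block, and the 12 output blocks, each interpolated with its own alpha, while base_alpha covers everything outside those blocks:

```python
import re
from safetensors.torch import load_file, save_file

# Step 1 of the recipe above; the ".safetensors" filenames are assumed local copies.
ALPHAS = [1,0.9,0.7,0.5,0.3,0.1,1,1,1,1,1,1,0,0,0,0,0,0,0,0.1,0.3,0.5,0.7,0.9,1]
BASE_ALPHA = 0  # applied to keys outside the 25 Unet blocks (e.g. the text encoder)

def block_alpha(key: str) -> float:
    m = re.search(r"input_blocks\.(\d+)\.", key)
    if m:
        return ALPHAS[int(m.group(1))]            # IN00-IN11 -> weights 0-11
    if "middle_block." in key:
        return ALPHAS[12]                          # M00 -> weight 12
    m = re.search(r"output_blocks\.(\d+)\.", key)
    if m:
        return ALPHAS[13 + int(m.group(1))]        # OUT00-OUT11 -> weights 13-24
    return BASE_ALPHA

a = load_file("16Defacta_defacta6.safetensors")           # model_A
b = load_file("braBeautifulRealistic_v40.safetensors")    # model_B
merged = {k: (1 - block_alpha(k)) * a[k] + block_alpha(k) * b[k]
          for k in a if k in b}
save_file(merged, "temp1.safetensors")                    # model_O
```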
lora:GirlfriendMix_v1
snowy
winter
These are by-products of making Cursepoison and Tamehead.
These models were not formally adopted for many reasons.
For that reason the versions are not related to one another, so they are labeled with a "number" rather than a "version".
This is a merged model using AOM2nsfw and hll-s-2 model.
We can summon various VTubers.
(AOM2-nsfw) + ( (hll-s-2) - (animefinal-full-pruned) * 1) = AOM2nsfw-hlls2-vtubers-fp16
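That formula is an add-difference merge: the delta between hll-s-2 and its base (animefinal-full-pruned) is added on top of AOM2-nsfw. A minimal sketch of the arithmetic, assuming local .safetensors copies with placeholder filenames:

```python
from safetensors.torch import load_file, save_file

base  = load_file("AOM2-nsfw.safetensors")               # model A
tuned = load_file("hll-s-2.safetensors")                  # model B (fine-tuned)
orig  = load_file("animefinal-full-pruned.safetensors")   # model C (B's base)

multiplier = 1.0  # the "* 1" in the formula above
merged = {k: base[k] + multiplier * (tuned[k] - orig[k])
          for k in base if k in tuned and k in orig}
save_file(merged, "AOM2nsfw-hlls2-vtubers-fp16.safetensors")
```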
The backgrounds may be more beautiful than in the hll3.1 version.
VTuber characters may be harder to summon than in the hll3.1 version.
🐷🐷🐷 Please share in the gallery section below if you were able to get a nice CG. I wanna see various VTubers' panties with an angry face. 💩💩💩
Updated - v4: More models added to merge (details in version info), more training data turned into a LoRA and merged in, and hopefully this means that the images look more realistic, with more details and more interesting backgrounds.
Following on from Gorilla With A Brick, I've merged in 10 more photorealistic models at various weights, and some more noise offset to create something that when prompted for photorealism will make you go "I Can't Believe It's Not Photography". It will happily create CGI characters and awesome landscapes as well.
This model seems to do very well with specified lighting characteristics in the prompt (e.g. "volumetric lighting"), and will give a fairly plain background unless specified in the prompt.
As always, pruned to fp16, and the VAE baked in (SD-v2 840000) - there is also a no-VAE version as well. Enjoy!
rcnz_hqr style
A Model guy
David
As always, the example images I've posted have NO postwork done to them beyond the facial fix (in some cases) and upscaling. I used NO negative prompts except for adding "NSFW" to a few of the ones where it was tending to go blue on me. The point is to show how well it works out of the box - if you take the time and create really good prompts and use good negative prompts (an example is at the end of this post) you can get even better results.
This update doesn't change the styles very much (though it's enough that you can't replicate older renders on the new version). It mainly incorporates things from Hyperfusion and Bigger Breasts. Hyperfusion is great, but it tends to sluttify the outfits, even at very low levels. By mixing in Bigger Breasts (which tries not to change up the clothing too much) and finding the right balance, it works pretty well.
A bonus is that it also lets you accomplish making breasts smaller - something very difficult to do on most models without bringing in a lot of extra help. With this "flat_chest:1.4" (or thereabouts) will do a nice job of shrinking things down in most cases. You can also use all the triggers from hyperfusion to make the breasts, butt, and belly do all sorts of things - and putting in weights there brings varied results too.
NOTE: At low levels, most of those calls don't go crazy by sexing everyone up. The higher your weights, the more likely that is to happen, so you may need to declare SFWs and even call a specific outfit in some cases. (I had to do that in the big bust sample image).
MILFY BONUS: While this isn't so much an update as it is a little trick I learned, it is pretty cool and works well, so I thought I'd pass it along. So many models make it hard to control the age of folks - no matter what you do, the subject ends up looking under 20 (and sometimes even younger). To age folks up, use "caricature" in your prompt. The higher the weight, the older the subject; the lower the weight, the less aging effect it has. When adding this to your prompt, though, it will tend to push the image toward headshots - so if you're looking for something else, you'll need to call your shot and angle.
This really jumped up to the next level with a few extra tweaks to the LoRa styles plus Merging in Galena. The example images are all done with NO negative prompt and just a handful of words in the prompt - usually just a name and a quick description of what they are doing.
This also takes LoRa's and TIs a lot better than the original. For version 1 I needed to back down the weights quite considerably, for this version leaving them at the default 1.0 (or the values recommended by the creator) tended to work really well.
You can get some cool looks and variations when combining TI and LoRa triggers with the names of known celebrities, too.
This version tends to go a bit more Porny/NSFW than the original, but a lot of that is keyed into the source of the training. Celebrities who tend to not do the sexy stuff aren't as likely to default to sexy stuff - while a porn star or nude model are more likely to default to nudity and sex. Call your shots and it works out well either way, though.
Upscale is always a good thing (though in these, I just did latent) while Face Restoration is often better to leave off (but not always).
Body types tend to always be similar - but you can call them using various bodyweight/style prompts.
I wanted a model or workflow that allowed me to quickly and easily churn out art for games and/or photo stories. I didn't want long, complicated prompts to create a certain art style, but I also didn't want to be forced to use LoRAs and TIs for every character. Unfortunately, most of the models whose art styles I loved didn't do so well at loading characters (celebrities for mixing, fictional characters, etc.), and the ones that knew tons of famous characters and people tended not to take to the art styles I was going for (at least not without complex prompts).
There are some great anime-style art prompts that did a nice job, but I didn't really want anime faces (big eyes, heart-shaped head with pointy chin, etc.). I wanted more of a western style, with a hint of caricature in the faces, and a consistent look from a basic <character> is wearing <outfit> at <location> type prompt.
Unfortunately, since I didn't really expect this to work, I didn't start taking notes on what all went into this until I was fairly well along. It started with a Mega Model (either SD's AIO or Clarity - I forget) and then I merged Rev Animated and Babes into that.
To my surprise, it knew LOTS of characters and celebrities (even some obscure ones that I was surprised about) and it spit out a pretty consistent style - even though it was still a bit more realistic than I wanted. It took LoRa's and style tokens well, though... so I moved to the next phase.
First I added several LoRa styles that I liked the looks of and tweaked the weights on each. They were styles that needed no triggers - so hopefully once I merged them in, then my default prompt would spit out that style every time. (Fingers crossed). I set the values so they showed up, but were low enough that the style didn't affect the clothes or other compositional elements much (if at all). The styles and the weights I ended up at are as follows: <LuisapNijijourneyLORAV2_v2:0.7>,<bootyFarmStyleLora_v10Dim128:0.4>, <lora:hungryClickerStyle_v20:0.6>
To my shock and amazement - it worked spectacularly. It wasn't quite the style I wanted, but adding simply: (bold lines:1.1), (high color), ((smooth skin)), (Masterpiece Digital Artwork:1.3), to the start of the prompt got me there. I can handle a 4 token seed prompt.
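For anyone wondering what "merging a trigger-free LoRA style into the checkpoint at a fixed weight" boils down to, here is a heavily simplified sketch of the usual arithmetic (not the exact tool used here; the key mapping and filenames are hypothetical, and real merge scripts also handle alpha/rank scaling and conv layers):

```python
from safetensors.torch import load_file, save_file

def bake_lora(ckpt: dict, lora: dict, weight: float) -> dict:
    """Add weight * (up @ down) to each base weight the LoRA targets (simplified)."""
    for up_key in [k for k in lora if k.endswith(".lora_up.weight")]:
        up   = lora[up_key].float()
        down = lora[up_key.replace("lora_up", "lora_down")].float()
        target = up_key.replace(".lora_up.weight", ".weight")  # hypothetical key mapping
        if target in ckpt:
            ckpt[target] = ckpt[target] + weight * (up @ down).to(ckpt[target].dtype)
    return ckpt

ckpt = load_file("base_checkpoint.safetensors")
for lora_path, w in [("style_a.safetensors", 0.7),
                     ("style_b.safetensors", 0.4),
                     ("style_c.safetensors", 0.6)]:
    ckpt = bake_lora(ckpt, load_file(lora_path), w)
save_file(ckpt, "merged_with_styles.safetensors")
```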
Some notes on the model:
It "mostly" defaults to SFW out of the box. i.e. If you just type in a person's name or character description with no clothes - it'll tend to put them in clothes - or at least a swimsuit. That said... with proper prompting, does nudes and porny stuff just fine, too.
I didn't care so much about making sure resulting characters look "exactly" like the person you're calling out so much as wanting to make sure that if I called out that name/person again - they'd be consistent in the way they look from image to image. So, in other words, it's not so much about "accuracy" as it is about "consistency" - and that seems to come through. (This works well for my needs to create stories and whatnot)
Depending upon the distance from camera (i.e. Closeup head shots vs. full body shots) the weight of the "bold lines" in (many of) my example prompts can be tweaked. The further back, the lower that should go and the closer in, the higher it can go.
This model (probably due to one of the LoRa styles I added) tends to want to add tattoos - even for those characters who don't have them. Adding that to the negative prompt can help (see below). Don't forget to remove those if you actually want the tattoos there.
I've added some comments to several of the example images - especially the ones without generation data.
For the sample images I used the Anything VAE (though it works fine with both the default and Anime VAEs that I tried with it). No textual inversions or LoRa usage in them - it's all coming from the checkpoint. Base images at 384x512, Euler a at 20 steps, clip skip 2, face restoration, and scale fix (2x size using latent bicubic/antialiased). The first few examples are pure raw with just a character name and/or character name + general outfit description.
Those first few examples have no negative prompt, either (just to show how well it generally behaves without them). A good neg prompt helps with various things like extra fingers/bad hands, various defects and so on. The one I use most often is: (snagged from a sample image somewhere at some point)
canvas frame, 3d, ((disfigured)), ((bad art)), ((deformed)),((extra limbs)),((b&w)), blurry, (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy, Scribbles, Low quality, Low rated, Mediocre, 3D rendering, Screenshot, Software, UI, watermark, signature, distorted pupils, distorted eyes, (distorted face), Unnatural anatomy, strange anatomy, things on face, ((tattoo)), ((tattoos)),
Obviously, some of those may want to be removed if you're going for something that has been excluded.
BREASTS
ass
belly
hyperbreasts
hyperass
flat chest
caricature
Realistic, with as little anime as possible; I wanted good detail, like photographs. Realism is very difficult, though, and the characters often come out badly distorted. I wrote down the checkpoint data in my notebooks.
Upscaler
R-ESRGAN 4x+
Hires steps
Denoising strength
Upscale
Resize width
Resize height
Steps: 20, Sampler: DDIM, CFG scale: 7, Seed: 2174677471, Size: 552x616, Model hash: 471b2f8abf, Model: ARRealVX9, Denoising strength: 0.3, Hires upscale: 1.5, Hires steps: 10, Hires upscaler: R-ESRGAN 4x+
Real style soft dark 2.5D mix: negative prompt from (worst quality, low quality:1.6) up to (worst quality, low quality:1.7)
Anime style 2.5D face mix: negative prompt from (worst quality, low quality:1.6) up to (worst quality, low quality:1.7)
Little a real mix: negative prompt from (worst quality, low quality:1.7) up to (worst quality, low quality:1.885)
VAE: vae-ft-mse-840000-ema-pruned
A negative prompt is needed, between (worst quality, low quality:1.7) and (worst quality, low quality:1.885).
Checkpoint trained on 1.5 to create beautiful images of this beautiful woman.
Made using Easy Mode Diffusion.
Feel free to merge, create LORA and do more with this. And do encourage your fellow creators from time to time. :)
beth woman
The model includes what is, in my opinion, the best combination of realistic styles. It combines more than 15 different models in carefully tuned proportions, which allows you to get hyper-realistic portraits of incredible quality. The model is trained on many celebrities and draws them better than any LoRA or LyCORIS. It also contains realistic NSFW content.
'Creativity is not just creating works of art, but the need to express one's uniqueness and leave a mark on the history of mankind.'
\( ̄︶ ̄*\))
For best result use this negative prompt:
(worst quality, low quality:1.4), (bad_prompt:0.7), simple background, multiple ears, muscles
And "Euler a" sampler.
All models after v3 are my work. For best results I recommend downloading v3
Mixed Down is a WIP. I spend 3-4hrs a day on it merging and tweaking. V1.1 has over 50 iterations. Please enjoy and share your results
Something that I've been throwing together over time and I think I've come to something stable enough to share with others. I can't remember everything I've put into it.
Has some good performance as low as 27 steps and I personally prefer DPM++ 2S a Karras and DPM++ 2M Karras, though it performs fairly well with all samplers.
The components in no particular order are: AnyHentai1.7, Bastard v4 Anime, Clockwork Oranges, Corneos, DHClassic, Digitals4wed0ff, DH SuperCute, ExpMix, Kazuki Merge, Moyu, Hassaku, DeGradeRev, Lazy Amateur Mix, URPM 1.5, SirenMix
I have uploaded the model to beta.moriverse.xyz, where you can try it out for free. JRA V1 (Japan Style Realistic V1) was trained on over 300 high-quality, high-definition Japanese-style photographic images on top of SD, and fused with models such as BRA V4. The original model performs relatively poorly at facial generation, but the atmosphere, lighting, and composition of the Japanese-style photographs are excellent. Models like BRA enhance facial generation and compensate for some of the shortcomings of SD 1.5.
jp_style person
The animelike_v2 model seemed really good, but I had some issues getting some features to come out, like plumper lips. On a whim I tried merging:
Babes_babes_v11, Animelike_2d_v2, and biggergirlsmodels_biggergirlsv2.
It seems to do well when stacking other LoRAs.
There's a bit of a bias toward chubbier or fat girls; you should be able to curb this by using negatives for fat and positives for skinny, like thin waist.
Tags like thick lips, parted lips, hoop earrings, and bimbo seem to work really well, and it also seems to do well with platform heels.
Some people have commented that the model is basically unusable. If you're new here, please check whether your settings match those in the example images; apart from the full-body images, which were upscaled via img2img, all other images are straight outputs. Please make sure your settings match the examples first!!!
V2.5 has better lighting effects and supports more fantasy content. The basic face shape has been adjusted, and scenes render better.
Lowering the CFG Scale value slightly can produce distinctive results.
Version 2 Release Notes:
Changed the default face.
Improved lighting and shadow effects.
Added more models and will continue to iterate in the future.
For better skin texture, do not enable Hires Fix when generating images. Instead, use the "Tiled Diffusion" mode to enlarge the generated image and achieve a more realistic skin texture.
I will continue to update and iterate on this large model, hoping to add more content and make it more interesting.
Thank you for your support!
------------------------------------------------------------------------------------------------------
Recommended Parameters:
Sampler: DPM++ 2M Karras, DPM++ 2M alt Karras, or DPM++ SDE Karras
Steps: 20~40
Hires upscaler: 4x-UltraSharp
Hires upscale: 2
Hires steps: 15
Denoising strength: 0.2~0.5
CFG scale: 6-8
clip skip 2
A good general model for photorealism and nsfw.
A merge mix of degenerate, 526mix v1.4, and URPM.
Created with InvokeAI's merge models tool, zipped and uploaded as-is because I don't know how to convert it back into a ckpt.
This model gives you the ability to create whatever you want.
Attention, the model requires VAE from Stability AI: vae-ft-ema-560000
or you can use VAE from Stability AI: vae-ft-mse-840000
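If you run the model through diffusers instead of the webui, attaching one of these Stability AI VAEs looks roughly like this (a sketch; the checkpoint filename is a placeholder):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "model.safetensors", torch_dtype=torch.float16)
# "stabilityai/sd-vae-ft-mse" is the 840000 VAE; "stabilityai/sd-vae-ft-ema" is the 560000 one.
pipe.vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe("portrait of a character, detailed background").images[0]
image.save("out.png")
```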
NSFW Art Designers
Character designers
Professional prompters
Art designers
I hope you enjoy the results of my efforts.
Thank you very much XpucT. My first knowledge came from this man.
This model is based on his Deliberate V2 model.
GIRL 3D SUBJECT RENDER PHOTOREALISTIC FEMALE RETRO ANATOMICAL ART INTRICATE ACCURATE WOMAN ILLUSTRATION CARS CARTOON CINEMATIC FANTASY PHOTOGRAPHY REALISTIC
about v3.0
It's an improved version.
- better face, better body, better expression
- Expression enhanced by adding 2d models from multiple sides
Dynamic Thresholding is recommended when using DPM++ SDE.
This is my last model.
It may not be perfect, but we all know that the quality of the model depends on the prompt/lora/emb/vae. Therefore, I recommend that you try it out for yourself and use it in your own style.
If you are interested in the example images, you can use the following prompt format:
==Prompt==
Raw,illustration with brushstrokes,official art,(hatching \(texture\)),(muted color,partially colored:0.8),detailed hatching,(drawing_trace),flat color,1990s \(style\),recolored,
neon genesis evangelion,
<Fill in the content you want here, It is recommended to use natural language to describe, as using tags can cause some problems (SEE BELOW)>,
art by sadamoto yoshiyuki
==Negative prompt:==
(worst quality, low quality:1.4),(bad anatomy),(bad hands),digital art,1980s \(style\),twisted lines,chaotic lines,over exposure,drawing
==Other settings==
Steps: 30, Sampler: DDIM, CFG scale: 7, Seed: 2326670396, Size: 768x768, ENSD: 31337
VAE: sd 86000, Clip skip: 1 or 2
You can also try some negative embeddings like EasyNegative, bad-artist, and bad-image.
-->Known issue:
If you are using the prompt above, you may find that the character is sometimes holding a pencil, sketch, or paper, and painting tools often appear in the image. This is caused by a tag I used, and I haven't found a way to fix the problem yet.
Narco is one of the results of the last few months' journey with Stable Diffusion. Constantly evolving - sometimes devolving - but always one that I go back to. I wanted a model that excels at both the beautiful and the horrific and, after a while bouncing from model to model, I decided to scratch my own itch.
It's far from perfect, but I enjoy using it. The preview images are cherry-picked from batches of 3 and no embeddings were used. I will say there are times when it seems to do its own damn thing, but still, I find it hits far more than it misses. Honestly, what I'm looking forward to most by sharing it is seeing whether other people can pull gold from its murky depths or whether it should be left gasping its last in a dingy alley somewhere.
So far there are another 2 in the pipe. One is a photorealistic model and the other a cartoony model in the western style; I'm steering away from manga for the time being. So, enough with the grandstanding. If ye like it - like the idea and direction - feel free to https://www.buymeacoffee.com/Uzia or not, of course.
I only use Invokeai so as far as image recreation in A1111, I wouldn’t count on it. Plus on Civitai there's no options for Invoke's samplers. All images were done with either k_dpmpp_2 or k_euler_a
The ingredients for this mutant have been lost in the mists of my terrible memory. I can say I avoid models with restrictive licensing and the only restriction I wish to apply myself is that generative websites can’t use Narco without contacting me first (not that I expect it, but you never know). I just find the idea rude, like Ronald McDonald sneaking into my room and using my boxers. You however are free to use them any way you want. Mind the stains though.
Ah-hem, anyway, roadmap. Opiate (photoreal) and Lycergic (cartoon) will be dropping as soon as I’m happy with them, then I want to expand Narco’s ability with nude and pose generation. So, happy diffusing. And please share any of your images, would love to see them.
Prompt : (masterpiece, best quality:1.2), (ultra detailed), (illustration), (distinct_image), (intricate_details), (delicate illustration)
negative : (worst quality:1.4), (low quality:1.4), EasyNegative, (multiple Views:1.5), (multiple girls:1.5), (extra hands, extra fingers, extra arms, extra legs), cropped hands, extra digit, fewer digit, (bad hands:1.5), (bad anatomy:1.5), (fused anatomy), (blurry:1.3), (artist name:1.5), (censored:1.4), (watermark:1.5), (text:1.5), (signature:1.5), (4 fingers, 3 fingers, 2 fingers, 3 legs, 4 legs, 3 hands, 4 hands), (fewer than 5 fingers)
EasyNegative | Stable Diffusion TextualInversion | Civitai
Sampling : DPM++ 2M Karras or DPM++ SDE Karras
Sampling steps : 20 ~ 30
Clip skip : 2
Hires. fix : R-ESRGAN 4x+Anime6B or 4x ultrasharp
Upscale By : 2
Hires step : 12 ~ 15
CFG Scale : 7~8
use Xformer
use dddetailer
GitHub - Bing-su/dddetailer: Detection Detailer hijack edition
Detection Detailer sianworld edition
\extensions\dddetailer\scripts = Replace the ddetailer.py file
Setting - sampling setting
For sampling methods, use Euler a or DDIM. The model is still in beta testing; I will write more in the future...
Step 1: Download SAFETENSORS.
Step 2: Put the SAFETENSORS file under "stable-diffusion-webui\models\Stable-diffusion"
Step 3: Done! Enjoy the model and write review... please <3
A VAE is not required; as far as I can tell, I successfully baked one in, maybe :)
Use a minimal negative prompt for best results
Use Euler A, DDIM and 20-25 steps for best results
I also used the upscaler Latent (nearest-exact) and NONE, with a highres step of 20-30 and denoise of 0.45-0.5 to improve image quality and detail (optional)
PROMPTS
See picture description...
This is my first assembled model. If you have any concerns, please write; I will update it regularly. Thank you!
Tip: Perfect Pastel VAE - https://civitai.com/api/download/models/6297?type=VAE
Use only DanBooru tags!
OrangeSlushyMix V1 - Renewal - V1 | Stable Diffusion Checkpoint | Civitai
(Same recommended prompt and settings as the block above.)
OrangeSlushyMix V2 - Renewal = Civitai | Share your models
(Same recommended prompt and settings as the block above.)
This model is fine-tuned to my own taste with around 4000+ images in my preferred style.
Trained for about 200,000-300,000 steps, which took about 2 days.
The model is based on alu's graffiti.
VAE is the same as Anything v3.0.
Most of the preview pictures use the following parameters:
size: 512x768
sampler: DPM++ SDE Karras
step: 28
hires fix: true
Latent 1.5x ~ 2x
hires fix steps: 10-20
hires fix denoising: 0.6~0.7
About the name:
This is the model used by a chatbot called alu.
aroamoyasi
kishida mel
amamiya yuko
shihou matsuri
hiten
mituk1
yohaku
I don't speak English, so I'm translating with DeepL. Let me know if the English is weird.
I made this because I wanted NOSTALGIA's coloring with DorayakiMix's character designs.
Thanks to NOSTALGIA, this model is also good at fantasy backgrounds.
NostalYakiMix
A model that combines the good coloring and backgrounds of NostalgiaMix with the cuteness of DorayakiMix.
NostalYakiMix-Water
This model keeps the good qualities of NostalYakiMix but with thinner outlines.
The impression is closer to that of a watercolor painting.
This has been sitting in our HF repository section for god knows how long.
We figured someone might want it.
Exception:
PLEASE literally don't LMAO.
Don't use it commercially.
Don't merge.
IT's THAT BAD XD
If you refuse to listen idgaf, XD just don't.
Burgie
prilosecotc1
a model "molded" on 50 works by the most famous Dutch painter! Being a first version, some inaccuracies:-D
I will leave this at V2, thank you & enjoy!
Thank you for all the reviews, great trained-model/merge-model/LoRA creators, and prompt crafters!!! You can see which models were used to create "AmIReal" in "About this version".
* The results tend to be too NSFW/nude/topless and still need to be a bit more photorealistic/photographic (I will try to fix this in the next version)
Base Negative Prompt:
(3d, render, cgi, doll, painting, fake, cartoon, 3d modeling:1.4), (worst quality, low quality:1.4), monochrome, child, deformed, malformed, deformed face, bad teeth, bad hands, bad fingers, bad eyes, long body, blurry, duplicate, cloned, duplicate body parts, disfigured, extra limbs, fused fingers, extra fingers, twisted, distorted, malformed hands, mutated hands and fingers, conjoined, missing limbs, bad anatomy, bad proportions, logo, watermark, text, copyright, signature, lowres, mutated, mutilated, artifacts, gross, ugly
Sampling Method:
Euler a, Euler, DPM2 Karras, DPM2 a Karras, DPM++ 2S a Karras, DPM++ 2M Karras, DPM++ SDE Karras, DDIM, UniPC
Image Size:
- You can use 512x512, 512x768 (normal)
- I like 640x1024 without HighRes Fix (Good)
- Up to 768x1152 without HighRes Fix (it just needs more generations to get the best results)
You can read the version details for more info.
Thank you & Enjoy! Buy me a coffee
⚠It needs to be used with identityV_v40, which can improve the restoration of the character's face.
⚠There is no effect when used alone.
⚠Hires. fix is recommended
Recommended settings:
LoRA: identityV_v40
LoRA weight: 0.5
VAE: animevae
Sampler: DPM++ 2M Karras
Sampling steps: 20
CFG: 6-8
Size: 768*512 or 1024*1024
identity v
button eyes
A quality modern-style collection; you can specify the following ↓
Textures and theme color of building:
White, grey, black, red, yellow, green, wooden, metal, glass, and so on...
Light environment:
Daytime, Sunset or Dawn, Night
Scene:
Beach, City, Downtown, Mountain, Forest, Water body
Weather:
Sunny, rain, fog, and so on...
Contact me on WeChat: PHSJ2019
or Email: jlmaoju@outlook.com
Leave a comment / post / 5-star review if you enjoy it~ Thank you
And BIG thanks to @Rainn0.0 for sharing training experience.
#1 This model was trained on 570 photos of Taiwanese high school uniform models, all shot with a Canon full-frame DSLR camera.
#2 Training photos were resized to fit 1024x1024.
How to use?
put the safetensors file into stable-diffusion-webui\models\Stable-diffusion
prompt: best quality, highly detailed ,photorealistic,masterpiece,1lady, solo, school uniform, looking at viewer,
negative prompt: anime, cartoon, 3D, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality,
Steps: 25
Sampler: DPM++ SDE Karras
Resolution: 768x1024
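If you prefer running it outside the webui, the same settings translate roughly to this diffusers sketch (the checkpoint filename is a placeholder; DPMSolverSDEScheduler needs the torchsde package):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "taiwan_uniform.safetensors", torch_dtype=torch.float16).to("cuda")
# DPM++ SDE Karras, as recommended above
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True)

image = pipe(
    prompt="best quality, highly detailed, photorealistic, masterpiece, 1lady, "
           "solo, school uniform, looking at viewer",
    negative_prompt="anime, cartoon, 3D, lowres, bad anatomy, bad hands, text, error, "
                    "missing fingers, extra digit, fewer digits, cropped, worst quality, "
                    "low quality, normal quality",
    num_inference_steps=25, width=768, height=1024).images[0]
image.save("uniform.png")
```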
school uniform
1lady
1girl
Please select the version you like, not the latest.
W means wide; it aims to be a multifunctional model that also has stability and prompt controllability (SFW + NSFW).
Prompts can be long or short, individual words or long sentences; the painting style will be affected accordingly.
Commercial use? Ask me for the list of mixed models and then ask their original authors. (Just a suggestion.)
The examples were generated with the wd-2 VAE, "No xformers", and "use CPU all" (or the webui-colab defaults).
CLIP has incorrect position IDs (not a big problem); fixed versions will come later.
Please share your amazing works here if you would like to :)
If you would like to see the recipe or more detail, feel free to ask.
Since I used these amazing models to merge (and they don't belong to me), I won't use this model to make money (of course you can, if you get permission from their original authors).
If you really like my work and want to give me some encouragement, consider clicking the link to find out what it really is; I will be grateful.
v2 improves long-shot composition and the dreamy feel of distant views. It incorporates my newly tuned lora-cinematic parameters as well as some other auxiliary LoRAs, while close-ups don't change much (hands seem a bit less stable = =).
v3 improves the face and hands, as well as some limb structure, and also improves the fineness of the image. Versatility has improved a lot: more LoRAs are now compatible, and style pollution is reduced.