girl
A model that generates B2B-style design elements; it must be used together with the matching version of the DDicon LoRA.
V1 has more detail, while V2 is more concise. Choose the version that fits your needs.
DDicon
beautiful face,mix4,
女人
With those (pronounced "girl") aims to be a model that specializes in generating high-quality anime-style images.
To use this model, simply add danbooru tags in the prompt.
This model is merged from Grapefruit, Counterfeit-V2.5, and AnythingV5 (Pastel-Mix pre-3.0).
Where are the older versions of the model?
I won't release them because they're NSFW; I might release noilla-legacy.
女人
Intended to quickly produce photorealistic Asian girls, with some of my preferences added. It can produce both SFW and NSFW images, but tends to fall into NSFW :)
Used models:
chilled_re_generic_v2
GOOD V3
the Excessive Pubic Hair concept LoRA, the TrueSeifuku LoRA, a clerk-suit LoRA, and some others.
Please note that pubic hair will appear by default; this is one of my purposes for this model :)
QQ exchange group: 704574483
This model is shared only for research and exchange of interest. If there is any infringement, please contact me for removal. Thank you!
The base model is a mix of these three: dalcefoRealistic_tallyV2, revAnimated_v11, and yesmix.
Experts, please go easy on me.
The example image used face-swapping; its initial parameters are as follows:
solo,masterpiece,best quality, 1girl,motorcycle,riding,driving,bangs, bare_shoulders, black_legwear, closed_mouth, collarbone, dress,grey_eyes, hair_ornament, long_hair, looking_at_viewer, pantyhose, silver_hair,
Negative prompt: easynegative, badhandv4,paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, manboobs, (ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.331), blurry, (bad anatomy:1.21), (bad proportions:1.331), extra limbs, (disfigured:1.331), (more than 2 nipples:1.331), (missing arms:1.331), (extra legs:1.331), (fused fingers:1.61051), (too many fingers:1.61051), (unclear eyes:1.331), bad hands, missing fingers, extra digit, (futa:1.1), bad body, ng_deepnegative_v1_75t, glans, multiple people, bad-hands-5, (two girls:1.5)
Steps: 25, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1482940819, Face restoration: CodeFormer, Size: 850x1152, Model hash: 4ad3ed64e9, Model: yes-rev-dcfr, ENSD: 31337, Hashes: {"embed:bad-hands-5": "aa7651be15", "embed:badhandv4": "5e40d722fc", "embed:ng_deepnegative_v1_75t": "54e7e4826d"}
After running it, I found that the face detailing had not been done properly.
Use img2img with the seed locked to regenerate.
Adjust the block-weight parameters of the three-girls LoRA: change the first group to 1,1,1,1 and add face detailing.
This is a hybrid model that I often use myself. My motivation for creating it: I needed a large-scene model that can draw ancient Chinese architecture.
If you want the costume effect of the Miao (Hmong) ethnic minority, you also need to download the following LoRA:
Hmong Costume | Hmong costume | Stable Diffusion LORA | Civitai
interior design
A more mature-looking version with fewer body constraints
////////////////////////////////////////////////////////////////////////////////////
The model is exactly as described above.
It still favors a skinny body; that's the thinnest I can tolerate.
Pros and cons also follow the original model, but as always, it can lead to unpredictable results.
////////////////////////////////////////////////////////////////////////////////////
Recommended Settings
Clip skip:2
Hi-Res Fix: R-ESRGAN 4x+ (Anime6B)/Denoising strength: 0.4
Sampler: DPM++ 2M Karras / DPM++ SDE Karras
CFG : 8+/Steps : 25+
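For intuition, the CFG value above controls classifier-free guidance strength. A minimal numpy sketch of the standard CFG combination step (a generic illustration of the technique, not this model's actual pipeline):

```python
import numpy as np

def cfg_combine(cond, uncond, scale):
    """Classifier-free guidance: start from the unconditional prediction
    and amplify the direction pointed to by the prompt-conditioned one."""
    return uncond + scale * (cond - uncond)

rng = np.random.default_rng(0)
cond = rng.normal(size=64)    # denoiser output with the prompt
uncond = rng.normal(size=64)  # denoiser output with an empty prompt
guided = cfg_combine(cond, uncond, scale=8.0)
```

At scale 1 this is just the conditional prediction; the 8+ recommended here pushes harder toward the prompt, at the risk of oversaturated results.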
Prompts
(best quality, masterpiece:1.2),intricate detail
Neg : (worst quality, low quality:1.2)
Or whatever you want! Explore the unknown!
EasyNegative, DeepNegative, or badhand embeddings can be used.
Using them also reduces diversity, so use them appropriately!
////////////////////////////////////////////////////////////////////////////////////
Do you like my work?
A cup of coffee would be nice! 😉
////////////////////////////////////////////////////////////////////////////////////
The pruned version is more realistic; the VAE vae-ft-mse-840000 is recommended.
Starting from the base model, the MBW (merge block weighted) plug-in was used to continue the merge.
This model makes it easy to draw petite-looking characters.
For positive tags, "small breasts" works better than "medium breasts".
For negative tags, EasyNegative (a negative-prompt embedding) is recommended: https://huggingface.co/embed/EasyNegative/tree/main
For example: nsfw,EasyNegative,nsfw,(low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2),bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms:1.2),2girl,lanjiao5,text
For the VAE, mounting the common anime VAE (Animate.vae) is recommended. The realistic-style vae-ft-mse-840000-ema-pruned also works, but occasionally the colors come out too heavy.
v1.5
Compared with 1.0 there is not much change; some heavy-color problems have been solved, so version 1.5's colors are lighter.
If the colors are not vivid enough, ClearVAE is recommended: ClearVAE | Stable Diffusion Checkpoint | Civitai
Some NSFW scenes were also improved, but not by much.
More anime-like style, better anatomy on clip skip 1.
This model is very simple, humble, and owes 14 months' rent. It works better with ControlNet OpenPose.
mdrga
This model is refined from mixpro, with a lot of specialization on my own dataset, which makes it very stable; at least the hands are not easy to break. I made two versions; the stable one is being released first for now.
This is a merged model that excels in 2.5D anime style.
It was created to test AutoMBW as well.
The VAE should be kl-f8-anime2.vae.
I generated the images above using the following settings:
Steps: 30 ~ 40,
Sampler: DPM++ SDE Karras,
CFG scale: 15,
Size: 512x704,
Clip skip: 2,
I usually use CFG Scale Fix when generating images.
Dynamic thresholding enabled: True,
Mimic scale: 7,
Threshold percentile: 95,
Mimic mode: Half Cosine Up,
Mimic scale minimum: 0,
CFG mode: Half Cosine Up,
CFG scale minimum: 3.5
This is not necessarily the best setting.
You may change it according to your preference.
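The idea behind the settings above can be sketched roughly: denoise with the high CFG scale (15), then clamp and rescale the result so its dynamic range matches what the lower mimic scale (7) would have produced. This is a simplified numpy illustration of the extension's core trick, not its actual per-step code (the Half Cosine Up modes additionally vary the scales over the course of sampling):

```python
import numpy as np

def dynamic_threshold_cfg(cond, uncond, cfg_scale=15.0,
                          mimic_scale=7.0, percentile=95.0):
    """Sketch of Dynamic Thresholding (CFG Scale Fix):
    guide strongly, then compress back to the mimic scale's range."""
    high = uncond + cfg_scale * (cond - uncond)    # strong guidance
    low = uncond + mimic_scale * (cond - uncond)   # range we want to mimic
    s_high = np.percentile(np.abs(high), percentile)
    s_low = np.percentile(np.abs(low), percentile)
    clamped = np.clip(high, -s_high, s_high)       # cut extreme values
    return clamped * (s_low / s_high)              # rescale into mimic range

rng = np.random.default_rng(0)
cond, uncond = rng.normal(size=128), rng.normal(size=128)
out = dynamic_threshold_cfg(cond, uncond)
```

The result keeps the strong guidance direction of CFG 15 while staying in the numeric range of CFG 7, which is what avoids the blown-out colors of plain high-CFG sampling.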
Combined with a very nice LoRA, the result may be even better
There are many other great LoRAs out there
Recommended negative embedding is also listed here
Illustration Artstyle - Mega Model 2.7. This is the first illustration-art-style release for Mega Model, so please enjoy. (It may do other styles, but that would need testing; this is a very specific version just for illustration style.) Buy asmrgaming a coffee: ko-fi.com/asmrgaming
For 2. you will need to make sure both the safetensors file and the YAML config file are downloaded and installed in the same directory, or you won't get proper results. This is because of the multi-language support built into the model, based on the alt-diffusion model: mostly Chinese, French, Spanish, and a few other language-specific models that can work with English and vice versa.
It works fine with the Easy Diffusion UI as well, which is the program I use to run it. AUTOMATIC1111 and Stable Diffusion WebUI are fine with the config file placed alongside the safetensors file; the config file is in a light-blue message on the upper-right side of this page on the download screen.
To find other models, search for "mega model" or search my username.
Credit to all the model makers, merges, and the community in general, without which this wouldn't be possible. I hope you all enjoy it, and feel free to merge it into your own models; I'm interested to see what people do with this. (If I listed the 1700 models that have been merged, there wouldn't be enough space and there would be complaints about clutter, so this is a general acknowledgement to all Civitai and Hugging Face model producers.)
No description, because if I describe it I'm limiting it.
There is no single correct way to use this model. I can't recommend anything, because maybe you can do it better than me :D
No LoRAs were used in the images n.n
Please share your images! I want to see them all!! <3 <3
A personal merge i've been using for a while.
Hassan
Realistic Vision
UBRP
Babes
Sinful AI
All merges are low-weighted but work great; I will be adding more training to improve clothing, etc.
Prompting is simple and effective.
Anime Style model derived from two of my favorite existing mixes:
I baked the pastelWaifuDiffusion VAE into this model, so using a separate VAE is not necessary.
I do, however, highly recommend using hires fix on your images; it gives them the same brilliant clarity shown in this model's sample gallery.
Example settings:
Steps: 50,
Sampler: Euler,
CFG scale: 6,
Size: 328x440,
Denoising strength: 0.7,
Clip skip: 2,
Hires upscale: 2,
Hires steps: 50,
Hires upscaler: Latent
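With these settings the 2x hires pass takes the 328x440 base render up to 656x880. A tiny helper for that arithmetic (illustrative only; the WebUI's exact rounding may differ), keeping dimensions a multiple of 8 as the latent space expects:

```python
def hires_size(width, height, scale, multiple=8):
    """Target resolution for a hires-fix pass, floored to the nearest
    multiple of 8 (Stable Diffusion latents work in 8-px blocks)."""
    return (int(width * scale) // multiple * multiple,
            int(height * scale) // multiple * multiple)

print(hires_size(328, 440, 2))    # -> (656, 880)
print(hires_size(512, 704, 1.5))  # -> (768, 1056)
```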
This is a ~2 GB DreamBooth model. Only Hammann appears, and only before her Retrofit!!!
It is not a LoRA or LyCORIS; it goes in the same place as a checkpoint. NSFW illustrations are difficult with prompts alone; use merges, LoRA, etc.
hammann_\(azur_lane\),1girl
The trigger words are the same as for the LoRA and LoHa versions, so I will use the same explanation!
Basically, the higher the percentage, the stronger the influence.
"solo" and "white_background" are also 100% triggers, but you should use them depending on the situation, because "white_background" looks strange when the background is the ocean, doesn't it?
Q. Do I need to combine Hammann's DB with LoRA?
No, but combining the Hammann DB with a concept LoRA works nicely.
Q. What is version 3e6?
It's a learning rate of 3e-6. Honestly, users don't need to worry about it. In my environment there are many similar DBs and it gets confusing, so I use the learning rate as the version name.
Q. Is it necessary to divide into DB, LoRA, and LoHa?
Since the amount of training data is very small, I had to make up for it by branching 😭
Also, the LoRA and LoHa are not perfect. DB+LoRA is a good option for those who don't know "LoRA Block Weight". Of course, for advanced users, the LoHa may be enough!
Q. Ret...
No Retrofit Hammann! ! It's not in the dataset!
Q. Where is the Retrofit version?
Not now 😭 I might make one someday, I might not make it 😑
hammann_\(azur_lane\)
1girl
A short story: I personally like random prompts and wildcards. When one of the results was NSFW with bad genitals, in a drunken mood I just did something random in SuperMerger. This is the story of a model that simply happened by accident.
The idea was that a generic model could be a better foundation, so I could fit in more of my random prompting ideas and the model could be more creative. Now it seems creative in both hentai and generic anime :)
I've tested it with song lyrics, massive wildcard prompts, styles, and booru tags, and so far I've only hit a few walls. How far can you take it?
WARNINGS:
This model is VERY thirsty, and will steer towards NSFW if you let it.
This model does not do well at low steps
Some things require thinking outside the box, it's based on a generic model, so go ham on crazy ideas.
Use prompt editing for the best results!
I'm bad at upscaling images, but this model does pretty well with large initial generations
low quality, lowres, error, fault, crude, blurry, undefined, jagged, spiky, ugly hands, extra arm, extra hand, split arm, missing finger, extra finger, three fingers, four fingers, six fingers, disfigured, unclear, indistinct, merged fingers, bad anatomy, misplaced hand, misplaced foot, sex, nsfw, leotard, slutty, signature, watermark, text, artist, distorted,sex, nsfw, leotard, slutty,
Remove the sexual terms if you want to do NSFW generation. ("leotard" gets rid of all the navel suits.)
IMPORTANT USAGE GUIDE: bad-artist (the base version) is the only negative embedding I can comfortably recommend using for this model, and not even that is a requirement. To liven up color and lighting (when not using a negative embedding), I suggest putting desaturated and pixelated in your negative prompt with 2 or 3 units of down-weighting (that means [[[desaturated]]], [[[pixelated]]] in A1111 UI, or desaturated---, pixelated--- in InvokeAI). To lean output more towards an anime style rather than semi-realistic, include anime in your prompt. Put realistic in your negative prompt to further lean towards 2D, or by the front of your positive prompt to lean more 3D.
Example images were generated in InvokeAI, so you'll have to use your UI's weighting syntax (which means the + and - in my prompts likely won't do anything for you unless you're using InvokeAI).
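For readers converting between syntaxes: in the A1111 UI each layer of parentheses multiplies a token's attention by 1.1 and each layer of square brackets divides it by 1.1 (an explicit (token:1.4) sets the weight directly); the description above equates n minus signs in InvokeAI with n bracket levels. A toy calculator for that rule (illustrative only, not A1111's real parser):

```python
def a1111_weight(token: str) -> float:
    """Effective attention weight under A1111's prompt syntax:
    each '(' multiplies by 1.1, each '[' divides by 1.1, and an
    explicit '(word:1.4)' overrides with the given number."""
    up = 0
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]
        up += 1
    down = 0
    while token.startswith("[") and token.endswith("]"):
        token = token[1:-1]
        down += 1
    if ":" in token:
        return float(token.split(":")[1])
    return round(1.1 ** (up - down), 5)

print(a1111_weight("((word))"))           # -> 1.21
print(a1111_weight("[[[desaturated]]]"))  # -> 0.75131
```

This is also why oddly precise weights like 1.331 or 1.61051 show up in shared prompts: they are just 1.1^3 and 1.1^5.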
The goal of this model was to anchor 526Mix's whimsical and artistic personality into anime. 526Mix can do a pretty cool anime style (and other 2D styles), but it can be a bit unreliable at times. This simple mix increases the reliability, depth, and nuance in 526Mix's anime and anime-esque capabilities.
This model will be great for:
People who want an anime aesthetic that is different from other SD 1.5 models
People who are more distant, casual enjoyers of anime, who might find this model more welcoming than one that's heavily leaned to modern anime art only
People who like more classic anime styles, à la Voltron and Ghost in the Shell
People who like generating in 2D and semi-real styles and liked 526Mix, who will probably enjoy the better lighting and touch of wackiness in this version
I'm not big on anime myself, so this was more of an experiment. If you liked 526Mix's variety and personality (and cozy interior design), and also like anime, you'll probably have fun with this.
This model is a straightforward mix of 526Mix-V1.3.5, and Nerfgun3's Macaron Mix, the latter having had the noise offset added at 0.70 multiplier. This is done with a weighted sum at a 0.3 multiplier with Macaron Mix. I always like Nerfgun3's art and embeddings, so I felt I could trust that model to be fairly in line with my own creative desires and expectations.
As always, I suggest going to the source models for the full experience, and Nerfgun3's Macaron Mix and Newartmodel4 aren't exceptions here.
Example images were generated in Invoke AI with the model converted to Diffusers format, hires fix on (0.45 strength, works like img2img), and the sampler DDIM (unless listed otherwise). This means unless you use Invoke AI, you likely won't be able to recreate my images exactly. Just learn from the prompts and modify the weighting in prompts as needed for the UI you use (if you use the A1111 UI, any (plus sign)+ is equal to one set of parentheses).
This is anime-based semi-realistic mix.
Model is fully compatible with all LoRA that I made and will make in the future.
The model itself can provide muted tones, high detail, and ease of use.
My regular settings:
Clip Skip 2
VAE: kl-f8-anime2
Sampler: DPM++ 2M Karras or DPM++ 2M alt Karras
ENSD: 31337
Hires.fix: x1.5 and higher
Upscaler:
Option 1: 4x UltraSharp, 4x Valar, or anything similar (denoising strength 0.35-0.5)
Option 2: Latent (any) or Lanczos (denoising strength 0.5-0.6)
But you may feel free to experiment.
Please note that this model is ̶v̶e̶r̶y̶ horny, so I strongly recommend tagging some clothes (pants or something) and using these tags in the negative prompt to avoid NSFW: nsfw, naked, nude
I also highly recommend using the tag realistic at the end of the prompt for better details.
1. easynegative
2. bad-hands-5
3. badhandv4
All the content I make is and will be completely free on CivitAI
But if you want to support me with a coin, then you can do it here.
ISO Mix v2.0 is now available!!!
What's new?
We updated the faces, baked in a LoRA for more contrast, and raised the output image quality through merging; the model has become more flexible and higher-contrast. Read more in the update description.
This mix is made from other anime- and realism-based models, which ended up producing something between a 3D render and art.
The model is very flexible, capable of both NSFW and fully censored work. Everything is limited only by your imagination. You can get good results with short prompts as well as extensive ones.
Who would be suited for this model
- NSFW artists
- Lovers
- Designers
A comfortable sampler is DPM++ 2M Karras, steps 30 to 50, CFG 5.5 to 9.5; for best results use hires fix with the upscaler R-ESRGAN 4x+ or 4X_Valar_v1; both work well.
The model does not require a VAE since one is already built in; it gives good results with CLIP skip 1-3 depending on your request.
It is comfortable with many LoRAs at 0.5-1 weight.
Enjoy, and write to me if you find any mistakes.
rev or revision: The concept of how the model generates images is likely to change as I see fit.
Animated: The model has the ability to create 2.5D like image generations. This model is a checkpoint merge, meaning it is a product of other models to create a product that derives from the originals.
Kind of generations:
Fantasy
Anime
semi-realistic
decent Landscape
LoRA friendly
It works best on these resolution dimensions:
512x512
512x768
768x512
Order matters - words near the front of your prompt are weighted more heavily than the things in the back of your prompt.
Prompt order - content type > description > style > composition
This model likes ((best quality)), ((masterpiece)), (detailed) at the beginning of the prompt if you want the anime-2.5D look
This model does great on PORTRAITS
Negative Prompt Embeddings:
Make use of weights in negative prompts (i.e (worst quality, low quality:1.4))
Olivio Sarikas - Why Is EVERYONE Using This Model?! - Rev Animated for Stable Diffusion / A1111
Olivio Sarikas - ULTRA SHARP Upscale! - Don't miss this Method!!! / A1111 - NEW Model
Do not sell this model on any website without permissions from creator (me)
Credit me if you use my model in your own merges
I do not authorize this model to be used on generative services
I have given you a plentiful amount of information and sources in this section; I will not answer redundant questions that are already answered here.
if you would like to support me:
https://ko-fi.com/s6yx0
The base model is TMND, which replaces the original model's painting style while retaining its color and detail handling. This is the first version, so there may be flaws; feedback is welcome. I strongly recommend enabling hires fix when generating images. The recommended sampler is DDIM with around 50 sampling steps (this sampler loses detail otherwise), with a denoising range of 0.2-0.3. Upscaling scheme: R-ESRGAN 4x+ Anime6B, 20 redraw sampling steps. Follow-up enlargement with the SD Upscale plug-in is recommended only on good hardware.
advice
e621
Thank you for all Reviews, Great Model/Lora Creator, and Prompt Crafter!!!
All preview images were made before pruning, but this should not reduce the quality of generated images!
Form for requests.
I also sometimes run polls on my Patreon for the next LoRA (no payment required).
With this mix I wanted to have a model that is mainly focused on anime style images.
I wanted a model that stays close to my other mix in terms of style while improving a bit on the RPG side of things. It still struggles quite a bit with non-humanoids, but I've managed to get some really pretty stuff out of it :)
My settings:
Using Hires.fix is very recommended!!!
Sampler: Euler a, Steps: 40, CFG scale: 5 / DPM++ 2M Karras, Steps: 30, CFG scale: 5
Negative prompt: (worst quality, low quality:1.4), EasyNegative , bad_prompt_version2
Hires.fix: Latent (nearest-exact)/4x-UltraSharp, Hires steps same as sampling steps, Denoising strength ~0.5, Upscale by 1.5
Used embeddings for sample pictures:
bad_prompt_version2
Used Lora for the three Watame pictures:
If you have any feedback or questions just write in the comments or hit me up on reddit or discord.
This model tries to replicate the landscape and scenery style of Makoto Shinkai's movies
skmkt
This model is biased and tricky, but it can do more than you think
Too young? Try this
////////////////////////////////////////////////////////////////////////////////////
Change Logs
A sensitive issue was found in BY; all models are removed except A2
Significantly improved light and color in some situations
More appropriate "skinny" bodies
New hairstyles; eliminated hairstyle favoritism
Now maintains the intended body shape over 95% of the time
NSFW improvements
Improved embedding compatibility (maybe)
Overall, the model listens better
Corrected noise offset
Now generates men and landscapes better (who cares?)
////////////////////////////////////////////////////////////////////////////////////
Known issues - R4
Ignores hair color; colors get mixed
Backgrounds render jagged
Hand and pose instability
Harder to apply dark skin tones
Most clothes stay in black, white, and green
////////////////////////////////////////////////////////////////////////////////////
Recommended Settings
All models include VAE
Clip skip:2
Hi-Res Fix: R-ESRGAN 4x+ (Anime6B)/Denoising strength: 0.4
Sampler: DPM++ 2M Karras / DPM++ SDE Karras
CFG : 8+/Steps : 25+
Prompts
(best quality, masterpiece:1.2), intricate detail
Neg : (worst quality, low quality:1.2)
Or whatever you want! Explore the unknown!
EasyNegative, DeepNegative, or badhand embeddings can be used.
Using them also reduces diversity, so use them appropriately!
////////////////////////////////////////////////////////////////////////////////////
Do you like my work?
A cup of coffee would be nice! 😉
////////////////////////////////////////////////////////////////////////////////////
A mix of merged models, my Frankenstein. Hopefully it isn't a complete mess (I'm still new to this). Hands are kinda ew, so use at your own risk.
This model is a work in progress; I will try to update it from time to time.
I made this by merging a realistic model I was working on into a bunch of anime models, and I really enjoyed the results, so I thought I would share.
This model has quite a few anime models in it… too many to list. (If I remember, I will list which ones.)
For a more realistic look be sure to check out my other model - https://civitai.com/models/36538/noosphere
I recommend using:
https://civitai.com/models/7808/easynegative
(The VAE I use; necessary if you want vibrant images and not washed-out ones.)
https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt
Suggested negative tags: (worst quality, low quality, normal quality:1.7), lowres, normal quality, ((monochrome)), ((grayscale)), ((text, font, logo, copyright, watermark:1.6))
DPM++ 2M Karras works well (sampling steps: 25-40).
Hires fix (steps: 15, denoising strength: 0.1-0.5)
If faces still look off, don't forget to use inpainting to fix them and other details.
This mix aims to stay photorealistic while remaining compatible with most anime LoRAs.
It is mixed with the following models:
https://civitai.com/models/6424/chilloutmix
https://civitai.com/models/24383/grapefruit-hentai-model
https://civitai.com/models/20981/3moon-nireal
https://civitai.com/models/22402/fantasticmixreal
v1.1 added
https://civitai.com/models/22922/lyriel
https://civitai.com/models/9291/sunshinemixsunlightmix
Suggested steps: 20~35
Suggested CFG: 4~9
Negative prompt embeddings:
https://civitai.com/models/30452/bad-hands
https://civitai.com/models/7808/easynegative
Some example images use the extension Dynamic Thresholding (CFG Scale Fix):
https://github.com/mcmonkeyprojects/sd-dynamic-thresholding
Comments and images are appreciated.
Depending on what you need, several models are included. If you want the best pretty girl portraits, you must have the SD 1.5 models for sure. If you need more details and better coherency on non-human objects, or you like to have hands with only 5 fingers, use the SD 2.1 models. Mix and match and use inpainting to touch up little mistakes.
Detailed tutorial on how I get the results in the preview images.
Check here if you're having trouble getting the same results.
You can prompt any style you need with these models, but the default aesthetic is listed for each of the models in this handy list.
Different models available, check the blue tabs above the images up top:
Stable Diffusion 1.5 (512) versions:
v2: Stronger painterly style. Higher contrast and sharpness. More RPG knowledge.
V2 offset: Noise offset added, giving more contrast and bringing the model back toward photoreal.
V2 Art: Trained model. Very artsy; strongest painterly style. Fewer details and bigger brush strokes to mimic pre-AI digital painting style.
V2 inpaint: Inpainting version of V2 that's good for outpainting.
V1: Smoother renders with the least painterly effect.
V1 inpaint: Inpainting version of V1 that's good for outpainting.
Stable Diffusion 2.1 (768) versions:
SD 2.1 768 V1: Strong painterly style, very coherent with hands and objects. Higher native resolution and detail. Not good for nudity.
Not as effective, but here's the LoRA if you like to use that instead:
A to Zovya RPG Artist's Tools LoRA
Do you have requests? I've been putting in many more hours lately with this. That's my problem, not yours. But if you'd like to tip me, buy me a beer. Beer encourages me to ignore work and make AI models instead. Tip and make a request. I'll give it a shot if I can. Here at Ko-Fi
zrpgstyle
Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee ☕
A live demo is available on HuggingFace (CPU is slow but free).
Available on the following websites with GPU acceleration:
MY MODELS WILL ALWAYS BE FREE.
NOTES
Version 5 is the best at photorealism and has noise offset.
Version 4 is much better with anime (it can do anime with no LoRA) and booru tags. It might be harder to control if you're used to caption-style prompts, so you might still want to use version 3.31.
V4 is also better with eyes at lower resolutions. Overall it is like a "fix" of V3 and shouldn't be too different. Stay tuned for V5!
Results of version 3.32 "clip fix" will vary from the examples (produced on 3.31, which I personally prefer).
I get no money from any generative service, but you can buy me a coffee.
You should use 3.32 for mixing, so the clip error doesn't spread.
Inpainting models are only for inpaint and outpaint, not txt2img or mixing.
After a lot of tests I'm finally releasing my mix. This started as a model to make good portraits that do not look like cg or photos with heavy filters, but more like actual paintings. The result is a model capable of doing portraits like I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks to make anime style images.
I hope you'll enjoy it as much as I do.
Diffuser weights (courtesy of /u/Different-Bet-1686):
https://huggingface.co/Lykon/DreamShaper
Official HF repository: https://huggingface.co/Lykon/DreamShaper
Suggested settings:
- I had CLIP skip 2 on some pics (all of them for version 4)
- I had ENSD: 31337 for basically all of them
- All of them had highres.fix or img2img at higher resolution.
- I don't use restore faces, as it washes out the painting effect
- Version 4 requires no LoRA for anime style. For version 3 I suggest to use one of these LoRA networks at 0.35 weight:
-- https://civitai.com/models/4219 (the girls with glasses, or if it says vanostyle)
-- https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (if it says mksk style)
-- https://civitai.com/models/4982/anime-screencap-style-lora (not used for any example but works great)
NOTE: if you find that the prompts below look "familiar" it's because I've taken them from other reviews and models here, to basically compare my model to other examples. Credits to the original authors. Thanks for the benchmark.
This model is trained to draw nude handsome men. Nothing more, nothing less.
Anime style version of my other mix, CarDos Animated.
Very versatile, can do all sorts of different generations, not just cute anime girls. But it does cute anime girls exceptionally well.
For v1: Using vae-ft-ema-560000-ema-pruned as the VAE. CLIP 1.
For v2: Using vae-ft-mse-840000-ema-pruned as the VAE. CLIP 2.
Negative embeddings used in almost all images: easynegative, bad-hands-5
A merge that is simply a better NovelAI model.
Based on futaall7.2 and animefull-latest, with some Basil added to fix hands, etc.
Support me on bilibili: zwh20081's personal space_哔哩哔哩_bilibili
My QQ group: 644797506; email: zwh20081@qq.com
I accept making models as a business (email me).
By the way, I'm 15 qwq
Trained on 260 images from niji.
This model gives you the ability to create whatever you want.
Attention: the model requires a VAE from Stability AI: vae-ft-ema-560000,
or you can use the Stability AI VAE vae-ft-mse-840000.
NSFW Art Designers
Character designers
Professional prompters
Art designers
I hope you enjoy the results of my efforts.
Thank you very much, XpucT. My first knowledge came from this man.
This model is based on his Deliberate V2 model.
A lazy "salted fish" release; too lazy to write a proper introduction.
In short, this is a model with the painting style shown in the pictures. It is a merge of a great many models.
I deliberately kept part of a Chinese-horror LoRA in the model while processing it,
so you know I added Chinese horror (lol).
So when stacking that LoRA, don't set its weight too high (about 0.45).
Enabling hires fix is recommended; even a little gives better results.
Recommended base parameters (if you want a blurred background, delete the negative "(blurry:2.0)"):
Negative prompt: (bad and mutated hands:1.3), (worst quality:2.0), (low quality:2.0), (blurry:2.0),
Steps: 10, Sampler: DPM++ 2S a Karras, CFG scale: 7, Size: 512x640, Clip skip: 2
Recommended hires-fix parameters (a magnification of 1.2 or more works better):
R-ESRGAN 4x+, steps: 10, denoising: 0.5
ღ( ´・ᴗ・` ) heart-to-heart
Attention: You need a VAE to use this model. I recommend you try this one out.
This model involves Dreamlike Photoreal, so here is the license that you must abide by. Also check out the license on the ChilloutMix page regarding commercial use; ChilloutMix is part of this model.
This is Deimos Mix, a photorealistic model built on the same mental framework as Ares Mix and Mars Mix (and in fact it uses the same photoreal core as those two; I still have not found a better core than that one). This time around, I included a model with noise offset (iComix), which greatly improved lighting at the cost of occasionally smoother skin. This model has a larger focus on fantasy and sci-fi photorealistic images, and does very well in that regard.
A note to users, I am uploading the anime core as a separate version because I think that one is a very competent anime model (and it allows people to try block merging that in other photorealistic models to see what comes out). Please do not be alarmed by this. There are more details about the merge on each version's page.
NEW: TEST IT OUT ON HUGGING FACE:https://huggingface.co/spaces/Duskfallcrew/Osenayan_Mix
"OH-SEH-NA-YAHN" - An unknown race beyond space and time.
Or just a bunch of cat people living in Duskfall's headspace; either way we'd find a way of shoving our truths into our AI work anyway.
Osea is a place beyond the stars, and it's likely unknown because, well, sadly it no longer exists, if it ever did. If you've visited the Earth & Dusk discord, then you've likely met someone in our system who dares to admit to having this heritage.
Now, given that we are a Dissociative Identity Disorder system, there is a debate on whether this ever existed: is it truth or is it fiction, is it truth or is it the disorder?
Meh, whatever we don't care - this is just a freaking merge model XD it's got nothing to do with Osea literally, just thought we'd tip our hats to some of our system truths and make this model more about us LOLOLOL.
WARNING: LORA AND LYCORIS ARE REALLY REALLY FINICKY WITH THIS MODEL, and we're very sorry - It gets hungry, and uhhh - Roi'adan V'anzey is too lazy to make Na'giza donuts to keep it away from this universe's set of wonderful beings LOLOLOL.
Also PSST: this weirdly has a VERY gender-diverse focus by accident, so we tip our hats and wave our flags; sadly, no, it can't do programmer socks, but it's gender diverse in other ways!
Also this is the BASE TO EPIC MIX V4 - Vibrancy.
Which you can find MOST Of the links here: https://civitai.com/models/27096/epic-mix-anime-nsfw-support
(Bar the fact we forgot to nod our head on that one to our first model HEAD NOD, FART)
The Mix is as Follows:
DeepBoys 2d
Epic Mix Ultimate Anime (V4)
Duskfall AI https://civitai.com/models/5464/duskfall-ai (2nd ever duskfall trained model)
Sam Does Arts Ultramerge
Ani-meth (hehehehhehehe) - https://rentry.co/ncpt_fav_models
We stream a lot of our testing on twitch: https://www.twitch.tv/duskfallcrew
Any chance you can spare a coffee or three? https://ko-fi.com/DUSKFALLcrew
Request image gens via our pixiv: https://www.pixiv.net/en/users/70748346
Hang with us on discord: https://discord.gg/Da7s8d3KJ7
As for who's fronting when we made this merge AND who wrote the card: The dumbass himself: Matoya'iivi Na'agora - yep, there's a lycoris made of me, wtf did you think we were doing training those JUST to make people boy thirsty? LOL.
aba aba
Suggested positive tags: masterpiece, best quality, ultra-detailed, illustration, Delicate and beautiful eyes, delicate and beautiful face, delicate and beautiful facial features, cos photo, small face, melon face, small nose, high nose bridge, thin lips, Cherry Small Mouth, pointed nose,
Suggested negative tags: (worst quality, low quality: 1.4), watermark, sexual intercourse, nsfw, uncle face, garlic nose, long face, Big nose
This is a model related to the "Challenge of the WeekEnd" contest on Stable Diffusion discord.
This version is trained on dedicated tokens per user, and doesn't work like you may be used to.
I try to make a model out of all the submissions, so people can continue enjoying the theme after the event and see a little of their designs in other people's creations. The token stays "SDArt" and I keep the learning on the low side, so that it doesn't just replicate the submissions.
The pictures were tagged using the token "SDArt", and an arbitrary token given to the user that submitted it.
The dataset is available below and is composed of 39 pictures.
It was a stupid idea, I know that now. I thought I was brave, daring, that I could handle anything. But nothing could have prepared me for what I found on the other side of the veil. I had always been fascinated by the unknown and the supernatural, and finally, found a ritual that would grant me passage.
As I recited the incantation, my mind was wrenched apart, tearing at the senses as I felt an indescribable sense of disorientation. The world was drained of color, shrouded in a thick, eerie mist that clung to my skin. I couldn't see anything moving but there was a strange silence that hung in the air. It was eerie.. unsettling.. like walking through a graveyard.
And then, I saw it.
Beyond the veil was a terror beyond anything I had ever imagined. A mass of writhing tendrils, some thick and muscular, others thin and sinuous that moved with a strange, fluid grace. Its eyes were pure black within a world of gray - drawing me in like a magnet.
I tried to flee, but my feet seemed to be rooted to the spot. It was like the entity had some kind of hold over me. I could feel its presence in my mind, trying to rend it apart. It whispered to me the secrets of the universe, knowledge meant for no human to possess.
And then, I saw black.
When I opened my eyes again, I was back in my own world. But the memory of the gray world and the creature haunted me. I had looked into the abyss of the unknown, and it had looked back.
COW #2 - Cosmic Horrors
The veil between worlds is wearing thin.. and the monsters await.
Prepare yourself for an encounter with a terrifying monster that is beyond human comprehension. Beware, for these eldritch entities exist outside the realms of your reality, and to face them is to stare into the abyss of the unknown.
Challenges:
Your image must be mostly grayscale.
Your artwork must feature an eldritch entity or monster.
Create mist/fog within the composition to add an element of suspense and mystery.
SDArt
dyce
bnp
keel
fcu
cous
aved
pfa
CPC
away
elis
and
Monday
loeb
bsp
psst
irgc
mds
kts
byes
dany
mss
guin
mgt
mwf
crit
mlas
ish
pol
you see
dds
httr
pte
oxi
nery
nips
nlwx
nrg
ofi
was
SDArt
This is a fusion model suited to drawing anime styles. You can construct scenes similar to a galgame.
For the negative prompt, use: EasyNegative, nsfw, (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2), bad composition, inaccurate eyes, extra digit, fewer digits, (extra arms:1.2)
EasyNegative is an embedding: https://huggingface.co/embed/EasyNegative/tree/main
Different negative tags will produce different painting styles, so please experiment yourself.
This is a finetune model based on grapefruit checkpoint.
I used around 1k images from gelbooru by the artists Muririn and Kobuichi for finetuning. Initially I wanted to just train a LoRA or a standard Dreambooth, but neither achieved anything even remotely useful. If anyone has tips for finetuning SD models, I am all ears.
About the examples:
The first one (the hot spring) was created with simple prompts; I selected it because it also generated text windows just like a galgame. This can be negated with negative prompts.
The second one (Ningguang) used this LoRA. In my brief testing, every LoRA I tried oversaturated the image unless I lowered the LoRA weight, so I simply dialed it down to 0.5. Maybe there are better solutions.
The third one used the same prompt with grapefruit intro image.
I didn't really play around with the parameters to explore further. These are just some random samples that look okay-ish. Overall, I found the finetune model is more susceptible to distorted images such as missing limbs (hot spring example). I assume some merging with more stable models or more prompts are needed to mitigate these issues.
Lastly, the trigger word yuzu$style is not required, but it does emphasize the style a little more.
yuzu$style
In case someone wishes to use the model I'm using here, it can be obtained here. This model is specifically designed for realistic photography and similar applications. It supports the Indonesian language, but it may require some English supplementation.
Goal of this project
Realistic and Detailed models
General use for photography
Support Indonesian Languages
Recommended setting for this model
VAE IS RECOMMENDED :https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main
Sampler : DPM++ 2M Karras / Euler A
Steps: 30 (15 works, but quality suffers)
Hi-res fix (very recommended)
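As a sketch only, the recommendations above can be gathered into one settings table (plain Python, not any particular UI's API; the key names are illustrative):

```python
# Recommended settings from the card, collected in one place.
RECOMMENDED = {
    "vae": "sd-vae-ft-mse-original",            # Stability AI VAE linked above
    "samplers": ["DPM++ 2M Karras", "Euler a"],  # either recommended sampler
    "steps": 30,                                 # 15 works but is not as good
    "hires_fix": True,                           # very recommended
}

def generation_settings(steps: int = 30) -> dict:
    """Return a copy of the recommended settings with a chosen step count."""
    settings = dict(RECOMMENDED)
    settings["steps"] = steps
    return settings

print(generation_settings(15)["steps"])  # 15
```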
You can't use the model to deliberately produce nor share illegal or harmful outputs or content.
Hopefully useful, Thank you for trying & using this model <3
Please feel free to comment on this model below; any recommendations for this project are really appreciated!
Macarolus Diffusion:
V1 - Realistic Artwork
Please, leave a review so I understand what needs to be improved
Thank you for your attention
Cyborg Woman Prompts:
a cyborg woman
Negative prompt: UNREALS
Steps: 35, Sampler: Euler a, CFG scale: 5, Size: 768x1024
Iron Man Prompts:
marvel, tony stark, (iron man:1.25), (brutal man:1.25), (city background:1.25), (realistic:1.25), (photorealistic:1.25), perfectionism, open world, (complex details:1.25), (complex microdetails:1.25), (precise details:1.25), (precise background:1.25), high resolution, high quality, (bloom:0.25), (color correction:0.25), (hdr:0.25)
Negative prompt: UNREALS, (ugly anatomy:1.25), (ugly man:1.25), (disfigured:1.25), (deformed:1.25), (fake:1.25), text, watermark, photo frame, vignette, bokeh, depth of field, (blurred background:1.25), bluriness, flashes of light, (overexposure:1.25), high pass filter, sharpness, (contrast:1.25), noise filter, sepia filter, jpeg artifacts
Steps: 35, Sampler: Euler a, CFG scale: 6, Size: 768x1024
Cyborg Cat Prompts:
a (cyborg:1.25) cat, hdr, (intricate details:1.5), (hyperdetailed:1.5), cinematic shot, vignette, centered
Negative prompt: UNREALS, (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation, flowers, human, man, woman
Steps: 24, Sampler: Euler a, CFG scale: 8, Size: 768x1024
Mug Prompts:
(my mug got two lenses:1.25), (centered camera:1.25), (realistic:1.25), (photorealistic:1.25), perfectionism, (complex details:1.25), (complex microdetails:1.25), (precise details:1.25), (precise background:1.25), high resolution, high quality, (bloom:0.25), (color correction:0.25), (hdr:0.25)
Negative prompt: UNREALS, (ugly anatomy:1.25), (ugly man:1.25), (disfigured:1.25), (deformed:1.25), (fake:1.25), text, watermark, photo frame, vignette, bokeh, depth of field, (blurred background:1.25), bluriness, flashes of light, (overexposure:1.25), high pass filter, sharpness, (contrast:1.25), noise filter, sepia filter, jpeg artifacts
Steps: 35, Sampler: Euler a, CFG scale: 6, Size: 1024x768
Recommended Negative Prompts:
UNREALS: Textual Inversion
UNBADS: Textual Inversion
Social Media:
Discord: Artex
First of all, I would like to thank everyone who uses this checkpoint; it is my first one. All the sample images can be reproduced, and both SFW and NSFW pictures come out nicely. Since uploading it yesterday I have been experimenting with what the model can do. If you try it and find good prompts, please upload pictures and share them; that helps me see what this checkpoint is capable of and tune the next version of GhostMix. Thanks again!
Fractal Art (highly recommended; reliably produces good images)
(masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2), (1girl:1.3), (fractal art:1.3),
Realistic Style - Girl & Flower Field
((masterpiece, best quality)), 1girl, flower, solo, dress, holding, sky, cloud, hat, outdoors, bangs, bouquet, rose, expressionless, blush, pink hair, flower field, red flower, pink eyes, white dress, looking at viewer, midium hair, holding flower, small breasts, red rose, holding bouquet, sun hat, white headwear, depth of field,
Realistic Style - Living Room
Architectural digest photo of a maximalist green solar living room with lots of flowers and plants, golden light, hyperrealistic surrealism, award winning masterpiece with incredible details, epic stunning pink surrounding and round corners, big windows
Color Art
(flat color:1.3),(colorful:1.3),(masterpiece:1.2), best quality, masterpiece, original, extremely detailed wallpaper, looking at viewer,1girl,solo,floating colorful water
Valkyrie
(masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2), (1girl), (warrior queen armor, fur-lined cape, jeweled crown:1.2),serious
VAE: kl-f8-anime2 or vae-ft-mse-840000-ema-pruned (kl-f8-anime2 is recommended for animation style)
Textual Inversion:ng_deepnegative_v1_75t,easynegative,bad-hands-5
A simple light weight model trained to draw anthro fox males.
My first model I've decided to release. I made this out of respect for one of my favorite shows of all time, Star Trek: Deep Space Nine.
Please do not repost or share elsewhere without my permission. If you want to do merges or whatever, feel free, just please don't publicly share without asking or sell them. Also, please don't do anything creepy or gross with this model, and keep the NSFW to yourself (I haven't trained or tried this model that way).
Please DO share your results!
This was trained on the remastered shots from What We Left Behind, in January 2023 using TheLastBen's Fast-Dreambooth. Since then, Dreambooth training seems "broken" and I haven't been able to continue training this at the moment. If I find a fix, I plan to further train this on characters and species that could use refinement.
I also want to do a series of these models for the other shows, specifically VOY and TNG. If you're interested, I may end up sharing those in the future as early releases.
You really need to be a prompt engineer to push this model to its limits. All of my image results are not inpainted, did not use Controlnet/etc., and were not edited in any way. Layering in those will help you get even better results. Please look at my shared results, which include all of the prompts and negative prompts for a wide range of applications.
Here is my basic prompt. This is based on how I captioned the training data (starred are optional):
sdn, a [closeup/medium/wide] view of [character or object], [species], [gender*], [expression*], wearing [Bajoran/Starfleet/Cardassian] [Operations/Science/Command/Security] uniform*, [lighting descriptors (diffuse glow/contrast lighting/etc.)]*, [inside/outside], [location (operations/planet surface)]*, [background (blurry/viewport/panel/etc.)]*
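As an illustration only, the template can be assembled programmatically; starred (optional) slots are simply skipped when absent. The function name and example values below are made up, not from the card:

```python
def sdn_prompt(view, subject, species, *, gender=None, expression=None,
               uniform=None, lighting=None, setting="inside", location=None,
               background=None):
    """Assemble a prompt following the card's template; optional (starred)
    slots are skipped when not provided."""
    parts = [f"sdn, a {view} view of {subject}", species, gender, expression,
             f"wearing {uniform}" if uniform else None,
             lighting, setting, location, background]
    return ", ".join(p for p in parts if p)

print(sdn_prompt("closeup", "a station commander", "human",
                 uniform="Starfleet Command uniform", lighting="diffuse glow",
                 location="operations", background="blurry"))
```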
Use “sdn” as the initial keyword. Not technically necessary, and sometimes overweighs training appearance, but keeps coherence of concepts and characters better.
768 x 768 is by far better than other aspect ratios or sizes. I recommend doing all generations at 768 and then outpainting. It works fine with wider dimensions, such as 1024x768 or 768x1024 but the short edge should always be 768. Sorry if your card can’t handle it but I figured if I was going to train, I might as well have trained it on 768.
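A small helper for the sizing rule above: keep the short edge at 768 and scale the long edge to match, rounding to a multiple of 64 as a safe choice for SD resolutions (the rounding granularity is my assumption, not from the card):

```python
def fit_768(width: int, height: int) -> tuple[int, int]:
    """Scale a requested size so the short edge is exactly 768,
    rounding the long edge to the nearest multiple of 64."""
    short, long_edge = sorted((width, height))
    scaled = round(long_edge * 768 / short / 64) * 64
    return (768, scaled) if width <= height else (scaled, 768)

print(fit_768(512, 512))   # (768, 768)
print(fit_768(1024, 768))  # (1024, 768) -- already short-edge 768
```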
Use terms like “star trek” “ds9” or “screenshot” etc. to further push certain concepts.
CFG of 7 is good for general purpose. Higher for concepts that vary away from original references. Sometimes, things look a bit too intense and contrasty. Take CFG to 5-6 to get a bit of the original film glow back.
Sampling steps with Euler a, between 20-30 look good. I stick to 24/25 unless I want a bit more contrasty crisp look.
Negative prompts help push results towards more aesthetically pleasing generations on average. My personal go-to is:
bad anatomy, bad proportions, blurry, cloned face, deformed, disfigured, duplicate, extra arms, extra fingers, extra limbs, extra legs, fused fingers, gross proportions, long neck, malformed limbs, missing arms, missing legs, mutated hands, mutation, mutilated, morbid, out of frame, poorly drawn hands, poorly drawn face, too many fingers, ugly
Of course, sometimes you do want deformed or ugly results, so adjust as you need. “Blurry” reduces natural fuzziness of original training so also optional (I negative prompt “blurry” and positive prompt “diffuse glow” for example, to sharpen and keep effect)
Uniforms, specifically Starfleet uniforms, should be keyworded, such as “Bajoran Security Uniform” and “Starfleet Command/Science/Operations Uniform”. Starfleet uniforms don’t always come out in the correct colorways, but can be inpainted if necessary. I also recommend using slight prompting to push what you want, adding in “red black and grey” etc. for example. For starfleet uniforms, Worf’s sash will occasionally appear without prompting. Negative prompt “sash” may help.
Comm badges and pips can sometimes do best with inpainting. Comm badges may appear in duplicates.
My initial training overweighs Bajoran nose ridges in all species, but especially humans. Inpainting usually works well to get rid of them if necessary. Further training I did toned them down but they might be less weighted if I train even further on other species. Negative prompt “bajoran” can help.
Including actor names can really push characters, especially Miles, Ezri, Nog, Leeta, etc. and I recommend it for basically everyone but Avery Brooks since his training worked so well.
Adding “beautiful” and “handsome” etc. can make images look better, I recommend especially for Kira. “young” “30s” and more can push toward what you want.
Kira looks older for some reason. Use “youthful” “young” “30s” or “beautiful” in prompts, or “aged” “old” “50s” etc. in negative prompts.
Using “vedic” does somewhat produce the look, but it also skews generations toward real-life concepts of vedics. Using “Indian” as a negative prompt can help.
Even though I trained and re-trained on Kai Winn, she doesn’t really come out in generations.
Most Bajorans end up in Kira’s uniform. The security uniform sometimes needs “tan” or “beige” to pop out.
The model is trained on Garak, Gul Dukat, and other Cardassians but doesn’t seem to want to “separate” them and most of the results resemble Dukat. (I might further train on Garak and others)
I included some images of Dukat as a Pah-wraith in initial training and the model overweighs red eyes for Cardassians. If you want to, use “red eyes” as a negative prompt and it usually eliminates them but does alter generations slightly from the same seeds.
Even though my training had founders, it’s kind of Odo or no one else. Further training on the other founders would possibly help.
Odo ends up looking scruffy and rough on occasion, sometimes with lots of ridges, and sometimes extremely smooth and blurry. It’ll take a lot of generations to get him to look correct but as I show, it can be done. Definitely add "rene auberjonois" to your prompts.
Like Cardassians, Founders' eyes tend to look demonic. Undo it with "glowing eyes" or "evil" in negative prompts.
The model really doesn't understand the difference between the Ferengi, even though it was trained on Quark, Rom, Nog, Ishka, etc. Actors' names are necessary. Nog comes out best.
Use “uniform” in negative prompts for stronger “Ferengi” clothing.
Sisko is by far the best trained subject.
O’Brien needs a bit of extra tweaking, add in Colm Meaney and “full face” for a proper look.
Keiko doesn’t really come out even though she was in training data. I’ll further train her if I do train more.
Jake Sisko and Kasidy Yates were in the training data, but only Kasidy works, with the addition of "Penny Johnson Jerald".
Again, they were trained but definitely not quite enough. They only sort-of resemble them unfortunately.
Worf comes out great. Others, not so much. If I further train, I’ll expand my data to include a wider range of Klingons.
Use “sash” to get Worf’s sash to generate. It can also be generated/inpainted on other characters.
Worf’s forehead goes a bit wacky but inpaints easily. His nose also randomly turns red. I have no idea why.
It’s hard to get the spots to pop up. Inpainting is actually difficult with this too, but trying ((leopard spots on skin)) for example can help.
Jadzia and Ezri are both in the training but need actor names to really pop. Unfortunately actor names also lower spots.
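For reference, the doubled parentheses in prompts like ((leopard spots on skin)) use the A1111 WebUI's attention syntax, where each plain paren layer multiplies the token's weight by 1.1:

```python
def paren_weight(depth: int, base: float = 1.1) -> float:
    """Effective attention weight for a token wrapped in `depth` layers
    of plain parentheses in the A1111 WebUI (1.1 per layer)."""
    return round(base ** depth, 4)

print(paren_weight(2))  # ((leopard spots on skin)) -> 1.21
```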
I trained on multiple Vorta, but it just spits out humans with light skin and black hair. More training will be necessary here, unless you can prompt engineer them.
No training on Andorians, Benzites, Betazoids, Bolians, Orians, Vulcans, Romulans, etc. Some would pop out with other embeddings or keywords that the general training data knows.
sdn
"Chinkusha" 2.5D illustration
1. Puffy face
2. Flat nose
3. Wide-set eyes
Be careful when merging... This model believes that "birds are wolves without beaks" and "distillation equipment is the most beautiful girl".
this model includes
https://huggingface.co/nuigurumi/basil_mix
https://huggingface.co/unkounko/BalloonMix
https://huggingface.co/naclbit/trinart_stable_diffusion_v2
https://huggingface.co/Deltaadams/Hentai-Diffusion
https://huggingface.co/WarriorMama777/OrangeMixs
https://huggingface.co/syaimu/7th_Layer
https://huggingface.co/cafeai/cafe-instagram-sd-1-5-v6
https://huggingface.co/acheong08/f222
Please follow the respective licenses for reprocessing and secondary use.
realistic nose
saya \(saya no uta\)
okamysty
rotary_evaporator
Model trained with high quality street shots. Great for creating abstract street scenes with contrasting lighting.
The trigger word is shaded; it is best placed near the beginning of the prompt.
Example prompt: shaded street photo of a woman silhouette, closeup, black jeans, long hair, by midday, blue sky, paris street, film photo, flashlight, kodak portra 400
Negative:semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated, deformed, distorted, disfigured
shaded
This is a very obedient model.
It is based on animefull-latest, trained on my own hand-drawn work, and then merged with AnythingV3 and Stable Diffusion 1.5.
High-Res-Fix and Dynamic Thresholding
I would appreciate it if you left some feed back, and images.
This model's goal is to generate photo like images.
Inpainting version on HuggingFace
All images were generated using High-Res-Fix with the 4x-UltraSharp upscaler.
vae-ft-mse-840000-ema-pruned
detailed_skin
pores
moles
freckles
wet_skin
dark_skin
brown_skin
pink_skin
red_skin
blue_skin
green_skin
grey_skin
veins
veiny_skin
Released by the AI Architecture Research Lab (Bilibili): a photorealistic architectural rendering model that supports keywords for multiple scene scales.
[Use at a minimum resolution of 768x512; generation around 896x896 is recommended]
Aerial/bird's-eye views (not as good as eye-level perspective views) need to be used with ControlNet, and the ControlNet weight should not be too high. (Function trigger word example: 1 museum; style trigger word example: futuristic style)
Combining the relevant recommended keywords with the trigger words and examples is recommended.
Bilibili homepage: https://space.bilibili.com/2161614
WeChat public account: AI Architecture Research Office
Business cooperation via WeChat: ym2lhl
Follow us on YouTube: https://www.youtube.com/@fanxu912
a photo of a building
sunny
day view
night view
cloudy
rainy
dusk view
overcast
heavy fog
snowy
Found a look I liked while messing around with merges, so I figured I'd share it. Hope you enjoy!
Does both SFW and NSFW, though it can unintentionally veer towards the NSFW side without negative prompts.
My preferred settings:
CLIP skip of 1
ENSD of -1
HiRes Fix between 0.35-0.55
EasyNegative in the negative prompt
Released by the AI Architecture Research Lab (Bilibili): an advanced architectural rendering model that supports keywords for multiple scene scales; its grasp of styles and building functions is only average.
[Use at a minimum resolution of 768x512; generation around 896x896 is recommended]
Aerial/bird's-eye views (not as good as eye-level perspective views) need to be used with ControlNet, and the ControlNet weight should not be too high. (Function trigger word example: 1 museum; style trigger word example: futuristic style)
Combining the relevant recommended keywords with the trigger words and examples is recommended.
Bilibili homepage: https://space.bilibili.com/2161614
WeChat public account: AI Architecture Research Office
Business cooperation via WeChat: ym2lhl
Follow us on YouTube: https://www.youtube.com/@fanxu912
a photo of a building
sunny
nightfall
night view
cloudy
sunset
dusk
overcast
fog
snowy
rainy
with mist
This model is a merge of many models. I did not use any models from authors that prohibit the merging or redistribution of their models.
That's a lie. I totally did. But by not listing the model names, you will never know. Kek.
Introduction
This is a mix of Isopropanol Addiction Mix and Flat 2d Animerge.
Recommended Settings
These are the settings that I use personally. Feel free to experiment.
Sampler: Euler a
CFG Scale: 9
Steps: 45
Clip Skip: 2
Hires Upscale: Upscale by 2 using Latent at 0.5 denoising strength then an additional upscale of 2 using 4x-UltraSharp.
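The two-stage upscale above is easiest to see as plain size arithmetic (a sketch; the 0.5 denoising strength applies only to the first, latent stage):

```python
def upscale_chain(width: int, height: int, latent_scale: float = 2.0,
                  esrgan_scale: float = 2.0) -> list[tuple[int, int]]:
    """Sizes after each stage: base -> Latent 2x (at 0.5 denoise)
    -> 4x-UltraSharp 2x, as recommended above."""
    sizes = [(width, height)]
    for scale in (latent_scale, esrgan_scale):
        w, h = sizes[-1]
        sizes.append((int(w * scale), int(h * scale)))
    return sizes

print(upscale_chain(512, 768))  # [(512, 768), (1024, 1536), (2048, 3072)]
```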
Feel free to share what it creates in a review.
I was trying to create something that suits my taste.
Through trial and error, I think I achieved a look that I think is both cool and fitting to my style.
This style may not be everyone's cup of tea, but it resonates with me, and I wanted to share it with others. It is a blend of dystopian, post-apocalyptic, fantasy, and sci-fi fantasy elements.
This model is a work in progress; I will try to update it from time to time.
For a less realistic, more anime look, check out my other model: https://civitai.com/models/29578/jucyani666mix
This model is mixed with a ton of stuff, so I honestly do not remember everything, but the last model/LoRA I merged into it was Inzaniak's Graphic Novel style
(https://civitai.com/models/25317/graphic-novel-style ) -- It is an amazing Lora, thank you ♥
Recommended (to get similar results):
easynegative - https://civitai.com/models/7808/easynegative
vae - https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt
Suggested negative tags: (worst quality, low quality, normal quality:1.7), lowres, normal quality, ((monochrome)), ((grayscale)), ((text, font, logo, copyright, watermark:1.6))
DPM++ 2M Karras works well (sampling steps: 25-40)
Hiresfix (step:15, denoise strength: 0.1-0.5)
If faces still look off, don't forget to use inpainting/img2img to fix them, along with other details.
For sampling methods, use Euler a (best), NO (second best), or DPM++ 2M Karras
Step 1: download SAFETENSORS and VAE files.
Step 2: put the SAFETENSORS file under "stable-diffusion-webui\models\Stable-diffusion"
Step 3: put the VAE file under "stable-diffusion-webui\models\VAE"
Step 4: Done! Enjoy the model.
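Steps 2-3 can be scripted. A minimal sketch assuming a standard WebUI folder layout; the paths and file names are placeholders:

```python
import shutil
from pathlib import Path

def install(model_file: str, vae_file: str, webui: str) -> None:
    """Copy a downloaded .safetensors checkpoint and its VAE into the
    folders the WebUI scans (steps 2 and 3 above)."""
    root = Path(webui)
    for src, sub in ((model_file, "models/Stable-diffusion"),
                     (vae_file, "models/VAE")):
        dest = root / sub
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest / Path(src).name)

# install("model.safetensors", "model.vae.pt", r"C:\stable-diffusion-webui")
```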
-use a minimal negative prompt for best results
-use Euler A and 20-25 steps for best results
-use danbooru tags
-I used a clip skip of 2 (optional)
-I also used the upscaler Latent (nearest-exact) with a highres step of 20 and denoise of 0.5 to improve image quality and detail (optional)
DO NOT USE A DENOISING STRENGTH BELOW 0.45 FOR BEST RESULTS
EXAMPLE PROMPT
((Masterpiece)), (best quality), (1girl), red hair, beautiful red eyes, medium breasts, classroom, black glasses, school uniform
The VAE is not required; you can use any VAE you like. I have found that a VAE makes images more vibrant and crisp, but this is just from my testing.
For the VAE, I used the same one as Grapefruit.
I will try to improve and update this model by adding other images, although I'm not too familiar with SD and training models, so I will most likely stick to only merging models.
_______
- LoRAs have not been tested yet, but they should most likely work.
- Use the upscaler Latent (nearest-exact) for best results.
- I will try to update the model at least once per week.
- Image generation on the A1111 WebUI normally takes around ~10 seconds using a 3080 Ti with 12 gigabytes of VRAM.
Download and place the file under "stable-diffusion-webui\models\Stable-diffusion"
CLICK THE 2.4 TAB FOR THE PHOTOREALISM MODEL - IT WAS ASKED FOR AND I'VE DELIVERED (HOPEFULLY) - 2.5 (FULL SIZE ALSO - DREAMBOOTH1.CKPT ADDED), 16.5 GB. DON'T COMPLAIN ABOUT THE SIZE OF 2.5 - IT'S ONLY FOR THOSE WHO REALLY WANT A POWERFUL MODEL AND DON'T CARE ABOUT FILE SIZE
VERSION 2.3 HAS ARRIVED, PLEASE ENJOY (CONTROLNET BAKED IN FOR IMPROVED IMAGE GENERATION + KANDINSKY MODEL ADDED FOR QUALITY BOOST). For 2.3 you will need to make sure both the safetensors and YAML config file are downloaded and installed in the same directory, or you won't get the proper results. This is because of the multi-language support built into the model based on the alt-diffusion model - mostly for Chinese, French, Spanish, and a few other language-specific models that can work with English and vice versa.
Easy Diffusion can be downloaded here; they are currently looking for developers who can contribute code to merge LoRA files directly into main models in an easy way, something that should be possible but Automatic1111 doesn't have yet. You can add a comment on the pull request there: cmdr2/stable-diffusion-ui: Easiest 1-click way to install and use Stable Diffusion on your own computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. (github.com). It's easier to get into for newbies than other Stable Diffusion programs.
Currently sitting at 1390 model merges. I'm happy for other model users to use this model for merges; it is a very powerful beast that gets results.
Both fp16 versions can be found across the left tabs, where versions are now listed on civitai.com after their recent website update. There are fp32- and fp16-sized files (yes, they are big; please do not clutter up the comments about size - it is what it is, and if it's too much then just pass on the model).
Works fine with the Easy Diffusion UI as well, which is the program I use to run it. Automatic1111 and the Stable Diffusion WebUI are fine with the config file placed alongside the safetensors file - the config file will be in a light blue message on the upper right side of this page on the download screen.
To find other models search for "mega model" or search my username info.
Credit to all the model makers, merges, and the community in general, without which this wouldn't be possible. Hope you all enjoy it, and feel free to merge it into your own models as well - I'm interested to see what people do with this. (This is a general acknowledgement to all model producers here, because if I listed the 1700 models that have been merged there wouldn't be enough space and there would be complaints about clutter; so the above is a general acknowledgement to all Civitai and Hugging Face model producers.)
My username is u/ollobrains on Reddit; if you have any questions you can drop me a line there. Donations can be made here: Buy asmrgaming a Coffee. ko-fi.com/asmrgaming - Ko-fi ❤️ Where creators get support from fans through donations, memberships, shop sales and more! The original 'Buy Me a Coffee' Page.
None
Semi-realistic model for art in 2.5D style.
(not completed yet)
My goal was to mix a checkpoint that would be similar to AOM3, but at the same time different from it.
Based on ChilloutMix and Anything V5
Commercial use is prohibited, as other models that prohibit commercial use were used in its creation.
I recommend using negative embeddings
Prompt example:
(masterpiece,best quality),[[[[CG,wallpaper,HDR,high quality,high-definition,intricate details, cinematic, cold lighting, pastel]]]],1girl,
Example of a negative prompt:
(worst quality, low quality:1.3), (bad-hands-5 bad_prompt_version2 easynegative:0.6)
I like to use the DPM++ 2M Karras sampler: about 20 steps for previews and over 30 steps for the final image.
I also recommend using Hires. fix
I prefer Latent (nearest-exact) with a denoising strength of 0.5-0.6.
This is a model trained on collage images.
Use the activation token "collage style" in your prompt (I recommend placing it at the start).
I have the most fun with this model when I use simple prompts and let the model go crazy. If you want a model that strictly adheres to your prompt, this isn't that.
Trained from SD 1.5 with VAE.
SAFETENSORS DOWNLOAD LINK (huggingface)
Don't ask me to make it a LoRA, you have my permission to do so yourself and share it.
collage style
V 1.0 test model (using 30 training images)
VAE: stabilityai/sd-vae-ft-mse-original at main (huggingface.co)
Upscaler: JingyunLiang/SwinIR: SwinIR: Image Restoration Using Swin Transformer (official repository) (github.com)
smitsv
UPDATE: safetensors (float16) and pruned version files for PastelMixAlike
New Version: PastelMixSam
This version has the SamDoesArts LoRA embedded in it (still needs the trigger word).
NOTE: If you're using PNG info from my old samples, delete the old hash from "Override settings", as it disables the embedded LoRA and uses only the model.
However, the regular method of applying the base LoRA on top of my model is still recommended, as it gives slightly better results (a bit hard to notice though) and is less complicated.
PastelMixAlike
Just my fine-tuned version of PastelMix
No VAE is needed.
Clip skip = 2 and "quantization in K samplers" enabled.
Several LoRAs were used to test compatibility:
SamDoesArts (Sam Yang) Style LoRA as the Base Lora:
https://civitai.com/models/6638/samdoesarts-sam-yang-style-lora
Compatible with these LoRAs (needs some prompt tuning):
Recommended settings are included within the sample images.
Currently, there are purple artifacts affecting images. If anybody has a fix, please comment down below.
sam yang
If you like my work, consider buying me a coffee ☕
V4.1 (experimental):
removed rev_animated
added realistic vision
fixed the model
Recommendation: clip skip 1 (clip skip 2 sometimes generates weird images)
2:3 aspect ratio (512x768 / 768x512) or 1:1 (512x512)
DPM++ 2M CFG 5-7
Prompts that I always add: award winning photography, Bokeh, Depth of Field, HDR, bloom, Chromatic Aberration, Photorealistic, extremely detailed, trending on artstation, trending on CGsociety, Intricate, High Detail, dramatic, art by midjourney, masterpiece, best quality, high quality, extremely detailed CG unity 8k wallpaper
V4.0:
added deliberate 2.0
added rev animated
fixed hands with basil_mix
V3.5: ADDED Pastel mix
https://civitai.com/models/5414/pastel-mix-stylized-anime-model
V3: ADDED AnythingV3 and fixed the model: https://civitai.com/models/66/anything-v3
RECOMMENDED VAE'S
kl-f8-anime2
Anything-V3.0
RECOMMENDED LORA'S FOR BETTER RESULTS
0.8 weight
V2: ADDED Deliberate: https://civitai.com/models/4823/deliberate
Roboetics: https://civitai.com/models/3738/roboetics-mix
Inkpunk: https://civitai.com/models/1087/inkpunk-diffusion
DreamShaper: https://civitai.com/models/4384/dreamshaper
ChromaV5: https://huggingface.co/SomaMythos/ChromaV5
H&A's 3DKX 1.1: https://civitai.com/models/2504/handas-3dkx-11
Analog diffusion: https://civitai.com/models/1265/analog-diffusion
The Ally's Mix: https://civitai.com/models/1202/the-allys-mix
Tokens: Analog style, chromav5, nvinkpunk
Chromav5 Keywords: Chromatic Aberration; Geometric Shapes; Bokeh; Depth of Field; Photorealistic; Cosmic; Detailed; Bloom; HDR
One of the models I merged that I think turned out okay.
This model is combined from anime models and semi-realistic models, but mostly anime models. It can create a semi-realistic style, but can also create anime or cartoon art styles.
Thank you so much for the feedback and examples of your work! It's very motivating.
Do you like what I do? Feel free to buy me a coffee ☕
My models:
The model is trained on fantasy battlemaps, so it works the best for medieval fantasy setting but not only limited to this.
You need to use a width and height of at least 1024, but you can make them larger and that works perfectly, just producing a bigger battlemap.
You can assume battlemap dimensions of approximately 24x24 squares (120x120 feet) for a standard generation; for example:
1024x1024 generation is 24x24 battlemap
1536x1024 generation is 36x24 battlemap
2048x1024 generation is 48x24 battlemap
And so on.
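The pixel-to-grid rule above can be expressed as a small helper; the 5-feet-per-square conversion is the usual tabletop convention, and the function names here are my own:

```python
# Rule of thumb from above: 1024 px ≈ 24 grid squares.
PX_PER_SQUARE = 1024 / 24

def battlemap_squares(width_px: int, height_px: int) -> tuple:
    """Approximate battlemap size in grid squares for a given generation."""
    return round(width_px / PX_PER_SQUARE), round(height_px / PX_PER_SQUARE)

def battlemap_feet(width_px: int, height_px: int) -> tuple:
    """The same size in feet (tabletop convention: 5 ft per square)."""
    w, h = battlemap_squares(width_px, height_px)
    return w * 5, h * 5

# 1024x1024 → 24x24 squares (120x120 ft), 1536x1024 → 36x24, 2048x1024 → 48x24
```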
Also, the square grid is sometimes visible, and you can rely on it for scale.
The main trigger word is "Battlemap". You can try anything after this word, but you will get the best results generating something in a medieval fantasy setting.
Two secondary trigger words are "Dungeon" and "Outdoor". Use the first for closed spaces, usually underground (e.g. dungeons, mines, caves, castles, labyrinths, etc.), and the second for open spaces (e.g. forest, city, sea).
Link to huggingface: https://huggingface.co/Zapper/battlemap-1024
battlemap
dungeon
outdoor
4 days of hard work
Calibrated, configured, and adjusted to produce the final file that I hope you will enjoy.
All of the images use my personal data bank, put together in Photoshop.
I provide this file
but I refuse to allow anyone to claim my work as their own or to make money from it.
I give you full permission to use, combine, modify or merge them.
Any resemblance to a real person is entirely unintentional.
Have fun
I provide some examples of images made with my baby ^^
Merged from Anything v4.5, Minika Mix, Night Sky, and Pastel-mix
Common Negative:
EasyNegative, (worst quality, low quality, normal quality:1.4), lowres, skin spots, acnes, skin blemishes, age spot, glans, extra fingers, fewer fingers, strange fingers, ((bad hand)), Hand grip, (lean), Extra ears, (Four ears), Strange eyes, (three arms), Many hands, (Many arms), ((watermarking))
All sample images use Hires. fix.
Please check the prompt words in the example carefully.
Welcome to the AI Painting Model Museum QQ Group
①Group: 791222249
②Group: 599962550
An AI virtual model trained on Jzzyeonz as the prototype. All pictures appearing in this account are generated by the AI software Stable Diffusion, and the people in the pictures do not exist in reality. Anyone who steals or reposts pictures from this account for fraud or other commercial purposes bears full responsibility.
JzzyeonAi_v5
This is an ancient Chinese style model; it also belongs to the ink-wash model series.
===========
2023.4.11 Added version 2.0
2023.3.01 Added safetensors format
===========
VAE: vae-ft-mse-840000
stabilityai/sd-vae-ft-mse-original at main (huggingface.co)
Recommended negative prompt:
(((simple background))),monochrome ,lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, lowres, bad anatomy, bad hands, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, ugly,pregnant,vore,duplicate,morbid,mutilated,transsexual, hermaphrodite,long neck,mutated hands,poorly drawn hands,poorly drawn face,mutation,deformed,blurry,bad anatomy,bad proportions,malformed limbs,extra limbs,cloned face,disfigured,gross proportions, (((missing arms))),(((missing legs))), (((extra arms))),(((extra legs))),pubic hair, plump,bad legs,error legs,username,blurry,bad feet
The fleeting radiance, like a momentary brilliance in the fleetingness of life, passes by in a flash, yet leaves an indelible impression deep in the soul
I wrote this off the cuff with GPT; I feel it comes across as a bit chuunibyou.
The children are playing together, I hope you like it
FRMix version 1.0 (seven sample renders attached):
Hi,
This is my second merge model.
Please leave me a comment, and tell me your opinion.
If you like my work please help me do it better. https://ko-fi.com/alpharadix
PLEASE READ: this model doesn't do anything but improve anatomy and lighting. That's why I don't give prompts. You should compare your SD 1.4 and 1.5 prompts against this model to see if it gives you better results with any prompt. Also, don't expect a miracle, but an improvement. This model doesn't impose a particular aesthetic or style.
I made this model merge for my own use and my patrons. The purpose of this is to improve the anatomy of portraits so I get:
Better hands
Less long neck problems
Better posing
This merge is for general-purpose renderings, so it doesn't force a style. Use it for everything with your favorite prompts, SFW or NSFW. In the examples, I am using my own prompts and styles (created by myself at my Patreon page).
If you want to support me and encourage me to share future updates, or just like my art, join my Patreon page (and get virtual female models, virtual painters, tutorials, prompts, and more):
https://www.patreon.com/intuitivearts/about
I would also love it if you share your results here. I love to see what other artists do.
Thank you! :)
A Stable Diffusion model inspired by humanoid robots in the biomechanical style could be designed to generate images that appear both mechanical and organic, incorporating elements of robot design and the human body. The images could include metallic textures and connector and joint elements to evoke the construction of a robot, as well as organic patterns such as veins and muscles to create a sense of life and movement. Using a transfer learning approach, it would be possible to train on existing images of humanoid robots and gradually modify the model to create original images that incorporate both robotic and human design elements. For better results, consider using the Biomechanicals LoRA (inspired by H.R. Giger), and do not hesitate to use the inpaint feature to improve resolution on faces and details. https://civitai.com/models/20846/biomechanicals-hr-giger
humanoid robot
➡️A model I made using some custom training and merging with other popular models. I wanted a model capable of easily generating good, consistent 2.5D-style images with simple prompts.
🔞NSFW capable.
👩One disadvantage of this model is that it might be less effective for some specific characters.
📝The recipe is quite complex, so I don't remember all the models used. Sorry. The most notable ones should be AbyssOrangeMix and some reminiscences of ChilloutMix.
✔️Feel free to use the VAE you want. I personally use ClearVAE, and sometimes Blessed2 VAE.
realistic
3d face
lustrous skin
My attempt to interpolate models with the U-Net's 686 weights and the text encoder's 197 weights handled separately. The goal is to extract MeinaMix's style, Counterfeit's backgrounds, and Pastel-Mix's intricate detail (but exclude their messiness).
Manually deciding each of these weights is impossible, so I've introduced "RMHF", or Reinforcement Model-merging from Human Feedback.
https://github.com/TkskKurumi/DiffusersFastAPI
I'm naming it like a machine learning method XD, but in fact the method is very simple. (I know barely anything about reinforcement learning.)
Let's say the merging composition is W, meaning model index i at weight index j contributes W[i][j]. In each iteration, generate a random vector ε (not purely random; its scale is scheduled) with the same shape as W, show the user the pair of images generated from W-ε and W+ε, let them pick one, and update W = W±ε accordingly.
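The iteration described above can be sketched with NumPy arrays standing in for the per-layer merge weights. The function names, the uniform starting mix, and the decay schedule here are my own assumptions, not the author's actual code:

```python
import numpy as np

def rmhf(n_models, n_weights, n_iters, choose, seed=0):
    """Hill-climb the merge composition W from user feedback.

    choose(cand_minus, cand_plus) stands in for showing the user the two
    generated images and asking which they prefer; it returns True if the
    W + eps candidate wins.
    """
    rng = np.random.default_rng(seed)
    # W[i][j]: contribution of model i at weight index j; start from a uniform mix.
    W = np.full((n_models, n_weights), 1.0 / n_models)
    for t in range(n_iters):
        scale = 0.1 * 0.95 ** t  # scheduled scale: shrink perturbations over time
        eps = rng.normal(scale=scale, size=W.shape)
        W = W + eps if choose(W - eps, W + eps) else W - eps
    return W
```

With a real UI, `choose` would render both candidate merges and record a click; any preference function works for testing, e.g. one that favors candidates closer to some target composition.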