Koji aims to be a general-purpose model with either an SFW or NSFW focus. The model is fine-tuned/trained on Hassaku with general-to-questionable images, so expect some exposed breasts or panty shots without the corresponding negative prompts. Because it was trained on Hassaku, it remains highly NSFW but less realistic; with later iterations it shifts more toward SFW.
Sponsors:
PirateDiffusion.com is a proud sponsor of koji, preinstalled with 80+ full NSFW models. Render from ANY smartphone or PC. Join their community at https://piratediffusion.com/
Mage.space with its amazing creators program supports all kinds of creators like me! Preinstalled with 50+ high-quality models; join their Discord community here!
Supporters:
Thanks to my supporters Riyu and Alessandro on my Patreon!
You can support me on my Patreon, where you can get my other models and early access to model versions.
_____________________________________________________
Using the model:
Use mostly Danbooru tags. No extra VAE is needed. For better prompting, use this LINK or LINK, but instead of {} use (): stable-diffusion-webui uses () for emphasis. Use "masterpiece" and "best quality" in the positive prompt, "worst quality" and "low quality" in the negative.
My negative ones are: (low quality, worst quality:1.4) with extra monochrome, signature, text or logo when needed.
Use clip skip 2 with sampler DPM++ 2M Karras or DDIM.
Don't use face restore, and avoid underscores (_): type red eyes, not red_eyes.
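The advice above can be applied mechanically. This is an illustrative helper of my own (not part of any tool): it strips underscores from Danbooru tags and prepends the recommended quality tags.

```python
# Sketch of a prompt builder following the card's advice; names are my own.
QUALITY_POSITIVE = ["masterpiece", "best quality"]
NEGATIVE_BASE = "(low quality, worst quality:1.4)"

def build_positive_prompt(tags):
    # "red_eyes" -> "red eyes": this model expects tags without underscores
    cleaned = [t.replace("_", " ") for t in tags]
    return ", ".join(QUALITY_POSITIVE + cleaned)
```

Add monochrome, signature, text, or logo to `NEGATIVE_BASE` when needed, as described above.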
_____________________________________________________
Every LoRA that is built to work on AnythingV3 or the OrangeMixes works on Koji too. Some can be found here, here, or on Civitai by lottalewds, Trauter, Your_computer, ekune, or lykon.
_____________________________________________________
Base model for koji is hassaku.
Black result fix (VAE bug in the web UI): use --no-half-vae in your command-line arguments.
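One way to set this flag, assuming the standard AUTOMATIC1111 launch script (adjust the filename to your own setup):

```shell
# webui-user.sh (Linux/macOS); on Windows, set COMMANDLINE_ARGS in webui-user.bat
export COMMANDLINE_ARGS="--no-half-vae"
```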
I used an Eta noise seed delta of 31337 or 0, with a clip skip of 2, for the example images. Model quality was mostly verified with the DDIM and DPM++ SDE Karras samplers. I love DDIM the most (because it is the fastest for model testing).
We are proud to announce that Eight Buffalo Media Group has shared our first text-to-image model!
Note that Version 1 of this model still has a few issues, so please be patient as we improve. Constructive feedback is always welcome.
This freely available model is a combination of many different models that have been mixed, merged, and specifically trained on a couple of things, like people in glass jars. The goal is a strong general model that allows control similar to many anime models, but with a more realistic look and feel.
See images for prompt matrix testing.
This model does require a good deal of prompt crafting, the trade off is you have a lot of control on the images you create. I would recommend finding a prompt that generates the style and quality to your liking and saving it as a style.
Current known issues/limitations:
The more people you add, the more likely you are to get blurry/distorted faces.
Tends to make women look young. Add negative prompts to help with this.
Tends to make men look old. Add negative prompts to help with this.
When putting people in jars, faces tend to be blurry/distorted; the more jars and people, the worse it gets. Prompting and Restore Faces can help a little.
The DPM++ SDE Karras sampling method tends to give the most consistent results, but experiment.
Going for a general-purpose model with a fine level of control. Merged from a lot of different models, though I don't remember all of them. Links below to those that were remembered or that provided inspiration for direct training.
MeinaHentai - https://civitai.com/models/12606/meinahentai
Girls in Glass Jars (inspiration) - https://civitai.com/models/10453/girls-in-glass-jars
Shampoo-mix - https://civitai.com/models/33918/shampoo-mix
Stable Diffusion - https://huggingface.co/stabilityai/stable-diffusion-2-1
Please review and respect any related licenses. See https://huggingface.co/stabilityai/stable-diffusion-2-1 for the posted license covering the known used models.
Common Negative Prompts Used:
(((worst quality, low quality, bar censor, distorted face))), (((jpeg artifacts, signature, watermark, text, username))), blurry, bad faces, blurry faces, bad eyes, bad anatomy, bad hands, error, extra limbs, missing fingers, extra digit, fewer digits, cropped, normal quality, censor, black and white, monochrome, NSFW,
See our page on huggingface: https://huggingface.co/EightBuff/8Buff_Gen
Our Blog is now live! Check it out for the latest news and articles: https://eightbuff.blogspot.com/
Shop our RedBubble page and help our artists.
Subscribe to our Patreon Site for News and updates!
Enjoy!
person_in_Jar
Hello! The model was created around an artistic style, but it can do almost anything; the main thing is to follow the prompt. Hands and eyes look good in most cases.
Model Information:
This model is generally designed for portraits and full-length anime-style images. Fantasy landscapes are quite decent too. And it doesn't require kilometer-long prompts to get a high-quality result.
Recommended: Sampler: DPM++ 2M Karras, Clip skip: 2, Steps: 25-35+
This model would not have come out without help from XpucT, who made Deliberate.
I hope you like it; thanks for the feedback!
Like it? Rate it below, don't be shy; I'd be so happy!
Get it on 🤗Hugging Face: Ojimi/anime-kawai-diffusion
Kawai v4-charm LTS version (snapshot 16-04-2023).
Thank you for supporting us all this time.
Feature:
All the features of Kawai Diffusion v4.
Long-term support.
High-end quality.
Limitations:
Errors, bad hands, color loss.
⚠️Usage warning⚠️:
The output images may not be suitable for all ages (violence, sexual content); please use additional safety measures.
Some parts of the model may not be suitable for you. Please use it with caution.
Recommend for Fine-tuning.
It's an AI art model for text-to-image, image-to-image, inpainting, and outpainting using Stable Diffusion.
The model is developed with a focus on drawing anime characters relatively well, through fine-tuning with DreamBooth.
It can be used to upscale or render anime-style images from 3D modeling software (e.g. Blender).
It can also create an image from a sketch made in a basic drawing program (e.g. MS Paint).
The model is aimed at everyone and has limitless usage potential.
For 🧨Diffusers:
from diffusers import DiffusionPipeline

# Load the pipeline from the Hugging Face Hub and move it to the GPU
pipe = DiffusionPipeline.from_pretrained("Ojimi/anime-kawai-diffusion")
pipe = pipe.to("cuda")

prompt = "1girl, cat ears, blush, nose blush, white hair, red eyes, masterpiece, best quality, small breasts"
# Pass a negative prompt to suppress common failure tags
image = pipe(prompt, negative_prompt="lowres, bad anatomy").images[0]
image.save("kawai.png")
Try it in Google Colab!
Chat GPT with Kawai Diffusion (or any model if you like.)
Read the following instructions, and if you understand, say "I understand". Prompt structure: includes descriptions of shape, perspective, posture, landscape, etc. Keywords are written briefly in the form of tags. For example: "1girl, blonde hair, sitting, dress, red eyes, small breasts, star, night sky, moon"
The `masterpiece` and `best quality` tags are not strictly necessary, as they sometimes lead to contradictory results, but if the image is distorted or discolored, add them.
The CFG scale should be 7.5 and the step count 28 for the best quality and performance.
Use a sample photo for your idea. Interrogate DeepBooru and change the prompts to suit what you want.
You should use it as a supportive tool for creating works of art, and not rely on it completely.
Normal: Clip skip = 2.
As this version was made only by myself and a few associates, the model will not be perfect and may differ from what people expect. Any contributions are welcome and respected.
Want to support me? Thank you, please help me make it better. ❤️
Runwayml: Base model.
CompVis: VAE Trainer.
stabilityai: stabilityai/sd-vae-ft-mse-original · Hugging Face
d8ahazard: Dreambooth.
Automatic1111: Web UI.
Mikubill: Where my ideas started.
Chat-GPT: Help me do crazy things I never thought I would.
Novel AI, Anything Model, Abyss Orange Model, CetusMix Model, Grapefruit Model: Dataset images. An AI made me thousands of pictures without worrying about copyright or disputes.
Danbooru: Help me write the correct tag.
My friend and others: Get quality images.
And You 🫵❤️
This license allows anyone to copy and modify the model, but please follow the terms of the CreativeML Open RAIL-M. You can learn more about the CreativeML Open RAIL-M here.
If any part of the model does not comply with the terms of the CreativeML Open RAIL-M, the copyright and other rights of the model remain valid.
All AI-generated images are yours, you can do whatever you want, but please obey the laws of your country. We will not be responsible for any problems you cause.
We allow you to merge this model with another, but if you share that merged model, don't forget to add me to the credits.
Don't forget me.
I have a hero, but I can't say his name and we've never met. But he was the one who laid the foundation for Kawai Diffusion. Although the model is not very popular, I love that hero very much. Thank you for your interest in my model. Thank you very much!
Buy me ko-fi: https://ko-fi.com/ojimi (≧∇≦)ノ
This is my first mix model, merged based on AOM3. It may be difficult to get a woman to come out with clothes on; the model is optimized for NSFW.
Don't use my model commercially.
Technically this is more like V10, but it's renamed V3 to keep in line with my previous versions.
This is a merge of many, many checkpoints, including a few DreamBooth models I created myself.
All credit goes to the other creators who helped me create this version.
It can and will create both SFW and NSFW images with the correct prompts.
hotporn
Porn , Lots of Porn
hotporn
woman
I have created my first checkpoint model, suitable for mostly realistic styles, but I added some of my own trained cartoon-style models for extra flair. I focused on refining hand and feet issues as much as possible; based on the images I have generated so far, it seems fine most of the time. I avoid using the word "dynamic", as it can cause the hands to go wild.
For the example images, I used the same prompts, and all the displayed images are completely raw, without any additional support from LoRAs or embeddings. However, negative embeddings like "bad artist" or "bad hands" can greatly improve the result.
If you have any suggestions for improving my model, please leave them in the comments below. Thank you.
Prompts:
Pos: (European), beach, sea, (hand well), (finger well), (long braided hair), (bangs), with in frame, ultra-detailed, 8k, masterpiece, beautiful detailed face, (volumetric lighting), (beautiful detailed eyes), (ambient light), realistic shadows, (fullbody:1), (1girl), (cowboy shot), pattern shirts, pants, looking at viewer, perfect body, body well:1, leg well:1, arm well:1, dynamic pose
Neg: ((hands poor)), (fingers poor:1), (badquality), (bad anatomy), (nsfw), (legs poor), (inaccurate limb:1.2), (2girls), bad composition, inaccurate eyes, (extra hands:1.2), (inaccurate limb:1.2), (extra arms:1.2), (bad quality), bad face, (bad eyes), deformed, ((deformed fingers:1.2)), ((extra fingers, extra limbs)), bad hands, bad fingers, Asian
Pos: beach, sea, (hand well), (finger well), (long braided hair), (bangs), with in frame, ultra-detailed, 8k, masterpiece, beautiful detailed face, (volumetric lighting), (beautiful detailed eyes), (ambient light), realistic shadows, (fullbody:1), (1girl), (close up shot), pattern shirts, pants, looking at viewer, perfect body, body well:1, leg well:1, arm well:1
Neg: ((hands poor)), (fingers poor:1), (badquality), (bad anatomy), (nsfw), (legs poor), (inaccurate limb:1.2), (2girls), bad composition, inaccurate eyes, (extra hands:1.2), (inaccurate limb:1.2), (extra arms:1.2), (bad quality), bad face, (bad eyes), deformed, ((deformed fingers:1.2)), ((extra fingers, extra limbs)), bad hands, bad fingers
Pos: ((cartoon)), anime style, beach, sea, (hand well), (finger well), with in frame, ultra-detailed, 8k, masterpiece, beautiful detailed face, (volumetric lighting), (beautiful detailed eyes), (ambient light), realistic shadows, (fullbody:1), (1girl), (close up shot), pattern shirts, pants, looking at viewer, perfect body, body well:1, leg well:1, arm well:1, hand pose
Neg: ((hands poor)), (fingers poor:1), (badquality), (bad anatomy), (nsfw), (legs poor), (inaccurate limb:1.2), (2girls), bad composition, inaccurate eyes, (extra hands:1.2), (inaccurate limb:1.2), (extra arms:1.2), (bad quality), bad face, (bad eyes), deformed, ((deformed fingers:1.2)), ((extra fingers, extra limbs)), bad hands, bad fingers
7PAG + Counterfeit2.5 merged.
Better backgrounds than 7PAG.
Ether Blu Mix 3 is a semi-realistic anime styled model. It specializes in stylized character illustrations with vibrant colors and a painterly feel. I consider this my take on the "pastel-mix" style of models.
✨ Please Share Your Cool Creations Below! ✨
Since Ether Blu Mix is based on AOM3 it has strong biases in generating females and suggestive/NSFW content. If you're looking for a model with more flexibility please try Ether Real Mix linked above.
Ether Blu Mix 3 is the stylish evolution of the Blu mix. It leans further into its stylish roots while retaining some of the quirks of the older versions (the spicy parts 🌶️).
Weights were further adjusted to enhance the painterly, vibrant look, and AyoniMix was updated to V6 as it better complements the model.
If you prefer the more realistic look of the previous versions, please add "realistic" to the start of your prompts or try Ether Real Mix linked above.
Placing "NSFW" into the negative prompt is recommended if you're concerned about suggestive generations or rare instances of unprompted nudity. Due to AOM3 this mix is generally biased towards suggestive and NSFW imagery.
Use whatever sampler, steps, cfg you prefer.
Sample images were generated with
Sampler: DPM++ 2M Karras
Steps: 20 - 28
CFG: 7 - 11
Clip Skip: 1
Increasing CFG can lead to more vibrant results.
Negative Prompt Example:
"NSFW, (worst quality, low quality:1.3), watermark, signature"
VAE Recommendation:
The sample images were generated using Waifu Diffusion VAE kl-f8-anime2. Please use it if you wish to achieve a similar look.
Sample images were further upscaled using AUTO1111 Hi-res fix. Please utilize it if you wish to achieve similar results.
Latent (nearest-exact)
Upscale by: 2
Denoising strength: 0.55
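Conceptually, Hi-res fix renders at the base resolution, upscales, then re-diffuses only part of the schedule. A rough sketch of the arithmetic; the helper name and the int(steps * denoise) second-pass convention are my assumptions based on how AUTO1111-style img2img behaves:

```python
def hires_pass_params(width, height, upscale=2.0, steps=28, denoise=0.55):
    """First pass renders at (width, height); the second pass upscales,
    re-noises the image, and runs only the last `denoise` fraction of steps."""
    target = (int(width * upscale), int(height * upscale))
    second_pass_steps = max(1, int(steps * denoise))  # img2img convention
    return target, second_pass_steps
```

With the settings above, a 512x512 generation at 28 steps and denoising 0.55 upscales to 1024x1024 with roughly 15 second-pass steps.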
Are your results changing too much after Hi-res fix? Please try 4x-foolhardy-Remacri.
4x-foolhardy-Remacri
Upscale by: 2
Denoising strength: 0.45 - 0.55
As Ether Blu 3 leans heavier into the anime aesthetic I recommend using the suggested positive and negative prompts of AOM3.
masterpiece, best quality,
NSFW, (worst quality, low quality:1.3)
The model was mixed and tested using these prefixes and as such will give you the most consistent style results. Of course you're free to prompt however you'd like as danbooru-style models tend to be quite flexible in prompting.
I personally like to use short prompts mixing prose and danbooru-style tags. Experiment and give the AI some freedom to interpret your ideas!
If you look at the prompts included with the sample images they demonstrate that even with a very loose and vague style of prompting you can achieve fun, high quality images.
General guidelines:
Prompt in the style of anime/danbooru-style models, "masterpiece, best quality, ..... "
More anime style images
Prompt in the style of photography/non-anime models, "professional photograph of ..... "
More realistic images, higher style variety
I have no training or real knowledge of machine learning. I merged the model using the information provided on WarriorMama777's AOM model page. I very arbitrarily used the UBM weights given to merge and create AOM3 and its derivatives and modified the values with trial and error until I was visually happy with it.
I utilized the SuperMerger extension for AUTO1111 by hako-mikan.
▼ Merging
MODEL A: AOM3.safetensors
MODEL B: Counterfeit-V2.5_fp16.safetensors
Weight Sum merge - Use MBW
Alpha: 1
0,0.8,0.8,0.8,0.8,0.8,0,0,0,0,0.8,0.8,0.8,0.8,0.8,0.8,0.8,0.8,0.8,0.8,0.8,0.8,0.8,0.8,0,0
Model: AOM3-CounterfeitSumS2-08
▼
MODEL A: AOM3-CounterfeitSumS2-08
MODEL B: dalcefoPainting_2nd.safetensors
MODEL C: pastelmix-better-vae-fp16.safetensors
Weight Sum merge - Use MBW
Alpha: 1
0,0.9,0.9,0.9,0.9,0.9,0,0,0,0,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0,0
Beta: 1
0,0.9,0.9,0.9,0.9,0.9,0,0,0,0,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9,0,0
Model: AOM3-CounterfeitSumS2-08-Dalecfo2SumS2-09-PastelSumS2-09
▼
MODEL A: AOM3-CounterfeitSumS2-08-Dalecfo2SumS2-09-PastelSumS2-09
MODEL B: ayonimix_V6.safetensors
Weight Sum merge - Use MBW
Alpha: 1
0,0.4,0.4,0.4,0.4,0.4,0,0,0,0,0.4,0.1,0.4,0.4,0.4,0.4,0.4,0.3,0.1,0.1,0.4,0.4,0.2,0.4,0.4,0.4
Final Model: Ether Blu Mix 3.1
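The recipes above are per-block weighted sums. A minimal sketch of the idea; the 26-slot layout [BASE, IN00..IN11, M00, OUT00..OUT11] follows SuperMerger's MBW convention as I understand it, the key patterns are the SD 1.x UNet naming, and plain floats stand in for the torch tensors a real checkpoint holds:

```python
import re

def mbw_slot(key):
    # Map a parameter name to its MBW slot (0 = BASE, 1-12 = IN, 13 = MID, 14-25 = OUT)
    m = re.search(r"input_blocks\.(\d+)\.", key)
    if m:
        return 1 + int(m.group(1))       # IN00..IN11 -> slots 1..12
    if "middle_block." in key:
        return 13                        # M00
    m = re.search(r"output_blocks\.(\d+)\.", key)
    if m:
        return 14 + int(m.group(1))      # OUT00..OUT11 -> slots 14..25
    return 0                             # everything else -> BASE

def mbw_merge(a, b, alphas):
    # Weighted sum per parameter: (1 - alpha) * A + alpha * B, alpha chosen per block
    return {k: (1 - alphas[mbw_slot(k)]) * v + alphas[mbw_slot(k)] * b[k]
            for k, v in a.items()}
```

In the recipes above, the zeros in each alpha list leave the corresponding blocks (and everything outside the listed blocks) entirely from MODEL A.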
Models Used:
AOM3.safetensors
D124FC18F0232D7F0A2A70358CDB1288AF9E1EE8596200F50F0936BE59514F6D
Counterfeit-V2.5_fp16.safetensors
A074B8864E31B8681E40DB3DFDE0005DF7B5309FD2A2F592A2CAEE59E4591CAE
pastelmix-better-vae-fp16.safetensors
D01A68AE76F97506363F387F5F28BB564AD9E20924844FD5945E600B72D39E79
dalcefoPainting_2nd.safetensors
14647de2f8662a6c460d6b9bd47def2b0342adc8f8f3d7b89d5b397eb09014d5
ayonimix_V6.safetensors
F3A242FCAAF1D540A1C2D55602E83766CC2EFBDF64C2B70E62518CBF516BFCD3
Thank you to WarriorMama777 for providing AOM and other various mixes as well as detailing your workflow and inspiring me to try mixing my own models.
Thank you to hako-mikan for creating the SuperMerger extension allowing a quicker workflow for merging.
Thank you to gsdf, rqdwdw for creating Counterfeit.
Thank you to Dalcefo for DalcefoPainting2nd.
Thank you to andite for Pastel-Mix.
Thank you to Ayoni for AyoniMix.
Thank you to AUTO1111 for creating the web UI everyone uses.
Thank you to StabilityAI for starting everything with Stable Diffusion.
Thank you to the entire SD community for continuing to openly share and create.
This is a model related to the "Picture of the Week" contest on Stable Diffusion discord.
This version is trained on dedicated tokens per user, and doesn't work the way you may be used to.
I try to make a model out of all the submissions, so people can continue enjoying the theme after the event and see a little of their designs in other people's creations. The token stays "SDArt", and I keep the learning on the low side so that it doesn't just replicate the submissions.
The pictures were tagged using the token "SDArt", and an arbitrary token given to the user that submitted it.
The dataset is available below and is composed of 47 pictures.
“Alice falls down the rabbit hole and enters a digital dream”
Create an image that riffs on Alice in Wonderland, but with a technological twist! How would Alice interact with the digital realm?
What kind of mind-bending experiences would she encounter? Will she find herself lost in a maze of code, or will she discover a new dimension in cyberspace?
Characters like Alice, the Cheshire Cat, the Mad Hatter, or the Queen of Hearts: how would they appear and interact in the digital world? What sort of otherworldly environments would they inhabit? (e.g. a digital tea party, code-tangled forests, holographic food)
Bring your skills and show off by twisting a timeless classic story into a new-age tech haven!
SDArt
pln
bnp
aten
fcu
shai
peth
cous
aved
mth
elio
gani
opi
omd
kuro
asot
iru
onex
psst
irgc
buka
mds
pik
buon
yel
muc
byes
utm
dany
pafc
yler
zaki
oue
mss
guin
pbc
nasi
pgs
pkg
mako
inem
mlas
isch
phol
vedi
acu
pte
oxi
SDArt
A high-quality anime-style model.
More info: https://huggingface.co/AerinK/NotSoXJB-Mix-1
EasyNegative (negative embedding): https://huggingface.co/datasets/gsdf/EasyNegative
badhandv4 (negative embedding): https://civitai.com/models/16993/badhandv4-animeillustdiffusion
Baked-in VAE, or you can use your own VAE.
This is a model for making zen landscape scenes; a scene can be generated by describing it in words. It can also be used with LoRAs with themes such as architecture. A scene can be partially modified into a zen landscape through img2img.
Please consider joining my Patreon! Advanced SD and other generative-AI tutorials, guides, and tips from a female content creator (me!): patreon.com/theally. It's popular! Over 200 patrons trust me to explain how things work!
This is the current pinnacle of my merging "career": a noise-offset enabled, realism-focused merge.
We started with TheAlly's Mix (SD1.4 + Waifu 1.3 + NovelAI). Hundreds of test mixes and three release mixes later, we've arrived at the Holy Grail of TheAlly's Mix merges (until version V).
This mix contains 75% TheAlly's Mix III, with the other 25% being a huge, crazy block-merge mashup of realism-focused models such as these excellent ones:
Uber Realistic Porn Merge (URPM) v1.1
QGO-10b
Realistic Vision V2.0
I tried my best to fix hands using some of the common block-merge hand-repair techniques, editing the IN00, IN01, and IN02 blocks. They've turned out not bad (I've seen worse), though still not great.
Everything! Wild results. Awesome portraits. Check out the example prompts and try them for yourself. I will say, if you want the best from it, you're going to have to work at your prompts.
Some of my sample images are created with ControlNet, to get poses and compositions from my previous generations - you'll never be able to reproduce those. Others, it's possible, but some do use Patron exclusive embeddings. The sample images are meant to show what's possible with a model. Use your imagination! I do have a tutorial on Patreon on recreating my images, the common settings I use, etc.
It's a merge of the Hassaku and F222 models, creating anime with highly realistic elements like lingerie.
Support me at https://www.patreon.com/JakeRoseAIart if you like my work. Thanks!
Illustration Artstyle - Mega Model 2.7. This is the first illustration-art-style release for Mega Model, so please enjoy. (It may do other styles, but that would need testing, so this is a very specific version just for illustration style.) Buy asmrgaming a coffee: ko-fi.com/asmrgaming
For 2., you will need to make sure both the safetensors file and the YAML config file are downloaded and installed in the same directory, or you won't get proper results. This is because of the multi-language support built into the model, based on the AltDiffusion model: mostly for Chinese, French, Spanish, and a few other language-specific models that can work with English, and vice versa.
It works fine with the Easy Diffusion UI as well, which is the program I use to run it. AUTOMATIC1111 and Stable Diffusion WebUI are fine with the config file placed alongside the safetensors file; the config file is linked in a light-blue message on the upper-right side of this page on the download screen.
To find other models, search for "mega model" or search my username.
Credit to all the model makers, merges, and the community in general, without which this wouldn't be possible. Hope you all enjoy it, and feel free to merge it into your own models as well; I'm interested to see what people do with this. (This is a general acknowledgement to all model producers here: if I listed the 1700 models that have been merged, there wouldn't be enough space, and there would be complaints about clutter. So the above is a general acknowledgement to all CivitAI and Hugging Face model producers.)
If you want to get the generation parameters, download the image and read them with the web UI's PNG Info feature.
Any modern architectural form
Check my gallery on :
https://id.pinterest.com/vextoria_studio/
INFORMATION
The other upcoming Crescent Project models will be released with a name instead of a version number.
3. CrescentHORIZON (Release)
Focused on hyperrealism, merged with EpiNoiseOffsetPY, so the output is probably darker than the previous Crescent.
---------------------------------------------
4.CrescentORBIT (Coming Soon)
5.CrescentGAUNTLET (Coming Soon)
In case someone wishes to use the model I'm using here, it can be obtained here. This model is specifically designed for realistic photography and similar applications. It supports the Indonesian language, but it may require some English supplementation.
Goal of this project
Realistic and Detailed models
General use for photography
Support Indonesian Languages
Recommended setting for this model
VAE IS RECOMMENDED : https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main
Sampler : DPM++ 2M Karras / Euler A
Steps: 30 (15 is OK, but not as good)
Hi-res fix (highly recommended)
You can't use the model to deliberately produce nor share illegal or harmful outputs or content.
Hope this is useful. Thank you for trying & using this model <3
Please feel free to comment on this model below, or share any recommendations; they would be really appreciated for this project!
Feel free to mix or blend this model with your own models; if you upload the result, please let me know. I want to try yours too <3
Contact :
It's been a while, everyone!
Here is a new 2D model.
Please let me know if there is anything I need to improve!
An MBW (merge block weighted) model created around Openjourney-v4 and Counterfeit-V2.5.
A series of merged models for 2D illustrations, developed for versatility and expressiveness.
The VAE is your choice.
1: HDR-like colors and detailed background depiction.
2: For people who like to write prompts and use the latent upscaler :)
3: The AOM and Anything series are not used.
Recommended Setting:
Resolution: 512x512 ~ 768x768
Steps: 30 ~
Sampler: DPM++ 2M Karras
CFG scale: 7.5 ~ 11
Denoising strength: 0.55 ~ 0.6
Hires steps: 30 ~
Hires upscaler: Latent(nearest-exact)
Hires upscale: 2
Recommended Starter Template:
absurdres, highres, reflection, refraction:1.4, ultra detailed:1.0, BREAK
Negative Starter Template:
EasyNegative (worst quality, low quality:1.4) [:(badhandv4:1.5):27] simple background:1.0,
Merged models PRMJ, Classic Negative, 3DMDT1, fking_scifi_v2, A to Zovya RPG Artist's Tools, fking_civitai, Replicant-V1.0...
This checkpoint includes a config file; download it and place it alongside the checkpoint.
v3.0 Vs Midjourney
rev or revision: the concept of how the model generates images is likely to change as I see fit.
Animated: the model can create 2.5D-like image generations. This model is a checkpoint merge, meaning it is a product of other models combined to derive something new from the originals.
Kinds of generations:
Fantasy
Anime
Semi-realistic
Decent landscapes
LoRA friendly
It works best on these resolution dimensions:
512x512
512x768
768x512
Order matters - words near the front of your prompt are weighted more heavily than those at the back.
Prompt order - content type > description > style > composition
This model likes ((best quality)), ((masterpiece)), (detailed) at the beginning of the prompt if you want the anime/2.5D look.
This model does great on PORTRAITS
Negative Prompt Embeddings:
Make use of weights in negative prompts, e.g. (worst quality, low quality:1.4)
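The (text:weight) syntax multiplies attention on those tokens. A toy illustration of how such a weighted segment might be parsed; this is my own sketch, not A1111's actual parser:

```python
import re

def parse_weighted(segment):
    # "(worst quality, low quality:1.4)" -> ("worst quality, low quality", 1.4)
    # Unparenthesized text gets the default weight of 1.0.
    m = re.fullmatch(r"\((.+):([\d.]+)\)", segment)
    if m:
        return m.group(1), float(m.group(2))
    return segment, 1.0
```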
Olivio Sarikas - Why Is EVERYONE Using This Model?! - Rev Animated for Stable Diffusion / A1111
Olivio Sarikas - ULTRA SHARP Upscale! - Don't miss this Method!!! / A1111 - NEW Model
Do not sell this model on any website without permissions from creator (me)
Credit me if you use my model in your own merges
I do not authorize this model to be used on generative services
I have given you a plentiful amount of information and sources within this section; I will not answer redundant questions if the answer already exists here in the info section.
if you would like to support me:
https://ko-fi.com/s6yx0
UPDATE: safetensors (float16) and pruned version files for PastelMixAlike.
New Version: PastelMixSam
This version has the SamDoesArts LoRA embedded in it.
NOTE: If you're using PNG Info from my old samples, delete the old hash from "Override settings", as it disables the embedded LoRA and uses only the model.
UPDATE: After testing, I have concluded that PastelMixSam yields the same results as PastelMixAlike. Just remember to delete the hashes when copying model hashes from PNG Info and you're all good.
A trigger word is not necessary for either version. The result will change, for better or worse (they all look good).
A new version is coming that will enhance the THICC and sexiness of the body. Stay tuned!
PastelMixAlike
Just my fine-tuned version of PastelMix
No VAE is needed, but nai.vae is the only VAE that I think is somewhat decent here.
Clip skip = 2, and "quantization in K samplers" is enabled.
Several LoRAs were used to test compatibility:
SamDoesArts (Sam Yang) Style LoRA as the base LoRA:
https://civitai.com/models/6638/samdoesarts-sam-yang-style-lora
Works well with these LoRAs (some prompt tuning needed):
Recommended settings are included within Sample images
Currently, there are purple artifacts affecting images. If anybody has a fix, please comment below.
sam yang
There is a dark wave, electro, gothic nightclub lost in time, long before the advent of social media, when images used to accumulate in galleries and pages of the web 1.0. This checkpoint was trained on images from the age of the first mass-market digital cameras 30 years ago until 15 years later. The source material has been a blurry mess of low-resolution images with lots of JPEG artefacts; they’ve all been pre-processed over several iterations with different AI upscaling and artefact removal as well as face restoration systems.
The training data from 13025 images have been blended during the training process as additions to the already present similar subjects of the SD 1.5 checkpoint. A subset of the images was already in the original low-resolution versions included in SD 1.5. The result is that this checkpoint is trained to visualise epic and surreal dark wave, electro, and gothic party scenes populated with guests and visitors wearing elaborate costumes and unique clothing with the right make-up to go along for the occasion.
The training has been set up to get the general art and a static look and feel of the dataset transferred into a versatile SD 1.5 checkpoint where everything else is preserved to allow for a wide range of subjects and compositions. The checkpoint is not trained to replicate the actual persons or a precise subset of the training data. The training took 50 epochs over 1630240 iterations.
Use “GothClub2K” in your prompts to invoke attention to the training data and dataset of the subjects alongside a detailed description of what you like to visualise.
GothClub2K