24-year-old Belarusian nude model.
S028_KeiraBlue
Follow me on Instagram, YouTube, Patreon, and my website.
Suggested Weight: 1.0
professional portrait photo of hum4 as an astronaut, lipstick:0.3, skin texture, light bokeh
Adding "attractive female" to the prompt can help.
A "lipstick" weight of 0.1 to 0.3 is suggested.
Trained on ~30 images.
Learning Rate: 0.005:1000, 0.001:2000, 0.0001:5000, 0.00005
Total Steps: 15,000
Batch Size: 1
Gradient Steps: 1
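The stepped learning rate above uses the A1111-style schedule syntax: comma-separated "rate:step" pairs, where each rate applies up to its step and a bare final rate runs to the end. As a minimal Python sketch of how such a schedule resolves (the boundary behavior at exactly the listed step is an assumption):

```python
# Sketch: resolving an A1111-style stepped learning-rate schedule.
# Each "rate:step" pair applies up to that step; a bare trailing rate
# covers all remaining steps.
def lr_at(schedule: str, step: int) -> float:
    for part in schedule.split(","):
        piece = part.strip().split(":")
        rate = float(piece[0])
        if len(piece) == 1 or step <= int(piece[1]):
            return rate
    return rate  # fallback: last listed rate

sched = "0.005:1000, 0.001:2000, 0.0001:5000, 0.00005"
print(lr_at(sched, 500))    # 0.005
print(lr_at(sched, 3000))   # 0.0001
print(lr_at(sched, 12000))  # 5e-05
```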
hum4
A commission of @Zorglub.
Christina Chong is a British actress of Chinese descent. She's known for many roles in TV shows and films, including Star Trek, Star Wars, Nightfall, Doctor Who and Black Mirror.
This is a 1000-step TI trained on a dataset of 18 images with my usual settings.
Appreciate my work? My TIs are free, but you can always buy me a coffee. :)
Curious about my work process? I have summarized it here.
You're obviously free to experiment, but bear in mind that my TIs are trained with a more or less fixed phrasing that normally starts with:
"photo of EMBEDDING_NAME, a woman"
So I recommend always starting your prompt like that and then building the rest of the prompt from there. For instance, "photo of (chr1sch0ng:0.99), a woman, RAW, close portrait photo, sexy star trek uniform, trousers, pale skin, slim body, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 sharp focus, f 5.6, red lips".
chr1sch0ng
I trained this model for 10,000 steps. I'm no expert, but I hope you like it.
Add into your prompt: "(ElviraMistress:0.75)"
Trained at 512x512 resolution.
ElviraMistress
Salma Hayek Pinault (born Salma Valgarma Hayek Jiménez; September 2, 1966) is a Mexican and American actress and film producer.
You can invite me to a Ko-fi if you want!
s4lm4h4y3kv2
Monica Anna Maria Bellucci (born 30 September 1964) is an Italian actress and model. She began her career as a fashion model, modelling for Dolce & Gabbana, Cartier, and Dior, before transitioning to Italian films and later American and French films.
You can invite me to a Ko-fi if you like my work and you feel like it!
m0n1c4b3lucc1v2
A request of @TheEliteChief01.
Renata Valliulina is a Russian model and social media personality with around 14M followers on TikTok.
1000-step TI trained on a dataset of 18 images with my usual settings.
Appreciate my work? My TIs are free, but you can always buy me a coffee. :)
Curious about my work process? I have summarized it here.
You're obviously free to experiment, but bear in mind that my TIs are trained with a more or less fixed phrasing that normally starts with:
"photo of EMBEDDING_NAME, a woman"
So I recommend always starting your prompt like that and then building the rest of the prompt from there. For instance, "photo of (r3natavall:0.99), a woman, RAW, close portrait photo, sexy camo bra, long camo pants, pale skin, slim body, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 sharp focus, f 5.6, red lips, (eye shadow), (eyeliner), (rimmel), (heavy makeup), (long eyelashes), belt".
r3natavall
More and more negative embeddings are appearing, and many of them are very good and easy to use. However, almost all of them currently share one big problem: they change the base art style of the model they're used with. The best example is my own bad_prompt_version2 negative embedding. It helps enormously with image quality, but it drastically changes the model's art style, and that's not what it's meant to do. For this reason, I have now trained my new negative embedding, negative_hand!
An example of the issue:
negative_hand is meant to fix that issue: it should improve image quality without changing the model's original art style.
Pro:
The model's art style is preserved; there are no unwanted style changes.
Image quality improves, and anatomy errors such as malformed hands are reduced.
Con:
Because this embedding deliberately avoids drastic changes to the image's art style and composition, it cannot fix one hundred percent of faulty anatomy.
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
Please put the embedding in the negative prompt to get the right results!
For special negative tags such as "malformed sword", you still need to add them yourself. The negative embedding is trained on a basic skeleton for the negative prompt, which should provide a high-resolution image as a result.
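As a small sketch of the usage described above: the embedding name goes into the negative prompt, and any case-specific tags are appended by you (the extra tags here are purely illustrative):

```python
# Sketch: building a negative prompt that includes the embedding's
# trigger. "negative_hand" comes from this card; the extra tags are
# examples of case-specific negatives you still add yourself.
embedding = "negative_hand"
extra_tags = ["malformed sword", "lowres"]
negative_prompt = ", ".join([embedding] + extra_tags)
print(negative_prompt)  # negative_hand, malformed sword, lowres
```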
negative_hand
38-year-old retired American skier.
S027_LindseyVonn
This is a textual inversion trained on pictures of wrestler Stacy Keibler.
Really happy with the way it came out, hope you enjoy it, too.
Still learning and improving my training process; any feedback is welcome.
Feel free to leave me a tip if you like what I am doing :)
St4K3101
Background:
This Textual Inversion embedding bears a striking resemblance to a famous JAV actress with world-renowned assets.
Trained using base SD1.5 on 50 high-quality 512x512 cropped images of the subject "modeling", with text removed from the images. The embedding is 16 tokens; that seems excessive, but it's the only way I got good results.
I'm very new to this, and I've found that training is, unfortunately, an art, not a science. So, this embedding isn't perfect and has some limitations. Please keep that in mind when you rate!
Note: Remove the ".zip" from the end of the file name and place in the embeddings folder. The default trigger is 1hit1.
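A minimal sketch of the rename step described in the note (the exact downloaded filename is an assumption based on the trigger name):

```python
from pathlib import Path

# Hypothetical downloaded name; the note says to strip the trailing ".zip".
downloaded = Path("1hit1.pt.zip")
renamed = downloaded.with_suffix("")  # drops only the final ".zip"
print(renamed.name)  # 1hit1.pt
# The renamed file then goes into stable-diffusion-webui/embeddings/
```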
Important Points:
For whatever reason, the embedding works worse for realistic generations and on the base SD1.5 model. Other models and artistic styles produce better results.
The results get much better with prompt engineering and, especially, lengthy prompts.
You definitely have to adjust the weight of the trigger word relative to the rest of the prompt.
Generations of the subject's assets can be prone to distortions, especially when they are exposed. This doesn't occur often, but does happen. I believe this was due to the fact that the extreme size of those assets made it impossible to fit both them and the subject's face in a close-up when cropping to a square image. This seems to be mitigated by using words in the prompt to describe those assets.
You will need to include words like naked, nude, and topless in the negative prompt to avoid accidental wardrobe malfunctions (we wouldn't want that, now would we?).
I've found that including open mouth and teeth in the negative prompt improves generations.
Unless you include strong, specific descriptions of your intended background scenery or setting in your prompts, generations will tend to incorporate the following elements: palm trees, desert plants, brick walls, distant buildings, sunny weather, and house interiors.
For further tips & tricks, see the PNG info in the sample images. Yes, my prompting style is weird and complicated, but it works, right?
Samples:
These sample generations were done in Galaxy Time Machine Photo for You and Deliberate. The crappier-looking ones were made with simple prompts; the better-looking ones required extensive prompt engineering. I didn't use HiRes Fix, Face Restoration, ControlNet, Img2Img, LyCORIS, or negative embeddings on any of these generations. I believe I used model LoRAs on a few, but none for concepts or subjects. Your results will probably improve if you use any of those!
Note: The new religious image I uploaded uses a lot of tricks. Oh, also, the one demon picture uses a ton of stuff too. Just showing what it's possible to do with the embedding and some extreme engineering.
PREVIEW
I've been working on Version 2 and had a breakthrough--it was leaps and bounds better than Version 1... but, after playing around with it more, there were some inconsistencies with generation. When it worked, it worked much better, but it was way more difficult to prompt for. Basically, unless you prompted for specific "features" of the subject, they wouldn't show up well.
I'm going to try fixing what I think the issue was (likely captioning) and try training again sometime soon. My hope is that I can produce an embedding that gets results as good as the one below with simpler prompts and more consistency.
8hit8
This is a textual inversion trained on pictures of wrestler Julia Hart.
This ti was requested by user rimale.
I got some nice results with this ti, hopefully you will get them, too!
Still learning and improving my training process; any feedback is welcome.
Feel free to leave me a tip if you like what I am doing :)
JuRoH401
Elle in her younger years. Trained for 5,000 steps on 71 images.
elle
A commission of @onedoomeddude. (Hope it's the same nick here! ^^)
Sabrina Lynn is an American social media personality with around 500K followers on Instagram. For her embedding, I've tried to capture her look from her early social media era, as requested by @onedoomeddude.
1000-step TI trained on a dataset of 18 images with my usual settings.
Appreciate my work? My TIs are free, but you can always buy me a coffee. :)
Curious about my work process? I have summarized it here.
You're obviously free to experiment, but bear in mind that my TIs are trained with a more or less fixed phrasing that normally starts with:
"photo of EMBEDDING_NAME, a woman"
So I recommend always starting your prompt like that and then building the rest of the prompt from there. For instance, "photo of (sablynn:0.99), a woman, RAW, close portrait photo, long brown coat, turtleneck, long haircut, pale skin, slim body, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 sharp focus, f 5.6"
sablynn