Extract the zip file into your so-vits-svc directory and load the model to use it. This is the initial version of the model; an updated model trained on 800 songs will be released in the future.
The model source and training code come from the Internet, and the model is intended only for the exchange of scientific research interests. If there is any infringement, please contact us for removal. Thank you.
It is prohibited to use this model for commercial purposes or in any scenario involving illegal acts or purposes.
In principle, I do not support using this model to generate any adult content. When using this model, please comply with your local laws and regulations and do not infringe on others' rights to reputation, privacy, or likeness.
v1.2:
Fixed pipeLoader outputting image instead of clip
Added clip output to pipeKSampler
Added original pipe throughput to pipe>basic_pipe and pipe>detailer_pipe nodes
________________________________________________________________________________
Added config.json for auto-update (set to false by default) -
to use it, you will need to install the node pack via git clone https://github.com/TinyTerra/ComfyUI_tinyterraNodes.git
from inside the ComfyUI\custom_nodes
folder, and set autoUpdate to true in the config file for auto-update to work properly.
________________________________________________________________________________
Further updates will be on github only.
v1.1:
Added 'pipe > basic_pipe' and 'pipe > detailer_pipe' for better compatibility with ltdrdata's ImpactPack
v1:
Adds pipeLoader and pipeKSampler (a modified merge of the Efficiency nodes and Advanced CLIP Text Encode) for even more efficient loading and sampling,
along with pipe utils (IN, OUT, and MERGE).
GitHub Repo: https://github.com/TinyTerra/ComfyUI_tinyterraNodes
This is a requested guide on how to generate smooth GIFs using SD Auto1111. Overall, this is done in 3 big steps:
Finding the right resources for the generation
Processing the generated data into Gif
Post-processing the GIF by applying interpolation (smoothing)
For this guide you'll need to download:
Img Slices To Gif script (placed in the Scripts folder of Auto1111 and used within Auto1111)
Controlled Parameters Animation script (placed in the Scripts folder of Auto1111 and used within Auto1111)
Flowframes software
There are two types of resources you can use, leading to different types of animations:
Frames grid LoRAs: very precise on the subject they are trained on, but they tend to be less flexible. The best-known creator of this type of resource is aDDont 🔞
Weight progressing concept LoRAs: very flexible, but they don't cover a wide range of motion, and their precision is a bit jittery at times. The best-known creator of this type of resource is ntc.
As established above, we're covering 2 types of animations: [Frames Animation] and [Parameter Animation].
Frames Animation 🎞 :
For this you'll need to get a "Frames grid LoRA", and you'll need the Img Slices To Gif script for cutting the generated result.
Start by using the LoRA in "txt2img" at full weight, at the resolution mentioned by its creator. The result will look distorted, so don't worry.
Then "send to img2img", lower the LoRA weight to between 0.2 and 0.4, and set the denoising to between 0.28 and 0.4.
Now you can add whatever additional LoRAs you want and set the resolution as high as you want (but don't break the aspect ratio).
Now select "Img Slices To Gif" in the script dropdown menu, enable it, and set the parameters based on the "Frames grid LoRA" used; for example, for a 2x3 grid set XSlices=2 & YSlices=3.
Expand the Extra options if you want to toggle playing back and forth.
Now hit Generate! You can check the result in "stable-diffusion-webui\outputs\txt2img-images\txt2gif".
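The slicing step above can be sketched in Python. This is only a minimal illustration of how the XSlices and YSlices parameters map to crop boxes on the generated grid image, not the script's actual implementation; the function name and the left-to-right, top-to-bottom reading order are assumptions.

```python
def grid_crop_boxes(width, height, x_slices, y_slices):
    """Split an image of size (width, height) into x_slices * y_slices
    equal tiles, returned as (left, top, right, bottom) crop boxes in
    left-to-right, top-to-bottom order (one box per animation frame)."""
    tile_w = width // x_slices
    tile_h = height // y_slices
    boxes = []
    for row in range(y_slices):
        for col in range(x_slices):
            boxes.append((col * tile_w, row * tile_h,
                          (col + 1) * tile_w, (row + 1) * tile_h))
    return boxes

# A 2x3 grid (XSlices=2, YSlices=3) on a 1024x1536 image yields 6 frames:
frames = grid_crop_boxes(1024, 1536, 2, 3)
```

Each box could then be cropped out (e.g. with Pillow's `Image.crop`) and the tiles assembled into a GIF in order.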
Parameter Animation 🎛 :
For this you'll need to get a "Weight progressing concept LoRA", and you'll need the Controlled Parameters Animation script for progressing the LoRA weight.
Note that you can also try this approach with normal LoRAs.
Now select "Controlled Parameters Animation" in the script dropdown menu.
Choose Parameter Type "LoRA".
Add the LoRA name you want to control; for example, for "<lora:MyTestLora_v10:1>" I'll add "MyTestLora_v10".
Now specify the parameters: the start value (the value at which the LoRA weight starts moving), the end value (the value at which the LoRA weight stops progressing), and the step value (how much the LoRA weight progresses each frame).
Note that you can set the LoRA to progress backward by making the start value greater than the end value.
A lower step value like 0.01 makes for more generated frames and a much smoother transition.
Now add the entered parameters by clicking "Add Parameter Layer".
Expand the Extra options and hit "Estimate Output" to check how many images will be generated.
you can also toggle playing back and forth by checking "Pingpong".
Now hit Generate! You can check the result in "stable-diffusion-webui\outputs\txt2img-images\txt2gif".
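The start/end/step mechanics above (including backward progression and the "Pingpong" option) can be made concrete with a short Python sketch. The function name is hypothetical and this is not the script's actual code, just an illustration of how the per-frame weights and the frame count estimate could be derived.

```python
def lora_weight_sequence(start, end, step, pingpong=False):
    """Return the LoRA weight used for each frame, stepping from start
    to end inclusive; start > end progresses backward. With pingpong,
    the sequence is mirrored so playback loops back and forth."""
    if step <= 0:
        raise ValueError("step must be positive")
    direction = 1 if end >= start else -1
    n = int(round(abs(end - start) / step)) + 1  # estimated frame count
    weights = [round(start + direction * step * i, 6) for i in range(n)]
    if pingpong:
        weights += weights[-2::-1]  # mirror without repeating the endpoint
    return weights

# start=0.0, end=1.0, step=0.25 -> [0.0, 0.25, 0.5, 0.75, 1.0]
```

This also shows why a small step like 0.01 produces many frames: stepping from 0.0 to 1.0 by 0.01 yields 101 weights, hence 101 generated images.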
For this you'll need the Flowframes software installed. When prompted to choose an interpolation model to download, just choose the latest RIFE model (RIFE CUDA for NVIDIA).
The model file is a bit large and may fail to download at first; just keep retrying by going to Settings / Application / Manage Downloaded Model Files / Open Model Downloader.
Now, in the Interpolation tab, choose the AI model that you downloaded,
click Browse Video, and load the generated GIF from "stable-diffusion-webui\outputs\txt2img-images\txt2gif".
Experiment with the output speed; I mostly use "x3 Speed" with "x2 Slowmo" for smooth 30 fps GIFs.
Finally, set the output format to GIF.
You might want to lower the quality and color palette for smaller file sizes.
Now hit Interpolate! You'll find the result in the same directory as the input.
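As a rough sanity check on the speed/slowmo combination: my understanding (an assumption about how Flowframes combines the two settings, not documented behavior) is that interpolation multiplies the frame count while slow-motion stretches playback, so the effective output rate is roughly input fps × speed ÷ slowmo. For example, a ~20 fps GIF with x3 Speed and x2 Slowmo lands at about 30 fps:

```python
def output_fps(input_fps, speed_factor, slowmo_factor=1):
    """Estimate the effective output frame rate: interpolation
    multiplies the frame count by speed_factor, while slow-motion
    stretches playback, dividing the effective rate."""
    return input_fps * speed_factor / slowmo_factor

# A ~20 fps input with x3 Speed and x2 Slowmo -> 30.0 fps
rate = output_fps(20, 3, 2)
```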
These are a collection of upscalers that I use.
Extract the files into your models\ESRGAN folder, then close and relaunch Automatic1111 or VladDiffusion (whichever you use). They should also work in Colab, but I don't know how to set that up there.
A simple tool to automatically make GIFs within Auto1111 from animation-frame LoRAs.
Tips:
Install by moving it to the /Scripts folder.
The resulting GIF is not shown after generation; instead, you can find it in Outputs/txt2img-images/txt2gif.
Cozy Nest is a UI extension for Automatic's sd-webui.
https://github.com/Nevysha/Cozy-Nest
Go to the Extensions tab and search for "Cozy-Nest", or install it manually by following these steps:
Open your SD-Webui
Go to the Extensions tab
Add the extension by pasting this URL (this extension is not public in the extension index yet): https://github.com/Nevysha/Cozy-Nest
Resizable panels
Full Screen Inpainting
Customizable tab menu position (top, left, centered)
Cozy look with dark or light theme (add ?__theme=light to the URL, or set --theme=light in the Auto1111 start arguments, to switch to the light theme)
Bypass Cozy Nest by adding CozyNest=No as a URL param (i.e. http://localhost:8501/?CozyNest=No) - useful for mobile
Save resize bar position / panel ratio in local storage
Customize accent color
Add or remove accent to the generate buttons
Customize font size
Move settings in a dedicated collapsible and movable tab
Smaller bottom padding bar to get a bit more screen space
Setting to center the top menu tabs
Setting to remove the gap between checkpoint and other quicksetting
Setting to center quicksetting
Loading screen with estimated percentage based on previous loading time
Make the settings tab movable
Extra network in a dedicated tab:
Resizable side panel
Customizable card size
Drag and drop tab buttons inside or outside a "tab container" to move them into or out of the main menu
Left-sided Extra Networks tab
Close Extra Network tab with escape key
Fetch the version from a dedicated JSON file hosted directly in the repo, for an easier view of Cozy Nest updates.
URL for extension's git repository
https://github.com/new-sankaku/stable-diffusion-webui-metadata-marker.git
--
It is an extension for txt2img.
The output image will have generation information rendered on it.
It is designed mainly for verification purposes.
For example, you can verify differences that arise from various prompts.
Additionally, it can be useful when you want to share generated information on Twitter.
This extension has the following features:
You can switch the output image settings.
You can set the information to be rendered on each item.
The font size automatically adjusts based on the image size.
If there is a feature you would like to add, please let us know.
I will accommodate your request if possible.
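The automatic font-size adjustment can be illustrated with a small Python sketch. The scaling rule and clamp values below are assumptions chosen for illustration, not the extension's actual logic: the idea is simply to scale the rendered text with the shorter image edge while keeping it readable.

```python
def auto_font_size(width, height, min_size=12, max_size=64):
    """Pick a font size proportional to the shorter image edge,
    clamped to a readable range (assumed divisor of 32)."""
    size = min(width, height) // 32
    return max(min_size, min(max_size, size))

# 512x512 -> 16, 1024x1536 -> 32
```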