Commit
* Add more GitHub shields to Readme
* Add more GitHub shields to Readme
* Update docs to include Pull Request warning, update requirements installer with latest PyTorch
* Patch token limit warning, make tests fail after any error, update docs to include testing and update some pages with better wording and formatting
* Enable loading of Checkpoint files / Safetensors, update docs, fix failing test for main
* AIT batch size patch, CKPT/Safetensors model loading, stop PerfLoop from crashing, bump diffusers to latest on main (required for ckpt), better handle local models, requirements installer now ignores git+http lines, suppress some UserWarnings from Torch 2.0 transfer
* Patch psutil Linux import that breaks Windows
* Deleted old diffusers, added settings page, new CI for testing yarn build, rebuilt frontend
* Update yarn build CI
* Update GitHub templates
* AIT Image to Image support
* New Config settings, updated docs, xFormers for PT2.0
* Update feature flags in the docs
* (internal) Prep for MultiModel loading, Real-ESRGAN
* Add WSL install script
* Fix WSL install script
* Update WSL script, add docs for WSL setup
* Fix incorrect link in the docs
* Add new PyTorch tests
* PyTorch optimizations (#48)
* PyTorch VRAM optimizations
* Little cleanup & fixed OL==4
* Clean up the code, expose the optLevel option in the main script, link it into the config
* Fix AIT pipe optimization level
* Reformat with Black
* Integration in the frontend, config saving, docs
* Update default value in main.py to load from a config
---------
Co-authored-by: Stax124 <[email protected]>
* Update docs for local installation
* Update default value for optLevel, xFormers fix, more opt logging
* Patch config pointer overwrite bug
* Raise error when trying to run ControlNet with opt=0
* (internal) AITControlnet
* AIT ControlNet working
* Updated dead link in readme, rebuild frontend
* Fix incorrect Unet switching
* New unit tests, cached AIT compile
* Add extension recommendations for VSCode
* Fix AIT Canny ControlNet, and new ControlNet options (depth, normal, segmentation, fix MLSD)
* Update feature docs, fix AIT compile frontend websockets, update docs name and description
* Fix typo in docs
* Do not raise on failed ControlNet Unet compile, as some models are not capable of it
* Nuke cluster.py, UI for converting models, do not crash AIT on ControlNet Unet fail
* Config naming revamp, initial work on SD Upscale, clean up old code, update ImageOutput Vue class for better compatibility
* Extract lwp into a new module, enable lwp in ControlNet
* Purge cache_dir variable, fix AIT path error, initial support for LoRA (no UI)
* Pull LoRA branch into experimental (#52)
* Initial LoRA support
* Autorefresh UI on model change, rebuilt frontend
* Split optLevel into multiple parameters, make Loading UI more reliable, add UI word counter
* LoRA UI working, unit test for LoRA, more consistent model API responses
* Add format script to yarn
* Set DPM M as the default scheduler, dynamic metadata write, version-lock gpustat
* Enable use of CPU
* Update autoinstaller to support AMD cards
* Better support for CPU and AMD users, initial work on Rust-based install CLI
* ONNX support (#49)
* Update gitignore
* PyTorch -- ONNX conversion. Basics done; _encode_prompt() should work as well, and so should aten::scaled_dot_product_attention conversion, but I'm not sure, needs some more testing.
* Refactor a bit + comment
* Rewrite conversion. This should hopefully reduce model size and allow for UNet quantization. BREAKS LOADING COMPLETELY!!!
* (Theoretically) Fix loading & txt2img. In theory, this should have zero problems generating images. Didn't test it though, so it's all theory.
* Try to fix inference (load works, inference does not)
* Fix txt2img
* Improve performance. This was done in school, so nothing is tested (probably broke everything).
* Txt2img completely done. This was tested on Windows with CUDAExecutionProvider, DmlExecutionProvider and CPUExecutionProvider. CUDA and DirectML provide (at least currently) about the same performance on my RTX 3080, although this may just be due to my subpar implementation. CPUExecutionProvider is faster on native Windows than on WSL (by about 25%): in practice, on my Ryzen 9 5900X, WSL is 20 s/it and Windows is 15 s/it (may have been a fluke, who knows at this point).
* Move functions to helpers / Basic img2img
* FP16 fallback for failed quantization
* Img2img working :)
* Inpainting
* Fix conversion, refactor; inpainting does not work
* Reformat with Black
---------
Co-authored-by: Stax124 <[email protected]>
Co-authored-by: Stax124 <[email protected]>
* Clean onnx.py
* Rename file to fix self-import errors
* Rust-based installer prep
---------
Co-authored-by: Márton Kissik <[email protected]>
* Fix Cache-Control, initial work on S3/R2 bucket support
* ONNX pipeline added to API
* Use WebP image format wherever possible
* Fix unit tests, generate random image
* Fix ROCm UI
* Move some scripts to package.json
* Update dockerignore
* Cloudflare R2 bucket support
* Add boto3 as a requirement
* Add Send To component in the UI
* AIT working with .ckpt/.safetensors
* Update .env and docker-compose.yml, Installer renamed to Manager, CI for Manager
* Update the workflow to include experimental
* ToMeSD support (#53)
* ToMeSD
* requirements.txt
* Fix typo
* Frontend
* Format with Black
---------
Co-authored-by: Stax124 <[email protected]>
* AIT VRAM savings, better ImageOutput UI, better SendTo UI
* Pin diffusers and controlnet-aux as new versions were released
* New Image Upload UI
* UI shows only valid models
* Update docs, fix duplicated image in the UI
* More Volta-Manager progress
* Manager: remove some points where it can panic, update workflow to leave artifacts on GitHub
* Fix typo in action
* Fix typo
* More work on Manager
* Slowly wrap up work on Manager, main.py handles dotenv
* HighRes Fix, AIT frontend slider lock, Flags
* Fix AIT property lock
* Ignore AITemplate directory
* Interrogators (#56)
* Try to quantize things
* Start work on interrogators (DeepDanbooru is the only one tested, and it works)
* Finish interrogation work
* Revert lwp_sd.py
* Refactor
* Partial frontend for tagger
* Fix requirement installer
* Frontend functional
---------
Co-authored-by: Stax124 <[email protected]>
* Rebuild frontend
* Correct number of steps for Img2img and HiRes
* Add an option to install dependencies without running UI or checking for HF tokens (#59)
* Add option to install deps only
* Restructure
---------
Co-authored-by: Stax124 <[email protected]>
* Fix ControlNet Depth and Normal being non-functional due to breaking changes in controlnet-aux
* Automatic memory cleanups
* Rename Exit to Back in submenus for Manager
* ControlNet and ControlNet preprocessor caching
* Update Dockerfile with the latest changes
* Add OutputStats to UI (seed, time taken)
* Add Delete and Download buttons to Image Browser, enable SendTo functionality from Image Browser
* Fix AIT Dimlock
* ImageBrowser Delete Modal (+ icons), more consistent UI spacing, Interrupt button will be disabled if not generating an image
* Fix AITemplate UNet swapping
* Fix UI grid "jitter" when switching pages
* Add validation error notification to the UI when the API refuses the body
* Disable ToMeSD by default, update dockerignore
---------
Co-authored-by: Márton Kissik <[email protected]>
Co-authored-by: aaronsantiago <[email protected]>