Releases: bghira/SimpleTuner
v0.3.1 - resolution bugfix
This release is an important update: VAE cached latents were previously forced to 1024px.
Documentation 578c49c, 39aac5c, 767fc75
Update env example to fix terminal SNR parameters plus reorganise it 03e82fc
Validation resolution can be separate 167ab10
AWS: Strip the bucket name from file list 821e3bf
Update S3 downloader for better threshold defaults 1fde1e9
csv_to_s3: similarity bump 9058e14
VAECache: shuffle samples to allow multiple machines to assist in caching 5f91f81
revert ubuntu script changes a4e1cf8
updates 36c47ff
MultiAspectSampler: do not resize to 1024px unconditionally 8d554d9 , d446636
train_sdxl: fix typo in log line 401cf82
VAECache: fix init with resolution arg. Image latents were always being resized to 1024px.
v0.3.0 - Cloudy edition
This version of the software will require re-seeding the cache directories.
It is incompatible with previous versions, and swapping between versions for training runs will be difficult.
Changelog:
🌟 New Features:
- Introducing DataBackend:
- A powerful new abstraction that brings flexibility to your data I/O operations.
- With the new DataBackend, you can now seamlessly switch between different storage solutions; a rough sketch of the abstraction follows this feature list. This release introduces support for:
- Local Filesystem: Continue using your local storage just like before.
- S3-compatible storage: Scale up your operations and store data on S3 providers such as R2 or Wasabi, without changing your workflow.
- Progress bars, cleaner startup messaging.
- Use `env SIMPLETUNER_LOG_LEVEL=INFO` (or DEBUG, WARNING, ERROR) to change the verbosity.
- SDXL to CivitAI-compatible Safetensors Conversion Script:
- Making it easier than ever to publish your trained SDXL checkpoints as a single CivitAI-compatible safetensors file.
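To make the DataBackend abstraction concrete, here is a minimal sketch of the kind of interface it provides. The class and method names below are illustrative assumptions, not SimpleTuner's exact API:

```python
from abc import ABC, abstractmethod
import os

class DataBackend(ABC):
    """Abstracts file I/O so the trainer never cares where data actually lives."""

    @abstractmethod
    def read(self, location: str) -> bytes: ...

    @abstractmethod
    def write(self, location: str, data: bytes) -> None: ...

    @abstractmethod
    def exists(self, location: str) -> bool: ...

class LocalDataBackend(DataBackend):
    """The local filesystem case: thin wrappers over plain file operations."""

    def read(self, location: str) -> bytes:
        with open(location, "rb") as f:
            return f.read()

    def write(self, location: str, data: bytes) -> None:
        with open(location, "wb") as f:
            f.write(data)

    def exists(self, location: str) -> bool:
        return os.path.exists(location)
```

Because the trainer only talks to the interface, an S3-backed implementation can be substituted without touching the training loop.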
Local backend
- All of the old logic, abstracted away into a hidey-hole class.
- Still works as it used to! No changes needed for you.
S3 Data Backend (Experimental)
- Mostly a drop-in replacement for local filesystem operations
- Retrieve images from S3
- Store VAE latent cache in S3
- Store the aspect bucket cache inside the S3 bucket
- Compatible with S3-compatible providers, e.g. Wasabi
- Does not currently make use of or support Prefixes
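A minimal sketch of how such a backend can wrap an S3-compatible provider with boto3. The endpoint URL, placeholder credentials, and class shape here are illustrative assumptions, not SimpleTuner's actual implementation:

```python
import boto3
from botocore.exceptions import ClientError

# Any S3-compatible endpoint works, e.g. Wasabi or Cloudflare R2 (illustrative URL).
client = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)

class S3DataBackend:
    """Same read/write/exists surface as the local backend, but against a bucket."""

    def __init__(self, client, bucket: str):
        self.client = client
        self.bucket = bucket

    def read(self, key: str) -> bytes:
        return self.client.get_object(Bucket=self.bucket, Key=key)["Body"].read()

    def write(self, key: str, data: bytes) -> None:
        self.client.put_object(Bucket=self.bucket, Key=key, Body=data)

    def exists(self, key: str) -> bool:
        try:
            self.client.head_object(Bucket=self.bucket, Key=key)
            return True
        except ClientError:
            return False
```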
S3 Limitations
- Text embed cache is still stored locally, due to the inefficiency of small files
- S3 metadata is not in use for storing image properties eg. size / luminance
- Some optimizations have not been made yet. Notably, egress costs will be higher than they should be.
- VAE cache latents are stored in the bucket to make them portable between machines.
- This is perhaps as bandwidth-optimized as it can be, with the startup trying to read as few files as possible.
🔧 Improvements & Refinements:
- Enhanced logging capabilities for prompt embeddings, offering better insights into your operations.
- Improved data storage with UTF-8 encoding, ensuring compatibility and consistency across platforms.
🐛 Bug Fixes:
- Addressed an issue with the `use_captions` behavior in the training script.
- Made minor fixes to ensure seamless interaction with the S3 storage backend.
- Fixed #60
- Fixed #59
- Fixed #58
Full Changelog: v0.2.3...v0.3.0
v0.3.0-rc1 - cloudy edition
Merge pull request #61 from bghira/main v0.3.0 changes
v0.2.3 - Geisha edition
Aspect bucketing changes
- More robust handling of `seen_images`, improving on v0.2.2 changes.
- More efficient bucket changing mechanism. No longer considers exhausted buckets.
- Fix for running out of images early and looping forever.
- Running out of images in a single bucket is downgraded from a WARNING log to DEBUG, as this is a normal operating condition.
- Once ALL buckets are exhausted, we will log the training state, and bump `current_epoch` by one.
- Fixed logging message so that the `remaining` image count is correctly updated to reflect that it is the `total` image count.
Optimizers
- Added `--use_adafactor_optimizer` without the configurable flags, making it a drop-in for AdamW8Bit. Use with caution, as it is not tested. However, it could help with consumer GPU training support (24G).
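For reference, this is roughly what such a drop-in amounts to, assuming the transformers implementation of Adafactor with fixed-step settings; the learning rate and model here are illustrative, not SimpleTuner's defaults:

```python
import torch
from transformers.optimization import Adafactor

model = torch.nn.Linear(8, 8)  # stand-in for the UNet being trained

# With relative_step and scale_parameter disabled, Adafactor takes a fixed
# learning rate and behaves as a memory-lean drop-in for AdamW-style optimizers.
optimizer = Adafactor(
    model.parameters(),
    lr=1e-4,               # illustrative value
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
```

Adafactor's factored second-moment estimates are what make it attractive for 24G consumer cards.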
Prompt library
- Adjusted some prompts, removed prompt weighting
- User prompt library can now be created and used in addition to, or in place of, the SimpleTuner prompt library. See the `--user_prompt_library` option help and the `user_prompt_library.json.example` document for more information; a sketch of the expected format follows the prompt list below.
- New prompts added to the SimpleTuner library:
- portrait photography of a beautiful Japanese young geisha with make-up looking at the camera, intimate portrait composition, high detail, a hyper-realistic close-up portrait, symmetrical portrait, in-focus, portrait photography, hard edge lighting photography, essay, portrait photo man, photorealism, serious eyes, leica 50mm, DSLRs, f1. 8, artgerm, dramatic, moody lighting, post-processing highly detailed
- Seems to be a really strong concept in SDXL. If it breaks, you are in danger.
- The Great Wave off Kanagawa
- a stunning portrait of a soviet television news show in a 1977 wes anderson style 70mm film shoot
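For anyone building their own library, a minimal sketch of producing the file, assuming a simple shortname-to-prompt mapping along the lines of `user_prompt_library.json.example` (the keys and path here are illustrative):

```python
import json

# Hypothetical shortname -> prompt mapping.
user_prompts = {
    "great_wave": "The Great Wave off Kanagawa",
    "soviet_news": "a stunning portrait of a soviet television news show "
                   "in a 1977 wes anderson style 70mm film shoot",
}

with open("user_prompt_library.json", "w", encoding="utf-8") as f:
    json.dump(user_prompts, f, ensure_ascii=False, indent=4)
```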
General changes
- Better logging, with less noise
- Trainer statistics are now printed on every epoch refresh, when `SIMPLETUNER_LOG_LEVEL=INFO` or higher
- Renamed `seen_state.json` to `seen_images.json`
- Added `--tracker_project_name` and adjusted default value of `--tracker_run_name`
- Adjusted value of `--learning_rate_end` for Polynomial scheduler to `4e-7` from `1e-7`
- Refactored data loaders and bucket samplers to be more robust and maintainable
- Training state is now saved as a part of the checkpoint itself. Upon resume, the training state is resumed from the checkpoint. This is a breaking change.
- Fix save path for final checkpoint.
Full Changelog: v0.2.2...v0.2.3
v0.2.2 - astronaut edition
Sample output: SDXL v_prediction, (under)trained via SimpleTuner.
What's Changed
- Resolve an issue with over-sampling of images, increasing randomness
- Resolve post-training validation error
Full Changelog: v0.2.1...v0.2.2
v0.2.1
v0.2.0 - Experimental, beware!
What's Changed
- Next-gen data loader and aspect bucket infrastructure that is incompatible with previous releases
- Split the aspect bucket sampler and dataset code out so that it is more readable and maintainable.
- VAE cache now uses image filenames instead of SHA hashes. This speeds up creation, and makes it easier to remove the cache entries at some future point.
- More efficient aspect bucketing, lower system memory use, higher CPU utilization, better saturation of disk I/O
- Aspect buckets are now rounded down to two decimals; a sketch of the rounding follows this list
- Prompt handler class for managing captions
- Working SDXL trainer, thanks to image normalization fix
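As a worked example of the two-decimal bucket rounding mentioned above: the width/height ratio is floored rather than rounded, so near-identical shapes collapse into the same bucket. A sketch, not SimpleTuner's exact code:

```python
import math

def aspect_bucket(width: int, height: int) -> float:
    # Round DOWN to two decimals: 1.503 and 1.509 both land in bucket 1.50.
    return math.floor((width / height) * 100) / 100

print(aspect_bucket(1536, 1024))  # 1.5
print(aspect_bucket(1546, 1024))  # 1.5 as well, despite the extra width
```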
Full Changelog: v0.1.10...v0.2.0
v0.1.11 - Min-SNR weighted loss, and dropout fixes
What's Changed
- Luminance tracking for WandB by @bghira in #31
- Track training luminance vs validation luminance
- Dropout: add SDXL caption dropout compatibility by @bghira in #32
- Logging luminance for training data
- Input perturbation for SDXL (Experimental)
- Caption dropout fix for SDXL
- Save 'seen' state for SDXL aspect bucketing by @bghira in #33
- Min-SNR weighted loss, helps converge more quickly. (Experimental)
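Min-SNR weighting clips each timestep's loss weight so that easy, high-SNR timesteps stop dominating the gradient. A minimal sketch for epsilon-prediction, assuming a diffusers-style scheduler exposing `alphas_cumprod`; the gamma default follows the Min-SNR paper, not necessarily SimpleTuner:

```python
import torch

def min_snr_loss_weights(timesteps, alphas_cumprod, gamma: float = 5.0):
    """Per-sample loss weights: min(SNR, gamma) / SNR, for epsilon-prediction."""
    abar = alphas_cumprod[timesteps]
    snr = abar / (1.0 - abar)
    return snr.clamp(max=gamma) / snr

# Usage sketch: scale the per-sample MSE before reducing.
# loss = (min_snr_loss_weights(t, scheduler.alphas_cumprod) * mse_per_sample).mean()
```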
Full Changelog: v0.1.8...v0.1.11
v0.1.9
What's Changed
- Import SimpleTuner prompt validation library via --validation_prompt_library by @bghira in #29
- New dataloader arguments for deleting problematic images by @bghira in #30
- Arguments: add terminal SNR parameters for tweaking, rather than being baked-in
- Make terminal SNR opt-in
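For context on what the terminal SNR parameters control: zero-terminal-SNR training rescales the noise schedule so the final timestep carries no signal at all. Below is a sketch of the standard rescaling from Lin et al. (2023), "Common Diffusion Noise Schedules and Sample Steps Are Flawed"; SimpleTuner's exact wiring of these flags may differ:

```python
import torch

def rescale_betas_zero_terminal_snr(betas: torch.Tensor) -> torch.Tensor:
    """Rescale a beta schedule so that SNR at the final timestep is exactly zero."""
    alphas_bar_sqrt = torch.cumprod(1.0 - betas, dim=0).sqrt()
    first, last = alphas_bar_sqrt[0].clone(), alphas_bar_sqrt[-1].clone()
    # Shift so the last timestep hits exactly zero, then rescale so the
    # first timestep keeps its original value.
    alphas_bar_sqrt = (alphas_bar_sqrt - last) * first / (first - last)
    alphas_bar = alphas_bar_sqrt ** 2
    alphas = torch.cat([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas
```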
Full Changelog: v0.1.8...v0.1.9