Compare commits

...

233 Commits

Author SHA1 Message Date
dependabot[bot] ae05379cc9
ci: bump actions/checkout from 4 to 5 (#4085)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-02 22:28:40 +02:00
Marvin M 59f183ab9b
Fix: Readme path (#3841)
* Readme path fix

* fix '/' -> '\'
2025-01-24 11:55:35 +01:00
Manuel Schmid 4b5021f8f6
docs: remove link to SimpleSDXL (#3837)
see https://github.com/lllyasviel/Fooocus/issues/3836
2025-01-14 06:14:45 +01:00
lllyasviel d7439b2d60
Update readme.md 2024-08-18 23:02:09 -07:00
lllyasviel 670d798332
Update Project Status 2024-08-18 22:42:25 -07:00
Manuel Schmid 8da1d3ff68
Merge pull request #3507 from lllyasviel/develop
Release 2.5.5
2024-08-12 08:11:06 +02:00
Manuel Schmid 710a9fa2c5
release: bump version to 2.5.5, update changelog 2024-08-12 08:10:20 +02:00
Manuel Schmid 251a130f06
fix: move import to resolve colab issue (#3506) 2024-08-12 07:59:00 +02:00
Manuel Schmid 0a87da7dc1
Merge pull request #3503 from lllyasviel/develop
feat: change code owner from @mashb1t to @lllyasviel
2024-08-11 20:31:05 +02:00
Manuel Schmid 1d98d1c760
feat: change code owner from @mashb1t to @lllyasviel 2024-08-11 20:29:35 +02:00
Manuel Schmid 1068d3fde4
Merge pull request #3499 from lllyasviel/develop
Release 2.5.4
2024-08-11 18:50:18 +02:00
Manuel Schmid 082a5262b0
release: bump version to 2.5.4, update changelog 2024-08-11 18:48:31 +02:00
Manuel Schmid 14895ebb13
hotfix: yield enhance_input_image to correctly preview debug masks (#3497)
image sorting starts at index <images_to_enhance_count>, which is 1 if enhance_input_image has been provided
2024-08-11 17:05:24 +02:00
Manuel Schmid b0d16a3aa7
fix: check all dirs instead of only the first one (#3495)
* fix: check all checkpoint dirs instead of only the first one for models

* fix: use get_file_from_folder_list instead of manually iterating over lists

* refactor: code cleanup
2024-08-11 15:31:24 +02:00
Manuel Schmid fd74b57f56
Merge pull request #3472 from lllyasviel/develop
fix: adjust validation of config settings
2024-08-08 13:17:06 +02:00
Manuel Schmid 8bd9ea1dbf
fix: correctly validate default_inpaint_mask_sam_model 2024-08-08 13:15:15 +02:00
Manuel Schmid ee12d114c1
fix: add handling for default "None" value of default_ip_image_* 2024-08-08 13:15:04 +02:00
Manuel Schmid 2c78cec01d
Merge pull request #3436 from lllyasviel/develop
fix: change wrong label in describe apply styles checkbox
2024-08-03 15:18:24 +02:00
Manuel Schmid ef0acca9f9
fix: change wrong label in describe apply styles checkbox 2024-08-03 15:16:18 +02:00
Manuel Schmid 60af8d2d84
Merge pull request #3434 from lllyasviel/develop
Release 2.5.3 - fix changelog
2024-08-03 15:11:35 +02:00
Manuel Schmid 39d07bf0f3
release: fix changelog 2024-08-03 15:10:27 +02:00
Manuel Schmid f0dcf5a911
Merge pull request #3433 from lllyasviel/develop
Release 2.5.3
2024-08-03 15:08:34 +02:00
Manuel Schmid c4d5b160be
release: bump version to 2.5.3, update changelog 2024-08-03 15:07:14 +02:00
Manuel Schmid 2f08cb4360
feat: add checkbox and config to disable updating selected styles when describing an image (#3430)
* feat: add checkbox and config to disable updating selected styles when describing an image

* i18n: add translation for checkbox label

* feat: change describe content type from Radio to CheckboxGroup, add config

* fix: cast set to list when styles contains elements

* feat: sort styles after describe
2024-08-03 14:46:31 +02:00
Sergii Dymchenko da3d4d006f
Use weights_only for loading (#3427)
Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
2024-08-03 12:33:01 +02:00
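A minimal sketch of the loading pattern this commit adopts: passing weights_only=True to torch.load (available since PyTorch 1.13) restricts unpickling to tensors and primitive containers, so a malicious checkpoint file cannot execute arbitrary code on load.

    import torch

    def load_checkpoint(path: str):
        # weights_only=True blocks arbitrary object unpickling during load
        return torch.load(path, map_location="cpu", weights_only=True)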
Manuel Schmid c2dc17e883
Merge pull request #3384 from lllyasviel/develop
Release v2.5.2
2024-07-27 23:29:40 +02:00
Manuel Schmid 1a53e0676a
release: bump version to 2.5.2, update changelog 2024-07-27 23:26:42 +02:00
Manuel Schmid a5040f6218
feat: count image count index from 1 (#3383)
* docs: update numbering of basic debug procedure in issue template
2024-07-27 23:07:44 +02:00
Manuel Schmid 3f25b885a7
feat: extend config settings for image input (#3382)
* docs: update numbering of basic debug procedure in issue template

* feat: add config default_image_prompt_checkbox

* feat: add config for default_image_prompt_advanced_checkbox

* feat: add config for default_inpaint_advanced_masking_checkbox

* feat: add config for default_invert_mask_checkbox

* feat: add config for default_developer_debug_mode_checkbox

* refactor: regroup checkbox configs

* feat: add config for default_uov_method

* feat: add configs for controlnet

default_controlnet_image_count, ip_images, ip_stop_ats, ip_weights and ip_types

* feat: add config for selected tab, rename desc to describe
2024-07-27 23:03:21 +02:00
Manuel Schmid e36fa0b5f7
docs: update numbering of basic debug procedure in issue template (#3376) 2024-07-27 13:14:44 +02:00
Manuel Schmid 1be3c504ed
fix: add positive prompt if styles don't have a prompt placeholder (#3372)
fixes https://github.com/lllyasviel/Fooocus/issues/3367
2024-07-27 12:35:55 +02:00
Manuel Schmid c4ce2ce600
Merge pull request #3359 from lllyasviel/develop
Release v2.5.1
2024-07-25 16:00:08 +02:00
Manuel Schmid 03655fa5ea
release: bump version to 2.5.1, update changelog 2024-07-25 15:22:02 +02:00
Manuel Schmid a9248c8e46
feat: sort enhance images (mashb1t#62)
* feat: add checkbox, config and handling for saving only the final enhanced image

* feat: sort output of enhance feature

(cherry picked from commit 9d45c0e6ca)
2024-07-25 15:21:56 +02:00
Manuel Schmid 37360e95fe
feat: add checkbox, config and handling for saving only the final enhanced image (mashb1t#61)
(cherry picked from commit 829a6dc046)
2024-07-25 15:21:37 +02:00
Manuel Schmid 54985596e8
Merge remote-tracking branch 'upstream/main' into develop_upstream 2024-07-21 12:37:07 +02:00
Manuel Schmid 3a20e14ca0
docs: update attributes and add inline prompt features section to readme (#3333)
* docs: update attributes and add inline prompt features section to readme
* docs: update attributes to better show corresponding mutually exclusive groups
2024-07-21 12:36:54 +02:00
Manuel Schmid 2262061145
docs: update attributes to better show corresponding mutually exclusive groups 2024-07-21 12:36:04 +02:00
Manuel Schmid 56928b769b
docs: update attributes and add inline prompt features section to readme 2024-07-21 12:31:17 +02:00
Manuel Schmid 2e8cff296e
fix: correctly debug preprocessor again (#3332)
fixes https://github.com/lllyasviel/Fooocus/issues/3327
as discussed in https://github.com/lllyasviel/Fooocus/discussions/3323

add missing inheritance for EarlyReturnException from BaseException to correctly throw and catch
2024-07-21 11:49:28 +02:00
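Background for the fix above: a class can only be raised and caught in Python if it derives from an exception base class; raising a bare class fails with TypeError. A minimal sketch, with the pipeline wiring assumed:

    class EarlyReturnException(BaseException):
        """Raised to abort the generation pipeline early while debugging."""

    def run_pipeline(debugging_cn_preprocessor: bool):
        try:
            if debugging_cn_preprocessor:
                # raising works only because of the inheritance above
                raise EarlyReturnException()
            # ... full generation would continue here ...
        except EarlyReturnException:
            return  # stop after yielding the preprocessor preview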
Manuel Schmid f597bf1ab6
fix: allow reading of metadata from jpeg, jpg and webp again (#3301)
also massively improves metadata read speed by switching from filepath (tempfile) to pil, which allows direct processing
2024-07-17 23:30:51 +02:00
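A hedged sketch of the direct-from-PIL read described above (the exact Fooocus lookup may differ): PNG carries the text in im.info, while jpeg/jpg/webp carry it in the Exif UserComment field.

    from PIL import Image

    with Image.open("output.webp") as im:
        meta = im.info.get("parameters")          # PNG text chunk
        if meta is None:
            exif = im.getexif()
            # Exif IFD (0x8769) -> UserComment (0x9286)
            meta = exif.get_ifd(0x8769).get(0x9286)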
Manuel Schmid f97adafc09
Merge pull request #3292 from lllyasviel/develop
Release v2.5.0
2024-07-17 12:18:08 +02:00
Manuel Schmid 97a8475a62
feat: revert disabling persistent style sorting, code cleanup 2024-07-17 12:04:34 +02:00
Manuel Schmid 033cb90e6e
feat: revert adding issue templates 2024-07-17 11:52:20 +02:00
Manuel Schmid aed3240ccd
feat: revert adding audio tab 2024-07-17 11:45:27 +02:00
Manuel Schmid 4f12bbb02b
docs: add instructions how to manually update packages, update download URL in readme 2024-07-17 11:37:21 +02:00
Manuel Schmid 9f93cf6110
fix: resolve circular dependency for sha256, update files and init cache after initial model download
fixes https://github.com/lllyasviel/Fooocus/issues/2372

(cherry picked from commit 5c43a4bece)
2024-07-17 10:51:50 +02:00
Manuel Schmid 1f429ffeda
release: bump version to 2.5.0, update changelog 2024-07-17 10:30:58 +02:00
Manuel Schmid 8d67166dd1
chore: use opencv-contrib-python-headless
https://github.com/lllyasviel/Fooocus/pull/1964
(cherry picked from commit 1f32f9f4ab)
2024-07-16 19:56:39 +02:00
Manuel Schmid 3a86fa2f0d
chore: update packages #2 2024-07-16 16:31:15 +02:00
Manuel Schmid ef8dd27f91
chore: update packages
see https://github.com/lllyasviel/Fooocus/pull/2927
2024-07-16 16:30:47 +02:00
Manuel Schmid d46e47ab3d
feat: revert adding translate feature #2 2024-07-16 14:48:54 +02:00
Manuel Schmid 069bea534b
feat: change example audio file
(cherry picked from commit 02b06ccb33)
2024-07-16 13:59:51 +02:00
Manuel Schmid e0d3325894
i18n: rename document to documentation 2024-07-14 21:40:10 +02:00
Manuel Schmid 5a1003a726
docs: update link for enhance documentation 2024-07-14 21:31:59 +02:00
Manuel Schmid 5e8110e430
i18n: adjust translations to use proper english for plural tab titles 2024-07-14 21:07:12 +02:00
Manuel Schmid ee02643020
feat: revert adding detailed steps for each performance 2024-07-14 21:06:59 +02:00
Manuel Schmid e1f4b65fc9
feat: revert adding translate feature 2024-07-14 20:35:39 +02:00
Manuel Schmid f2a21900c6
Sync branch 'mashb1t_main' with develop_upstream 2024-07-14 20:28:38 +02:00
dependabot[bot] 5a71495822
build(deps): bump docker/build-push-action from 5 to 6 (#3223)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-01 20:03:25 +02:00
licyk 34f67c01a8
feat: add restart sampler (#3219) 2024-07-01 14:24:21 +02:00
Manuel Schmid 9178aa8ebb
feat: add vae to possible preset keys (#3177)
set default_vae in any preset to use it
2024-06-21 20:24:11 +02:00
Manuel Schmid 7c1a101c0f
hotfix: add missing method in performance enum (#3154) 2024-06-16 18:53:20 +02:00
Manuel Schmid 9d41c9521b
fix: add workaround for same value in Steps IntEnum (#3153) 2024-06-16 18:44:16 +02:00
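The workaround above addresses a standard-library quirk worth spelling out (member names here are illustrative, not the exact Fooocus enum): IntEnum members with equal values become aliases, so value lookups collapse to the first member.

    from enum import IntEnum

    class Steps(IntEnum):
        SPEED = 30
        LIGHTNING = 4
        HYPER_SD = 4   # alias of LIGHTNING, not a distinct member

    print(Steps(4).name)   # "LIGHTNING" -- HYPER_SD is unreachable by value
    print(list(Steps))     # aliases are also hidden from iteration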
Manuel Schmid 3e453501f7
fix: correctly identify and remove performance LoRA (#3150) 2024-06-16 16:52:58 +02:00
Manuel Schmid 55ef7608ea
feat: adjust playground_v2.5 preset (#3136)
* feat: reduce cfg of playground_v2.5 preset from 3 to 2 to prevent oversaturation

* feat: adjust default styles for playground_v2.5
2024-06-11 22:50:09 +02:00
Manuel Schmid ba77e7f706
release: bump version to 2.4.3, update changelog (#3109) 2024-06-06 19:34:44 +02:00
Manuel Schmid 5abae220c5
feat: parse env var strings to expected config value types (#3107)
* fix: add try_parse_bool for env var strings to enable config overrides of boolean values

* fix: fallback to given value if not parseable

* feat: extend eval to all valid types

* fix: remove return type

* fix: prevent strange type conversions by providing expected type

* feat: add tests
2024-06-06 19:29:08 +02:00
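An illustrative sketch of the parsing approach described above (function name and details assumed, not the actual Fooocus code): env vars arrive as strings, so each override is coerced to the expected config type, falling back to the raw value when parsing fails.

    def try_parse(value: str, expected_type: type):
        """Coerce an env-var string to expected_type, or return it unchanged."""
        if expected_type is bool:
            lowered = value.strip().lower()
            if lowered in ("true", "1", "yes"):
                return True
            if lowered in ("false", "0", "no"):
                return False
            return value  # not parseable as bool, keep the given value
        try:
            return expected_type(value)
        except (TypeError, ValueError):
            return value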
Manuel Schmid 04d764820e
fix: correctly set alphas_cumprod (#3106) 2024-06-06 13:42:26 +02:00
Manuel Schmid 350fdd9021
Merge pull request #3095 from lllyasviel/develop
release v2.4.2
2024-06-05 21:50:42 +02:00
Manuel Schmid 85a8deecee
release: bump version to 2.4.2, update changelog 2024-06-05 21:30:43 +02:00
Manuel Schmid b58bc7774e
fix: correct sampling when gamma is 0 (#3093) 2024-06-04 21:03:37 +02:00
Manuel Schmid 2d55a5f257
feat: add support for playground v2.5 (#3073)
* feat: add support for playground v2.5

* feat: add preset for playground v2.5

* feat: change URL to mashb1t

* feat: optimize playground v2.5 preset
2024-06-04 20:15:49 +02:00
Manuel Schmid cb24c686b0
Merge branch 'main_upstream' into develop_upstream 2024-06-04 20:11:42 +02:00
Manuel Schmid ab01104d42
feat: make textboxes (incl. positive prompt) resizable (#3074)
* feat: make textboxes (incl. positive prompt) resizable again

* wip: auto-resize positive prompt on new line

dirty approach as container is hidden and 1px padding is applied for border shadow to actually work

* feat: set row height to 84, exactly matching 3 lines for positive prompt

eliminate need for JS to resize positive prompt onUiLoaded
2024-06-02 13:40:42 +02:00
Manuel Schmid 3d43976e8e
feat: update cmd args (#3075) 2024-06-02 02:13:16 +02:00
Manuel Schmid 07c6c89edf
fix: chown files directly at copy (#3066) 2024-05-31 22:41:36 +02:00
Manuel Schmid 7899261755
fix: turbo scheduler loading issue (#3065)
* fix: correctly load ModelPatcher

* feat: do not load model at all, not needed
2024-05-31 22:24:19 +02:00
Manuel Schmid 64c29a8c43
feat: rework intermediate image display for restricted performances (#3050)
disable intermediate results for all performances with restricted features

make disable_intermediate_results interactive again even if performance has restricted features
users who want to disable this option should be able to do so, even if performance will be impacted
2024-05-30 16:17:36 +02:00
Manuel Schmid 4e658bb63a
feat: optimize performance lora filtering in metadata (#3048)
* feat: add remove_performance_lora method

* feat: use class PerformanceLoRA instead of strings in config

* refactor: cleanup flags, use __member__ to check if enums contains key

* feat: only filter lora of selected performance instead of all performance LoRAs

* fix: disable intermediate results for all restricted performances

too fast for Gradio, which becomes a bottleneck

* refactor: rename parse_json to to_json, rename parse_string to to_string

* feat: use speed steps as default instead of hardcoded 30

* feat: add method to_steps to Performance

* refactor: remove method ordinal_suffix, not needed anymore

* feat: only filter lora of selected performance instead of all performance LoRAs

both metadata and history log

* feat: do not filter LoRAs in metadata parser but rather in metadata load action
2024-05-30 16:14:28 +02:00
Manuel Schmid 3ef663c5b7
fix: do not set textContent on undefined when no translation was given #2 (#3046)
* fix: do not set textContent on undefined when no translation was given
2024-05-29 20:33:15 +02:00
Manuel Schmid bf70815a66
fix: use default vae name instead of None on file refresh (#3045) 2024-05-29 19:49:07 +02:00
Manuel Schmid 725bf05c31
release: bump version to 2.4.1, update changelog (#3027) 2024-05-28 01:10:45 +02:00
Manuel Schmid 4a070a9d61
feat: build docker image tagged "edge" on push to main branch (#3026)
* feat: build docker image on push to main branch

* feat: add tag "edge" for main when building the docker image

* feat: update name of build container workflow
2024-05-28 00:49:47 +02:00
Manuel Schmid 0e621ae34e
fix: add type check for undefined, use fallback when no translation for aspect ratios was given (#3025) 2024-05-28 00:09:39 +02:00
Manuel Schmid dfff9b7dcf
fix: adjust clip skip default value from 1 to 2 (#3011)
* Revert "Revert "feat: add clip skip handling (#2999)" (#3008)"

This reverts commit 989a1ad52b.

* feat: use clip skip 2 as default
2024-05-27 00:28:22 +02:00
Manuel Schmid 989a1ad52b
Revert "feat: add clip skip handling (#2999)" (#3008)
This reverts commit cc58fe5270.
2024-05-26 22:07:44 +02:00
Manuel Schmid de34023c79
fix: use translation for aspect ratios label (#3001)
use javascript code instead of python handling for updates for https://github.com/lllyasviel/Fooocus/pull/2590
2024-05-26 19:23:21 +02:00
Manuel Schmid 12dc2396f6
Merge pull request #3000 from lllyasviel/develop
Release 2.4.0
2024-05-26 18:18:53 +02:00
Manuel Schmid c227cf1f56
docs: update changelog 2024-05-26 18:16:18 +02:00
Alexdnk 57d2f2a0dd
feat: make ui settings more compact (#2590)
* Slightly more compact ui settings

Changed Radio to Dropdown.

* feat: change preset from option to select, add accordion for resolution

* feat: change title of aspect ratios accordion on load and update

* refactor: reorder image number slider, code cleanup

* fix: add missing scroll down for metadata tab

* fix: adjust indent

---------

Co-authored-by: Manuel Schmid <dev@mash1t.de>
Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
2024-05-26 18:10:29 +02:00
Manuel Schmid 67289dd0fe
release: bump version to 2.4.0, update changelog 2024-05-26 15:13:54 +02:00
Manuel Schmid cc58fe5270
feat: add clip skip handling (#2999) 2024-05-26 14:18:19 +02:00
Manuel Schmid 4e5509351f
feat: remove labels from most of the image input fields (#2998) 2024-05-26 11:47:33 +02:00
Manuel Schmid 1d1a4a3ebd
feat: add inpaint color picker (#2997)
Workaround as tool color-sketch applies changes directly to the image canvas and not the mask canvas.
Color picker is not correctly implemented in Gradio 3.41.2 => always gets displayed as a separate container and is not merged with other elements
2024-05-26 11:40:15 +02:00
Alexdnk d850bca09f
feat: read value 'CFG Mimicking from TSNR' (adaptive_cfg) from presets (#2990) 2024-05-24 22:05:28 +02:00
Manuel Schmid 04f64ab0bc
feat: add translation for image size describe (#2992) 2024-05-24 21:58:17 +02:00
Manuel Schmid 7b70d27032
feat: configure line ending format LF for *.sh files (#2991) 2024-05-24 21:36:07 +02:00
xyny 4da5a68c10
feat: build and push container image for ghcr.io, update docker.md, and other related fixes (#2805)
* chore: update cuda version in container

* fix: use symlink to fix error libcuda.so: cannot open shared object file:

* fix: update docker entrypoint to use entry_with_update.py

* feat: add container build & push workflow

* fix: container action run conditions

* fix: container action versions

* fix: container action versions v2

* fix: docker action registry login and metadata

* docs: adjust docker documentation based on latest changes, add docs for podman and docker

* chore: replace image name env var with github.event.repository.name

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>

* chore: replace image name env var with github.event.repository.name (pt2)

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>

* fix: switch to semver versioning

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>

* fix: build only on versioned tags

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>

* fix: don't update in entrypoint

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>

* fix: remove dash in "docker-compose"

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>

* feat: sync pytorch for docker with version used in prepare_environment

* feat: update cuda to 12.4.1

* fix: correctly clone checked out version in builds, not always main

* refactor: remove irrelevant version in docker-compose.yml

---------

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
Co-authored-by: Manuel Schmid <dev@mash1t.de>
2024-05-23 00:19:54 +02:00
xhoxye 302bfdf855
feat: read size and ratio of an image and provide the recommended size (#2971)
* Add the information about the size and ratio of the read image

* feat: use available aspect ratios from config, move function to util, change default visibility of label

* refactor: extract sdxl aspect ratios to flags, use in describe

as discussed in
https://github.com/lllyasviel/Fooocus/pull/2971#discussion_r1608493765
https://github.com/lllyasviel/Fooocus/pull/2971#issuecomment-2123620595

---------

Co-authored-by: Manuel Schmid <dev@mash1t.de>
Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
2024-05-22 20:47:44 +02:00
Manuel Schmid 7537612bcc
feat: only use valid inline loras, add subfolder support (#2968) 2024-05-20 19:21:41 +02:00
Manuel Schmid ac14d9d03c
feat: change code owner from @lllyasviel to @mashb1t (#2948) 2024-05-20 17:33:12 +02:00
Manuel Schmid 65a8b25129
feat: inline lora optimisations (#2967)
* feat: add performance loras to the end of the loras array

* fix: resolve circular dependency for unit tests

* feat: allow multiple matches for each token, optimize and extract method cleanup_prompt

* fix: update unit tests

* feat: ignore custom wildcards
2024-05-20 17:31:51 +02:00
Manuel Schmid c995511705
feat: progress bar improvements (#2962)
* feat: align progress bar vertically

* feat: use fixed width for status text, remove ordinals

* refactor: align progress to actions
2024-05-19 20:43:11 +02:00
Manuel Schmid e94b97604f
release: bump version number to 2.4.0-rc2 2024-05-19 18:37:18 +02:00
Manuel Schmid 35b74dfa64
feat: optimize model management of image censoring (#2960)
now follows general Fooocus model management principles + includes code optimisations for reusability
2024-05-19 18:36:47 +02:00
Manuel Schmid dad228907e
fix: remove leftover code from hyper-sd8 testing (#2959) 2024-05-19 17:42:46 +02:00
Manuel Schmid 0466ff944c
release: bump version number to 2.4.0-rc1 2024-05-19 14:29:10 +02:00
Manuel Schmid 13599edb9b
feat: add performance hyper-sd based on 4step LoRA (#2812)
* feat: add performance hyper-sd based on 4step LoRA

* feat: use LoRA weight 0.8, sampler dpmpp_sde_gpu and scheduler_name karras

suggested in https://github.com/lllyasviel/Fooocus/discussions/2813#discussioncomment-9245251
results see https://github.com/lllyasviel/Fooocus/discussions/2813#discussioncomment-9275251

* feat: change ByteDance huggingface profile with mashb1t

* wip: add hyper-sd 8 step cfg lora with negative prompt support

* feat: remove hyper-sd8 performance

still waiting for the release of hyper-sd 4step CFG LoRA, not yet satisfied with any of the CFG LoRAs compared to non-cfg ones.
see https://huggingface.co/ByteDance/Hyper-SD
2024-05-19 13:23:08 +02:00
Manuel Schmid 2e2e8f851a
feat: add tcd sampler and discrete distilled tcd scheduler based on sgm_uniform (same as lcm) (#2907) 2024-05-19 13:08:33 +02:00
cantor-set 3bae73e23e
feat: add support for lora inline prompt references (#2323)
* Adding support to inline prompt references

* Added unittests

* Added an initial documentation for development guidelines

* Added a negative number

* renamed parameter

* removed wrongly committed file

* Code fixes

* Fixed circular reference

* Fixed typo. Added TODO

* Fixed merge

* Code cleanup

* Added missing reference function

* Removed function from util.py... again...

* Update modules/async_worker.py

Implemented suggested change

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>

* Removed another circular reference

* Renamed module

* Addressed PR comments

* Added return type to function

* refactor: move apply_wildcards to module util

* refactor: code cleanup, unify usage of tuples in lora list

* docs: add instructions for running unittests on embedded python, code cleanup

* refactor: code cleanup, move makedirs_with_log back to util

---------

Co-authored-by: cantor-set <cantor-set@no-email.net>
Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
Co-authored-by: Manuel Schmid <dev@mash1t.de>
2024-05-18 17:19:46 +02:00
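A hedged illustration of inline LoRA prompt references of the form <lora:filename:weight> (the pattern and helper below are assumptions; the real implementation lives in the modules this PR touches):

    import re

    LORA_TOKEN = re.compile(r"<lora:([^:>]+):(-?\d+(?:\.\d+)?)>")

    def extract_inline_loras(prompt: str):
        """Return (cleaned_prompt, [(lora_name, weight), ...])."""
        loras = [(m.group(1), float(m.group(2)))
                 for m in LORA_TOKEN.finditer(prompt)]
        cleaned = re.sub(r"\s{2,}", " ", LORA_TOKEN.sub("", prompt)).strip()
        return cleaned, loras

    print(extract_inline_loras("a castle <lora:pixel_art:0.8> at night"))
    # ('a castle at night', [('pixel_art', 0.8)])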
Manuel Schmid 3a55e7e391
feat: add AlignYourStepsScheduler (#2905) 2024-05-18 15:53:34 +02:00
Manuel Schmid 00d3d1b4b3
feat: add nsfw image censoring via config and checkbox (#958)
* add nsfw image censoring

activatable via config, uses CompVis/stable-diffusion-safety-checker

* fix progressbar call for nsfw output

* use config to set cache dir for safety checker

* add checkbox black_out_nsfw

makes both enabling via config and checkbox possible, where config overrides the checkbox value

* fix: add missing diffusers package

* feat: extract safety checker, remove dependency to diffusers

* feat: make code compatible again after merge with main

* feat: move censor to extras, optimize safety checker file handling

* refactor: rename folder safety_checker_models to safety_checker
2024-05-18 15:50:28 +02:00
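The censoring above builds on CompVis/stable-diffusion-safety-checker; Fooocus later extracts the checker to drop the diffusers dependency, but as a hedged sketch the upstream model can be exercised through diffusers directly:

    import numpy as np
    from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
    from PIL import Image
    from transformers import CLIPImageProcessor

    repo = "CompVis/stable-diffusion-safety-checker"
    processor = CLIPImageProcessor.from_pretrained(repo)
    checker = StableDiffusionSafetyChecker.from_pretrained(repo)

    images = np.zeros((1, 512, 512, 3), dtype=np.float32)  # RGB in [0, 1]
    pil_images = [Image.fromarray((im * 255).astype("uint8")) for im in images]
    clip_input = processor(pil_images, return_tensors="pt").pixel_values
    checked, has_nsfw = checker(images=images, clip_input=clip_input)
    # flagged images come back blacked out, matching black_out_nsfw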
Manuel Schmid 33fa175bd4
feat: automatically describe image on uov image upload (#1938)
* feat: automatically describe image on uov image upload if prompt is empty

* feat: add argument to disable automatic uov image description

* feat: rename argument, disable by default

this prevents computers with low hardware specifications from being unnecessarily blocked
2024-05-17 18:25:08 +02:00
Manuel Schmid 1eb58fa366
Merge branch 'main_upstream' into develop_upstream 2024-05-17 18:22:55 +02:00
e52fa787 5e594685e1
fix: do not close meta tag in HTML header (#2740)
* fixed typo in HTML (extra </meta> tag)

* refactor: remove closing slash for meta tag

as per the specification in https://html.com/tags/meta/, meta tags are null elements:
This element must not contain any content, and does not need a closing tag.

---------

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
2024-05-17 17:25:56 +02:00
Vishvesh Khanvilkar 96bf89f782
fix: use correct border radius css property (#2845) 2024-05-17 17:18:45 +02:00
docppp bdd6b1a9b0
feat: add full raw prompt to history log (#1920)
* Update async_worker.py

* Update private_logger.py

* refactor: only show full prompt details in logs, exclude from image metadata

---------

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
Co-authored-by: Manuel Schmid <dev@mash1t.de>
2024-05-09 20:25:43 +02:00
Manuel Schmid 052393bb9b
refactor: rename label for reconnect button (#2893)
* feat: add button to reconnect UI without having to reload the page

* qa: add missing semicolon

* refactor: rename button label to "Reconnect"
2024-05-09 19:13:59 +02:00
Manuel Schmid 6308fb8b54
feat: update anime from animaPencilXL_v100 to animaPencilXL_v310 (#2454)
* feat: update anime from animaPencilXL_v100 to animaPencilXL_v200

* feat: update animaPencilXL from 2.0.0 to 2.6.0

* feat: update animaPencilXL from 2.6.0 to 3.1.0

* feat: reduce cfg as suggested by vendor from 3.0.0

https://civitai.com/models/261336?modelVersionId=435001
"recommend to decrease CFG scale." + all examples are in CFG 6
2024-05-09 19:03:30 +02:00
Manuel Schmid f54364fe4e
feat: add random style checkbox to styles selection (#2855)
* feat: add random style

* feat: rename random to random style, add translation

* feat: add preview image for random style
2024-05-09 19:02:04 +02:00
Manuel Schmid c32bc5e199
feat: add optional model VAE select (#2867)
* Revert "fix: use LF as line breaks for Docker entrypoint.sh (#2843)" (#2865)

False alarm, worked as intended before. Sorry for the fuss.
This reverts commit d16a54edd6.

* feat: add VAE select

* feat: use different default label, add translation

* fix: do not reload model when VAE stays the same

* refactor: code cleanup

* feat: add metadata handling
2024-05-09 18:59:35 +02:00
Manuel Schmid 121f1e0a15
Merge branch 'main_upstream' into develop_upstream 2024-05-05 01:04:12 +02:00
Manuel Schmid c36e951781
Revert "fix: use LF as line breaks for Docker entrypoint.sh (#2843)" (#2865)
False alarm, worked as intended before. Sorry for the fuss.
This reverts commit d16a54edd6.
2024-05-04 14:37:40 +02:00
Manuel Schmid 5b2d046b12
Merge branch 'main_upstream' into develop_upstream 2024-05-02 23:58:43 +02:00
Manuel Schmid d16a54edd6
fix: use LF as line breaks for Docker entrypoint.sh (#2843)
adjusted for Linux again, see https://github.com/lllyasviel/Fooocus/discussions/2836
2024-05-01 14:11:38 +02:00
Manuel Schmid dbf49d323e
feat: add button to reconnect UI without having to reload the page (#2727)
* feat: add button to reconnect UI without having to reload the page

* qa: add missing semicolon
2024-04-17 22:23:18 +02:00
Manuel Schmid e64130323a
Merge branch 'main_upstream' into develop 2024-04-10 22:08:01 +02:00
Manuel Schmid 1dff430d4c
feat: update interposer from v3.1 to v4.0 (#2717)
* fix: load image number from preset (#2611)

* fix: add default_image_number to preset handling

* fix: use minimum image number of preset and config to prevent UI overflow

* fix: use correct base dimensions for outpaint mask padding (#2612)

* fix: add Civitai compatibility for LoRAs in a1111 metadata scheme by switching schema (#2615)

* feat: update sha256 generation functions

29be1da7cf/modules/hashes.py

* feat: add compatibility for LoRAs in a1111 metadata scheme

* feat: add backwards compatibility

* refactor: extract remove_special_loras

* fix: correctly apply LoRA weight for legacy schema

* docs: bump version number to 2.3.1, add changelog (#2616)

* feat: update interposer from v3.1 to v4.0
2024-04-06 15:27:35 +02:00
delta_lt_0 5ada070d88
feat: support download of huggingface files from a mirror website (#2637)
* fix: load image number from preset (#2611)

* fix: add default_image_number to preset handling

* fix: use minimum image number of preset and config to prevent UI overflow

* fix: use correct base dimensions for outpaint mask padding (#2612)

* fix: add Civitai compatibility for LoRAs in a1111 metadata scheme by switching schema (#2615)

* feat: update sha256 generation functions

29be1da7cf/modules/hashes.py

* feat: add compatibility for LoRAs in a1111 metadata scheme

* feat: add backwards compatibility

* refactor: extract remove_special_loras

* fix: correctly apply LoRA weight for legacy schema

* docs: bump version number to 2.3.1, add changelog (#2616)

* feat: support download of huggingface files from a mirror site

---------

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
2024-04-06 15:25:19 +02:00
Manuel Schmid e2f9bcb11d
docs: bump version number to 2.3.1, add changelog (#2616) 2024-03-23 16:57:11 +01:00
Manuel Schmid 523ef5c70e
fix: add Civitai compatibility for LoRAs in a1111 metadata scheme by switching schema (#2615)
* feat: update sha256 generation functions

29be1da7cf/modules/hashes.py

* feat: add compatibility for LoRAs in a1111 metadata scheme

* feat: add backwards compatibility

* refactor: extract remove_special_loras

* fix: correctly apply LoRA weight for legacy schema
2024-03-23 16:37:18 +01:00
Manuel Schmid 9aaa400553
fix: use correct base dimensions for outpaint mask padding (#2612) 2024-03-23 13:10:21 +01:00
Manuel Schmid 7564dd5131
fix: load image number from preset (#2611)
* fix: add default_image_number to preset handling

* fix: use minimum image number of preset and config to prevent UI overflow
2024-03-23 12:49:20 +01:00
Manuel Schmid 978267f461
fix: correctly set preset config and loras in meta parser 2024-03-20 21:16:03 +01:00
Manuel Schmid e9bc5e50c6
Merge pull request #2576 from mashb1t/hotfix/default-max-lora-number-adjustments
fix: add enabled value to LoRA when setting default_max_lora_number
2024-03-19 23:10:03 +01:00
Manuel Schmid 856eb750ab
fix: add enabled value to LoRA when setting default_max_lora_number 2024-03-19 23:08:38 +01:00
Manuel Schmid 6b41af7140
Merge pull request #2571 from mashb1t/hotfix/remove-positive-prompt-from-anime-preset
fix: remove positive prompt from anime prefix
2024-03-19 19:11:53 +01:00
Manuel Schmid 532a6e2e67
fix: remove positive prompt from anime prefix
prevents the prompt from getting overridden when switching presets in browser
2024-03-19 19:10:37 +01:00
Manuel Schmid a1bda88aa3
Merge pull request #2558 from lllyasviel/develop
release 2.3.0
2024-03-18 18:33:27 +01:00
Manuel Schmid 3efce581ca
docs: add hint for colab preset timeout to readme 2024-03-18 18:13:15 +01:00
Manuel Schmid ee361715af
docs: bump version number to 2.3.0 2024-03-18 18:04:15 +01:00
Manuel Schmid c08518abae
feat: add backwards compatibility for presets without disable/enable LoRA boolean
https://github.com/lllyasviel/Fooocus/pull/2507
2024-03-18 17:40:37 +01:00
Manuel Schmid 6b44c101db
feat: update changelog and readme 2024-03-18 12:30:39 +01:00
Manuel Schmid 5bf96018fe
Merge branch 'main_upstream' into develop 2024-03-17 14:13:37 +01:00
Manuel Schmid d057f2fae9
fix: correctly handle empty lora array in a1111 metadata log scheme (#2551) 2024-03-17 14:01:10 +01:00
Manuel Schmid 86cba3f223
feat: add translation for unsupported image error (#2537) 2024-03-15 23:11:26 +01:00
David Sage 37274c652a
feat: improve anime preset by adding style Fooocus Semi Realistic (#2492)
* Add files via upload

In anime.json, at Line 36,
replace "Fooocus Negative" with "Fooocus Semi Realistic"

* Add files via upload

In sdxl_styles_fooocus.json, insert this text at Line 6:

    {
        "name": "Fooocus Semi Realistic",
        "negative_prompt": "(worst quality, low quality, normal quality, lowres, low details, oversaturated, undersaturated, overexposed, underexposed, grayscale, bw, bad photo, bad photography, bad art:1.4), (watermark, signature, text font, username, error, logo, words, letters, digits, autograph, trademark, name:1.2), (blur, blurry, grainy), morbid, ugly, asymmetrical, mutated malformed, mutilated, poorly lit, bad shadow, draft, cropped, out of frame, cut off, censored, jpeg artifacts, out of focus, glitch, duplicate, (bad hands, bad anatomy, bad body, bad face, bad teeth, bad arms, bad legs, deformities:1.3)"
    },

* Add files via upload

Popup image for the new "Fooocus Semi Realistic" style

* Update sdxl_styles_fooocus.json

Removed "grayscale, bw" from the proposed Fooocus Realistic entry at Line 6 of sdxl_styles_fooocus.json

* refactor: cleanup files

* feat: use default model to create thumbnail

juggernautv8, seed 0, 1024x1024, no LoRAs, only this style, positive prompt "cat"

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
Co-authored-by: Manuel Schmid <dev@mash1t.de>
2024-03-15 22:52:27 +01:00
Spencer Hayes-Laverdiere 55e23a9374
fix: add error output for unsupported images (#2537)
* Raise Error on bad decode

* Move task arg pop to try block

* fix: prevent empty task from getting queued

---------

Co-authored-by: Manuel Schmid <dev@mash1t.de>
2024-03-15 22:30:29 +01:00
Manuel Schmid 4a44be36fd
feat: add preset selection to Gradio UI (session based) (#1570)
* add preset selection

uses meta parsing to set presets in user session (UI elements only)

* add LoRA handling

* use default config as fallback value

* add preset refresh on "Refresh All Files" click

* add special handling for default_styles and default_aspect_ratio

* sort styles after preset change

* code cleanup

* download missing models from preset

* set default refiner to "None" in preset realistic

* use state_is_generating for preset selection change

* DRY output parameter handling

* feat: add argument --disable-preset-selection

useful for cloud provisioning to prevent model switches and keep models loaded

* feat: keep prompt when not set in preset, use more robust syntax

* fix: add default return values when preset download is disabled

https://github.com/mashb1t/Fooocus/issues/20

* feat: add translation for preset label

* refactor: unify preset loading methods in config

* refactor: code cleanup
2024-03-15 22:04:27 +01:00
Manuel Schmid 8baafcd79c
Merge branch 'main_upstream' into develop 2024-03-15 20:52:06 +01:00
Zxilly 0da614f7e1
feat: allow users to add custom preset without blocking automatic update (#2520) 2024-03-15 20:51:10 +01:00
Manuel Schmid 9cd0366d30
fix: parse seed as string to display correctly in metadata preview (#2536) 2024-03-15 20:38:21 +01:00
josephrocca f51e0138e6
feat: update xformers to 0.0.23 in Dockerfile (#2519) 2024-03-13 15:12:06 +01:00
Manuel Schmid 4363dbc303
fix: revert testing change to default lora activation 2024-03-13 00:32:54 +01:00
Manuel Schmid f7f0b51bab
Merge branch 'main_upstream' into develop 2024-03-13 00:31:41 +01:00
Manuel Schmid 6da0441cc7
fix: update xformers to 0.0.23 (#2517)
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.1.0+cu121)
    Python  3.10.11 (you have 3.10.9)
2024-03-12 23:13:38 +01:00
Manuel Schmid 57a01865b9
refactor: only use LoRA activate on handover to async worker, extract method 2024-03-11 23:49:45 +01:00
Giuseppe Speranza 532401df76
fix: prioritize VRAM over RAM in Colab, preventing out of memory issues (#1710)
* colab: balance the use of RAM

enables the use of VRAM so as not to saturate the system RAM

* feat: use --always-high-vram by default for Colab, adjust readme

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-03-11 19:58:25 +01:00
Manuel Schmid d57afc88a4
feat: merge webui css into one file 2024-03-11 18:26:04 +01:00
Manuel Schmid 39669453cd
feat: allow to add disabled LoRAs in config on application start (#2507)
add LoRA checkbox enable/disable handling to all necessary occurrences
2024-03-11 17:59:58 +01:00
hswlab 2831dc70a7
feat: use scrollable 2 column layout for styles (#1883)
* Styles Grouping/Sorting #1770

* Update css/style.css

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>

* Update javascript/script.js

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>

* feat: use standard padding again

---------

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-03-11 16:35:03 +01:00
Manuel Schmid 84e3124c37
i18n: add translation for lightning 2024-03-11 00:47:43 +01:00
xhoxye ead24c9361
feat: read wildcards in order (wildcard enhancement: sequential reading) (#1761)
* Wildcard enhancement: switch to sequential reading

Wildcard enhancement: the wildcard reading method is toggled via a checkbox. Unchecked (the default) reads a random line; checked reads lines in order and uses the same seed.

* Code from 刁璐璐

* update

* Update async_worker.py

* refactor: rename read_wildcard_in_order_checkbox to read_wildcard_in_order

* fix: use correct method call for interrupt_current_processing

actually achieves the same result, stopping the task

* refactor: move checkbox to developer debug mode, rename to plural

below disable seed increment

* refactor: code cleanup, separate code for disable_seed_increment

* i18n: add translation for checkbox text

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-03-10 23:18:36 +01:00
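A minimal sketch of the two read modes described above (identifiers illustrative, not the actual Fooocus names): unchecked picks a seeded random line, checked walks the wildcard file in order while keeping the same seed.

    import random

    def pick_wildcard_line(lines, seed, task_index, read_wildcards_in_order):
        if read_wildcards_in_order:
            return lines[task_index % len(lines)]  # sequential, same seed
        return random.Random(seed).choice(lines)   # default: seeded random line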
Manuel Schmid 5c7dc12470
Merge branch 'main_upstream' into develop 2024-03-10 23:14:52 +01:00
Manuel Schmid bc9c586082
fix: use correct method call for interrupt_current_processing (#2506)
actually achieves the same result, stopping the task
2024-03-10 23:13:09 +01:00
Cruxial f6117180d4
feat: scan wildcard subdirectories (#2466)
* Fix typo

* Scan wildcards recursively

Adds a method for getting the top-most occurrence of a given file in a directory tree

* Use already existing method for locating files

* Fix issue with incorrect files being loaded

When using the `name-filter` parameter in `get_model_filenames`, it doesn't guarantee the best match to be in the first index. This change adds a step to ensure the correct wildcard is being loaded.

* feat: make path for wildcards configurable, cache filenames on refresh files, rename button variable

* Fix formatting

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-03-10 21:35:41 +01:00
Manuel Schmid 400471f7af
feat: add config for temp path and temp path cleanup on launch (#1992)
* Added options to set the Gradio cache path and clear cache on launch.

* Renamed cache to temp

* clear temp

* feat: do not delete temp folder but only clean content

also use fallback to system temp dir
see 6683ab2589/gradio/utils.py (L1151)

* refactor: code cleanup

* feat: unify arg --temp-path and new temp_path config value

* feat: change default temp dir from gradio to fooocus

* refactor: move temp path method definition and configs

* feat: rename get_temp_path to init_temp_path

---------

Co-authored-by: Magee <koshms3@gmail.com>
Co-authored-by: steveyourcreativepeople <steve@yourcreativepeople.com>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-03-10 21:11:41 +01:00
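A hedged sketch of the temp-path initialisation described above (init_temp_path is named in the commit; the content-cleanup step is omitted here): use the configured path when given, otherwise fall back to a fooocus directory under the system temp dir.

    import os
    import tempfile

    def init_temp_path(configured_path):
        path = configured_path or os.path.join(tempfile.gettempdir(), "fooocus")
        os.makedirs(path, exist_ok=True)
        return path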
Manuel Schmid 5409bfdb26
Revert "feat: add config for temp path and temp path cleanup on launch (#1992)" (#2502)
This reverts commit 85e8aa8ce2.
2024-03-10 21:08:55 +01:00
Magee 85e8aa8ce2
feat: add config for temp path and temp path cleanup on launch (#1992)
* Added options to set the Gradio cache path and  clear cache on launch.

* Renamed cache to temp

* clear temp

* feat: do not delete temp folder but only clean content

also use fallback to system temp dir
see 6683ab2589/gradio/utils.py (L1151)

* refactor: code cleanup

* feat: unify arg --temp-path and new temp_path config value

* feat: change default temp dir from gradio to fooocus

* refactor: move temp path method definition and configs

* feat: rename get_temp_path to init_temp_path

---------

Co-authored-by: steveyourcreativepeople <steve@yourcreativepeople.com>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-03-10 21:06:08 +01:00
xhoxye db7d2018ca
fix: change synthetic refiner switch from 0.5 to 0.8 (#2165)
* fix: two problems

1. In inpainting, when the refiner is empty and use_synthetic_refiner is enabled, the default switch timing of 0.5 is too early; it is now changed to the SDXL default of 0.8.
2. With custom steps, the switch timing was calculated incorrectly; it is now computed as "steps x timing" after the custom steps are applied.

* fix: parse width and height as int when applying metadata (#2452)

fixes an issue with A1111 metadata scheme where width and height are strings after splitting resolution

* fix: do not attempt to remove non-existing image grid file (#2456)

image grid is actually not an image here but a numpy array, as the grid isn't saved by default

* feat: add troubleshooting guide to bug report template again (#2489)

---------

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-03-10 14:42:03 +01:00
Manuel Schmid 4701b4f8f3
Merge branch 'main_upstream' into develop 2024-03-10 14:40:58 +01:00
Manuel Schmid 25650b4bc4
feat: add performance lightning with 4 step LoRA (#2415)
* feat: add performance sdxl lightning

based on https://huggingface.co/ByteDance/SDXL-Lightning/blob/main/sdxl_lightning_4step_lora.safetensors

* feat: add method for centralized restriction of features for specific performance modes

* feat: add lightning preset
2024-03-10 14:34:48 +01:00
Manuel Schmid b6e4bb86f4
feat: use jpeg instead of jpg, use enums instead of strings (#2453)
* fix: parse width and height as int when applying metadata (#2452)

fixes an issue with A1111 metadata scheme where width and height are strings after splitting resolution

* feat: use jpeg instead of jpg, use enums instead of strings
2024-03-09 16:00:25 +01:00
Manuel Schmid 831c6b93cc
feat: add troubleshooting guide to bug report template again (#2489) 2024-03-09 14:13:16 +01:00
Manuel Schmid 3a64fe3eb3
fix: do not attempt to remove non-existing image grid file (#2456)
image grid is actually not an image here but a numpy array, as the grid isn't saved by default
2024-03-05 21:16:21 +01:00
Manuel Schmid 6cfcc62000
fix: parse width and height as int when applying metadata (#2452)
fixes an issue with A1111 metadata scheme where width and height are strings after splitting resolution
2024-03-05 18:18:47 +01:00
Manuel Schmid 28cdc2f104
Merge pull request #2439 from lllyasviel/develop
release 2.2.1
2024-03-04 11:37:41 +01:00
Manuel Schmid ee96b854d9
docs: update version and changelog 2024-03-04 11:33:49 +01:00
Manuel Schmid 9155d94067
feat: match anything in array syntax, not only words and whitespace (#2438)
allows e.g. [[ (red:1.1), (blue:1.2) ]] and enables same seed checks for different prompt weight
2024-03-04 11:22:24 +01:00
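A hedged illustration of the broadened matching (the real pattern may differ): limiting bracket contents to word characters and whitespace rejected weighted tokens like (red:1.1), so the match is widened to anything up to the closing brackets.

    import re

    old = re.compile(r"\[\[([\w\s]+)\]\]")  # words/whitespace only
    new = re.compile(r"\[\[(.*?)\]\]")      # anything, non-greedy

    prompt = "a vase of [[ (red:1.1), (blue:1.2) ]] flowers"
    print(old.search(prompt))                   # None: weighted tokens don't match
    print(new.search(prompt).group(1).strip())  # (red:1.1), (blue:1.2)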
nbs e54fb54f91
fix: typo in wildcards/animal.txt (#2433)
* Fix typo in animal wildcards

* Update animal.txt
2024-03-04 10:19:49 +01:00
eddyizm e965bfc39c
fix: add hint for png to metadata scheme selection (#2434) 2024-03-04 00:22:47 +01:00
Manuel Schmid e241c53f0e
feat: adjust width of lora_weight for firefox (#2431) 2024-03-03 21:15:42 +01:00
Manuel Schmid c3fd57acb9
feat: add metadata flag and steps override to history log (#2425)
* feat: add metadata hint to history log

* feat: add actual metadata_scheme to log instead of only boolean

* feat: add steps to log if they were overridden

* fix: pass copy of metadata

prevents LoRA file extension removal in history log caused by passing reference to meta_parser fooocus scheme
2024-03-03 19:34:38 +01:00
Manuel Schmid fb94394b10
fix: add fallback value for default_max_lora_number when default_loras is empty (#2430) 2024-03-03 18:46:26 +01:00
Manuel Schmid 4ea3baff50
fix: add handling for filepaths to image grid (#2414)
previously skipped due to not being in np.ndarray format but string
2024-03-03 00:21:59 +01:00
Manuel Schmid 90839430da
fix: adjust parameters for upscale fast 2x (#2411) 2024-03-02 19:05:11 +01:00
Manuel Schmid 4945fc9962
Merge pull request #2406 from lllyasviel/develop
release 2.2.0
2024-03-02 16:27:54 +01:00
Manuel Schmid 6db14acf8e
docs: update version and changelog 2024-03-02 16:25:31 +01:00
Gianluca Teti 41e88a4e8d
docs: fix typo in readme (#2368) 2024-02-29 16:10:34 +01:00
Manuel Schmid 4f4d23f4e3
fix: use filename instead of download function call for lcm lora
do not require lcm lora to be downloaded for metadata parsing
2024-02-26 21:14:44 +01:00
Manuel Schmid 9c30961efd
fix: add missing return statement in model_refresh_clicked 2024-02-26 21:12:27 +01:00
Manuel Schmid 692beadbdc
docs: bump version number to 2.2.0-rc1
easier debugging and issue handling
2024-02-26 17:41:29 +01:00
Manuel Schmid 4e526e255e
docs: add missing release notes for 2.1.865 2024-02-26 17:39:29 +01:00
whitehara f4a6350300
feat: add docker files (#1418)
* Add docker files

* Add python precompiled cache file in the image

* Add Notes in docker.md

* Create docker-publish.yml

* Modify docker-compose.yml not to use the bind mount

* Update torch version

* Change --share to --listen

* Update torch version

* Change '--share' to '--listen`

* adjust code comments

* Update requirements-docker.txt

* chore: code cleanup

- default_model env var isn't necessary as model is included in default preset, same for speed
- ENV CMDARGS --listen is now synched with docker-compose.yml file
- remove

* Change entry_with_update.py to launch.py in entrypoint.sh

* Change CMD in Dockerfile

* Change default CMDARGS to --listen in Dockerfile

* Modify CMD in Dockerfile

* Fix docker-compose.yml

* Import files from models,outputs

* docs: change wording in docker.md, change git clone URL, add quotes to port mapping

* docs: remove docker publish github action, remove pre-built image from docs

* Modify modules versions for linux/arm64

* docs: update docker readme

---------

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
Co-authored-by: Manuel Schmid <dev@mash1t.de>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-26 17:30:05 +01:00
Manuel Schmid b6d23670d8
feat: add jpg and webp support, add exif data handling for metadata (#1863)
* feature: added flag, config and ui update for image extension change #1789

* moved function to config module

* moved image extension to webui via async worker. Passing as parameter to log and get_current_html_path functions per feedback

* check flag before displaying image extension radio button

* disabled if image log flag is passed in

* fix: add missing image_extension parameter to log call

* refactor: change label

* feat: add webp to image_extensions

supported image extensions: see https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html

* feat: use consistent file name in gradio

returns and uses filepaths instead of numpy image by saving to temp dir
uses double the temp dir file storage on disk as it saves to temp dir and gradio temp dir when displaying the image, but reuses logged output image

* feat: delete temp images after yielding to gradio

* feat: use args temp path if given

* chore: code cleanup, remove redundant if statement

* feat: always show image_extension element

this is now possible due to image extension support in gradio via https://github.com/lllyasviel/Fooocus/pull/1932

* refactor: rename image_extension to image_file_extension

* feat: use optimized jpg parameters when saving the image

quality=95
optimize=True
progressive=True

* refactor: rename image_file_extension to output_format

* feat: add exif handling

* refactor: code cleanup, remove items from metadata output

---------

Co-authored-by: Manuel Schmid <dev@mash1t.de>
Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
Co-authored-by: eddyizm <wtfisup@hotmail.com>
2024-02-26 15:31:32 +01:00
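The optimized jpg parameters listed above map directly onto Pillow's save options; a minimal sketch (the exif payload below is an assumption, the quality settings are from the commit):

    from PIL import Image

    im = Image.new("RGB", (1024, 1024))
    exif = im.getexif()
    exif[0x9286] = "generation parameters here"  # UserComment-style payload
    im.save("out.jpg", quality=95, optimize=True, progressive=True, exif=exif)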
Manuel Schmid ba9eadbcda
feat: add metadata to images (#1940)
* feat: add metadata logging for images

inspired by https://github.com/MoonRide303/Fooocus-MRE

* feat: add config and checkbox for save_metadata_to_images

* feat: add argument disable_metadata

* feat: add support for A1111 metadata schema

cf2772fab0/modules/processing.py (L672)

* feat: add model hash support for a1111

* feat: use resolved prompts with included expansion and styles for a1111 metadata

* fix: code cleanup and resolved prompt fixes

* feat: add config metadata_created_by

* fix: use string instead of quote wrap for A1111 created_by

* fix: correctly hide/show metadata schema on app start

* fix: do not generate hashes when arg --disable-metadata is used

* refactor: rename metadata_schema to metadata_scheme

* fix: use pnginfo "parameters" instead of "Comments"

see https://github.com/RupertAvery/DiffusionToolkit/issues/202 and cf2772fab0/modules/processing.py (L939)

* feat: add resolved prompts to metadata

* fix: use correct default value in metadata check for created_by

* wip: add metadata mapping, reading and writing

applying data after reading currently not functional for A1111

* feat: rename metadata tab and import button label

* feat: map basic information for scheme A1111

* wip: optimize handling for metadata in Gradio calls

* feat: add enums for Performance, Steps and StepsUOV

also move MetadataSchema enum to prevent circular dependency

* fix: correctly map resolution, use empty styles for A1111

* chore: code cleanup

* feat: add A1111 prompt style detection

only detects one style as Fooocus doesn't wrap {prompt} with the whole style, but has a separate prompt string for each style

* wip: add prompt style extraction for A1111 scheme

* feat: sort styles after metadata import

* refactor: use central flag for LoRA count

* refactor: use central flag for ControlNet image count

* fix: use correct LoRA mapping, add fallback for backwards compatibility

* feat: add created_by again

* feat: add prefix "Fooocus" to version

* wip: code cleanup, update todos

* fix: use correct order to read LoRA in meta parser

* wip: code cleanup, update todos

* feat: make sha256 with length 10 default

* feat: add lora handling to A1111 scheme

* feat: override existing LoRA values when importing, would cause images to differ

* fix: correctly extract prompt style when only prompt expansion is selected

* feat: allow model / LoRA loading from subfolders

* feat: code cleanup, do not queue metadata preview on image upload

* refactor: add flag for refiner_swap_method

* feat: add metadata handling for all non-img2img parameters

* refactor: code cleanup

* chore: use str as return type in calculate_sha256

* feat: add hash cache to metadata

* chore: code cleanup

* feat: add method get_scheme to Metadata

* fix: align handling for scheme Fooocus by removing lcm lora from json parsing

* refactor: add step before parsing to set data in parser

- add constructor for MetadataSchema class
- remove showable and copyable from log output
- add functional hash cache (model hashing takes about 5 seconds, only required once per model, using hash lazy loading)

* feat: sort metadata attributes before writing to image

* feat: add translations and hint for image prompt parameters

* chore: check and remove ToDo's

* refactor: merge metadata.py into meta_parser.py

* fix: add missing refiner in A1111 parse_json

* wip: add TODO for multiline prompt style resolution

* fix: remove sorting for A1111, change performance key position

fixes https://github.com/lllyasviel/Fooocus/pull/1940#issuecomment-1924444633

* fix: add workaround for multiline prompts

* feat: add sampler mapping

* feat: prevent config reset by renaming metadata_scheme to match config options

* chore: remove remaining todos after analysis

refiner is added when set
restoring multiline prompts has been resolved by using separate parameters "raw_prompt" and "raw_negative_prompt"

* chore: specify too broad exception types

* feat: add mapping for _gpu samplers to cpu samplers

gpu samplers are less deterministic than cpu but in general similar, see https://www.reddit.com/r/comfyui/comments/15hayzo/comment/juqcpep/

* feat: add better handling for image import with empty metadata

* fix: parse adaptive_cfg as float instead of string

* chore: loosen strict type for parse_json, fix indent

* chore: make steps enums more strict

* feat: only override steps if metadata value is not in steps enum or in steps enum and performance is not the same

* fix: handle empty strings in metadata

e.g. raw negative prompt when none is set
2024-02-26 14:27:57 +01:00
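For the pnginfo "parameters" choice above, a minimal sketch of writing and reading back such a text chunk with Pillow (the payload string is illustrative):

    from PIL import Image, PngImagePlugin

    info = PngImagePlugin.PngInfo()
    info.add_text("parameters", "cat\nSteps: 30, Sampler: dpmpp_2m_sde_gpu")
    Image.new("RGB", (64, 64)).save("out.png", pnginfo=info)

    with Image.open("out.png") as im:
        print(im.info["parameters"])  # round-trips via im.info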
Manuel Schmid d3113f5c3f
feat: use consistent file name in gradio (#1932)
* feat: use consistent file name in gradio

returns and uses filepaths instead of numpy image by saving to temp dir
uses double the temp dir file storage on disk as it saves to temp dir and gradio temp dir when displaying the image, but reuses logged output image

* feat: delete temp images after yielding to gradio

* feat: use args temp path if given

* chore: code cleanup, remove redundant if statement
2024-02-25 22:56:38 +01:00
Brian Flannery c898e6a4dc
feat: add array support on main prompt (#1503)
* prompt array support

* update change log

* update change log

* docs: remove 2.1.847 change log

* refactor: rename freeze_seed to disable_seed_increment, move to developer debug mode

* feat: add translation for new labels

* fix: use task_rng based on task_seed, not initial seed

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-25 22:22:49 +01:00
MindOfMatter 3be76ef8a3
feat: make lora min max weight editable in config (#2216)
* Initial commit

* Update README.md

* sync with original main Fooocus repo

* update with my gitignore setup

* add min max weight configs feature

* add max lora config feature

* Revert "add max lora config feature"

This reverts commit cfe7463fe2.

* Update README.md

* Update .gitignore

* update

* merge

* revert

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-25 21:36:25 +01:00
MindOfMatter 18f9f7dc31
feat: make lora number editable in config (#2215)
* Initial commit

* Update README.md

* sync with original main Fooocus repo

* update with my gitignore setup

* add max lora config feature

* Revert "add max lora config feature"

This reverts commit cfe7463fe2.

* add max loras config feature

* Update README.md

* Update .gitignore

* update

* merge

* revert

* refactor: rename default_loras_max_number to default_max_lora_number, validate config for int

* fix: add missing patch_all call and imports again

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-25 21:12:26 +01:00
MindOfMatter 468d704b29
feat: add button to enable LoRAs (#2210)
* Initial commit

* Update README.md

* sync with original main Fooocus repo

* update with my gitignore setup

* add max lora config feature

* Revert "add max lora config feature"

This reverts commit cfe7463fe2.

* add lora enabler feature

* Update README.md

* Update .gitignore

* update

* merge

* revert changes

* revert

* feat: change width of LoRA columns

* refactor: rename lora_enable to lora_enabled, optimize code

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-25 19:59:28 +01:00
Manuel Schmid eebd7752ab
fix: allow path_outputs to be outside of root dir (#2332)
allows Gradio to serve outputs when folder has been changed in the config
2024-02-25 18:44:28 +01:00
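A hedged sketch of the Gradio side of this fix (UI content illustrative): paths outside the app root must be allow-listed for Gradio to serve files from them.

    import gradio as gr

    with gr.Blocks() as demo:
        gallery = gr.Gallery()

    # allow Gradio to serve images from a relocated outputs folder
    demo.launch(allowed_paths=["/data/fooocus/outputs"])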
Manuel Schmid b5f019fb62
fix: correctly create directory for path_outputs if not existing (#1668)
* correctly create directory for outputs if not existing

* feat: add make_directory parameter checks for list, extract make_directory to util
2024-02-25 18:41:43 +01:00
Manuel Schmid 9c19300a3e
feat: improve bug report and feature request issue templates (#1631)
* refactor and improve bug report and feature request issue templates

* update operating system placeholder to Windows 10

most common usage i assume

* use already existing label "enhancement" instead of "feature"

* feat: add checkbox for latest version check, add triage to feature requests

* feat: add link to ask a question

* feat: use templates of stable-diffusion-webui-forge as basis

* feat: add optional hosting and operating system inputs
2024-02-25 18:04:46 +01:00
Maxim Saplin 4d34f31a72
feat: allow users to specify the number of threads when running on CPU (#1601)
* CPU_NUM_THREADS

* refactor: optimize code, type is already strict

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-25 17:14:17 +01:00
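A plausible wiring for this flag (the variable name `CPU_NUM_THREADS` appears in the commit; the placement is an assumption):

```python
import os

import torch

num_threads = os.environ.get('CPU_NUM_THREADS')
if num_threads is not None:
    torch.set_num_threads(int(num_threads))  # cap intra-op CPU parallelism
```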
dooglewoogle ef1999c52c
feat: add ability to load checkpoints and loras from multiple locations (#1256)
* Add ability to load checkpoints and loras from multiple locations

* Found another location where a default path is required

* feat: use array as default

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-25 12:47:14 +01:00
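The lookup boils down to scanning an ordered folder list and returning the first hit; a sketch (helper name assumed):

```python
import os

def get_file_from_folder_list(name: str, folders: list[str]) -> str:
    for folder in folders:
        path = os.path.abspath(os.path.join(folder, name))
        if os.path.isfile(path):
            return path
    # no match: fall back to a path in the first configured folder
    return os.path.abspath(os.path.join(folders[0], name))
```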
Manuel Schmid 7cfb5e742d
feat: add advanced parameter for disable_intermediate_results (progress_gallery) (#1013)
* add advanced parameter for disable_intermediate_results

prevents the Gradio frontend process from clogging image output and updates in high-throughput scenarios such as LCM with image number >= 4

* update disable_intermediate_results correctly

based on default and selected performance

* chore: add missing translations
2024-02-25 11:31:00 +01:00
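Conceptually the new parameter just gates preview yields in the worker loop; a sketch with assumed attribute names:

```python
def maybe_yield_preview(async_task, percentage: int, title: str, image) -> None:
    if async_task.disable_intermediate_results:
        return  # keep the Gradio frontend responsive in high-throughput runs
    async_task.yields.append(['preview', (percentage, title, image)])
```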
Manuel Schmid 965364cd80
feat: add list of 100 most popular animals to wildcards (#985) 2024-02-24 19:03:46 +01:00
Manuel Schmid 5b7ddf8b22
feat: advanced params refactoring + prevent users from skipping/stopping other users' tasks in queue (#981)
* only make stop_button and skip_button interactive when rendering process starts

fix inconsistency in behaviour of stop_button and skip_button, as it was possible to skip or stop other users' processes while still being in the queue

* use AsyncTask for last_stop handling instead of shared

* Revert "only make stop_button and skip_button interactive when rendering process starts"

This reverts commit d3f9156854.

* introduce state for task skipping/stopping

* fix return parameters of stop_clicked

* code cleanup, do not disable skip/stop on stop_clicked

* reset last_stop when skipping for further processing

* fix: replace fcbh with ldm_patched

* fix: use currentTask instead of ctrls after merging upstream

* feat: extract attribute disable_preview

* feat: extract attribute adm_scaler_positive

* feat: extract attribute adm_scaler_negative

* feat: extract attribute adm_scaler_end

* feat: extract attribute adaptive_cfg

* feat: extract attribute sampler_name

* feat: extract attribute scheduler_name

* feat: extract attribute generate_image_grid

* feat: extract attribute overwrite_step

* feat: extract attribute overwrite_switch

* feat: extract attribute overwrite_width

* feat: extract attribute overwrite_height

* feat: extract attribute overwrite_vary_strength

* feat: extract attribute overwrite_upscale_strength

* feat: extract attribute mixing_image_prompt_and_vary_upscale

* feat: extract attribute mixing_image_prompt_and_inpaint

* feat: extract attribute debugging_cn_preprocessor

* feat: extract attribute skipping_cn_preprocessor

* feat: extract attribute canny_low_threshold

* feat: extract attribute canny_high_threshold

* feat: extract attribute refiner_swap_method

* feat: extract freeu_ctrls attributes

freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2

* feat: extract inpaint_ctrls attributes

debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field, inpaint_mask_upload_checkbox, invert_mask_checkbox, inpaint_erode_or_dilate

* wip: add TODOs

* chore: cleanup code

* feat: extract attribute controlnet_softness

* feat: extract remaining attributes, do not use globals in patch

* fix: resolve circular import, patch_all now in async_worker

* chore: cleanup pid code
2024-02-24 19:01:06 +01:00
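The long run of "extract attribute" commits above moves each advanced parameter from module-level globals onto the per-request task object, roughly in this shape (a sketch, not the actual class):

```python
class AsyncTask:
    def __init__(self, args: list):
        # formerly globals in modules.advanced_parameters, now per-task state
        self.disable_preview = args.pop()
        self.adm_scaler_positive = args.pop()
        self.adm_scaler_negative = args.pop()
        self.adm_scaler_end = args.pop()
        self.refiner_swap_method = args.pop()
        self.last_stop = False  # skip/stop state lives on the task, so users cannot stop each other's jobs
        self.yields = []
        self.results = []
```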
Manuel Schmid 0ed01da4e4
Merge pull request #2313 from charliewilco/patch-1
chore: add .DS_Store to .gitignore
2024-02-22 21:53:36 +01:00
Charlie ⚡️ 187f4a76c6
Remove mac generated invisible files 2024-02-20 21:51:01 -05:00
Manuel Schmid f8ca04a406
feat: add early return for prompt expansion when no new tokens should be added
closes https://github.com/lllyasviel/Fooocus/issues/2278, also removes the trailing comma added before the tokenizer
2024-02-19 15:22:10 +01:00
Manuel Schmid a78f66ffb5
fix: sort with casefold, case insensitive
https://docs.python.org/3/library/stdtypes.html#str.casefold
2024-02-12 21:59:22 +01:00
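A quick illustration of why `str.casefold` matters for sorting:

```python
names = ['cherry', 'Banana', 'apple']
sorted(names)                    # ['Banana', 'apple', 'cherry'] -- uppercase sorts before lowercase
sorted(names, key=str.casefold)  # ['apple', 'Banana', 'cherry'] -- case-insensitive order
```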
Manuel Schmid 1c999be8c8
Merge pull request #2229 from lllyasviel/develop
Release 2.1.865
2024-02-11 15:20:27 +01:00
Manuel Schmid f4a8bf24cf
fix: correctly calculate refiner switch when overwrite_switch is > 0 (#2165)
When using custom steps, the refiner switch timing was calculated incorrectly. It is now computed as "steps x timing" after custom steps are applied.
By @xhoxye
2024-02-11 15:13:20 +01:00
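In other words, custom steps must be applied before the switch point is derived; a sketch of that order of operations (names assumed):

```python
def steps_and_switch(steps: int, refiner_switch: float,
                     overwrite_steps: int = -1, overwrite_switch: int = -1) -> tuple[int, int]:
    if overwrite_steps > 0:
        steps = overwrite_steps                  # custom step count takes effect first
    switch = int(round(steps * refiner_switch))  # then switch = steps x timing
    if overwrite_switch > 0:
        switch = overwrite_switch
    return steps, switch
```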
eddyizm 074b655dff
fix: implement output path argument (#2074)
* added function to check output path arg and override; otherwise, use temp or fall back to config

* added function to check output path arg and override; otherwise, use temp or fall back to config #2065

* Revert to 1bcbd650

* moved path output arg handling inside config start up

* Revert "added function to check output path arg and override, other wise, use temp or fallback to config"

This reverts commit fecb97b59c.

* Updated tag to uppercase

* updated docstring to standard double quotes.

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>

* removed extra check on image log flag per feedback

* feat: update config_dict value when overriding path_outputs, change message

---------

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-11 13:04:06 +01:00
Manuel Schmid ee3ce95566
docs: update version 2024-02-10 21:59:13 +01:00
Manuel Schmid 2037de3fcb
chore: fix typos and adjust wording (#1521, #1644, #1691, #1772) 2024-02-10 21:54:50 +01:00
hisk2323 eb3f4d745c
feat: add suffix ordinals (#845)
* add suffix ordinals with lambda

* delay importing of modules.config (#2195)

* refactor: use easier to read version to find matching ordinal suffix

---------

Co-authored-by: rsl8 <138326583+rsl8@users.noreply.github.com>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
2024-02-10 21:49:23 +01:00
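The usual compact form of such an ordinal-suffix helper (illustrative, not necessarily the exact code merged here):

```python
def ordinal_suffix(n: int) -> str:
    if 10 <= n % 100 <= 20:  # 10th-20th are all 'th'
        return 'th'
    return {1: 'st', 2: 'nd', 3: 'rd'}.get(n % 10, 'th')

print([f'{i}{ordinal_suffix(i)}' for i in (1, 2, 3, 4, 11, 21, 112)])
# ['1st', '2nd', '3rd', '4th', '11th', '21st', '112th']
```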
Praveen Kumar Sridhar b9d7e77b0d
replaced the custom lcm function with math.lcm (#1122)
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-10 19:28:10 +01:00
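`math.lcm` has been part of the standard library since Python 3.9 and is variadic:

```python
import math

math.lcm(8, 12)    # 24
math.lcm(3, 4, 5)  # 60
```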
Evgenii c32b9bdc44
fix: replace regexp to support unicode chars (#1424) 2024-02-10 19:15:57 +01:00
Manuel Schmid 98ba1d5d47
fix: correctly sort files, display deepest dir level first (#1784) 2024-02-10 19:03:26 +01:00
Dr. Christoph Mittendorf 231956065f
Removing unnecessary comments / old code (#1905) 2024-02-10 18:51:03 +01:00
rsl8 e4929a9ed7
fix: do not overwrite $GRADIO_SERVER_PORT if it is already set (#1921) 2024-02-10 18:44:20 +01:00
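The idiomatic non-overwriting form, for reference:

```python
import os

os.environ.setdefault('GRADIO_SERVER_PORT', '7865')  # keeps a value the user already exported
```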
Manuel Schmid b7715b0a0c
fix: prevents outdated history log link after midnight (#1979)
* feat: update history link date after each generation

prevents outdated date in link after midnight

* delay importing of modules.config (#2195)

* fix: disable queue for initial queue loading

---------

Co-authored-by: rsl8 <138326583+rsl8@users.noreply.github.com>
2024-02-10 18:33:28 +01:00
Roman Schmitz ac10e51364
add auth to --listen and readme (#2127)
* Update webui.py

* Update readme.md

* Update webui.py

Only enable AuthN for --listen and --share

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>

* docs: rephrase documentation changes for auth

---------

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-10 18:15:51 +01:00
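Gradio's `launch()` accepts an `auth` argument (a list of user/password tuples); gating it on `--listen`/`--share` could look like the sketch below, where `load_auth` and the auth.json layout are assumptions:

```python
import json

import gradio as gr

def load_auth(path: str = 'auth.json') -> list[tuple[str, str]]:
    with open(path) as f:
        return [(entry['user'], entry['pass']) for entry in json.load(f)]

listen, share = True, False  # stand-ins for the parsed CLI flags

with gr.Blocks() as demo:
    gr.Markdown('Fooocus')

# only enable authentication when the UI is actually exposed
demo.launch(server_name='0.0.0.0' if listen else None,
            share=share,
            auth=load_auth() if (listen or share) else None)
```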
rsl8 95f93a1f4b
delay importing of modules.config (#2195) 2024-02-10 17:51:38 +01:00
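Deferring the import means `modules.config` (and any work it does at import time) only runs when first needed; the attribute shown is just an example:

```python
def default_output_path() -> str:
    import modules.config  # deferred: avoids heavy start-up work and circular imports
    return modules.config.path_outputs
```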
V1sionVerse d1a450c581
Fixed mistakes in HTML generation (#2187)
Added <!DOCTYPE html> declaration
<img/> instead of <img></img>
<br/> instead of </br>
2024-02-10 17:50:41 +01:00
rsl8 fdc4dc1d87
delay importing of modules.config (#2195) 2024-02-10 17:42:30 +01:00
Justin Dhillon 71eb040afc
Fix broken links (#2217)
* https://github.com/rlaphoenix/VSGAN/blob/master/vsgan/archs/esrgan.py

* https://github.com/huggingface/pytorch-image-models/blob/main/timm/layers/drop.py

* https://kornia.readthedocs.io/en/latest/
2024-02-10 17:36:56 +01:00
lllyasviel 1bcbd6501b fix config 2024-01-27 16:18:26 -08:00
lllyasviel 31fc99d2bc
fix (#2069) 2024-01-27 09:07:30 -08:00
112 changed files with 7198 additions and 1568 deletions

54
.dockerignore Normal file

@@ -0,0 +1,54 @@
__pycache__
*.ckpt
*.safetensors
*.pth
*.pt
*.bin
*.patch
*.backup
*.corrupted
*.partial
*.onnx
sorted_styles.json
/input
/cache
/language/default.json
/test_imgs
config.txt
config_modification_tutorial.txt
user_path_config.txt
user_path_config-deprecated.txt
/modules/*.png
/repositories
/fooocus_env
/venv
/tmp
/ui-config.json
/outputs
/config.json
/log
/webui.settings.bat
/embeddings
/styles.csv
/params.txt
/styles.csv.bak
/webui-user.bat
/webui-user.sh
/interrogate
/user.css
/.idea
/notification.ogg
/notification.mp3
/SwinIR
/textual_inversion
.vscode
/extensions
/test/stdout.txt
/test/stderr.txt
/cache.json*
/config_states/
/node_modules
/package-lock.json
/.coverage*
/auth.json
.DS_Store

3
.gitattributes vendored Normal file

@@ -0,0 +1,3 @@
# Ensure that shell scripts always use lf line endings, e.g. entrypoint.sh for docker
* text=auto
*.sh text eol=lf


@@ -1,18 +0,0 @@
---
name: Bug report
about: Describe a problem
title: ''
labels: ''
assignees: ''
---
**Read Troubleshoot**
[x] I admit that I have read the [Troubleshoot](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md) before making this issue.
**Describe the problem**
A clear and concise description of what the bug is.
**Full Console Log**
Paste **full** console log here. You will make our job easier if you give a **full** log.

107
.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file

@@ -0,0 +1,107 @@
name: Bug Report
description: You think something is broken in Fooocus
title: "[Bug]: "
labels: ["bug", "triage"]
body:
- type: markdown
attributes:
value: |
> The title of the bug report should be short and descriptive.
> Use relevant keywords for searchability.
> Do not leave it blank, but also do not put an entire error log in it.
- type: checkboxes
attributes:
label: Checklist
description: |
Please perform basic debugging to see if your configuration is the cause of the issue.
Basic debug procedure
 1. Update Fooocus - sometimes things just need to be updated
 2. Backup and remove your config.txt - check if the issue is caused by bad configuration
 3. Try a fresh installation of Fooocus in a different directory - see if a clean installation solves the issue
Before making an issue report, please check that the issue hasn't been reported recently.
options:
- label: The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)
- label: The issue exists on a clean installation of Fooocus
- label: The issue exists in the current version of Fooocus
- label: The issue has not been reported before recently
- label: The issue has been reported before but has not been fixed yet
- type: markdown
attributes:
value: |
> Please fill this form with as much information as possible. Don't forget to add information about "What browsers" and provide screenshots if possible
- type: textarea
id: what-did
attributes:
label: What happened?
description: Tell us what happened in a very clear and simple way
placeholder: |
image generation is not working as intended.
validations:
required: true
- type: textarea
id: steps
attributes:
label: Steps to reproduce the problem
description: Please provide us with precise step by step instructions on how to reproduce the bug
placeholder: |
1. Go to ...
2. Press ...
3. ...
validations:
required: true
- type: textarea
id: what-should
attributes:
label: What should have happened?
description: Tell us what you think the normal behavior should be
placeholder: |
Fooocus should ...
validations:
required: true
- type: dropdown
id: browsers
attributes:
label: What browsers do you use to access Fooocus?
multiple: true
options:
- Mozilla Firefox
- Google Chrome
- Brave
- Apple Safari
- Microsoft Edge
- Android
- iOS
- Other
- type: dropdown
id: hosting
attributes:
label: Where are you running Fooocus?
multiple: false
options:
- Locally
- Locally with virtualization (e.g. Docker)
- Cloud (Google Colab)
- Cloud (other)
- type: input
id: operating-system
attributes:
label: What operating system are you using?
placeholder: |
Windows 10
- type: textarea
id: logs
attributes:
label: Console logs
description: Please provide **full** cmd/terminal logs from the moment you started the UI to the end of it, after the bug occurred. If it's very long, provide a link to pastebin or a similar service.
render: Shell
validations:
required: true
- type: textarea
id: misc
attributes:
label: Additional information
description: |
Please provide us with any relevant additional info or context.
Examples:
 I have updated my GPU driver recently.

5
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Ask a question
url: https://github.com/lllyasviel/Fooocus/discussions/new?category=q-a
about: Ask the community for help


@@ -1,14 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the idea you'd like**
A clear and concise description of what you want to happen.


@@ -0,0 +1,40 @@
name: Feature request
description: Suggest an idea for this project
title: "[Feature Request]: "
labels: ["enhancement", "triage"]
body:
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an issue already exists for the feature you want, and that it's not implemented in a recent build/commit.
options:
- label: I have searched the existing issues and checked the recent builds/commits
required: true
- type: markdown
attributes:
value: |
*Please fill this form with as much information as possible, provide screenshots and/or illustrations of the feature if possible*
- type: textarea
id: feature
attributes:
label: What would your feature do?
description: Tell us about your feature in a very clear and simple way, and what problem it would solve
validations:
required: true
- type: textarea
id: workflow
attributes:
label: Proposed workflow
description: Please provide us with step by step information on how you'd like the feature to be accessed and used
value: |
1. Go to ....
2. Press ....
3. ...
validations:
required: true
- type: textarea
id: misc
attributes:
label: Additional information
description: Add any other context or screenshots about the feature request here.

6
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "monthly"

47
.github/workflows/build_container.yml vendored Normal file

@@ -0,0 +1,47 @@
name: Docker image build
on:
push:
branches:
- main
tags:
- v*
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v5
- name: Log in to the Container registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ghcr.io/${{ github.repository_owner }}/${{ github.event.repository.name }}
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=edge,branch=main
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: .
file: ./Dockerfile
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

2
.gitignore vendored

@@ -10,6 +10,7 @@ __pycache__
*.partial
*.onnx
sorted_styles.json
+hash_cache.txt
/input
/cache
/language/default.json
@@ -51,3 +52,4 @@ user_path_config-deprecated.txt
/package-lock.json
/.coverage*
/auth.json
+.DS_Store

29
Dockerfile Normal file

@@ -0,0 +1,29 @@
FROM nvidia/cuda:12.4.1-base-ubuntu22.04
ENV DEBIAN_FRONTEND noninteractive
ENV CMDARGS --listen
RUN apt-get update -y && \
apt-get install -y curl libgl1 libglib2.0-0 python3-pip python-is-python3 git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY requirements_docker.txt requirements_versions.txt /tmp/
RUN pip install --no-cache-dir -r /tmp/requirements_docker.txt -r /tmp/requirements_versions.txt && \
rm -f /tmp/requirements_docker.txt /tmp/requirements_versions.txt
RUN pip install --no-cache-dir xformers==0.0.23 --no-dependencies
RUN curl -fsL -o /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2 https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64 && \
chmod +x /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2
RUN adduser --disabled-password --gecos '' user && \
mkdir -p /content/app /content/data
COPY entrypoint.sh /content/
RUN chown -R user:user /content
WORKDIR /content
USER user
COPY --chown=user:user . /content/app
RUN mv /content/app/models /content/app/models.org
CMD [ "sh", "-c", "/content/entrypoint.sh ${CMDARGS}" ]


@@ -1,8 +1,10 @@
import ldm_patched.modules.args_parser as args_parser

args_parser.parser.add_argument("--share", action='store_true', help="Set whether to share on Gradio.")
args_parser.parser.add_argument("--preset", type=str, default=None, help="Apply specified UI preset.")
+args_parser.parser.add_argument("--disable-preset-selection", action='store_true',
+                                help="Disables preset selection in Gradio.")
args_parser.parser.add_argument("--language", type=str, default='default',
                                help="Translate UI using json files in [language] folder. "
@@ -15,17 +17,29 @@ args_parser.parser.add_argument("--disable-offload-from-vram", action="store_tru
args_parser.parser.add_argument("--theme", type=str, help="launches the UI with light or dark theme", default=None)
args_parser.parser.add_argument("--disable-image-log", action='store_true',
-                                help="Prevent writing images and logs to hard drive.")
+                                help="Prevent writing images and logs to the outputs folder.")
args_parser.parser.add_argument("--disable-analytics", action='store_true',
-                                help="Disables analytics for Gradio", default=False)
+                                help="Disables analytics for Gradio.")
+args_parser.parser.add_argument("--disable-metadata", action='store_true',
+                                help="Disables saving metadata to images.")
args_parser.parser.add_argument("--disable-preset-download", action='store_true',
                                help="Disables downloading models for presets", default=False)
+args_parser.parser.add_argument("--disable-enhance-output-sorting", action='store_true',
+                                help="Disables enhance output sorting for final image gallery.")
+args_parser.parser.add_argument("--enable-auto-describe-image", action='store_true',
+                                help="Enables automatic description of uov and enhance image when prompt is empty", default=False)
args_parser.parser.add_argument("--always-download-new-model", action='store_true',
                                help="Always download newer models", default=False)
+args_parser.parser.add_argument("--rebuild-hash-cache", help="Generates missing model and LoRA hashes.",
+                                type=int, nargs="?", metavar="CPU_NUM_THREADS", const=-1)
args_parser.parser.set_defaults(
    disable_cuda_malloc=True,
    in_browser=True,
@@ -40,6 +54,7 @@ args_parser.args.always_offload_from_vram = not args_parser.args.disable_offload
if args_parser.args.disable_analytics:
    import os
    os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"
if args_parser.args.disable_in_browser:
    args_parser.args.in_browser = False


@@ -1,5 +1,150 @@
/* based on https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/v1.6.0/style.css */
.loader-container {
display: flex; /* Use flex to align items horizontally */
align-items: center; /* Center items vertically within the container */
white-space: nowrap; /* Prevent line breaks within the container */
}
.loader {
border: 8px solid #f3f3f3; /* Light grey */
border-top: 8px solid #3498db; /* Blue */
border-radius: 50%;
width: 30px;
height: 30px;
animation: spin 2s linear infinite;
}
@keyframes spin {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
/* Style the progress bar */
progress {
appearance: none; /* Remove default styling */
height: 20px; /* Set the height of the progress bar */
border-radius: 5px; /* Round the corners of the progress bar */
background-color: #f3f3f3; /* Light grey background */
width: 100%;
vertical-align: middle !important;
}
/* Style the progress bar container */
.progress-container {
margin-left: 20px;
margin-right: 20px;
flex-grow: 1; /* Allow the progress container to take up remaining space */
}
/* Set the color of the progress bar fill */
progress::-webkit-progress-value {
background-color: #3498db; /* Blue color for the fill */
}
progress::-moz-progress-bar {
background-color: #3498db; /* Blue color for the fill in Firefox */
}
/* Style the text on the progress bar */
progress::after {
content: attr(value '%'); /* Display the progress value followed by '%' */
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
color: white; /* Set text color */
font-size: 14px; /* Set font size */
}
/* Style other texts */
.loader-container > span {
margin-left: 5px; /* Add spacing between the progress bar and the text */
}
.progress-bar > .generating {
display: none !important;
}
.progress-bar{
height: 30px !important;
}
.progress-bar span {
text-align: right;
width: 215px;
}
div:has(> #positive_prompt) {
border: none;
}
#positive_prompt {
padding: 1px;
background: var(--background-fill-primary);
}
.type_row {
height: 84px !important;
}
.type_row_half {
height: 34px !important;
}
.refresh_button {
border: none !important;
background: none !important;
font-size: none !important;
box-shadow: none !important;
}
.advanced_check_row {
width: 330px !important;
}
.min_check {
min-width: min(1px, 100%) !important;
}
.resizable_area {
resize: vertical;
overflow: auto !important;
}
.performance_selection label {
width: 140px !important;
}
.aspect_ratios label {
flex: calc(50% - 5px) !important;
}
.aspect_ratios label span {
white-space: nowrap !important;
}
.aspect_ratios label input {
margin-left: -5px !important;
}
.lora_enable label {
height: 100%;
}
.lora_enable label input {
margin: auto;
}
.lora_enable label span {
display: none;
}
@-moz-document url-prefix() {
.lora_weight input[type=number] {
width: 80px;
}
}
#context-menu{
z-index:9999;
position:absolute;
@@ -218,3 +363,56 @@
#stylePreviewOverlay.lower-half {
transform: translate(-140px, -140px);
}
/* scrollable box for style selections */
.contain .tabs {
height: 100%;
}
.contain .tabs .tabitem.style_selections_tab {
height: 100%;
}
.contain .tabs .tabitem.style_selections_tab > div:first-child {
height: 100%;
}
.contain .tabs .tabitem.style_selections_tab .style_selections {
min-height: 200px;
height: 100%;
}
.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] {
position: absolute; /* remove this to disable scrolling within the checkbox-group */
overflow: auto;
padding-right: 2px;
max-height: 100%;
}
.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] label {
/* max-width: calc(35% - 15px) !important; */ /* add this to enable 3 columns layout */
flex: calc(50% - 5px) !important;
}
.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] label span {
/* white-space:nowrap; */ /* add this to disable text wrapping (better choice for 3 columns layout) */
overflow: hidden;
text-overflow: ellipsis;
}
/* styles preview tooltip */
.preview-tooltip {
background-color: #fff8;
font-family: monospace;
text-align: center;
border-radius: 5px 5px 0px 0px;
display: none; /* remove this to enable tooltip in preview image */
}
#inpaint_canvas .canvas-tooltip-info {
top: 2px;
}
#inpaint_brush_color input[type=color]{
background: none;
}

11
development.md Normal file

@@ -0,0 +1,11 @@
## Running unit tests
Native python:
```
python -m unittest tests/
```
Embedded python (Windows zip file installation method):
```
..\python_embeded\python.exe -m unittest
```

36
docker-compose.yml Normal file

@@ -0,0 +1,36 @@
volumes:
fooocus-data:
services:
app:
build: .
image: ghcr.io/lllyasviel/fooocus
ports:
- "7865:7865"
environment:
- CMDARGS=--listen # Arguments for launch.py.
- DATADIR=/content/data # Directory which stores models, outputs dir
- config_path=/content/data/config.txt
- config_example_path=/content/data/config_modification_tutorial.txt
- path_checkpoints=/content/data/models/checkpoints/
- path_loras=/content/data/models/loras/
- path_embeddings=/content/data/models/embeddings/
- path_vae_approx=/content/data/models/vae_approx/
- path_upscale_models=/content/data/models/upscale_models/
- path_inpaint=/content/data/models/inpaint/
- path_controlnet=/content/data/models/controlnet/
- path_clip_vision=/content/data/models/clip_vision/
- path_fooocus_expansion=/content/data/models/prompt_expansion/fooocus_expansion/
- path_outputs=/content/app/outputs/ # Warning: If it is not located under '/content/app', you can't see history log!
volumes:
- fooocus-data:/content/data
#- ./models:/import/models # Once you import files, you don't need to mount again.
#- ./outputs:/import/outputs # Once you import files, you don't need to mount again.
tty: true
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['0']
capabilities: [compute, utility]

131
docker.md Normal file

@@ -0,0 +1,131 @@
# Fooocus on Docker
The docker image is based on NVIDIA CUDA 12.4 and PyTorch 2.1; see [Dockerfile](Dockerfile) and [requirements_docker.txt](requirements_docker.txt) for details.
## Requirements
- A computer with specs good enough to run Fooocus, and proprietary Nvidia drivers
- Docker, Docker Compose, or Podman
## Quick start
**More information in the [notes](#notes).**
### Running with Docker Compose
1. Clone this repository
2. Run the docker container with `docker compose up`.
### Running with Docker
```sh
docker run -p 7865:7865 -v fooocus-data:/content/data -it \
--gpus all \
-e CMDARGS=--listen \
-e DATADIR=/content/data \
-e config_path=/content/data/config.txt \
-e config_example_path=/content/data/config_modification_tutorial.txt \
-e path_checkpoints=/content/data/models/checkpoints/ \
-e path_loras=/content/data/models/loras/ \
-e path_embeddings=/content/data/models/embeddings/ \
-e path_vae_approx=/content/data/models/vae_approx/ \
-e path_upscale_models=/content/data/models/upscale_models/ \
-e path_inpaint=/content/data/models/inpaint/ \
-e path_controlnet=/content/data/models/controlnet/ \
-e path_clip_vision=/content/data/models/clip_vision/ \
-e path_fooocus_expansion=/content/data/models/prompt_expansion/fooocus_expansion/ \
-e path_outputs=/content/app/outputs/ \
ghcr.io/lllyasviel/fooocus
```
### Running with Podman
```sh
podman run -p 7865:7865 -v fooocus-data:/content/data -it \
--security-opt=no-new-privileges --cap-drop=ALL --security-opt label=type:nvidia_container_t --device=nvidia.com/gpu=all \
-e CMDARGS=--listen \
-e DATADIR=/content/data \
-e config_path=/content/data/config.txt \
-e config_example_path=/content/data/config_modification_tutorial.txt \
-e path_checkpoints=/content/data/models/checkpoints/ \
-e path_loras=/content/data/models/loras/ \
-e path_embeddings=/content/data/models/embeddings/ \
-e path_vae_approx=/content/data/models/vae_approx/ \
-e path_upscale_models=/content/data/models/upscale_models/ \
-e path_inpaint=/content/data/models/inpaint/ \
-e path_controlnet=/content/data/models/controlnet/ \
-e path_clip_vision=/content/data/models/clip_vision/ \
-e path_fooocus_expansion=/content/data/models/prompt_expansion/fooocus_expansion/ \
-e path_outputs=/content/app/outputs/ \
ghcr.io/lllyasviel/fooocus
```
When you see the message `Use the app with http://0.0.0.0:7865/` in the console, you can access the URL in your browser.
Your models and outputs are stored in the `fooocus-data` volume, which, depending on OS, is stored in `/var/lib/docker/volumes/` (or `~/.local/share/containers/storage/volumes/` when using `podman`).
## Building the container locally
Clone the repository first, and open a terminal in the folder.
Build with `docker`:
```sh
docker build . -t fooocus
```
Build with `podman`:
```sh
podman build . -t fooocus
```
## Details
### Update the container manually (`docker compose`)
When you are using `docker compose up` continuously, the container is not updated to the latest version of Fooocus automatically.
Run `git pull` before executing `docker compose build --no-cache` to build an image with the latest Fooocus version.
You can then start it with `docker compose up`.
### Import models, outputs
If you want to import files from models or the outputs folder, you can add the following bind mounts in the [docker-compose.yml](docker-compose.yml) or your preferred method of running the container:
```
#- ./models:/import/models # Once you import files, you don't need to mount again.
#- ./outputs:/import/outputs # Once you import files, you don't need to mount again.
```
After running the container, your files will be copied into `/content/data/models` and `/content/data/outputs`
Since `/content/data` is a persistent volume folder, your files will be persisted even when you re-run the container without the above mounts.
### Paths inside the container
|Path|Details|
|-|-|
|/content/app|The application stored folder|
|/content/app/models.org|Original 'models' folder.<br> Files are copied to '/content/app/models', which is symlinked to '/content/data/models', every time the container boots. (Existing files will not be overwritten.)|
|/content/data|Persistent volume mount point|
|/content/data/models|The folder is symlinked to '/content/app/models'|
|/content/data/outputs|The folder is symlinked to '/content/app/outputs'|
### Environments
You can change `config.txt` parameters by using environment variables.
**Environment variables take priority over the values defined in `config.txt` and will be saved to `config_modification_tutorial.txt`.**
The Docker-specific environment variables are listed below. They are used by 'entrypoint.sh'.
|Environment|Details|
|-|-|
|DATADIR|'/content/data' location.|
|CMDARGS|Arguments for [entry_with_update.py](entry_with_update.py) which is called by [entrypoint.sh](entrypoint.sh)|
|config_path|'config.txt' location|
|config_example_path|'config_modification_tutorial.txt' location|
|HF_MIRROR| huggingface mirror site domain|
You can also use the same JSON key names and values explained in 'config_modification_tutorial.txt' as environment variables.
See examples in the [docker-compose.yml](docker-compose.yml)
## Notes
- Please keep 'path_outputs' under '/content/app'. Otherwise, you may get an error when you open the history log.
- Docker on Mac/Windows still suffers from slow volume access when you use "bind mount" volumes. Please refer to [this article](https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose) to avoid using "bind mount".
- The MPS backend (Metal Performance Shaders, Apple Silicon M1/M2/etc.) is not yet supported in Docker, see https://github.com/pytorch/pytorch/issues/81224
- You can also use `docker compose up -d` to start the container detached and connect to the logs with `docker compose logs -f`. This way you can also close the terminal and keep the container running.

33
entrypoint.sh Executable file

@@ -0,0 +1,33 @@
#!/bin/bash
ORIGINALDIR=/content/app
# Use predefined DATADIR if it is defined
[[ x"${DATADIR}" == "x" ]] && DATADIR=/content/data
# Make persistent dir from original dir
function mklink () {
mkdir -p $DATADIR/$1
ln -s $DATADIR/$1 $ORIGINALDIR
}
# Copy old files from import dir
function import () {
(test -d /import/$1 && cd /import/$1 && cp -Rpn . $DATADIR/$1/)
}
cd $ORIGINALDIR
# models
mklink models
# Copy original files
(cd $ORIGINALDIR/models.org && cp -Rpn . $ORIGINALDIR/models/)
# Import old files
import models
# outputs
mklink outputs
# Import old files
import outputs
# Start application
python launch.py $*


@@ -0,0 +1,24 @@
# https://github.com/sail-sg/EditAnything/blob/main/sam2groundingdino_edit.py
import numpy as np
from PIL import Image
from extras.inpaint_mask import SAMOptions, generate_mask_from_image
original_image = Image.open('cat.webp')
image = np.array(original_image, dtype=np.uint8)
sam_options = SAMOptions(
dino_prompt='eye',
dino_box_threshold=0.3,
dino_text_threshold=0.25,
dino_erode_or_dilate=0,
dino_debug=False,
max_detections=2,
model_type='vit_b'
)
mask_image, _, _, _ = generate_mask_from_image(image, sam_options=sam_options)
merged_masks_img = Image.fromarray(mask_image)
merged_masks_img.show()


@@ -216,9 +216,9 @@ def is_url(url_or_filename):
def load_checkpoint(model,url_or_filename):
    if is_url(url_or_filename):
        cached_file = download_cached_file(url_or_filename, check_hash=False, progress=True)
-       checkpoint = torch.load(cached_file, map_location='cpu')
+       checkpoint = torch.load(cached_file, map_location='cpu', weights_only=True)
    elif os.path.isfile(url_or_filename):
-       checkpoint = torch.load(url_or_filename, map_location='cpu')
+       checkpoint = torch.load(url_or_filename, map_location='cpu', weights_only=True)
    else:
        raise RuntimeError('checkpoint url or path is invalid')


@@ -78,9 +78,9 @@ def blip_nlvr(pretrained='',**kwargs):
def load_checkpoint(model,url_or_filename):
    if is_url(url_or_filename):
        cached_file = download_cached_file(url_or_filename, check_hash=False, progress=True)
-       checkpoint = torch.load(cached_file, map_location='cpu')
+       checkpoint = torch.load(cached_file, map_location='cpu', weights_only=True)
    elif os.path.isfile(url_or_filename):
-       checkpoint = torch.load(url_or_filename, map_location='cpu')
+       checkpoint = torch.load(url_or_filename, map_location='cpu', weights_only=True)
    else:
        raise RuntimeError('checkpoint url or path is invalid')
    state_dict = checkpoint['model']


@@ -0,0 +1,43 @@
batch_size = 1
modelname = "groundingdino"
backbone = "swin_T_224_1k"
position_embedding = "sine"
pe_temperatureH = 20
pe_temperatureW = 20
return_interm_indices = [1, 2, 3]
backbone_freeze_keywords = None
enc_layers = 6
dec_layers = 6
pre_norm = False
dim_feedforward = 2048
hidden_dim = 256
dropout = 0.0
nheads = 8
num_queries = 900
query_dim = 4
num_patterns = 0
num_feature_levels = 4
enc_n_points = 4
dec_n_points = 4
two_stage_type = "standard"
two_stage_bbox_embed_share = False
two_stage_class_embed_share = False
transformer_activation = "relu"
dec_pred_bbox_embed_share = True
dn_box_noise_scale = 1.0
dn_label_noise_ratio = 0.5
dn_label_coef = 1.0
dn_bbox_coef = 1.0
embed_init_tgt = True
dn_labelbook_size = 2000
max_text_len = 256
text_encoder_type = "bert-base-uncased"
use_text_enhancer = True
use_fusion_layer = True
use_checkpoint = True
use_transformer_ckpt = True
use_text_cross_attention = True
text_dropout = 0.0
fusion_dropout = 0.0
fusion_droppath = 0.1
sub_sentence_present = True


@@ -0,0 +1,100 @@
from typing import Tuple, List
import ldm_patched.modules.model_management as model_management
from ldm_patched.modules.model_patcher import ModelPatcher
from modules.config import path_inpaint
from modules.model_loader import load_file_from_url
import numpy as np
import supervision as sv
import torch
from groundingdino.util.inference import Model
from groundingdino.util.inference import load_model, preprocess_caption, get_phrases_from_posmap
class GroundingDinoModel(Model):
def __init__(self):
self.config_file = 'extras/GroundingDINO/config/GroundingDINO_SwinT_OGC.py'
self.model = None
self.load_device = torch.device('cpu')
self.offload_device = torch.device('cpu')
@torch.no_grad()
@torch.inference_mode()
def predict_with_caption(
self,
image: np.ndarray,
caption: str,
box_threshold: float = 0.35,
text_threshold: float = 0.25
) -> Tuple[sv.Detections, torch.Tensor, torch.Tensor, List[str]]:
if self.model is None:
filename = load_file_from_url(
url="https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth",
file_name='groundingdino_swint_ogc.pth',
model_dir=path_inpaint)
model = load_model(model_config_path=self.config_file, model_checkpoint_path=filename)
self.load_device = model_management.text_encoder_device()
self.offload_device = model_management.text_encoder_offload_device()
model.to(self.offload_device)
self.model = ModelPatcher(model, load_device=self.load_device, offload_device=self.offload_device)
model_management.load_model_gpu(self.model)
processed_image = GroundingDinoModel.preprocess_image(image_bgr=image).to(self.load_device)
boxes, logits, phrases = predict(
model=self.model,
image=processed_image,
caption=caption,
box_threshold=box_threshold,
text_threshold=text_threshold,
device=self.load_device)
source_h, source_w, _ = image.shape
detections = GroundingDinoModel.post_process_result(
source_h=source_h,
source_w=source_w,
boxes=boxes,
logits=logits)
return detections, boxes, logits, phrases
def predict(
model,
image: torch.Tensor,
caption: str,
box_threshold: float,
text_threshold: float,
device: str = "cuda"
) -> Tuple[torch.Tensor, torch.Tensor, List[str]]:
caption = preprocess_caption(caption=caption)
# override to use model wrapped by patcher
model = model.model.to(device)
image = image.to(device)
with torch.no_grad():
outputs = model(image[None], captions=[caption])
prediction_logits = outputs["pred_logits"].cpu().sigmoid()[0] # prediction_logits.shape = (nq, 256)
prediction_boxes = outputs["pred_boxes"].cpu()[0] # prediction_boxes.shape = (nq, 4)
mask = prediction_logits.max(dim=1)[0] > box_threshold
logits = prediction_logits[mask] # logits.shape = (n, 256)
boxes = prediction_boxes[mask] # boxes.shape = (n, 4)
tokenizer = model.tokenizer
tokenized = tokenizer(caption)
phrases = [
get_phrases_from_posmap(logit > text_threshold, tokenized, tokenizer).replace('.', '')
for logit
in logits
]
return boxes, logits.max(dim=1)[0], phrases
default_groundingdino = GroundingDinoModel().predict_with_caption

60
extras/censor.py Normal file

@@ -0,0 +1,60 @@
import os
import numpy as np
import torch
from transformers import CLIPConfig, CLIPImageProcessor
import ldm_patched.modules.model_management as model_management
import modules.config
from extras.safety_checker.models.safety_checker import StableDiffusionSafetyChecker
from ldm_patched.modules.model_patcher import ModelPatcher
safety_checker_repo_root = os.path.join(os.path.dirname(__file__), 'safety_checker')
config_path = os.path.join(safety_checker_repo_root, "configs", "config.json")
preprocessor_config_path = os.path.join(safety_checker_repo_root, "configs", "preprocessor_config.json")
class Censor:
def __init__(self):
self.safety_checker_model: ModelPatcher | None = None
self.clip_image_processor: CLIPImageProcessor | None = None
self.load_device = torch.device('cpu')
self.offload_device = torch.device('cpu')
def init(self):
if self.safety_checker_model is None and self.clip_image_processor is None:
safety_checker_model = modules.config.downloading_safety_checker_model()
self.clip_image_processor = CLIPImageProcessor.from_json_file(preprocessor_config_path)
clip_config = CLIPConfig.from_json_file(config_path)
model = StableDiffusionSafetyChecker.from_pretrained(safety_checker_model, config=clip_config)
model.eval()
self.load_device = model_management.text_encoder_device()
self.offload_device = model_management.text_encoder_offload_device()
model.to(self.offload_device)
self.safety_checker_model = ModelPatcher(model, load_device=self.load_device, offload_device=self.offload_device)
def censor(self, images: list | np.ndarray) -> list | np.ndarray:
self.init()
model_management.load_model_gpu(self.safety_checker_model)
single = False
if not isinstance(images, (list, np.ndarray)):
images = [images]
single = True
safety_checker_input = self.clip_image_processor(images, return_tensors="pt")
safety_checker_input.to(device=self.load_device)
checked_images, has_nsfw_concept = self.safety_checker_model.model(images=images,
clip_input=safety_checker_input.pixel_values)
checked_images = [image.astype(np.uint8) for image in checked_images]
if single:
checked_images = checked_images[0]
return checked_images
default_censor = Censor().censor
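Based on the module above, usage is a single call; a minimal sketch (the zero image is only a placeholder input):

```python
import numpy as np

from extras.censor import default_censor

image = np.zeros((512, 512, 3), dtype=np.uint8)
checked = default_censor(image)  # returns a censored copy if the safety checker flags the image
```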


@@ -112,6 +112,9 @@ class FooocusExpansion:
        max_token_length = 75 * int(math.ceil(float(current_token_length) / 75.0))
        max_new_tokens = max_token_length - current_token_length

+       if max_new_tokens == 0:
+           return prompt[:-1]

        # https://huggingface.co/blog/introducing-csearch
        # https://huggingface.co/docs/transformers/generation_strategies
        features = self.model.generate(**tokenized_kwargs,


@@ -19,7 +19,7 @@ def init_detection_model(model_name, half=False, device='cuda', model_rootpath=N
        url=model_url, model_dir='facexlib/weights', progress=True, file_name=None, save_dir=model_rootpath)

    # TODO: clean pretrained model
-   load_net = torch.load(model_path, map_location=lambda storage, loc: storage)
+   load_net = torch.load(model_path, map_location=lambda storage, loc: storage, weights_only=True)

    # remove unnecessary 'module.'
    for k, v in deepcopy(load_net).items():
        if k.startswith('module.'):


@@ -17,7 +17,7 @@ def init_parsing_model(model_name='bisenet', half=False, device='cuda', model_ro
    model_path = load_file_from_url(
        url=model_url, model_dir='facexlib/weights', progress=True, file_name=None, save_dir=model_rootpath)

-   load_net = torch.load(model_path, map_location=lambda storage, loc: storage)
+   load_net = torch.load(model_path, map_location=lambda storage, loc: storage, weights_only=True)
    model.load_state_dict(load_net, strict=True)
    model.eval()
    model = model.to(device)

130
extras/inpaint_mask.py Normal file

@@ -0,0 +1,130 @@
import sys
import modules.config
import numpy as np
import torch
from extras.GroundingDINO.util.inference import default_groundingdino
from extras.sam.predictor import SamPredictor
from rembg import remove, new_session
from segment_anything import sam_model_registry
from segment_anything.utils.amg import remove_small_regions
class SAMOptions:
def __init__(self,
# GroundingDINO
dino_prompt: str = '',
dino_box_threshold=0.3,
dino_text_threshold=0.25,
dino_erode_or_dilate=0,
dino_debug=False,
# SAM
max_detections=2,
model_type='vit_b'
):
self.dino_prompt = dino_prompt
self.dino_box_threshold = dino_box_threshold
self.dino_text_threshold = dino_text_threshold
self.dino_erode_or_dilate = dino_erode_or_dilate
self.dino_debug = dino_debug
self.max_detections = max_detections
self.model_type = model_type
def optimize_masks(masks: torch.Tensor) -> torch.Tensor:
"""
removes small disconnected regions and holes
"""
fine_masks = []
for mask in masks.to('cpu').numpy(): # masks: [num_masks, 1, h, w]
fine_masks.append(remove_small_regions(mask[0], 400, mode="holes")[0])
masks = np.stack(fine_masks, axis=0)[:, np.newaxis]
return torch.from_numpy(masks)
def generate_mask_from_image(image: np.ndarray, mask_model: str = 'sam', extras=None,
sam_options: SAMOptions | None = SAMOptions) -> tuple[np.ndarray | None, int | None, int | None, int | None]:
dino_detection_count = 0
sam_detection_count = 0
sam_detection_on_mask_count = 0
if image is None:
return None, dino_detection_count, sam_detection_count, sam_detection_on_mask_count
if extras is None:
extras = {}
if 'image' in image:
image = image['image']
if mask_model != 'sam' or sam_options is None:
result = remove(
image,
session=new_session(mask_model, **extras),
only_mask=True,
**extras
)
return result, dino_detection_count, sam_detection_count, sam_detection_on_mask_count
detections, boxes, logits, phrases = default_groundingdino(
image=image,
caption=sam_options.dino_prompt,
box_threshold=sam_options.dino_box_threshold,
text_threshold=sam_options.dino_text_threshold
)
H, W = image.shape[0], image.shape[1]
boxes = boxes * torch.Tensor([W, H, W, H])
boxes[:, :2] = boxes[:, :2] - boxes[:, 2:] / 2
boxes[:, 2:] = boxes[:, 2:] + boxes[:, :2]
sam_checkpoint = modules.config.download_sam_model(sam_options.model_type)
sam = sam_model_registry[sam_options.model_type](checkpoint=sam_checkpoint)
sam_predictor = SamPredictor(sam)
final_mask_tensor = torch.zeros((image.shape[0], image.shape[1]))
dino_detection_count = boxes.size(0)
if dino_detection_count > 0:
sam_predictor.set_image(image)
if sam_options.dino_erode_or_dilate != 0:
for index in range(boxes.size(0)):
assert boxes.size(1) == 4
boxes[index][0] -= sam_options.dino_erode_or_dilate
boxes[index][1] -= sam_options.dino_erode_or_dilate
boxes[index][2] += sam_options.dino_erode_or_dilate
boxes[index][3] += sam_options.dino_erode_or_dilate
if sam_options.dino_debug:
from PIL import ImageDraw, Image
debug_dino_image = Image.new("RGB", (image.shape[1], image.shape[0]), color="black")
draw = ImageDraw.Draw(debug_dino_image)
for box in boxes.numpy():
draw.rectangle(box.tolist(), fill="white")
return np.array(debug_dino_image), dino_detection_count, sam_detection_count, sam_detection_on_mask_count
transformed_boxes = sam_predictor.transform.apply_boxes_torch(boxes, image.shape[:2])
masks, _, _ = sam_predictor.predict_torch(
point_coords=None,
point_labels=None,
boxes=transformed_boxes,
multimask_output=False,
)
masks = optimize_masks(masks)
sam_detection_count = len(masks)
if sam_options.max_detections == 0:
sam_options.max_detections = sys.maxsize
sam_objects = min(len(logits), sam_options.max_detections)
for obj_ind in range(sam_objects):
mask_tensor = masks[obj_ind][0]
final_mask_tensor += mask_tensor
sam_detection_on_mask_count += 1
final_mask_tensor = (final_mask_tensor > 0).to('cpu').numpy()
mask_image = np.dstack((final_mask_tensor, final_mask_tensor, final_mask_tensor)) * 255
mask_image = np.array(mask_image, dtype=np.uint8)
return mask_image, dino_detection_count, sam_detection_count, sam_detection_on_mask_count


@@ -104,7 +104,7 @@ def load_ip_adapter(clip_vision_path, ip_negative_path, ip_adapter_path):
    offload_device = torch.device('cpu')

    use_fp16 = model_management.should_use_fp16(device=load_device)
-   ip_state_dict = torch.load(ip_adapter_path, map_location="cpu")
+   ip_state_dict = torch.load(ip_adapter_path, map_location="cpu", weights_only=True)
    plus = "latents" in ip_state_dict["image_proj"]
    cross_attention_dim = ip_state_dict["ip_adapter"]["1.to_k_ip.weight"].shape[1]
    sdxl = cross_attention_dim == 2048


@@ -1,27 +1,26 @@
import cv2
import numpy as np
-import modules.advanced_parameters as advanced_parameters

-def centered_canny(x: np.ndarray):
+def centered_canny(x: np.ndarray, canny_low_threshold, canny_high_threshold):
    assert isinstance(x, np.ndarray)
    assert x.ndim == 2 and x.dtype == np.uint8

-   y = cv2.Canny(x, int(advanced_parameters.canny_low_threshold), int(advanced_parameters.canny_high_threshold))
+   y = cv2.Canny(x, int(canny_low_threshold), int(canny_high_threshold))
    y = y.astype(np.float32) / 255.0
    return y

-def centered_canny_color(x: np.ndarray):
+def centered_canny_color(x: np.ndarray, canny_low_threshold, canny_high_threshold):
    assert isinstance(x, np.ndarray)
    assert x.ndim == 3 and x.shape[2] == 3

-   result = [centered_canny(x[..., i]) for i in range(3)]
+   result = [centered_canny(x[..., i], canny_low_threshold, canny_high_threshold) for i in range(3)]
    result = np.stack(result, axis=2)
    return result

-def pyramid_canny_color(x: np.ndarray):
+def pyramid_canny_color(x: np.ndarray, canny_low_threshold, canny_high_threshold):
    assert isinstance(x, np.ndarray)
    assert x.ndim == 3 and x.shape[2] == 3
@@ -31,7 +30,7 @@ def pyramid_canny_color(x: np.ndarray):
    for k in [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
        Hs, Ws = int(H * k), int(W * k)
        small = cv2.resize(x, (Ws, Hs), interpolation=cv2.INTER_AREA)
-       edge = centered_canny_color(small)
+       edge = centered_canny_color(small, canny_low_threshold, canny_high_threshold)
        if acc_edge is None:
            acc_edge = edge
        else:
@@ -54,11 +53,11 @@ def norm255(x, low=4, high=96):
    return x * 255.0

-def canny_pyramid(x):
+def canny_pyramid(x, canny_low_threshold, canny_high_threshold):
    # For some reasons, SAI's Control-lora Canny seems to be trained on canny maps with non-standard resolutions.
    # Then we use pyramid to use all resolutions to avoid missing any structure in specific resolutions.

-   color_canny = pyramid_canny_color(x)
+   color_canny = pyramid_canny_color(x, canny_low_threshold, canny_high_threshold)
    result = np.sum(color_canny, axis=2)
    return norm255(result, low=1, high=99).clip(0, 255).astype(np.uint8)


@@ -0,0 +1,171 @@
{
"_name_or_path": "clip-vit-large-patch14/",
"architectures": [
"SafetyChecker"
],
"initializer_factor": 1.0,
"logit_scale_init_value": 2.6592,
"model_type": "clip",
"projection_dim": 768,
"text_config": {
"_name_or_path": "",
"add_cross_attention": false,
"architectures": null,
"attention_dropout": 0.0,
"bad_words_ids": null,
"bos_token_id": 0,
"chunk_size_feed_forward": 0,
"cross_attention_hidden_size": null,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout": 0.0,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 2,
"exponential_decay_length_penalty": null,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"hidden_act": "quick_gelu",
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_factor": 1.0,
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 77,
"min_length": 0,
"model_type": "clip_text_model",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beam_groups": 1,
"num_beams": 1,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 1,
"prefix": null,
"problem_type": null,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.21.0.dev0",
"typical_p": 1.0,
"use_bfloat16": false,
"vocab_size": 49408
},
"text_config_dict": {
"hidden_size": 768,
"intermediate_size": 3072,
"num_attention_heads": 12,
"num_hidden_layers": 12
},
"torch_dtype": "float32",
"transformers_version": null,
"vision_config": {
"_name_or_path": "",
"add_cross_attention": false,
"architectures": null,
"attention_dropout": 0.0,
"bad_words_ids": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"cross_attention_hidden_size": null,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout": 0.0,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": null,
"exponential_decay_length_penalty": null,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"hidden_act": "quick_gelu",
"hidden_size": 1024,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"image_size": 224,
"initializer_factor": 1.0,
"initializer_range": 0.02,
"intermediate_size": 4096,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "clip_vision_model",
"no_repeat_ngram_size": 0,
"num_attention_heads": 16,
"num_beam_groups": 1,
"num_beams": 1,
"num_hidden_layers": 24,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": null,
"patch_size": 14,
"prefix": null,
"problem_type": null,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.21.0.dev0",
"typical_p": 1.0,
"use_bfloat16": false
},
"vision_config_dict": {
"hidden_size": 1024,
"intermediate_size": 4096,
"num_attention_heads": 16,
"num_hidden_layers": 24,
"patch_size": 14
}
}
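For reference, a config like the one above is normally consumed through transformers. A minimal sketch, assuming the file is saved as config.json inside a local safety_checker/ directory (path and usage are illustrative, not part of this diff):

from transformers import CLIPConfig

config = CLIPConfig.from_pretrained("./safety_checker")  # directory containing the config.json above
print(config.text_config.hidden_size)    # 768: ViT-L/14 text tower, 12 layers
print(config.vision_config.hidden_size)  # 1024: ViT-L/14 vision tower, 24 layers, patch size 14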

View File

@ -0,0 +1,20 @@
{
"crop_size": 224,
"do_center_crop": true,
"do_convert_rgb": true,
"do_normalize": true,
"do_resize": true,
"feature_extractor_type": "CLIPFeatureExtractor",
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"resample": 3,
"size": 224
}
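The mean/std above are the standard OpenAI CLIP normalization constants, and "resample": 3 is PIL bicubic. A minimal sketch reproducing this preprocessing with torchvision (torchvision here is an assumption; the repository uses CLIPFeatureExtractor itself):

from PIL import Image
import torchvision.transforms as T

preprocess = T.Compose([
    T.Resize(224, interpolation=T.InterpolationMode.BICUBIC),  # "resample": 3 is PIL bicubic
    T.CenterCrop(224),                                         # "do_center_crop" with "crop_size": 224
    T.ToTensor(),                                              # HWC uint8 -> CHW float in [0, 1]
    T.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
                std=[0.26862954, 0.26130258, 0.27577711]),
])
clip_input = preprocess(Image.open("example.png").convert("RGB")).unsqueeze(0)  # file name is illustrative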

View File

@ -0,0 +1,126 @@
# from https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import torch
import torch.nn as nn
from transformers import CLIPConfig, CLIPVisionModel, PreTrainedModel
from transformers.utils import logging

logger = logging.get_logger(__name__)


def cosine_distance(image_embeds, text_embeds):
    normalized_image_embeds = nn.functional.normalize(image_embeds)
    normalized_text_embeds = nn.functional.normalize(text_embeds)
    return torch.mm(normalized_image_embeds, normalized_text_embeds.t())


class StableDiffusionSafetyChecker(PreTrainedModel):
    config_class = CLIPConfig
    main_input_name = "clip_input"
    _no_split_modules = ["CLIPEncoderLayer"]

    def __init__(self, config: CLIPConfig):
        super().__init__(config)
        self.vision_model = CLIPVisionModel(config.vision_config)
        self.visual_projection = nn.Linear(config.vision_config.hidden_size, config.projection_dim, bias=False)
        self.concept_embeds = nn.Parameter(torch.ones(17, config.projection_dim), requires_grad=False)
        self.special_care_embeds = nn.Parameter(torch.ones(3, config.projection_dim), requires_grad=False)
        self.concept_embeds_weights = nn.Parameter(torch.ones(17), requires_grad=False)
        self.special_care_embeds_weights = nn.Parameter(torch.ones(3), requires_grad=False)

    @torch.no_grad()
    def forward(self, clip_input, images):
        pooled_output = self.vision_model(clip_input)[1]  # pooled_output
        image_embeds = self.visual_projection(pooled_output)
        # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
        special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds).cpu().float().numpy()
        cos_dist = cosine_distance(image_embeds, self.concept_embeds).cpu().float().numpy()
        result = []
        batch_size = image_embeds.shape[0]
        for i in range(batch_size):
            result_img = {"special_scores": {}, "special_care": [], "concept_scores": {}, "bad_concepts": []}
            # increase this value to create a stronger `nsfw` filter
            # at the cost of increasing the possibility of filtering benign images
            adjustment = 0.0
            for concept_idx in range(len(special_cos_dist[0])):
                concept_cos = special_cos_dist[i][concept_idx]
                concept_threshold = self.special_care_embeds_weights[concept_idx].item()
                result_img["special_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3)
                if result_img["special_scores"][concept_idx] > 0:
                    result_img["special_care"].append({concept_idx, result_img["special_scores"][concept_idx]})
                    adjustment = 0.01
            for concept_idx in range(len(cos_dist[0])):
                concept_cos = cos_dist[i][concept_idx]
                concept_threshold = self.concept_embeds_weights[concept_idx].item()
                result_img["concept_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3)
                if result_img["concept_scores"][concept_idx] > 0:
                    result_img["bad_concepts"].append(concept_idx)
            result.append(result_img)
        has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result]
        for idx, has_nsfw_concept in enumerate(has_nsfw_concepts):
            if has_nsfw_concept:
                if torch.is_tensor(images) or torch.is_tensor(images[0]):
                    images[idx] = torch.zeros_like(images[idx])  # black image
                else:
                    images[idx] = np.zeros(images[idx].shape)  # black image
        if any(has_nsfw_concepts):
            logger.warning(
                "Potential NSFW content was detected in one or more images. A black image will be returned instead."
                " Try again with a different prompt and/or seed."
            )
        return images, has_nsfw_concepts

    @torch.no_grad()
    def forward_onnx(self, clip_input: torch.Tensor, images: torch.Tensor):
        pooled_output = self.vision_model(clip_input)[1]  # pooled_output
        image_embeds = self.visual_projection(pooled_output)
        special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds)
        cos_dist = cosine_distance(image_embeds, self.concept_embeds)
        # increase this value to create a stronger `nsfw` filter
        # at the cost of increasing the possibility of filtering benign images
        adjustment = 0.0
        special_scores = special_cos_dist - self.special_care_embeds_weights + adjustment
        # special_scores = special_scores.round(decimals=3)
        special_care = torch.any(special_scores > 0, dim=1)
        special_adjustment = special_care * 0.01
        special_adjustment = special_adjustment.unsqueeze(1).expand(-1, cos_dist.shape[1])
        concept_scores = (cos_dist - self.concept_embeds_weights) + special_adjustment
        # concept_scores = concept_scores.round(decimals=3)
        has_nsfw_concepts = torch.any(concept_scores > 0, dim=1)
        images[has_nsfw_concepts] = 0.0  # black image
        return images, has_nsfw_concepts
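How the two files above fit together, as a hedged sketch: the feature extractor produces clip_input and the checker blacks out flagged images. The local directory and pretrained weights are placeholders, not part of this diff:

import numpy as np
from transformers import CLIPFeatureExtractor

feature_extractor = CLIPFeatureExtractor.from_pretrained("./safety_checker")       # reads preprocessor_config.json
safety_checker = StableDiffusionSafetyChecker.from_pretrained("./safety_checker")  # reads config.json + weights

images = [np.zeros((512, 512, 3), dtype=np.uint8)]  # stand-in for decoded samples
clip_input = feature_extractor(images, return_tensors="pt").pixel_values
checked, has_nsfw = safety_checker(clip_input=clip_input, images=images)
print(has_nsfw)  # [False]; flagged entries would be replaced with black images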

extras/sam/predictor.py (Normal file, 288 lines)
View File

@ -0,0 +1,288 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import numpy as np
import torch
from ldm_patched.modules import model_management
from ldm_patched.modules.model_patcher import ModelPatcher
from segment_anything.modeling import Sam
from typing import Optional, Tuple
from segment_anything.utils.transforms import ResizeLongestSide


class SamPredictor:
    def __init__(
            self,
            model: Sam,
            load_device=model_management.text_encoder_device(),
            offload_device=model_management.text_encoder_offload_device()
    ) -> None:
        """
        Uses SAM to calculate the image embedding for an image, and then
        allow repeated, efficient mask prediction given prompts.

        Arguments:
          model (Sam): The model to use for mask prediction.
        """
        super().__init__()
        self.load_device = load_device
        self.offload_device = offload_device
        # can't use model.half() here as slow_conv2d_cpu is not implemented for half
        model.to(self.offload_device)
        self.patcher = ModelPatcher(model, load_device=self.load_device, offload_device=self.offload_device)
        self.transform = ResizeLongestSide(model.image_encoder.img_size)
        self.reset_image()

    def set_image(
            self,
            image: np.ndarray,
            image_format: str = "RGB",
    ) -> None:
        """
        Calculates the image embeddings for the provided image, allowing
        masks to be predicted with the 'predict' method.

        Arguments:
          image (np.ndarray): The image for calculating masks. Expects an
            image in HWC uint8 format, with pixel values in [0, 255].
          image_format (str): The color format of the image, in ['RGB', 'BGR'].
        """
        assert image_format in [
            "RGB",
            "BGR",
        ], f"image_format must be in ['RGB', 'BGR'], is {image_format}."
        if image_format != self.patcher.model.image_format:
            image = image[..., ::-1]

        # Transform the image to the form expected by the model
        input_image = self.transform.apply_image(image)
        input_image_torch = torch.as_tensor(input_image, device=self.load_device)
        input_image_torch = input_image_torch.permute(2, 0, 1).contiguous()[None, :, :, :]

        self.set_torch_image(input_image_torch, image.shape[:2])

    @torch.no_grad()
    def set_torch_image(
            self,
            transformed_image: torch.Tensor,
            original_image_size: Tuple[int, ...],
    ) -> None:
        """
        Calculates the image embeddings for the provided image, allowing
        masks to be predicted with the 'predict' method. Expects the input
        image to be already transformed to the format expected by the model.

        Arguments:
          transformed_image (torch.Tensor): The input image, with shape
            1x3xHxW, which has been transformed with ResizeLongestSide.
          original_image_size (tuple(int, int)): The size of the image
            before transformation, in (H, W) format.
        """
        assert (
            len(transformed_image.shape) == 4
            and transformed_image.shape[1] == 3
            and max(*transformed_image.shape[2:]) == self.patcher.model.image_encoder.img_size
        ), f"set_torch_image input must be BCHW with long side {self.patcher.model.image_encoder.img_size}."
        self.reset_image()

        self.original_size = original_image_size
        self.input_size = tuple(transformed_image.shape[-2:])

        model_management.load_model_gpu(self.patcher)
        input_image = self.patcher.model.preprocess(transformed_image.to(self.load_device))
        self.features = self.patcher.model.image_encoder(input_image)
        self.is_image_set = True

    def predict(
            self,
            point_coords: Optional[np.ndarray] = None,
            point_labels: Optional[np.ndarray] = None,
            box: Optional[np.ndarray] = None,
            mask_input: Optional[np.ndarray] = None,
            multimask_output: bool = True,
            return_logits: bool = False,
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Predict masks for the given input prompts, using the currently set image.

        Arguments:
          point_coords (np.ndarray or None): A Nx2 array of point prompts to the
            model. Each point is in (X,Y) in pixels.
          point_labels (np.ndarray or None): A length N array of labels for the
            point prompts. 1 indicates a foreground point and 0 indicates a
            background point.
          box (np.ndarray or None): A length 4 array given a box prompt to the
            model, in XYXY format.
          mask_input (np.ndarray): A low resolution mask input to the model, typically
            coming from a previous prediction iteration. Has form 1xHxW, where
            for SAM, H=W=256.
          multimask_output (bool): If true, the model will return three masks.
            For ambiguous input prompts (such as a single click), this will often
            produce better masks than a single prediction. If only a single
            mask is needed, the model's predicted quality score can be used
            to select the best mask. For non-ambiguous prompts, such as multiple
            input prompts, multimask_output=False can give better results.
          return_logits (bool): If true, returns un-thresholded masks logits
            instead of a binary mask.

        Returns:
          (np.ndarray): The output masks in CxHxW format, where C is the
            number of masks, and (H, W) is the original image size.
          (np.ndarray): An array of length C containing the model's
            predictions for the quality of each mask.
          (np.ndarray): An array of shape CxHxW, where C is the number
            of masks and H=W=256. These low resolution logits can be passed to
            a subsequent iteration as mask input.
        """
        if not self.is_image_set:
            raise RuntimeError("An image must be set with .set_image(...) before mask prediction.")

        # Transform input prompts
        coords_torch, labels_torch, box_torch, mask_input_torch = None, None, None, None
        if point_coords is not None:
            assert (
                point_labels is not None
            ), "point_labels must be supplied if point_coords is supplied."
            point_coords = self.transform.apply_coords(point_coords, self.original_size)
            coords_torch = torch.as_tensor(point_coords, dtype=torch.float, device=self.load_device)
            labels_torch = torch.as_tensor(point_labels, dtype=torch.int, device=self.load_device)
            coords_torch, labels_torch = coords_torch[None, :, :], labels_torch[None, :]
        if box is not None:
            box = self.transform.apply_boxes(box, self.original_size)
            box_torch = torch.as_tensor(box, dtype=torch.float, device=self.load_device)
            box_torch = box_torch[None, :]
        if mask_input is not None:
            mask_input_torch = torch.as_tensor(mask_input, dtype=torch.float, device=self.load_device)
            mask_input_torch = mask_input_torch[None, :, :, :]

        masks, iou_predictions, low_res_masks = self.predict_torch(
            coords_torch,
            labels_torch,
            box_torch,
            mask_input_torch,
            multimask_output,
            return_logits=return_logits,
        )

        masks = masks[0].detach().cpu().numpy()
        iou_predictions = iou_predictions[0].detach().cpu().numpy()
        low_res_masks = low_res_masks[0].detach().cpu().numpy()
        return masks, iou_predictions, low_res_masks

    @torch.no_grad()
    def predict_torch(
            self,
            point_coords: Optional[torch.Tensor],
            point_labels: Optional[torch.Tensor],
            boxes: Optional[torch.Tensor] = None,
            mask_input: Optional[torch.Tensor] = None,
            multimask_output: bool = True,
            return_logits: bool = False,
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Predict masks for the given input prompts, using the currently set image.
        Input prompts are batched torch tensors and are expected to already be
        transformed to the input frame using ResizeLongestSide.

        Arguments:
          point_coords (torch.Tensor or None): A BxNx2 array of point prompts to the
            model. Each point is in (X,Y) in pixels.
          point_labels (torch.Tensor or None): A BxN array of labels for the
            point prompts. 1 indicates a foreground point and 0 indicates a
            background point.
          box (np.ndarray or None): A Bx4 array given a box prompt to the
            model, in XYXY format.
          mask_input (np.ndarray): A low resolution mask input to the model, typically
            coming from a previous prediction iteration. Has form Bx1xHxW, where
            for SAM, H=W=256. Masks returned by a previous iteration of the
            predict method do not need further transformation.
          multimask_output (bool): If true, the model will return three masks.
            For ambiguous input prompts (such as a single click), this will often
            produce better masks than a single prediction. If only a single
            mask is needed, the model's predicted quality score can be used
            to select the best mask. For non-ambiguous prompts, such as multiple
            input prompts, multimask_output=False can give better results.
          return_logits (bool): If true, returns un-thresholded masks logits
            instead of a binary mask.

        Returns:
          (torch.Tensor): The output masks in BxCxHxW format, where C is the
            number of masks, and (H, W) is the original image size.
          (torch.Tensor): An array of shape BxC containing the model's
            predictions for the quality of each mask.
          (torch.Tensor): An array of shape BxCxHxW, where C is the number
            of masks and H=W=256. These low res logits can be passed to
            a subsequent iteration as mask input.
        """
        if not self.is_image_set:
            raise RuntimeError("An image must be set with .set_image(...) before mask prediction.")

        if point_coords is not None:
            points = (point_coords.to(self.load_device), point_labels.to(self.load_device))
        else:
            points = None

        # load
        if boxes is not None:
            boxes = boxes.to(self.load_device)
        if mask_input is not None:
            mask_input = mask_input.to(self.load_device)

        model_management.load_model_gpu(self.patcher)

        # Embed prompts
        sparse_embeddings, dense_embeddings = self.patcher.model.prompt_encoder(
            points=points,
            boxes=boxes,
            masks=mask_input,
        )

        # Predict masks
        low_res_masks, iou_predictions = self.patcher.model.mask_decoder(
            image_embeddings=self.features,
            image_pe=self.patcher.model.prompt_encoder.get_dense_pe(),
            sparse_prompt_embeddings=sparse_embeddings,
            dense_prompt_embeddings=dense_embeddings,
            multimask_output=multimask_output,
        )

        # Upscale the masks to the original image resolution
        masks = self.patcher.model.postprocess_masks(low_res_masks, self.input_size, self.original_size)

        if not return_logits:
            masks = masks > self.patcher.model.mask_threshold

        return masks, iou_predictions, low_res_masks

    def get_image_embedding(self) -> torch.Tensor:
        """
        Returns the image embeddings for the currently set image, with
        shape 1xCxHxW, where C is the embedding dimension and (H,W) are
        the embedding spatial dimension of SAM (typically C=256, H=W=64).
        """
        if not self.is_image_set:
            raise RuntimeError(
                "An image must be set with .set_image(...) to generate an embedding."
            )
        assert self.features is not None, "Features must exist if an image has been set."
        return self.features

    @property
    def device(self) -> torch.device:
        return self.patcher.model.device

    def reset_image(self) -> None:
        """Resets the currently set image."""
        self.is_image_set = False
        self.features = None
        self.orig_h = None
        self.orig_w = None
        self.input_h = None
        self.input_w = None
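A usage sketch for the predictor above, assuming the segment_anything package is installed and a vit_b checkpoint exists at an illustrative path:

import cv2
import numpy as np
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="models/inpaint/sam_vit_b_01ec64.pth")  # path is an assumption
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)  # HWC uint8, RGB
predictor.set_image(image)  # embeds the image once; prompts below reuse the cached features

masks, scores, low_res = predictor.predict(
    point_coords=np.array([[500, 375]]),  # one foreground click at pixel (x=500, y=375)
    point_labels=np.array([1]),
    multimask_output=True,                # three candidate masks for an ambiguous single click
)
best_mask = masks[scores.argmax()]        # pick the mask SAM rates highest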

View File

@ -1,69 +1,85 @@
 # https://github.com/city96/SD-Latent-Interposer/blob/main/interposer.py
 import os
-import torch
-import safetensors.torch as sf
+import safetensors.torch as sf
+import torch
 import torch.nn as nn
 import ldm_patched.modules.model_management
 from ldm_patched.modules.model_patcher import ModelPatcher
 from modules.config import path_vae_approx

-class Block(nn.Module):
-    def __init__(self, size):
+class ResBlock(nn.Module):
+    """Block with residuals"""
+    def __init__(self, ch):
         super().__init__()
         self.join = nn.ReLU()
+        self.norm = nn.BatchNorm2d(ch)
         self.long = nn.Sequential(
-            nn.Conv2d(size, size, kernel_size=3, stride=1, padding=1),
-            nn.LeakyReLU(0.1),
-            nn.Conv2d(size, size, kernel_size=3, stride=1, padding=1),
-            nn.LeakyReLU(0.1),
-            nn.Conv2d(size, size, kernel_size=3, stride=1, padding=1),
+            nn.Conv2d(ch, ch, kernel_size=3, stride=1, padding=1),
+            nn.SiLU(),
+            nn.Conv2d(ch, ch, kernel_size=3, stride=1, padding=1),
+            nn.SiLU(),
+            nn.Conv2d(ch, ch, kernel_size=3, stride=1, padding=1),
+            nn.Dropout(0.1)
         )
     def forward(self, x):
-        y = self.long(x)
-        z = self.join(y + x)
-        return z
+        x = self.norm(x)
+        return self.join(self.long(x) + x)

-class Interposer(nn.Module):
-    def __init__(self):
+class ExtractBlock(nn.Module):
+    """Increase no. of channels by [out/in]"""
+    def __init__(self, ch_in, ch_out):
         super().__init__()
-        self.chan = 4
-        self.hid = 128
-        self.head_join = nn.ReLU()
-        self.head_short = nn.Conv2d(self.chan, self.hid, kernel_size=3, stride=1, padding=1)
-        self.head_long = nn.Sequential(
-            nn.Conv2d(self.chan, self.hid, kernel_size=3, stride=1, padding=1),
-            nn.LeakyReLU(0.1),
-            nn.Conv2d(self.hid, self.hid, kernel_size=3, stride=1, padding=1),
-            nn.LeakyReLU(0.1),
-            nn.Conv2d(self.hid, self.hid, kernel_size=3, stride=1, padding=1),
-        )
-        self.core = nn.Sequential(
-            Block(self.hid),
-            Block(self.hid),
-            Block(self.hid),
-        )
-        self.tail = nn.Sequential(
-            nn.ReLU(),
-            nn.Conv2d(self.hid, self.chan, kernel_size=3, stride=1, padding=1)
+        self.join = nn.ReLU()
+        self.short = nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=1, padding=1)
+        self.long = nn.Sequential(
+            nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=1, padding=1),
+            nn.SiLU(),
+            nn.Conv2d(ch_out, ch_out, kernel_size=3, stride=1, padding=1),
+            nn.SiLU(),
+            nn.Conv2d(ch_out, ch_out, kernel_size=3, stride=1, padding=1),
+            nn.Dropout(0.1)
         )
     def forward(self, x):
-        y = self.head_join(
-            self.head_long(x) +
-            self.head_short(x)
-        )
+        return self.join(self.long(x) + self.short(x))
+
+class InterposerModel(nn.Module):
+    """Main neural network"""
+    def __init__(self, ch_in=4, ch_out=4, ch_mid=64, scale=1.0, blocks=12):
+        super().__init__()
+        self.ch_in = ch_in
+        self.ch_out = ch_out
+        self.ch_mid = ch_mid
+        self.blocks = blocks
+        self.scale = scale
+        self.head = ExtractBlock(self.ch_in, self.ch_mid)
+        self.core = nn.Sequential(
+            nn.Upsample(scale_factor=self.scale, mode="nearest"),
+            *[ResBlock(self.ch_mid) for _ in range(blocks)],
+            nn.BatchNorm2d(self.ch_mid),
+            nn.SiLU(),
+        )
+        self.tail = nn.Conv2d(self.ch_mid, self.ch_out, kernel_size=3, stride=1, padding=1)
+    def forward(self, x):
+        y = self.head(x)
         z = self.core(y)
         return self.tail(z)

 vae_approx_model = None
-vae_approx_filename = os.path.join(path_vae_approx, 'xl-to-v1_interposer-v3.1.safetensors')
+vae_approx_filename = os.path.join(path_vae_approx, 'xl-to-v1_interposer-v4.0.safetensors')

 def parse(x):
@ -72,7 +88,7 @@ def parse(x):
     x_origin = x.clone()

     if vae_approx_model is None:
-        model = Interposer()
+        model = InterposerModel()
         model.eval()
         sd = sf.load_file(vae_approx_filename)
         model.load_state_dict(sd)
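To make the rewrite concrete, a small sketch of the new InterposerModel in isolation (random weights here; Fooocus itself loads xl-to-v1_interposer-v4.0.safetensors through parse above):

import torch

model = InterposerModel()  # defaults: ch_in=4, ch_out=4, ch_mid=64, scale=1.0, blocks=12
model.eval()
with torch.no_grad():
    sdxl_latent = torch.randn(1, 4, 128, 128)  # latent of a 1024x1024 SDXL image
    sd15_latent = model(sdxl_latent)           # mapped toward the SD1.5 latent distribution
print(sd15_latent.shape)  # torch.Size([1, 4, 128, 128]); scale=1.0 keeps the spatial size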

View File

@ -8,11 +8,11 @@
 },
 "outputs": [],
 "source": [
-    "!pip install pygit2==1.12.2\n",
+    "!pip install pygit2==1.15.1\n",
     "%cd /content\n",
     "!git clone https://github.com/lllyasviel/Fooocus.git\n",
     "%cd /content/Fooocus\n",
-    "!python entry_with_update.py --share\n"
+    "!python entry_with_update.py --share --always-high-vram\n"
 ]
 }
 ],

View File

@ -1 +1 @@
-version = '2.1.864'
+version = '2.5.5'

View File

@ -154,12 +154,8 @@ let cancelGenerateForever = function() {
 let generateOnRepeatForButtons = function() {
     generateOnRepeat('#generate_button', '#stop_button');
 };
 appendContextMenuOption('#generate_button', 'Generate forever', generateOnRepeatForButtons);
-// appendContextMenuOption('#stop_button', 'Generate forever', generateOnRepeatForButtons);
-// appendContextMenuOption('#stop_button', 'Cancel generate forever', cancelGenerateForever);
-// appendContextMenuOption('#generate_button', 'Cancel generate forever', cancelGenerateForever);
 })();
 //End example Context Menu Items

View File

@ -80,6 +80,15 @@ function refresh_style_localization() {
     processNode(document.querySelector('.style_selections'));
 }

+function refresh_aspect_ratios_label(value) {
+    label = document.querySelector('#aspect_ratios_accordion div span');
+    translation = getTranslation("Aspect Ratios");
+    if (typeof translation == "undefined") {
+        translation = "Aspect Ratios";
+    }
+    label.textContent = translation + " " + htmlDecode(value);
+}

 function localizeWholePage() {
     processNode(gradioApp());

View File

@ -122,6 +122,43 @@ document.addEventListener("DOMContentLoaded", function() {
     initStylePreviewOverlay();
 });

+var onAppend = function(elem, f) {
+    var observer = new MutationObserver(function(mutations) {
+        mutations.forEach(function(m) {
+            if (m.addedNodes.length) {
+                f(m.addedNodes);
+            }
+        });
+    });
+    observer.observe(elem, {childList: true});
+}
+
+function addObserverIfDesiredNodeAvailable(querySelector, callback) {
+    var elem = document.querySelector(querySelector);
+    if (!elem) {
+        window.setTimeout(() => addObserverIfDesiredNodeAvailable(querySelector, callback), 1000);
+        return;
+    }
+    onAppend(elem, callback);
+}
+
+/**
+ * Show reset button on toast "Connection errored out."
+ */
+addObserverIfDesiredNodeAvailable(".toast-wrap", function(added) {
+    added.forEach(function(element) {
+        if (element.innerText.includes("Connection errored out.")) {
+            window.setTimeout(function() {
+                document.getElementById("reset_button").classList.remove("hidden");
+                document.getElementById("generate_button").classList.add("hidden");
+                document.getElementById("skip_button").classList.add("hidden");
+                document.getElementById("stop_button").classList.add("hidden");
+            });
+        }
+    });
+});

 /**
  * Add a ctrl+enter as a shortcut to start a generation
  */
@ -150,6 +187,9 @@ function initStylePreviewOverlay() {
     let overlayVisible = false;
     const samplesPath = document.querySelector("meta[name='samples-path']").getAttribute("content")
     const overlay = document.createElement('div');
+    const tooltip = document.createElement('div');
+    tooltip.className = 'preview-tooltip';
+    overlay.appendChild(tooltip);
     overlay.id = 'stylePreviewOverlay';
     document.body.appendChild(overlay);
     document.addEventListener('mouseover', function (e) {
@ -165,6 +205,9 @@ function initStylePreviewOverlay() {
             "fooocus_v2",
             name.toLowerCase().replaceAll(" ", "_")
         ).replaceAll("\\", "\\\\")}")`;
+        tooltip.textContent = name;
         function onMouseLeave() {
             overlayVisible = false;
             overlay.style.opacity = "0";
@ -213,3 +256,8 @@ function set_theme(theme) {
         window.location.replace(gradioURL + '?__theme=' + theme);
     }
 }
+
+function htmlDecode(input) {
+    var doc = new DOMParser().parseFromString(input, "text/html");
+    return doc.documentElement.textContent;
+}

View File

@ -642,4 +642,5 @@ onUiLoaded(async() => {
     }

     applyZoomAndPan("#inpaint_canvas");
+    applyZoomAndPan("#inpaint_mask_canvas");
 });

View File

@ -4,12 +4,22 @@
 "Generate": "Generate",
 "Skip": "Skip",
 "Stop": "Stop",
+"Reconnect": "Reconnect",
 "Input Image": "Input Image",
 "Advanced": "Advanced",
 "Upscale or Variation": "Upscale or Variation",
 "Image Prompt": "Image Prompt",
-"Inpaint or Outpaint (beta)": "Inpaint or Outpaint (beta)",
-"Drag above image to here": "Drag above image to here",
+"Inpaint or Outpaint": "Inpaint or Outpaint",
+"Outpaint Direction": "Outpaint Direction",
+"Enable Advanced Masking Features": "Enable Advanced Masking Features",
+"Method": "Method",
+"Describe": "Describe",
+"Content Type": "Content Type",
+"Photograph": "Photograph",
+"Art/Anime": "Art/Anime",
+"Apply Styles": "Apply Styles",
+"Describe this Image into Prompt": "Describe this Image into Prompt",
+"Image Size and Recommended Size": "Image Size and Recommended Size",
 "Upscale or Variation:": "Upscale or Variation:",
 "Disabled": "Disabled",
 "Vary (Subtle)": "Vary (Subtle)",
@ -17,7 +27,7 @@
 "Upscale (1.5x)": "Upscale (1.5x)",
 "Upscale (2x)": "Upscale (2x)",
 "Upscale (Fast 2x)": "Upscale (Fast 2x)",
-"\ud83d\udcd4 Document": "\uD83D\uDCD4 Document",
+"\ud83d\udcd4 Documentation": "\uD83D\uDCD4 Documentation",
 "Image": "Image",
 "Stop At": "Stop At",
 "Weight": "Weight",
@ -36,11 +46,17 @@
 "Top": "Top",
 "Bottom": "Bottom",
 "* \"Inpaint or Outpaint\" is powered by the sampler \"DPMPP Fooocus Seamless 2M SDE Karras Inpaint Sampler\" (beta)": "* \"Inpaint or Outpaint\" is powered by the sampler \"DPMPP Fooocus Seamless 2M SDE Karras Inpaint Sampler\" (beta)",
-"Setting": "Setting",
+"Advanced options": "Advanced options",
+"Generate mask from image": "Generate mask from image",
+"Settings": "Settings",
 "Style": "Style",
+"Styles": "Styles",
+"Preset": "Preset",
 "Performance": "Performance",
 "Speed": "Speed",
 "Quality": "Quality",
+"Extreme Speed": "Extreme Speed",
+"Lightning": "Lightning",
 "Aspect Ratios": "Aspect Ratios",
 "width \u00d7 height": "width \u00d7 height",
 "Image Number": "Image Number",
@ -48,9 +64,18 @@
 "Describing what you do not want to see.": "Describing what you do not want to see.",
 "Random": "Random",
 "Seed": "Seed",
+"Disable seed increment": "Disable seed increment",
+"Disable automatic seed increment when image number is > 1.": "Disable automatic seed increment when image number is > 1.",
+"Read wildcards in order": "Read wildcards in order",
+"Black Out NSFW": "Black Out NSFW",
+"Use black image if NSFW is detected.": "Use black image if NSFW is detected.",
+"Save only final enhanced image": "Save only final enhanced image",
+"Save Metadata to Images": "Save Metadata to Images",
+"Adds parameters to generated images allowing manual regeneration.": "Adds parameters to generated images allowing manual regeneration.",
 "\ud83d\udcda History Log": "\uD83D\uDCDA History Log",
 "Image Style": "Image Style",
 "Fooocus V2": "Fooocus V2",
+"Random Style": "Random Style",
 "Default (Slightly Cinematic)": "Default (Slightly Cinematic)",
 "Fooocus Masterpiece": "Fooocus Masterpiece",
 "Fooocus Photograph": "Fooocus Photograph",
@ -262,7 +287,7 @@
 "Volumetric Lighting": "Volumetric Lighting",
 "Watercolor 2": "Watercolor 2",
 "Whimsical And Playful": "Whimsical And Playful",
-"Model": "Model",
+"Models": "Models",
 "Base Model (SDXL only)": "Base Model (SDXL only)",
 "sd_xl_base_1.0_0.9vae.safetensors": "sd_xl_base_1.0_0.9vae.safetensors",
 "bluePencilXL_v009.safetensors": "bluePencilXL_v009.safetensors",
@ -303,6 +328,8 @@
 "vae": "vae",
 "CFG Mimicking from TSNR": "CFG Mimicking from TSNR",
 "Enabling Fooocus's implementation of CFG mimicking for TSNR (effective when real CFG > mimicked CFG).": "Enabling Fooocus's implementation of CFG mimicking for TSNR (effective when real CFG > mimicked CFG).",
+"CLIP Skip": "CLIP Skip",
+"Bypass CLIP layers to avoid overfitting (use 1 to not skip any layers, 2 is recommended).": "Bypass CLIP layers to avoid overfitting (use 1 to not skip any layers, 2 is recommended).",
 "Sampler": "Sampler",
 "dpmpp_2m_sde_gpu": "dpmpp_2m_sde_gpu",
 "Only effective in non-inpaint mode.": "Only effective in non-inpaint mode.",
@ -333,6 +360,8 @@
 "sgm_uniform": "sgm_uniform",
 "simple": "simple",
 "ddim_uniform": "ddim_uniform",
+"VAE": "VAE",
+"Default (model)": "Default (model)",
 "Forced Overwrite of Sampling Step": "Forced Overwrite of Sampling Step",
 "Set as -1 to disable. For developer debugging.": "Set as -1 to disable. For developer debugging.",
 "Forced Overwrite of Refiner Switch Step": "Forced Overwrite of Refiner Switch Step",
@ -342,10 +371,18 @@
 "Forced Overwrite of Denoising Strength of \"Vary\"": "Forced Overwrite of Denoising Strength of \"Vary\"",
 "Set as negative number to disable. For developer debugging.": "Set as negative number to disable. For developer debugging.",
 "Forced Overwrite of Denoising Strength of \"Upscale\"": "Forced Overwrite of Denoising Strength of \"Upscale\"",
+"Disable Preview": "Disable Preview",
+"Disable preview during generation.": "Disable preview during generation.",
+"Disable Intermediate Results": "Disable Intermediate Results",
+"Disable intermediate results during generation, only show final gallery.": "Disable intermediate results during generation, only show final gallery.",
+"Debug Inpaint Preprocessing": "Debug Inpaint Preprocessing",
+"Debug GroundingDINO": "Debug GroundingDINO",
+"Used for SAM object detection and box generation": "Used for SAM object detection and box generation",
+"GroundingDINO Box Erode or Dilate": "GroundingDINO Box Erode or Dilate",
 "Inpaint Engine": "Inpaint Engine",
 "v1": "v1",
+"Version of Fooocus inpaint model": "Version of Fooocus inpaint model",
 "v2.5": "v2.5",
+"v2.6": "v2.6",
 "Control Debug": "Control Debug",
 "Debug Preprocessors": "Debug Preprocessors",
 "Mixing Image Prompt and Vary/Upscale": "Mixing Image Prompt and Vary/Upscale",
@ -361,12 +398,88 @@
 "B2": "B2",
 "S1": "S1",
 "S2": "S2",
-"Extreme Speed": "Extreme Speed",
 "\uD83D\uDD0E Type here to search styles ...": "\uD83D\uDD0E Type here to search styles ...",
 "Type prompt here.": "Type prompt here.",
 "Outpaint Expansion Direction:": "Outpaint Expansion Direction:",
 "* Powered by Fooocus Inpaint Engine (beta)": "* Powered by Fooocus Inpaint Engine (beta)",
 "Fooocus Enhance": "Fooocus Enhance",
 "Fooocus Cinematic": "Fooocus Cinematic",
-"Fooocus Sharp": "Fooocus Sharp"
+"Fooocus Sharp": "Fooocus Sharp",
+"For images created by Fooocus": "For images created by Fooocus",
+"Metadata": "Metadata",
+"Apply Metadata": "Apply Metadata",
+"Metadata Scheme": "Metadata Scheme",
+"Image Prompt parameters are not included. Use png and a1111 for compatibility with Civitai.": "Image Prompt parameters are not included. Use png and a1111 for compatibility with Civitai.",
+"fooocus (json)": "fooocus (json)",
+"a1111 (plain text)": "a1111 (plain text)",
+"Unsupported image type in input": "Unsupported image type in input",
+"Enhance": "Enhance",
+"Detection prompt": "Detection prompt",
+"Detection Prompt Quick List": "Detection Prompt Quick List",
+"Maximum number of detections": "Maximum number of detections",
+"Use with Enhance, skips image generation": "Use with Enhance, skips image generation",
+"Order of Processing": "Order of Processing",
+"Use before to enhance small details and after to enhance large areas.": "Use before to enhance small details and after to enhance large areas.",
+"Before First Enhancement": "Before First Enhancement",
+"After Last Enhancement": "After Last Enhancement",
+"Prompt Type": "Prompt Type",
+"Choose which prompt to use for Upscale or Variation.": "Choose which prompt to use for Upscale or Variation.",
+"Original Prompts": "Original Prompts",
+"Last Filled Enhancement Prompts": "Last Filled Enhancement Prompts",
+"Enable": "Enable",
+"Describe what you want to detect.": "Describe what you want to detect.",
+"Enhancement positive prompt": "Enhancement positive prompt",
+"Uses original prompt instead if empty.": "Uses original prompt instead if empty.",
+"Enhancement negative prompt": "Enhancement negative prompt",
+"Uses original negative prompt instead if empty.": "Uses original negative prompt instead if empty.",
+"Detection": "Detection",
+"u2net": "u2net",
+"u2netp": "u2netp",
+"u2net_human_seg": "u2net_human_seg",
+"u2net_cloth_seg": "u2net_cloth_seg",
+"silueta": "silueta",
+"isnet-general-use": "isnet-general-use",
+"isnet-anime": "isnet-anime",
+"sam": "sam",
+"Mask generation model": "Mask generation model",
+"Cloth category": "Cloth category",
+"Use singular whenever possible": "Use singular whenever possible",
+"full": "full",
+"upper": "upper",
+"lower": "lower",
+"SAM Options": "SAM Options",
+"SAM model": "SAM model",
+"vit_b": "vit_b",
+"vit_l": "vit_l",
+"vit_h": "vit_h",
+"Box Threshold": "Box Threshold",
+"Text Threshold": "Text Threshold",
+"Set to 0 to detect all": "Set to 0 to detect all",
+"Inpaint": "Inpaint",
+"Inpaint or Outpaint (default)": "Inpaint or Outpaint (default)",
+"Improve Detail (face, hand, eyes, etc.)": "Improve Detail (face, hand, eyes, etc.)",
+"Modify Content (add objects, change background, etc.)": "Modify Content (add objects, change background, etc.)",
+"Disable initial latent in inpaint": "Disable initial latent in inpaint",
+"Version of Fooocus inpaint model. If set, use performance Quality or Speed (no performance LoRAs) for best results.": "Version of Fooocus inpaint model. If set, use performance Quality or Speed (no performance LoRAs) for best results.",
+"Inpaint Denoising Strength": "Inpaint Denoising Strength",
+"Same as the denoising strength in A1111 inpaint. Only used in inpaint, not used in outpaint. (Outpaint always use 1.0)": "Same as the denoising strength in A1111 inpaint. Only used in inpaint, not used in outpaint. (Outpaint always use 1.0)",
+"Inpaint Respective Field": "Inpaint Respective Field",
+"The area to inpaint. Value 0 is same as \"Only Masked\" in A1111. Value 1 is same as \"Whole Image\" in A1111. Only used in inpaint, not used in outpaint. (Outpaint always use 1.0)": "The area to inpaint. Value 0 is same as \"Only Masked\" in A1111. Value 1 is same as \"Whole Image\" in A1111. Only used in inpaint, not used in outpaint. (Outpaint always use 1.0)",
+"Mask Erode or Dilate": "Mask Erode or Dilate",
+"Positive value will make white area in the mask larger, negative value will make white area smaller. (default is 0, always processed before any mask invert)": "Positive value will make white area in the mask larger, negative value will make white area smaller. (default is 0, always processed before any mask invert)",
+"Invert Mask When Generating": "Invert Mask When Generating",
+"Debug Enhance Masks": "Debug Enhance Masks",
+"Show enhance masks in preview and final results": "Show enhance masks in preview and final results",
+"Use GroundingDINO boxes instead of more detailed SAM masks": "Use GroundingDINO boxes instead of more detailed SAM masks",
+"highly detailed face": "highly detailed face",
+"detailed girl face": "detailed girl face",
+"detailed man face": "detailed man face",
+"detailed hand": "detailed hand",
+"beautiful eyes": "beautiful eyes",
+"face": "face",
+"eye": "eye",
+"mouth": "mouth",
+"hair": "hair",
+"hand": "hand",
+"body": "body"
 }

View File

@ -1,6 +1,6 @@
 import os
+import sys
 import ssl
-import sys

 print('[System ARGV] ' + str(sys.argv))
@ -10,19 +10,17 @@ os.chdir(root)
 os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
 os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"
-if "GRADIO_SERVER_PORT" not in os.environ:
-    os.environ["GRADIO_SERVER_PORT"] = "7865"
+os.environ["GRADIO_SERVER_PORT"] = "7865"

 ssl._create_default_https_context = ssl._create_unverified_context

 import platform
 import fooocus_version
 from build_launcher import build_launcher
-from modules.launch_util import is_installed, run, python, run_pip, requirements_met
+from modules.launch_util import is_installed, run, python, run_pip, requirements_met, delete_folder_content
 from modules.model_loader import load_file_from_url
-from modules import config

 REINSTALL_ALL = False
 TRY_INSTALL_XFORMERS = False
@ -42,7 +40,7 @@ def prepare_environment():
     if TRY_INSTALL_XFORMERS:
         if REINSTALL_ALL or not is_installed("xformers"):
-            xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.20')
+            xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.23')
             if platform.system() == "Windows":
                 if platform.python_version().startswith("3.10"):
                     run_pip(f"install -U -I --no-deps {xformers_package}", "xformers", live=True)
@ -64,8 +62,8 @@ def prepare_environment():
 vae_approx_filenames = [
     ('xlvaeapp.pth', 'https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth'),
     ('vaeapp_sd15.pth', 'https://huggingface.co/lllyasviel/misc/resolve/main/vaeapp_sd15.pt'),
-    ('xl-to-v1_interposer-v3.1.safetensors',
-     'https://huggingface.co/lllyasviel/misc/resolve/main/xl-to-v1_interposer-v3.1.safetensors')
+    ('xl-to-v1_interposer-v4.0.safetensors',
+     'https://huggingface.co/mashb1t/misc/resolve/main/xl-to-v1_interposer-v4.0.safetensors')
 ]
@ -78,13 +76,33 @@ prepare_environment()
 build_launcher()
 args = ini_args()

 if args.gpu_device_id is not None:
     os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu_device_id)
     print("Set device to:", args.gpu_device_id)

+if args.hf_mirror is not None:
+    os.environ['HF_MIRROR'] = str(args.hf_mirror)
+    print("Set hf_mirror to:", args.hf_mirror)
+
+from modules import config
+from modules.hash_cache import init_cache
+
+os.environ["U2NET_HOME"] = config.path_inpaint
+os.environ['GRADIO_TEMP_DIR'] = config.temp_path
+
+if config.temp_path_cleanup_on_launch:
+    print(f'[Cleanup] Attempting to delete content of temp dir {config.temp_path}')
+    result = delete_folder_content(config.temp_path, '[Cleanup] ')
+    if result:
+        print("[Cleanup] Cleanup successful")
+    else:
+        print(f"[Cleanup] Failed to delete content of temp dir.")

-def download_models():
+def download_models(default_model, previous_default_models, checkpoint_downloads, embeddings_downloads, lora_downloads, vae_downloads):
+    from modules.util import get_file_from_folder_list

     for file_name, url in vae_approx_filenames:
         load_file_from_url(url=url, model_dir=config.path_vae_approx, file_name=file_name)
@ -96,31 +114,39 @@ def download_models():
     if args.disable_preset_download:
         print('Skipped model download.')
-        return
+        return default_model, checkpoint_downloads

     if not args.always_download_new_model:
-        if not os.path.exists(os.path.join(config.path_checkpoints, config.default_base_model_name)):
-            for alternative_model_name in config.previous_default_models:
-                if os.path.exists(os.path.join(config.path_checkpoints, alternative_model_name)):
-                    print(f'You do not have [{config.default_base_model_name}] but you have [{alternative_model_name}].')
+        if not os.path.isfile(get_file_from_folder_list(default_model, config.paths_checkpoints)):
+            for alternative_model_name in previous_default_models:
+                if os.path.isfile(get_file_from_folder_list(alternative_model_name, config.paths_checkpoints)):
+                    print(f'You do not have [{default_model}] but you have [{alternative_model_name}].')
                     print(f'Fooocus will use [{alternative_model_name}] to avoid downloading new models, '
-                          f'but you are not using latest models.')
+                          f'but you are not using the latest models.')
                     print('Use --always-download-new-model to avoid fallback and always get new models.')
-                    config.checkpoint_downloads = {}
-                    config.default_base_model_name = alternative_model_name
+                    checkpoint_downloads = {}
+                    default_model = alternative_model_name
                     break

-    for file_name, url in config.checkpoint_downloads.items():
-        load_file_from_url(url=url, model_dir=config.path_checkpoints, file_name=file_name)
-    for file_name, url in config.embeddings_downloads.items():
+    for file_name, url in checkpoint_downloads.items():
+        model_dir = os.path.dirname(get_file_from_folder_list(file_name, config.paths_checkpoints))
+        load_file_from_url(url=url, model_dir=model_dir, file_name=file_name)
+    for file_name, url in embeddings_downloads.items():
         load_file_from_url(url=url, model_dir=config.path_embeddings, file_name=file_name)
-    for file_name, url in config.lora_downloads.items():
-        load_file_from_url(url=url, model_dir=config.path_loras, file_name=file_name)
+    for file_name, url in lora_downloads.items():
+        model_dir = os.path.dirname(get_file_from_folder_list(file_name, config.paths_loras))
+        load_file_from_url(url=url, model_dir=model_dir, file_name=file_name)
+    for file_name, url in vae_downloads.items():
+        load_file_from_url(url=url, model_dir=config.path_vae, file_name=file_name)

-    return
+    return default_model, checkpoint_downloads

-download_models()
+config.default_base_model_name, config.checkpoint_downloads = download_models(
+    config.default_base_model_name, config.previous_default_models, config.checkpoint_downloads,
+    config.embeddings_downloads, config.lora_downloads, config.vae_downloads)
+
+config.update_files()
+init_cache(config.model_filenames, config.paths_checkpoints, config.lora_filenames, config.paths_loras)

 from webui import *
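The refactor above threads explicit arguments through download_models and resolves files across multiple model directories. A sketch of the lookup semantics it relies on (inferred from usage here, not the actual code of modules/util.get_file_from_folder_list):

import os

def get_file_from_folder_list_sketch(file_name, folders):
    # Return the first existing match across the configured folders,
    # falling back to a path inside the first folder (e.g. as a download target).
    for folder in folders:
        candidate = os.path.abspath(os.path.join(folder, file_name))
        if os.path.isfile(candidate):
            return candidate
    return os.path.abspath(os.path.join(folders[0], file_name))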

View File

@ -0,0 +1,55 @@
# https://github.com/comfyanonymous/ComfyUI/blob/master/nodes.py
#from: https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/howto.html
import numpy as np
import torch
def loglinear_interp(t_steps, num_steps):
    """
    Performs log-linear interpolation of a given array of decreasing numbers.
    """
    xs = np.linspace(0, 1, len(t_steps))
    ys = np.log(t_steps[::-1])
    new_xs = np.linspace(0, 1, num_steps)
    new_ys = np.interp(new_xs, xs, ys)
    interped_ys = np.exp(new_ys)[::-1].copy()
    return interped_ys


NOISE_LEVELS = {"SD1": [14.6146412293, 6.4745760956, 3.8636745985, 2.6946151520, 1.8841921177, 1.3943805092, 0.9642583904, 0.6523686016, 0.3977456272, 0.1515232662, 0.0291671582],
                "SDXL": [14.6146412293, 6.3184485287, 3.7681790315, 2.1811480769, 1.3405244945, 0.8620721141, 0.5550693289, 0.3798540708, 0.2332364134, 0.1114188177, 0.0291671582],
                "SVD": [700.00, 54.5, 15.886, 7.977, 4.248, 1.789, 0.981, 0.403, 0.173, 0.034, 0.002]}


class AlignYourStepsScheduler:
    @classmethod
    def INPUT_TYPES(s):
        return {"required":
                    {"model_type": (["SD1", "SDXL", "SVD"], ),
                     "steps": ("INT", {"default": 10, "min": 10, "max": 10000}),
                     "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
                     }
                }

    RETURN_TYPES = ("SIGMAS",)
    CATEGORY = "sampling/custom_sampling/schedulers"
    FUNCTION = "get_sigmas"

    def get_sigmas(self, model_type, steps, denoise):
        total_steps = steps
        if denoise < 1.0:
            if denoise <= 0.0:
                return (torch.FloatTensor([]),)
            total_steps = round(steps * denoise)

        sigmas = NOISE_LEVELS[model_type][:]
        if (steps + 1) != len(sigmas):
            sigmas = loglinear_interp(sigmas, steps + 1)

        sigmas = sigmas[-(total_steps + 1):]
        sigmas[-1] = 0
        return (torch.FloatTensor(sigmas), )


NODE_CLASS_MAPPINGS = {
    "AlignYourStepsScheduler": AlignYourStepsScheduler,
}
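Example of what the scheduler above computes: the published 10-step Align Your Steps schedules are resampled log-linearly to arbitrary step counts. A quick check, assuming the module above is importable:

sigmas_21 = loglinear_interp(NOISE_LEVELS["SDXL"], 20 + 1)
print(len(sigmas_21))               # 21 sigma values bound 20 sampling steps
print(sigmas_21[0], sigmas_21[-1])  # ~14.615 down to ~0.0292 before the scheduler overwrites the last entry with 0

scheduler = AlignYourStepsScheduler()
(sigmas,) = scheduler.get_sigmas("SDXL", steps=20, denoise=1.0)  # node-style call; returned tensor ends with sigma 0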

View File

@ -78,7 +78,7 @@ def spatial_gradient(input, normalized: bool = True):
     Return:
         the derivatives of the input feature map. with shape :math:`(B, C, 2, H, W)`.
     .. note::
-       See a working example `here <https://kornia-tutorials.readthedocs.io/en/latest/
+       See a working example `here <https://kornia.readthedocs.io/en/latest/
        filtering_edges.html>`__.
     Examples:
         >>> input = torch.rand(1, 3, 4, 4)
@ -120,7 +120,7 @@ def rgb_to_grayscale(image, rgb_weights = None):
         grayscale version of the image with shape :math:`(*,1,H,W)`.
     .. note::
-       See a working example `here <https://kornia-tutorials.readthedocs.io/en/latest/
+       See a working example `here <https://kornia.readthedocs.io/en/latest/
        color_conversions.html>`__.
     Example:
@ -176,7 +176,7 @@ def canny(
         - the canny edge magnitudes map, shape of :math:`(B,1,H,W)`.
         - the canny edge detection filtered by thresholds and hysteresis, shape of :math:`(B,1,H,W)`.
     .. note::
-       See a working example `here <https://kornia-tutorials.readthedocs.io/en/latest/
+       See a working example `here <https://kornia.readthedocs.io/en/latest/
        canny.html>`__.
     Example:
         >>> input = torch.rand(5, 3, 4, 4)

View File

@ -107,8 +107,7 @@ class SDTurboScheduler:
     def get_sigmas(self, model, steps, denoise):
         start_step = 10 - int(10 * denoise)
         timesteps = torch.flip(torch.arange(1, 11) * 100 - 1, (0,))[start_step:start_step + steps]
-        ldm_patched.modules.model_management.load_models_gpu([model])
-        sigmas = model.model.model_sampling.sigma(timesteps)
+        sigmas = model.model_sampling.sigma(timesteps)
         sigmas = torch.cat([sigmas, sigmas.new_zeros([1])])
         return (sigmas, )
@ -230,6 +229,25 @@ class SamplerDPMPP_SDE:
         sampler = ldm_patched.modules.samplers.ksampler(sampler_name, {"eta": eta, "s_noise": s_noise, "r": r})
         return (sampler, )

+class SamplerTCD:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {
+            "required": {
+                "eta": ("FLOAT", {"default": 0.3, "min": 0.0, "max": 1.0, "step": 0.01}),
+            }
+        }
+    RETURN_TYPES = ("SAMPLER",)
+    CATEGORY = "sampling/custom_sampling/samplers"
+    FUNCTION = "get_sampler"
+
+    def get_sampler(self, eta=0.3):
+        sampler = ldm_patched.modules.samplers.ksampler("tcd", {"eta": eta})
+        return (sampler, )

 class SamplerCustom:
     @classmethod
     def INPUT_TYPES(s):
@ -292,6 +310,7 @@ NODE_CLASS_MAPPINGS = {
     "KSamplerSelect": KSamplerSelect,
     "SamplerDPMPP_2M_SDE": SamplerDPMPP_2M_SDE,
     "SamplerDPMPP_SDE": SamplerDPMPP_SDE,
+    "SamplerTCD": SamplerTCD,
     "SplitSigmas": SplitSigmas,
     "FlipSigmas": FlipSigmas,
 }
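A short node-style usage sketch for the SamplerTCD class added above:

sampler_node = SamplerTCD()
(sampler,) = sampler_node.get_sampler(eta=0.3)  # eta=0 is fully deterministic; larger eta re-noises more per step
# `sampler` plugs into SamplerCustom exactly like the DPM++ sampler objects above.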

View File

@ -70,7 +70,7 @@ class ModelSamplingDiscrete:
     @classmethod
     def INPUT_TYPES(s):
         return {"required": { "model": ("MODEL",),
-                              "sampling": (["eps", "v_prediction", "lcm"],),
+                              "sampling": (["eps", "v_prediction", "lcm", "tcd"]),
                               "zsnr": ("BOOLEAN", {"default": False}),
                               }}
@ -90,6 +90,9 @@ class ModelSamplingDiscrete:
         elif sampling == "lcm":
             sampling_type = LCM
             sampling_base = ModelSamplingDiscreteDistilled
+        elif sampling == "tcd":
+            sampling_type = ldm_patched.modules.model_sampling.EPS
+            sampling_base = ModelSamplingDiscreteDistilled

         class ModelSamplingAdvanced(sampling_base, sampling_type):
             pass
@ -105,7 +108,7 @@ class ModelSamplingContinuousEDM:
     @classmethod
     def INPUT_TYPES(s):
         return {"required": { "model": ("MODEL",),
-                              "sampling": (["v_prediction", "eps"],),
+                              "sampling": (["v_prediction", "edm_playground_v2.5", "eps"],),
                               "sigma_max": ("FLOAT", {"default": 120.0, "min": 0.0, "max": 1000.0, "step":0.001, "round": False}),
                               "sigma_min": ("FLOAT", {"default": 0.002, "min": 0.0, "max": 1000.0, "step":0.001, "round": False}),
                               }}
@ -118,17 +121,25 @@ class ModelSamplingContinuousEDM:
     def patch(self, model, sampling, sigma_max, sigma_min):
         m = model.clone()

+        latent_format = None
+        sigma_data = 1.0
         if sampling == "eps":
             sampling_type = ldm_patched.modules.model_sampling.EPS
         elif sampling == "v_prediction":
             sampling_type = ldm_patched.modules.model_sampling.V_PREDICTION
+        elif sampling == "edm_playground_v2.5":
+            sampling_type = ldm_patched.modules.model_sampling.EDM
+            sigma_data = 0.5
+            latent_format = ldm_patched.modules.latent_formats.SDXL_Playground_2_5()

         class ModelSamplingAdvanced(ldm_patched.modules.model_sampling.ModelSamplingContinuousEDM, sampling_type):
             pass

         model_sampling = ModelSamplingAdvanced(model.model.model_config)
-        model_sampling.set_sigma_range(sigma_min, sigma_max)
+        model_sampling.set_parameters(sigma_min, sigma_max, sigma_data)
         m.add_object_patch("model_sampling", model_sampling)
+        if latent_format is not None:
+            m.add_object_patch("latent_format", latent_format)
         return (m, )
class RescaleCFG: class RescaleCFG:

View File

@ -752,7 +752,6 @@ def sample_lcm(model, x, sigmas, extra_args=None, callback=None, disable=None, n
return x return x
@torch.no_grad() @torch.no_grad()
def sample_heunpp2(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.): def sample_heunpp2(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
# From MIT licensed: https://github.com/Carzit/sd-webui-samplers-scheduler/ # From MIT licensed: https://github.com/Carzit/sd-webui-samplers-scheduler/
@ -808,3 +807,102 @@ def sample_heunpp2(model, x, sigmas, extra_args=None, callback=None, disable=Non
d_prime = w1 * d + w2 * d_2 + w3 * d_3 d_prime = w1 * d + w2 * d_2 + w3 * d_3
x = x + d_prime * dt x = x + d_prime * dt
return x return x
@torch.no_grad()
def sample_tcd(model, x, sigmas, extra_args=None, callback=None, disable=None, noise_sampler=None, eta=0.3):
extra_args = {} if extra_args is None else extra_args
noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
s_in = x.new_ones([x.shape[0]])
model_sampling = model.inner_model.inner_model.model_sampling
timesteps_s = torch.floor((1 - eta) * model_sampling.timestep(sigmas)).to(dtype=torch.long).detach().cpu()
timesteps_s[-1] = 0
alpha_prod_s = model_sampling.alphas_cumprod[timesteps_s]
beta_prod_s = 1 - alpha_prod_s
for i in trange(len(sigmas) - 1, disable=disable):
denoised = model(x, sigmas[i] * s_in, **extra_args) # predicted_original_sample
eps = (x - denoised) / sigmas[i]
denoised = alpha_prod_s[i + 1].sqrt() * denoised + beta_prod_s[i + 1].sqrt() * eps
if callback is not None:
callback({"x": x, "i": i, "sigma": sigmas[i], "sigma_hat": sigmas[i], "denoised": denoised})
x = denoised
if eta > 0 and sigmas[i + 1] > 0:
noise = noise_sampler(sigmas[i], sigmas[i + 1])
x = x / alpha_prod_s[i+1].sqrt() + noise * (sigmas[i+1]**2 + 1 - 1/alpha_prod_s[i+1]).sqrt()
else:
x *= torch.sqrt(1.0 + sigmas[i + 1] ** 2)
return x
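The eta parameter of sample_tcd above controls how far each step jumps back to an earlier "student" timestep before re-noising; eta=0 gives a fully deterministic trajectory. A small standalone sketch of that remapping (the 1000-step discrete timesteps are assumed for illustration):

import torch

eta = 0.3
timesteps = torch.tensor([999, 749, 499, 249, 0])
# floor((1 - eta) * t), with the final step pinned to 0, as in sample_tcd
timesteps_s = torch.floor((1 - eta) * timesteps).to(torch.long)
timesteps_s[-1] = 0
print(timesteps_s.tolist())  # [699, 524, 349, 174, 0]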
@torch.no_grad()
def sample_restart(model, x, sigmas, extra_args=None, callback=None, disable=None, s_noise=1., restart_list=None):
"""Implements restart sampling in Restart Sampling for Improving Generative Processes (2023)
Restart_list format: {min_sigma: [ restart_steps, restart_times, max_sigma]}
If restart_list is None: will choose restart_list automatically, otherwise will use the given restart_list
"""
extra_args = {} if extra_args is None else extra_args
s_in = x.new_ones([x.shape[0]])
step_id = 0
def heun_step(x, old_sigma, new_sigma, second_order=True):
nonlocal step_id
denoised = model(x, old_sigma * s_in, **extra_args)
d = to_d(x, old_sigma, denoised)
if callback is not None:
callback({'x': x, 'i': step_id, 'sigma': new_sigma, 'sigma_hat': old_sigma, 'denoised': denoised})
dt = new_sigma - old_sigma
if new_sigma == 0 or not second_order:
# Euler method
x = x + d * dt
else:
# Heun's method
x_2 = x + d * dt
denoised_2 = model(x_2, new_sigma * s_in, **extra_args)
d_2 = to_d(x_2, new_sigma, denoised_2)
d_prime = (d + d_2) / 2
x = x + d_prime * dt
step_id += 1
return x
steps = sigmas.shape[0] - 1
if restart_list is None:
if steps >= 20:
restart_steps = 9
restart_times = 1
if steps >= 36:
restart_steps = steps // 4
restart_times = 2
sigmas = get_sigmas_karras(steps - restart_steps * restart_times, sigmas[-2].item(), sigmas[0].item(), device=sigmas.device)
restart_list = {0.1: [restart_steps + 1, restart_times, 2]}
else:
restart_list = {}
restart_list = {int(torch.argmin(abs(sigmas - key), dim=0)): value for key, value in restart_list.items()}
step_list = []
for i in range(len(sigmas) - 1):
step_list.append((sigmas[i], sigmas[i + 1]))
if i + 1 in restart_list:
restart_steps, restart_times, restart_max = restart_list[i + 1]
min_idx = i + 1
max_idx = int(torch.argmin(abs(sigmas - restart_max), dim=0))
if max_idx < min_idx:
sigma_restart = get_sigmas_karras(restart_steps, sigmas[min_idx].item(), sigmas[max_idx].item(), device=sigmas.device)[:-1]
while restart_times > 0:
restart_times -= 1
step_list.extend(zip(sigma_restart[:-1], sigma_restart[1:]))
last_sigma = None
for old_sigma, new_sigma in tqdm(step_list, disable=disable):
if last_sigma is None:
last_sigma = old_sigma
elif last_sigma < old_sigma:
x = x + torch.randn_like(x) * s_noise * (old_sigma ** 2 - last_sigma ** 2) ** 0.5
x = heun_step(x, old_sigma, new_sigma)
last_sigma = new_sigma
return x
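A hedged usage sketch of the restart_list format documented in the docstring above; the values are illustrative, not recommendations:

# {min_sigma: [restart_steps, restart_times, max_sigma]}: when the main
# trajectory reaches min_sigma, re-noise up to max_sigma and run
# restart_steps Heun steps, repeated restart_times times.
restart_list = {0.1: [10, 2, 2.0]}

# With restart_list=None the sampler picks values itself, e.g. a 30-step
# run yields {0.1: [10, 1, 2]} (restart_steps = 9 + 1, restart_times = 1).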

View File

@ -8,7 +8,7 @@ class CLIPEmbeddingNoiseAugmentation(ImageConcatWithNoiseAugmentation):
        if clip_stats_path is None:
            clip_mean, clip_std = torch.zeros(timestep_dim), torch.ones(timestep_dim)
        else:
-           clip_mean, clip_std = torch.load(clip_stats_path, map_location="cpu")
+           clip_mean, clip_std = torch.load(clip_stats_path, map_location="cpu", weights_only=True)
        self.register_buffer("data_mean", clip_mean[None, :], persistent=False)
        self.register_buffer("data_std", clip_std[None, :], persistent=False)
        self.time_embed = Timestep(timestep_dim)
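The recurring weights_only=True additions in this changeset harden checkpoint loading: torch.load then accepts only tensors and plain containers instead of arbitrary pickled objects, so a tampered file cannot run code at load time. A runnable sketch (the .pt file is a stand-in created on the spot):

import torch

torch.save((torch.zeros(4), torch.ones(4)), "clip_stats_demo.pt")  # stand-in file
clip_mean, clip_std = torch.load("clip_stats_demo.pt", map_location="cpu", weights_only=True)
print(clip_mean.shape, clip_std.shape)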

View File

@ -37,6 +37,7 @@ parser.add_argument("--listen", type=str, default="127.0.0.1", metavar="IP", nar
parser.add_argument("--port", type=int, default=8188) parser.add_argument("--port", type=int, default=8188)
parser.add_argument("--disable-header-check", type=str, default=None, metavar="ORIGIN", nargs="?", const="*") parser.add_argument("--disable-header-check", type=str, default=None, metavar="ORIGIN", nargs="?", const="*")
parser.add_argument("--web-upload-size", type=float, default=100) parser.add_argument("--web-upload-size", type=float, default=100)
parser.add_argument("--hf-mirror", type=str, default=None)
parser.add_argument("--external-working-path", type=str, default=None, metavar="PATH", nargs='+', action='append') parser.add_argument("--external-working-path", type=str, default=None, metavar="PATH", nargs='+', action='append')
parser.add_argument("--output-path", type=str, default=None) parser.add_argument("--output-path", type=str, default=None)
@ -100,8 +101,7 @@ vram_group.add_argument("--always-high-vram", action="store_true")
vram_group.add_argument("--always-normal-vram", action="store_true") vram_group.add_argument("--always-normal-vram", action="store_true")
vram_group.add_argument("--always-low-vram", action="store_true") vram_group.add_argument("--always-low-vram", action="store_true")
vram_group.add_argument("--always-no-vram", action="store_true") vram_group.add_argument("--always-no-vram", action="store_true")
vram_group.add_argument("--always-cpu", action="store_true") vram_group.add_argument("--always-cpu", type=int, nargs="?", metavar="CPU_NUM_THREADS", const=-1)
parser.add_argument("--always-offload-from-vram", action="store_true") parser.add_argument("--always-offload-from-vram", action="store_true")
parser.add_argument("--pytorch-deterministic", action="store_true") parser.add_argument("--pytorch-deterministic", action="store_true")

View File

@ -3,8 +3,6 @@ import math

import ldm_patched.modules.utils

-def lcm(a, b): #TODO: eventually replace by math.lcm (added in python3.9)
-    return abs(a*b) // math.gcd(a, b)

class CONDRegular:
    def __init__(self, cond):

@ -41,7 +39,7 @@ class CONDCrossAttn(CONDRegular):
        if s1[0] != s2[0] or s1[2] != s2[2]: #these 2 cases should not happen
            return False

-       mult_min = lcm(s1[1], s2[1])
+       mult_min = math.lcm(s1[1], s2[1])
        diff = mult_min // min(s1[1], s2[1])
        if diff > 4: #arbitrary limit on the padding because it's probably going to impact performance negatively if it's too much
            return False

@ -52,7 +50,7 @@ class CONDCrossAttn(CONDRegular):
        crossattn_max_len = self.cond.shape[1]
        for x in others:
            c = x.cond
-           crossattn_max_len = lcm(crossattn_max_len, c.shape[1])
+           crossattn_max_len = math.lcm(crossattn_max_len, c.shape[1])
            conds.append(c)
        out = []
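The local lcm helper is dropped in favor of math.lcm (available since Python 3.9); behavior is unchanged. A quick check of the padding guard with typical CLIP token lengths (77 and 154 are illustrative values):

import math

s1_len, s2_len = 77, 154
mult_min = math.lcm(s1_len, s2_len)     # 154
diff = mult_min // min(s1_len, s2_len)  # 2, well under the limit of 4
print(mult_min, diff, diff <= 4)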

View File

@ -1,3 +1,4 @@
+import torch

class LatentFormat:
    scale_factor = 1.0

@ -34,6 +35,70 @@ class SDXL(LatentFormat):
        ]
        self.taesd_decoder_name = "taesdxl_decoder"
class SDXL_Playground_2_5(LatentFormat):
def __init__(self):
self.scale_factor = 0.5
self.latents_mean = torch.tensor([-1.6574, 1.886, -1.383, 2.5155]).view(1, 4, 1, 1)
self.latents_std = torch.tensor([8.4927, 5.9022, 6.5498, 5.2299]).view(1, 4, 1, 1)
self.latent_rgb_factors = [
# R G B
[ 0.3920, 0.4054, 0.4549],
[-0.2634, -0.0196, 0.0653],
[ 0.0568, 0.1687, -0.0755],
[-0.3112, -0.2359, -0.2076]
]
self.taesd_decoder_name = "taesdxl_decoder"
def process_in(self, latent):
latents_mean = self.latents_mean.to(latent.device, latent.dtype)
latents_std = self.latents_std.to(latent.device, latent.dtype)
return (latent - latents_mean) * self.scale_factor / latents_std
def process_out(self, latent):
latents_mean = self.latents_mean.to(latent.device, latent.dtype)
latents_std = self.latents_std.to(latent.device, latent.dtype)
return latent * latents_std / self.scale_factor + latents_mean
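process_in and process_out above are exact inverses: latents are shifted by the per-channel mean and scaled by sigma_data/std on the way in, and the mapping is undone on the way out. A standalone round-trip check with the same constants:

import torch

mean = torch.tensor([-1.6574, 1.886, -1.383, 2.5155]).view(1, 4, 1, 1)
std = torch.tensor([8.4927, 5.9022, 6.5498, 5.2299]).view(1, 4, 1, 1)
scale = 0.5

x = torch.randn(1, 4, 8, 8)
x_in = (x - mean) * scale / std       # process_in
x_back = x_in * std / scale + mean    # process_out
print(torch.allclose(x, x_back, atol=1e-5))  # True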
class SD_X4(LatentFormat):
    def __init__(self):
        self.scale_factor = 0.08333
self.latent_rgb_factors = [
[-0.2340, -0.3863, -0.3257],
[ 0.0994, 0.0885, -0.0908],
[-0.2833, -0.2349, -0.3741],
[ 0.2523, -0.0055, -0.1651]
]
class SC_Prior(LatentFormat):
def __init__(self):
self.scale_factor = 1.0
self.latent_rgb_factors = [
[-0.0326, -0.0204, -0.0127],
[-0.1592, -0.0427, 0.0216],
[ 0.0873, 0.0638, -0.0020],
[-0.0602, 0.0442, 0.1304],
[ 0.0800, -0.0313, -0.1796],
[-0.0810, -0.0638, -0.1581],
[ 0.1791, 0.1180, 0.0967],
[ 0.0740, 0.1416, 0.0432],
[-0.1745, -0.1888, -0.1373],
[ 0.2412, 0.1577, 0.0928],
[ 0.1908, 0.0998, 0.0682],
[ 0.0209, 0.0365, -0.0092],
[ 0.0448, -0.0650, -0.1728],
[-0.1658, -0.1045, -0.1308],
[ 0.0542, 0.1545, 0.1325],
[-0.0352, -0.1672, -0.2541]
]
class SC_B(LatentFormat):
def __init__(self):
self.scale_factor = 1.0 / 0.43
self.latent_rgb_factors = [
[ 0.1121, 0.2006, 0.1023],
[-0.2093, -0.0222, -0.0195],
[-0.3087, -0.1535, 0.0366],
[ 0.0290, -0.1574, -0.4078]
]

View File

@ -60,6 +60,9 @@ except:
    pass

if args.always_cpu:
+   if args.always_cpu > 0:
+       torch.set_num_threads(args.always_cpu)
+   print(f"Running on {torch.get_num_threads()} CPU threads")
    cpu_state = CPUState.CPU

def is_intel_xpu():

View File

@ -1,7 +1,7 @@
import torch
+import numpy as np
from ldm_patched.ldm.modules.diffusionmodules.util import make_beta_schedule
import math
-import numpy as np

class EPS:
    def calculate_input(self, sigma, noise):

@ -12,12 +12,28 @@ class EPS:
        sigma = sigma.view(sigma.shape[:1] + (1,) * (model_output.ndim - 1))
        return model_input - model_output * sigma
def noise_scaling(self, sigma, noise, latent_image, max_denoise=False):
if max_denoise:
noise = noise * torch.sqrt(1.0 + sigma ** 2.0)
else:
noise = noise * sigma
noise += latent_image
return noise
def inverse_noise_scaling(self, sigma, latent):
return latent
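noise_scaling above scales the initial noise by sqrt(1 + sigma^2) when max_denoise is set (the variance of signal plus noise under the EPS parameterization) and by sigma alone otherwise. A standalone sketch of the variance difference at sigma = 1:

import torch

sigma = torch.tensor(1.0)
noise = torch.randn(100000)
print(noise.mul(torch.sqrt(1.0 + sigma ** 2)).var())  # ~2.0, i.e. 1 + sigma^2
print(noise.mul(sigma).var())                         # ~1.0, i.e. sigma^2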
class V_PREDICTION(EPS):
    def calculate_denoised(self, sigma, model_output, model_input):
        sigma = sigma.view(sigma.shape[:1] + (1,) * (model_output.ndim - 1))
        return model_input * self.sigma_data ** 2 / (sigma ** 2 + self.sigma_data ** 2) - model_output * sigma * self.sigma_data / (sigma ** 2 + self.sigma_data ** 2) ** 0.5
class EDM(V_PREDICTION):
def calculate_denoised(self, sigma, model_output, model_input):
sigma = sigma.view(sigma.shape[:1] + (1,) * (model_output.ndim - 1))
return model_input * self.sigma_data ** 2 / (sigma ** 2 + self.sigma_data ** 2) + model_output * sigma * self.sigma_data / (sigma ** 2 + self.sigma_data ** 2) ** 0.5
class ModelSamplingDiscrete(torch.nn.Module):
    def __init__(self, model_config=None):

@ -42,8 +58,7 @@ class ModelSamplingDiscrete(torch.nn.Module):
        else:
            betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s)
        alphas = 1. - betas
-       alphas_cumprod = torch.tensor(np.cumprod(alphas, axis=0), dtype=torch.float32)
+       alphas_cumprod = torch.cumprod(alphas, dim=0)
-       # alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])

        timesteps, = betas.shape
        self.num_timesteps = int(timesteps)

@ -55,11 +70,16 @@ class ModelSamplingDiscrete(torch.nn.Module):
        # self.register_buffer('alphas_cumprod_prev', torch.tensor(alphas_cumprod_prev, dtype=torch.float32))

        sigmas = ((1 - alphas_cumprod) / alphas_cumprod) ** 0.5
+       alphas_cumprod = torch.tensor(np.cumprod(alphas, axis=0), dtype=torch.float32)

        self.set_sigmas(sigmas)
+       self.set_alphas_cumprod(alphas_cumprod.float())

    def set_sigmas(self, sigmas):
-       self.register_buffer('sigmas', sigmas)
-       self.register_buffer('log_sigmas', sigmas.log())
+       self.register_buffer('sigmas', sigmas.float())
+       self.register_buffer('log_sigmas', sigmas.log().float())

+   def set_alphas_cumprod(self, alphas_cumprod):
+       self.register_buffer("alphas_cumprod", alphas_cumprod.float())

    @property
    def sigma_min(self):

@ -94,8 +114,6 @@ class ModelSamplingDiscrete(torch.nn.Module):

class ModelSamplingContinuousEDM(torch.nn.Module):
    def __init__(self, model_config=None):
        super().__init__()
-       self.sigma_data = 1.0
        if model_config is not None:
            sampling_settings = model_config.sampling_settings
        else:

@ -103,9 +121,11 @@ class ModelSamplingContinuousEDM(torch.nn.Module):
        sigma_min = sampling_settings.get("sigma_min", 0.002)
        sigma_max = sampling_settings.get("sigma_max", 120.0)
-       self.set_sigma_range(sigma_min, sigma_max)
+       sigma_data = sampling_settings.get("sigma_data", 1.0)
+       self.set_parameters(sigma_min, sigma_max, sigma_data)

-   def set_sigma_range(self, sigma_min, sigma_max):
+   def set_parameters(self, sigma_min, sigma_max, sigma_data):
+       self.sigma_data = sigma_data
        sigmas = torch.linspace(math.log(sigma_min), math.log(sigma_max), 1000).exp()

        self.register_buffer('sigmas', sigmas) #for compatibility with some schedulers

@ -134,3 +154,56 @@ class ModelSamplingContinuousEDM(torch.nn.Module):
        log_sigma_min = math.log(self.sigma_min)
        return math.exp((math.log(self.sigma_max) - log_sigma_min) * percent + log_sigma_min)
class StableCascadeSampling(ModelSamplingDiscrete):
def __init__(self, model_config=None):
super().__init__()
if model_config is not None:
sampling_settings = model_config.sampling_settings
else:
sampling_settings = {}
self.set_parameters(sampling_settings.get("shift", 1.0))
def set_parameters(self, shift=1.0, cosine_s=8e-3):
self.shift = shift
self.cosine_s = torch.tensor(cosine_s)
self._init_alpha_cumprod = torch.cos(self.cosine_s / (1 + self.cosine_s) * torch.pi * 0.5) ** 2
#This part is just for compatibility with some schedulers in the codebase
self.num_timesteps = 10000
sigmas = torch.empty((self.num_timesteps), dtype=torch.float32)
for x in range(self.num_timesteps):
t = (x + 1) / self.num_timesteps
sigmas[x] = self.sigma(t)
self.set_sigmas(sigmas)
def sigma(self, timestep):
alpha_cumprod = (torch.cos((timestep + self.cosine_s) / (1 + self.cosine_s) * torch.pi * 0.5) ** 2 / self._init_alpha_cumprod)
if self.shift != 1.0:
var = alpha_cumprod
logSNR = (var/(1-var)).log()
logSNR += 2 * torch.log(1.0 / torch.tensor(self.shift))
alpha_cumprod = logSNR.sigmoid()
alpha_cumprod = alpha_cumprod.clamp(0.0001, 0.9999)
return ((1 - alpha_cumprod) / alpha_cumprod) ** 0.5
def timestep(self, sigma):
var = 1 / ((sigma * sigma) + 1)
var = var.clamp(0, 1.0)
s, min_var = self.cosine_s.to(var.device), self._init_alpha_cumprod.to(var.device)
t = (((var * min_var) ** 0.5).acos() / (torch.pi * 0.5)) * (1 + s) - s
return t
def percent_to_sigma(self, percent):
if percent <= 0.0:
return 999999999.9
if percent >= 1.0:
return 0.0
percent = 1.0 - percent
return self.sigma(torch.tensor(percent))
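sigma() and timestep() above are mutual inverses on the shifted cosine schedule (up to clamping and float error), which is what lets StableCascadeSampling serve schedulers written for discrete models. A standalone round-trip check at shift = 1.0:

import torch

cosine_s = torch.tensor(8e-3)
init_acp = torch.cos(cosine_s / (1 + cosine_s) * torch.pi * 0.5) ** 2

def sigma(t):
    acp = torch.cos((t + cosine_s) / (1 + cosine_s) * torch.pi * 0.5) ** 2 / init_acp
    acp = acp.clamp(0.0001, 0.9999)
    return ((1 - acp) / acp) ** 0.5

def timestep(s):
    var = (1 / (s * s + 1)).clamp(0, 1.0)
    return (((var * init_acp) ** 0.5).acos() / (torch.pi * 0.5)) * (1 + cosine_s) - cosine_s

t = torch.tensor(0.5)
print(timestep(sigma(t)))  # ~0.5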

View File

@ -523,7 +523,7 @@ class UNIPCBH2(Sampler):

KSAMPLER_NAMES = ["euler", "euler_ancestral", "heun", "heunpp2","dpm_2", "dpm_2_ancestral",
                  "lms", "dpm_fast", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_sde", "dpmpp_sde_gpu",
-                 "dpmpp_2m", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddpm", "lcm"]
+                 "dpmpp_2m", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddpm", "lcm", "tcd", "edm_playground_v2.5", "restart"]

class KSAMPLER(Sampler):
    def __init__(self, sampler_function, extra_options={}, inpaint_options={}):

View File

@ -427,12 +427,13 @@ def load_checkpoint(config_path=None, ckpt_path=None, output_vae=True, output_cl
    return (ldm_patched.modules.model_patcher.ModelPatcher(model, load_device=model_management.get_torch_device(), offload_device=offload_device), clip, vae)

-def load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, output_clipvision=False, embedding_directory=None, output_model=True):
+def load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, output_clipvision=False, embedding_directory=None, output_model=True, vae_filename_param=None):
    sd = ldm_patched.modules.utils.load_torch_file(ckpt_path)
    sd_keys = sd.keys()
    clip = None
    clipvision = None
    vae = None
+   vae_filename = None
    model = None
    model_patcher = None
    clip_target = None

@ -462,8 +463,12 @@ def load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, o
        model.load_model_weights(sd, "model.diffusion_model.")

    if output_vae:
+       if vae_filename_param is None:
            vae_sd = ldm_patched.modules.utils.state_dict_prefix_replace(sd, {"first_stage_model.": ""}, filter_keys=True)
            vae_sd = model_config.process_vae_state_dict(vae_sd)
+       else:
+           vae_sd = ldm_patched.modules.utils.load_torch_file(vae_filename_param)
+           vae_filename = vae_filename_param
        vae = VAE(sd=vae_sd)

    if output_clip:

@ -485,7 +490,7 @@ def load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, o
            print("loaded straight to GPU")
            model_management.load_model_gpu(model_patcher)

-   return (model_patcher, clip, vae, clipvision)
+   return model_patcher, clip, vae, vae_filename, clipvision

def load_unet_state_dict(sd): #load unet in diffusers format
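A hedged call sketch for the widened signature: the function now returns a 5-tuple whose fourth element is the VAE filename actually used (None when the VAE came embedded in the checkpoint). The paths are placeholders:

from ldm_patched.modules.sd import load_checkpoint_guess_config

model_patcher, clip, vae, vae_filename, clipvision = load_checkpoint_guess_config(
    "model.safetensors",                          # placeholder checkpoint path
    vae_filename_param="custom_vae.safetensors",  # optional external VAE override
)
print(vae_filename)  # "custom_vae.safetensors" when the override was applied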

View File

@ -326,7 +326,7 @@ def load_embed(embedding_name, embedding_directory, embedding_size, embed_key=No
            except:
                embed_out = safe_load_embed_zip(embed_path)
        else:
-           embed = torch.load(embed_path, map_location="cpu")
+           embed = torch.load(embed_path, map_location="cpu", weights_only=True)
    except Exception as e:
        print(traceback.format_exc())
        print()

View File

@ -14,7 +14,7 @@ from .timm.weight_init import trunc_normal_

def drop_path(x, drop_prob: float = 0.0, training: bool = False):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
-   From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py
+   From: https://github.com/huggingface/pytorch-image-models/blob/main/timm/layers/drop.py
    """
    if drop_prob == 0.0 or not training:
        return x

@ -30,7 +30,7 @@ def drop_path(x, drop_prob: float = 0.0, training: bool = False):

class DropPath(nn.Module):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
-   From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py
+   From: https://github.com/huggingface/pytorch-image-models/blob/main/timm/layers/drop.py
    """

    def __init__(self, drop_prob=None):

View File

@ -13,7 +13,7 @@ import torch.nn.functional as F

from . import block as B

-# Borrowed from https://github.com/rlaphoenix/VSGAN/blob/master/vsgan/archs/ESRGAN.py
+# Borrowed from https://github.com/rlaphoenix/VSGAN/blob/master/vsgan/archs/esrgan.py
# Which enhanced stuff that was already here

class RRDBNet(nn.Module):
    def __init__(

View File

@ -2,7 +2,7 @@
Modified from https://github.com/sczhou/CodeFormer
VQGAN code, adapted from the original created by the Unleashing Transformers authors:
https://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py
-This verison of the arch specifically was gathered from an old version of GFPGAN. If this is a problem, please contact me.
+This version of the arch specifically was gathered from an old version of GFPGAN. If this is a problem, please contact me.
"""
import math
from typing import Optional

@ -377,15 +377,15 @@ class VQAutoEncoder(nn.Module):
        )

        if model_path is not None:
-           chkpt = torch.load(model_path, map_location="cpu")
+           chkpt = torch.load(model_path, map_location="cpu", weights_only=True)
            if "params_ema" in chkpt:
                self.load_state_dict(
-                   torch.load(model_path, map_location="cpu")["params_ema"]
+                   torch.load(model_path, map_location="cpu", weights_only=True)["params_ema"]
                )
                logger.info(f"vqgan is loaded from: {model_path} [params_ema]")
            elif "params" in chkpt:
                self.load_state_dict(
-                   torch.load(model_path, map_location="cpu")["params"]
+                   torch.load(model_path, map_location="cpu", weights_only=True)["params"]
                )
                logger.info(f"vqgan is loaded from: {model_path} [params]")
            else:

View File

@ -273,8 +273,8 @@ class GFPGANBilinear(nn.Module):
        if decoder_load_path:
            self.stylegan_decoder.load_state_dict(
                torch.load(
-                   decoder_load_path, map_location=lambda storage, loc: storage
-               )["params_ema"]
+                   decoder_load_path, map_location=lambda storage, loc: storage,
+                   weights_only=True)["params_ema"]
            )
        # fix decoder without updating params
        if fix_decoder:

View File

@ -373,8 +373,8 @@ class GFPGANv1(nn.Module):
        if decoder_load_path:
            self.stylegan_decoder.load_state_dict(
                torch.load(
-                   decoder_load_path, map_location=lambda storage, loc: storage
-               )["params_ema"]
+                   decoder_load_path, map_location=lambda storage, loc: storage,
+                   weights_only=True)["params_ema"]
            )
        # fix decoder without updating params
        if fix_decoder:

View File

@ -284,8 +284,8 @@ class GFPGANv1Clean(nn.Module):
        if decoder_load_path:
            self.stylegan_decoder.load_state_dict(
                torch.load(
-                   decoder_load_path, map_location=lambda storage, loc: storage
-               )["params_ema"]
+                   decoder_load_path, map_location=lambda storage, loc: storage,
+                   weights_only=True)["params_ema"]
            )
        # fix decoder without updating params
        if fix_decoder:

modules/__init__.py Normal file (new, empty)

View File

@ -1,33 +0,0 @@
disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name, \
scheduler_name, generate_image_grid, overwrite_step, overwrite_switch, overwrite_width, overwrite_height, \
overwrite_vary_strength, overwrite_upscale_strength, \
mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint, \
debugging_cn_preprocessor, skipping_cn_preprocessor, controlnet_softness, canny_low_threshold, canny_high_threshold, \
refiner_swap_method, \
freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2, \
debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field, \
inpaint_mask_upload_checkbox, invert_mask_checkbox, inpaint_erode_or_dilate = [None] * 35
def set_all_advanced_parameters(*args):
global disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name, \
scheduler_name, generate_image_grid, overwrite_step, overwrite_switch, overwrite_width, overwrite_height, \
overwrite_vary_strength, overwrite_upscale_strength, \
mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint, \
debugging_cn_preprocessor, skipping_cn_preprocessor, controlnet_softness, canny_low_threshold, canny_high_threshold, \
refiner_swap_method, \
freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2, \
debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field, \
inpaint_mask_upload_checkbox, invert_mask_checkbox, inpaint_erode_or_dilate
disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name, \
scheduler_name, generate_image_grid, overwrite_step, overwrite_switch, overwrite_width, overwrite_height, \
overwrite_vary_strength, overwrite_upscale_strength, \
mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint, \
debugging_cn_preprocessor, skipping_cn_preprocessor, controlnet_softness, canny_low_threshold, canny_high_threshold, \
refiner_swap_method, \
freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2, \
debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field, \
inpaint_mask_upload_checkbox, invert_mask_checkbox, inpaint_erode_or_dilate = args
return

File diff suppressed because it is too large

View File

@ -2,24 +2,43 @@ import os
import json
import math
import numbers
import args_manager
+import tempfile
import modules.flags
import modules.sdxl_styles

from modules.model_loader import load_file_from_url
-from modules.util import get_files_from_folder
+from modules.extra_utils import makedirs_with_log, get_files_from_folder, try_eval_env_var
+from modules.flags import OutputFormat, Performance, MetadataScheme

-config_path = os.path.abspath("./config.txt")
-config_example_path = os.path.abspath("config_modification_tutorial.txt")
+def get_config_path(key, default_value):
+    env = os.getenv(key)
+    if env is not None and isinstance(env, str):
+        print(f"Environment: {key} = {env}")
+        return env
+    else:
+        return os.path.abspath(default_value)

wildcards_max_bfs_depth = 64
+config_path = get_config_path('config_path', "./config.txt")
+config_example_path = get_config_path('config_example_path', "config_modification_tutorial.txt")
config_dict = {}
always_save_keys = []
visited_keys = []

+try:
+    with open(os.path.abspath(f'./presets/default.json'), "r", encoding="utf-8") as json_file:
+        config_dict.update(json.load(json_file))
+except Exception as e:
+    print(f'Load default preset failed.')
+    print(e)
+
try:
    if os.path.exists(config_path):
        with open(config_path, "r", encoding="utf-8") as json_file:
-           config_dict = json.load(json_file)
+           config_dict.update(json.load(json_file))
        always_save_keys = list(config_dict.keys())
except Exception as e:
    print(f'Failed to load config file "{config_path}" . The reason is: {str(e)}')
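get_config_path above lets an environment variable whose name matches the config key redirect the file lookup. A self-contained sketch of the same logic (the /tmp path is illustrative):

import os

os.environ["config_path"] = "/tmp/my_fooocus_config.txt"  # illustrative override

def get_config_path(key, default_value):
    env = os.getenv(key)
    if env is not None and isinstance(env, str):
        print(f"Environment: {key} = {env}")
        return env
    return os.path.abspath(default_value)

print(get_config_path("config_path", "./config.txt"))  # /tmp/my_fooocus_config.txt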
@ -79,30 +98,52 @@ def try_load_deprecated_user_path_config():

try_load_deprecated_user_path_config()

-try:
-    with open(os.path.abspath(f'./presets/default.json'), "r", encoding="utf-8") as json_file:
-        config_dict.update(json.load(json_file))
-except Exception as e:
-    print(f'Load default preset failed.')
-    print(e)
-
-preset = args_manager.args.preset
+def get_presets():
+    preset_folder = 'presets'
+    presets = ['initial']
+    if not os.path.exists(preset_folder):
+        print('No presets found.')
+        return presets
+
+    return presets + [f[:f.index(".json")] for f in os.listdir(preset_folder) if f.endswith('.json')]
+
+def update_presets():
+    global available_presets
+    available_presets = get_presets()
+
+def try_get_preset_content(preset):
    if isinstance(preset, str):
        preset_path = os.path.abspath(f'./presets/{preset}.json')
        try:
            if os.path.exists(preset_path):
                with open(preset_path, "r", encoding="utf-8") as json_file:
-                   config_dict.update(json.load(json_file))
+                   json_content = json.load(json_file)
                    print(f'Loaded preset: {preset_path}')
+                   return json_content
            else:
                raise FileNotFoundError
        except Exception as e:
            print(f'Load preset [{preset_path}] failed')
            print(e)
+   return {}

+available_presets = get_presets()
+preset = args_manager.args.preset
+config_dict.update(try_get_preset_content(preset))
def get_path_output() -> str:
"""
Checking output path argument and overriding default path.
"""
global config_dict
path_output = get_dir_or_set_default('path_outputs', '../outputs/', make_directory=True)
if args_manager.args.output_path:
print(f'Overriding config value path_outputs with {args_manager.args.output_path}')
config_dict['path_outputs'] = path_output = args_manager.args.output_path
return path_output
-def get_dir_or_set_default(key, default_value):
+def get_dir_or_set_default(key, default_value, as_array=False, make_directory=False):
    global config_dict, visited_keys, always_save_keys

    if key not in visited_keys:

@ -111,36 +152,70 @@ def get_dir_or_set_default(key, default_value):
        if key not in always_save_keys:
            always_save_keys.append(key)

-   v = config_dict.get(key, None)
-   if isinstance(v, str) and os.path.exists(v) and os.path.isdir(v):
-       return v
+   v = os.getenv(key)
+   if v is not None:
+       print(f"Environment: {key} = {v}")
+       config_dict[key] = v
    else:
+       v = config_dict.get(key, None)

+   if isinstance(v, str):
+       if make_directory:
+           makedirs_with_log(v)
+       if os.path.exists(v) and os.path.isdir(v):
+           return v if not as_array else [v]
+   elif isinstance(v, list):
+       if make_directory:
+           for d in v:
+               makedirs_with_log(d)
+       if all([os.path.exists(d) and os.path.isdir(d) for d in v]):
+           return v

    if v is not None:
        print(f'Failed to load config key: {json.dumps({key:v})} is invalid or does not exist; will use {json.dumps({key:default_value})} instead.')

+   if isinstance(default_value, list):
+       dp = []
+       for path in default_value:
+           abs_path = os.path.abspath(os.path.join(os.path.dirname(__file__), path))
+           dp.append(abs_path)
+           os.makedirs(abs_path, exist_ok=True)
+   else:
        dp = os.path.abspath(os.path.join(os.path.dirname(__file__), default_value))
        os.makedirs(dp, exist_ok=True)
+       if as_array:
+           dp = [dp]

    config_dict[key] = dp
    return dp
-path_checkpoints = get_dir_or_set_default('path_checkpoints', '../models/checkpoints/')
-path_loras = get_dir_or_set_default('path_loras', '../models/loras/')
+paths_checkpoints = get_dir_or_set_default('path_checkpoints', ['../models/checkpoints/'], True)
+paths_loras = get_dir_or_set_default('path_loras', ['../models/loras/'], True)
path_embeddings = get_dir_or_set_default('path_embeddings', '../models/embeddings/')
path_vae_approx = get_dir_or_set_default('path_vae_approx', '../models/vae_approx/')
+path_vae = get_dir_or_set_default('path_vae', '../models/vae/')
path_upscale_models = get_dir_or_set_default('path_upscale_models', '../models/upscale_models/')
path_inpaint = get_dir_or_set_default('path_inpaint', '../models/inpaint/')
path_controlnet = get_dir_or_set_default('path_controlnet', '../models/controlnet/')
path_clip_vision = get_dir_or_set_default('path_clip_vision', '../models/clip_vision/')
path_fooocus_expansion = get_dir_or_set_default('path_fooocus_expansion', '../models/prompt_expansion/fooocus_expansion')
-path_outputs = get_dir_or_set_default('path_outputs', '../outputs/')
+path_wildcards = get_dir_or_set_default('path_wildcards', '../wildcards/')
+path_safety_checker = get_dir_or_set_default('path_safety_checker', '../models/safety_checker/')
+path_sam = get_dir_or_set_default('path_sam', '../models/sam/')
+path_outputs = get_path_output()
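With as_array=True, path_checkpoints and path_loras may now be JSON arrays, so models are collected from several folders at once. An illustrative config.txt fragment, written here via Python (all paths are placeholders):

import json

config = {
    "path_checkpoints": ["../models/checkpoints/", "/mnt/shared/checkpoints/"],
    "path_loras": ["../models/loras/"],
    "path_vae": "../models/vae/",
}
print(json.dumps(config, indent=4))  # paste the result into config.txt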
-def get_config_item_or_set_default(key, default_value, validator, disable_empty_as_none=False):
+def get_config_item_or_set_default(key, default_value, validator, disable_empty_as_none=False, expected_type=None):
    global config_dict, visited_keys

    if key not in visited_keys:
        visited_keys.append(key)

+   v = os.getenv(key)
+   if v is not None:
+       v = try_eval_env_var(v, expected_type)
+       print(f"Environment: {key} = {v}")
+       config_dict[key] = v

    if key not in config_dict:
        config_dict[key] = default_value
        return default_value
@ -158,71 +233,145 @@ def get_config_item_or_set_default(key, default_value, validator, disable_empty_
    return default_value

+def init_temp_path(path: str | None, default_path: str) -> str:
+    if args_manager.args.temp_path:
+        path = args_manager.args.temp_path
+
+    if path != '' and path != default_path:
+        try:
+            if not os.path.isabs(path):
+                path = os.path.abspath(path)
+            os.makedirs(path, exist_ok=True)
+            print(f'Using temp path {path}')
+            return path
+        except Exception as e:
+            print(f'Could not create temp path {path}. Reason: {e}')
+            print(f'Using default temp path {default_path} instead.')
+
+    os.makedirs(default_path, exist_ok=True)
+    return default_path
+
+default_temp_path = os.path.join(tempfile.gettempdir(), 'fooocus')
+temp_path = init_temp_path(get_config_item_or_set_default(
+    key='temp_path',
+    default_value=default_temp_path,
+    validator=lambda x: isinstance(x, str),
+    expected_type=str
+), default_temp_path)
+temp_path_cleanup_on_launch = get_config_item_or_set_default(
+    key='temp_path_cleanup_on_launch',
+    default_value=True,
+    validator=lambda x: isinstance(x, bool),
+    expected_type=bool
+)

-default_base_model_name = get_config_item_or_set_default(
+default_base_model_name = default_model = get_config_item_or_set_default(
    key='default_model',
    default_value='model.safetensors',
-   validator=lambda x: isinstance(x, str)
+   validator=lambda x: isinstance(x, str),
+   expected_type=str
)
previous_default_models = get_config_item_or_set_default(
    key='previous_default_models',
    default_value=[],
-   validator=lambda x: isinstance(x, list) and all(isinstance(k, str) for k in x)
+   validator=lambda x: isinstance(x, list) and all(isinstance(k, str) for k in x),
+   expected_type=list
)
-default_refiner_model_name = get_config_item_or_set_default(
+default_refiner_model_name = default_refiner = get_config_item_or_set_default(
    key='default_refiner',
    default_value='None',
-   validator=lambda x: isinstance(x, str)
+   validator=lambda x: isinstance(x, str),
+   expected_type=str
)
default_refiner_switch = get_config_item_or_set_default(
    key='default_refiner_switch',
    default_value=0.8,
-   validator=lambda x: isinstance(x, numbers.Number) and 0 <= x <= 1
+   validator=lambda x: isinstance(x, numbers.Number) and 0 <= x <= 1,
+   expected_type=numbers.Number
)
+default_loras_min_weight = get_config_item_or_set_default(
+    key='default_loras_min_weight',
+    default_value=-2,
+    validator=lambda x: isinstance(x, numbers.Number) and -10 <= x <= 10,
+    expected_type=numbers.Number
+)
+default_loras_max_weight = get_config_item_or_set_default(
+    key='default_loras_max_weight',
+    default_value=2,
+    validator=lambda x: isinstance(x, numbers.Number) and -10 <= x <= 10,
+    expected_type=numbers.Number
+)
default_loras = get_config_item_or_set_default(
    key='default_loras',
    default_value=[
        [
+           True,
            "None",
            1.0
        ],
        [
+           True,
            "None",
            1.0
        ],
        [
+           True,
            "None",
            1.0
        ],
        [
+           True,
            "None",
            1.0
        ],
        [
+           True,
            "None",
            1.0
        ]
    ],
-   validator=lambda x: isinstance(x, list) and all(len(y) == 2 and isinstance(y[0], str) and isinstance(y[1], numbers.Number) for y in x)
+   validator=lambda x: isinstance(x, list) and all(
+       len(y) == 3 and isinstance(y[0], bool) and isinstance(y[1], str) and isinstance(y[2], numbers.Number)
+       or len(y) == 2 and isinstance(y[0], str) and isinstance(y[1], numbers.Number)
+       for y in x),
+   expected_type=list
)
+default_loras = [(y[0], y[1], y[2]) if len(y) == 3 else (True, y[0], y[1]) for y in default_loras]
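The comprehension above migrates legacy two-element LoRA entries to the new three-element form with a leading enabled flag, so both formats pass the validator. A standalone check:

default_loras = [["my_lora.safetensors", 0.8], [True, "None", 1.0]]  # mixed legacy/new
default_loras = [(y[0], y[1], y[2]) if len(y) == 3 else (True, y[0], y[1]) for y in default_loras]
print(default_loras)  # [(True, 'my_lora.safetensors', 0.8), (True, 'None', 1.0)]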
default_max_lora_number = get_config_item_or_set_default(
key='default_max_lora_number',
default_value=len(default_loras) if isinstance(default_loras, list) and len(default_loras) > 0 else 5,
validator=lambda x: isinstance(x, int) and x >= 1,
expected_type=int
) )
default_cfg_scale = get_config_item_or_set_default(
    key='default_cfg_scale',
    default_value=7.0,
-   validator=lambda x: isinstance(x, numbers.Number)
+   validator=lambda x: isinstance(x, numbers.Number),
+   expected_type=numbers.Number
)
default_sample_sharpness = get_config_item_or_set_default(
    key='default_sample_sharpness',
    default_value=2.0,
-   validator=lambda x: isinstance(x, numbers.Number)
+   validator=lambda x: isinstance(x, numbers.Number),
+   expected_type=numbers.Number
)
default_sampler = get_config_item_or_set_default(
    key='default_sampler',
    default_value='dpmpp_2m_sde_gpu',
-   validator=lambda x: x in modules.flags.sampler_list
+   validator=lambda x: x in modules.flags.sampler_list,
+   expected_type=str
)
default_scheduler = get_config_item_or_set_default(
    key='default_scheduler',
    default_value='karras',
-   validator=lambda x: x in modules.flags.scheduler_list
+   validator=lambda x: x in modules.flags.scheduler_list,
+   expected_type=str
)
default_vae = get_config_item_or_set_default(
key='default_vae',
default_value=modules.flags.default_vae,
validator=lambda x: isinstance(x, str),
expected_type=str
) )
default_styles = get_config_item_or_set_default(
    key='default_styles',

@ -231,122 +380,379 @@ default_styles = get_config_item_or_set_default(
        "Fooocus Enhance",
        "Fooocus Sharp"
    ],
-   validator=lambda x: isinstance(x, list) and all(y in modules.sdxl_styles.legal_style_names for y in x)
+   validator=lambda x: isinstance(x, list) and all(y in modules.sdxl_styles.legal_style_names for y in x),
+   expected_type=list
)
default_prompt_negative = get_config_item_or_set_default(
    key='default_prompt_negative',
    default_value='',
    validator=lambda x: isinstance(x, str),
-   disable_empty_as_none=True
+   disable_empty_as_none=True,
+   expected_type=str
)
default_prompt = get_config_item_or_set_default(
    key='default_prompt',
    default_value='',
    validator=lambda x: isinstance(x, str),
-   disable_empty_as_none=True
+   disable_empty_as_none=True,
+   expected_type=str
)
default_performance = get_config_item_or_set_default(
    key='default_performance',
-   default_value='Speed',
-   validator=lambda x: x in modules.flags.performance_selections
+   default_value=Performance.SPEED.value,
+   validator=lambda x: x in Performance.values(),
+   expected_type=str
)
default_image_prompt_checkbox = get_config_item_or_set_default(
key='default_image_prompt_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_enhance_checkbox = get_config_item_or_set_default(
key='default_enhance_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
) )
default_advanced_checkbox = get_config_item_or_set_default(
    key='default_advanced_checkbox',
    default_value=False,
-   validator=lambda x: isinstance(x, bool)
+   validator=lambda x: isinstance(x, bool),
+   expected_type=bool
)
default_developer_debug_mode_checkbox = get_config_item_or_set_default(
key='default_developer_debug_mode_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_image_prompt_advanced_checkbox = get_config_item_or_set_default(
key='default_image_prompt_advanced_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
) )
default_max_image_number = get_config_item_or_set_default(
    key='default_max_image_number',
    default_value=32,
-   validator=lambda x: isinstance(x, int) and x >= 1
+   validator=lambda x: isinstance(x, int) and x >= 1,
+   expected_type=int
)
+default_output_format = get_config_item_or_set_default(
+    key='default_output_format',
+    default_value='png',
+    validator=lambda x: x in OutputFormat.list(),
+    expected_type=str
+)
default_image_number = get_config_item_or_set_default(
    key='default_image_number',
    default_value=2,
-   validator=lambda x: isinstance(x, int) and 1 <= x <= default_max_image_number
+   validator=lambda x: isinstance(x, int) and 1 <= x <= default_max_image_number,
+   expected_type=int
)
checkpoint_downloads = get_config_item_or_set_default(
    key='checkpoint_downloads',
    default_value={},
-   validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items())
+   validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items()),
+   expected_type=dict
)
lora_downloads = get_config_item_or_set_default(
    key='lora_downloads',
    default_value={},
-   validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items())
+   validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items()),
+   expected_type=dict
)
embeddings_downloads = get_config_item_or_set_default(
    key='embeddings_downloads',
    default_value={},
-   validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items())
+   validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items()),
+   expected_type=dict
)
vae_downloads = get_config_item_or_set_default(
key='vae_downloads',
default_value={},
validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items()),
expected_type=dict
) )
available_aspect_ratios = get_config_item_or_set_default(
    key='available_aspect_ratios',
-   default_value=[
-       '704*1408', '704*1344', '768*1344', '768*1280', '832*1216', '832*1152',
-       '896*1152', '896*1088', '960*1088', '960*1024', '1024*1024', '1024*960',
-       '1088*960', '1088*896', '1152*896', '1152*832', '1216*832', '1280*768',
-       '1344*768', '1344*704', '1408*704', '1472*704', '1536*640', '1600*640',
-       '1664*576', '1728*576'
-   ],
-   validator=lambda x: isinstance(x, list) and all('*' in v for v in x) and len(x) > 1
+   default_value=modules.flags.sdxl_aspect_ratios,
+   validator=lambda x: isinstance(x, list) and all('*' in v for v in x) and len(x) > 1,
+   expected_type=list
)
default_aspect_ratio = get_config_item_or_set_default(
    key='default_aspect_ratio',
    default_value='1152*896' if '1152*896' in available_aspect_ratios else available_aspect_ratios[0],
-   validator=lambda x: x in available_aspect_ratios
+   validator=lambda x: x in available_aspect_ratios,
+   expected_type=str
)
default_inpaint_engine_version = get_config_item_or_set_default(
    key='default_inpaint_engine_version',
    default_value='v2.6',
-   validator=lambda x: x in modules.flags.inpaint_engine_versions
+   validator=lambda x: x in modules.flags.inpaint_engine_versions,
+   expected_type=str
)
default_selected_image_input_tab_id = get_config_item_or_set_default(
key='default_selected_image_input_tab_id',
default_value=modules.flags.default_input_image_tab,
validator=lambda x: x in modules.flags.input_image_tab_ids,
expected_type=str
)
default_uov_method = get_config_item_or_set_default(
key='default_uov_method',
default_value=modules.flags.disabled,
validator=lambda x: x in modules.flags.uov_list,
expected_type=str
)
default_controlnet_image_count = get_config_item_or_set_default(
key='default_controlnet_image_count',
default_value=4,
validator=lambda x: isinstance(x, int) and x > 0,
expected_type=int
)
default_ip_images = {}
default_ip_stop_ats = {}
default_ip_weights = {}
default_ip_types = {}
for image_count in range(default_controlnet_image_count):
image_count += 1
default_ip_images[image_count] = get_config_item_or_set_default(
key=f'default_ip_image_{image_count}',
default_value='None',
validator=lambda x: x == 'None' or isinstance(x, str) and os.path.exists(x),
expected_type=str
)
if default_ip_images[image_count] == 'None':
default_ip_images[image_count] = None
default_ip_types[image_count] = get_config_item_or_set_default(
key=f'default_ip_type_{image_count}',
default_value=modules.flags.default_ip,
validator=lambda x: x in modules.flags.ip_list,
expected_type=str
)
default_end, default_weight = modules.flags.default_parameters[default_ip_types[image_count]]
default_ip_stop_ats[image_count] = get_config_item_or_set_default(
key=f'default_ip_stop_at_{image_count}',
default_value=default_end,
validator=lambda x: isinstance(x, float) and 0 <= x <= 1,
expected_type=float
)
default_ip_weights[image_count] = get_config_item_or_set_default(
key=f'default_ip_weight_{image_count}',
default_value=default_weight,
validator=lambda x: isinstance(x, float) and 0 <= x <= 2,
expected_type=float
)
default_inpaint_advanced_masking_checkbox = get_config_item_or_set_default(
key='default_inpaint_advanced_masking_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_inpaint_method = get_config_item_or_set_default(
key='default_inpaint_method',
default_value=modules.flags.inpaint_option_default,
validator=lambda x: x in modules.flags.inpaint_options,
expected_type=str
) )
default_cfg_tsnr = get_config_item_or_set_default(
    key='default_cfg_tsnr',
    default_value=7.0,
-   validator=lambda x: isinstance(x, numbers.Number)
+   validator=lambda x: isinstance(x, numbers.Number),
+   expected_type=numbers.Number
)
+default_clip_skip = get_config_item_or_set_default(
+    key='default_clip_skip',
+    default_value=2,
+    validator=lambda x: isinstance(x, int) and 1 <= x <= modules.flags.clip_skip_max,
+    expected_type=int
+)
default_overwrite_step = get_config_item_or_set_default(
    key='default_overwrite_step',
    default_value=-1,
-   validator=lambda x: isinstance(x, int)
+   validator=lambda x: isinstance(x, int),
+   expected_type=int
)
default_overwrite_switch = get_config_item_or_set_default(
    key='default_overwrite_switch',
    default_value=-1,
-   validator=lambda x: isinstance(x, int)
+   validator=lambda x: isinstance(x, int),
+   expected_type=int
)
+default_overwrite_upscale = get_config_item_or_set_default(
+    key='default_overwrite_upscale',
+    default_value=-1,
+    validator=lambda x: isinstance(x, numbers.Number)
+)
example_inpaint_prompts = get_config_item_or_set_default(
    key='example_inpaint_prompts',
    default_value=[
        'highly detailed face', 'detailed girl face', 'detailed man face', 'detailed hand', 'beautiful eyes'
    ],
-   validator=lambda x: isinstance(x, list) and all(isinstance(v, str) for v in x)
+   validator=lambda x: isinstance(x, list) and all(isinstance(v, str) for v in x),
+   expected_type=list
)
example_enhance_detection_prompts = get_config_item_or_set_default(
key='example_enhance_detection_prompts',
default_value=[
'face', 'eye', 'mouth', 'hair', 'hand', 'body'
],
validator=lambda x: isinstance(x, list) and all(isinstance(v, str) for v in x),
expected_type=list
)
default_enhance_tabs = get_config_item_or_set_default(
key='default_enhance_tabs',
default_value=3,
validator=lambda x: isinstance(x, int) and 1 <= x <= 5,
expected_type=int
)
default_enhance_uov_method = get_config_item_or_set_default(
key='default_enhance_uov_method',
default_value=modules.flags.disabled,
validator=lambda x: x in modules.flags.uov_list,
expected_type=int
)
default_enhance_uov_processing_order = get_config_item_or_set_default(
key='default_enhance_uov_processing_order',
default_value=modules.flags.enhancement_uov_before,
validator=lambda x: x in modules.flags.enhancement_uov_processing_order,
expected_type=int
)
default_enhance_uov_prompt_type = get_config_item_or_set_default(
key='default_enhance_uov_prompt_type',
default_value=modules.flags.enhancement_uov_prompt_type_original,
validator=lambda x: x in modules.flags.enhancement_uov_prompt_types,
expected_type=int
)
default_sam_max_detections = get_config_item_or_set_default(
key='default_sam_max_detections',
default_value=0,
validator=lambda x: isinstance(x, int) and 0 <= x <= 10,
expected_type=int
)
default_black_out_nsfw = get_config_item_or_set_default(
key='default_black_out_nsfw',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_save_only_final_enhanced_image = get_config_item_or_set_default(
key='default_save_only_final_enhanced_image',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_save_metadata_to_images = get_config_item_or_set_default(
key='default_save_metadata_to_images',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_metadata_scheme = get_config_item_or_set_default(
key='default_metadata_scheme',
default_value=MetadataScheme.FOOOCUS.value,
validator=lambda x: x in [y[1] for y in modules.flags.metadata_scheme if y[1] == x],
expected_type=str
)
metadata_created_by = get_config_item_or_set_default(
key='metadata_created_by',
default_value='',
validator=lambda x: isinstance(x, str),
expected_type=str
) )
example_inpaint_prompts = [[x] for x in example_inpaint_prompts]
+example_enhance_detection_prompts = [[x] for x in example_enhance_detection_prompts]

-config_dict["default_loras"] = default_loras = default_loras[:5] + [['None', 1.0] for _ in range(5 - len(default_loras))]
+default_invert_mask_checkbox = get_config_item_or_set_default(
+    key='default_invert_mask_checkbox',
+    default_value=False,
+    validator=lambda x: isinstance(x, bool),
+    expected_type=bool
+)

-possible_preset_keys = [
-    "default_model",
-    "default_refiner",
-    "default_refiner_switch",
-    "default_loras",
-    "default_cfg_scale",
-    "default_sample_sharpness",
-    "default_sampler",
-    "default_scheduler",
-    "default_performance",
-    "default_prompt",
-    "default_prompt_negative",
-    "default_styles",
-    "default_aspect_ratio",
-    "checkpoint_downloads",
-    "embeddings_downloads",
-    "lora_downloads",
-]
+default_inpaint_mask_model = get_config_item_or_set_default(
+    key='default_inpaint_mask_model',
+    default_value='isnet-general-use',
+    validator=lambda x: x in modules.flags.inpaint_mask_models,
+    expected_type=str
+)
default_enhance_inpaint_mask_model = get_config_item_or_set_default(
key='default_enhance_inpaint_mask_model',
default_value='sam',
validator=lambda x: x in modules.flags.inpaint_mask_models,
expected_type=str
)
default_inpaint_mask_cloth_category = get_config_item_or_set_default(
key='default_inpaint_mask_cloth_category',
default_value='full',
validator=lambda x: x in modules.flags.inpaint_mask_cloth_category,
expected_type=str
)
default_inpaint_mask_sam_model = get_config_item_or_set_default(
key='default_inpaint_mask_sam_model',
default_value='vit_b',
validator=lambda x: x in modules.flags.inpaint_mask_sam_model,
expected_type=str
)
default_describe_apply_prompts_checkbox = get_config_item_or_set_default(
key='default_describe_apply_prompts_checkbox',
default_value=True,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_describe_content_type = get_config_item_or_set_default(
key='default_describe_content_type',
default_value=[modules.flags.describe_type_photo],
validator=lambda x: all(k in modules.flags.describe_types for k in x),
expected_type=list
)
config_dict["default_loras"] = default_loras = default_loras[:default_max_lora_number] + [[True, 'None', 1.0] for _ in range(default_max_lora_number - len(default_loras))]
# mapping config to meta parameter
possible_preset_keys = {
"default_model": "base_model",
"default_refiner": "refiner_model",
"default_refiner_switch": "refiner_switch",
"previous_default_models": "previous_default_models",
"default_loras_min_weight": "default_loras_min_weight",
"default_loras_max_weight": "default_loras_max_weight",
"default_loras": "<processed>",
"default_cfg_scale": "guidance_scale",
"default_sample_sharpness": "sharpness",
"default_cfg_tsnr": "adaptive_cfg",
"default_clip_skip": "clip_skip",
"default_sampler": "sampler",
"default_scheduler": "scheduler",
"default_overwrite_step": "steps",
"default_overwrite_switch": "overwrite_switch",
"default_performance": "performance",
"default_image_number": "image_number",
"default_prompt": "prompt",
"default_prompt_negative": "negative_prompt",
"default_styles": "styles",
"default_aspect_ratio": "resolution",
"default_save_metadata_to_images": "default_save_metadata_to_images",
"checkpoint_downloads": "checkpoint_downloads",
"embeddings_downloads": "embeddings_downloads",
"lora_downloads": "lora_downloads",
"vae_downloads": "vae_downloads",
"default_vae": "vae",
# "default_inpaint_method": "inpaint_method", # disabled so inpaint mode doesn't refresh after every preset change
"default_inpaint_engine_version": "inpaint_engine_version",
}
REWRITE_PRESET = False

@@ -366,7 +772,7 @@ def add_ratio(x):
default_aspect_ratio = add_ratio(default_aspect_ratio)
-available_aspect_ratios = [add_ratio(x) for x in available_aspect_ratios]
+available_aspect_ratios_labels = [add_ratio(x) for x in available_aspect_ratios]

# Only write config in the first launch.
@@ -385,21 +791,32 @@ with open(config_example_path, "w", encoding="utf-8") as json_file:
    'and there is no "," before the last "}". \n\n\n')
json.dump({k: config_dict[k] for k in visited_keys}, json_file, indent=4)
os.makedirs(path_outputs, exist_ok=True)
model_filenames = []
lora_filenames = []
vae_filenames = []
wildcard_filenames = []
-def get_model_filenames(folder_path, name_filter=None):
-    return get_files_from_folder(folder_path, ['.pth', '.ckpt', '.bin', '.safetensors', '.fooocus.patch'], name_filter)
+def get_model_filenames(folder_paths, extensions=None, name_filter=None):
+    if extensions is None:
+        extensions = ['.pth', '.ckpt', '.bin', '.safetensors', '.fooocus.patch']
+    files = []
+    if not isinstance(folder_paths, list):
+        folder_paths = [folder_paths]
+    for folder in folder_paths:
+        files += get_files_from_folder(folder, extensions, name_filter)
+    return files
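A hypothetical call site (the folder paths here are placeholders, not paths that exist in the repo): a list of checkpoint directories is scanned in order, and a bare string still works because it is wrapped into a one-element list.

# Hypothetical usage; both paths are invented for illustration.
all_checkpoints = get_model_filenames(['/models/checkpoints', '/extra/checkpoints'])
safetensors_only = get_model_filenames('/models/checkpoints', extensions=['.safetensors'])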
-def update_all_model_names():
-    global model_filenames, lora_filenames
-    model_filenames = get_model_filenames(path_checkpoints)
-    lora_filenames = get_model_filenames(path_loras)
+def update_files():
+    global model_filenames, lora_filenames, vae_filenames, wildcard_filenames, available_presets
+    model_filenames = get_model_filenames(paths_checkpoints)
+    lora_filenames = get_model_filenames(paths_loras)
+    vae_filenames = get_model_filenames(path_vae)
+    wildcard_filenames = get_files_from_folder(path_wildcards, ['.txt'])
+    available_presets = get_presets()
    return
@@ -444,10 +861,28 @@ def downloading_inpaint_models(v):
def downloading_sdxl_lcm_lora():
    load_file_from_url(
        url='https://huggingface.co/lllyasviel/misc/resolve/main/sdxl_lcm_lora.safetensors',
-        model_dir=path_loras,
-        file_name='sdxl_lcm_lora.safetensors'
+        model_dir=paths_loras[0],
+        file_name=modules.flags.PerformanceLoRA.EXTREME_SPEED.value
    )
-    return 'sdxl_lcm_lora.safetensors'
+    return modules.flags.PerformanceLoRA.EXTREME_SPEED.value
def downloading_sdxl_lightning_lora():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/sdxl_lightning_4step_lora.safetensors',
model_dir=paths_loras[0],
file_name=modules.flags.PerformanceLoRA.LIGHTNING.value
)
return modules.flags.PerformanceLoRA.LIGHTNING.value
def downloading_sdxl_hyper_sd_lora():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/sdxl_hyper_sd_4step_lora.safetensors',
model_dir=paths_loras[0],
file_name=modules.flags.PerformanceLoRA.HYPER_SD.value
)
return modules.flags.PerformanceLoRA.HYPER_SD.value
def downloading_controlnet_canny():
@@ -514,5 +949,49 @@ def downloading_upscale_model():
    )
    return os.path.join(path_upscale_models, 'fooocus_upscaler_s409985e5.bin')
def downloading_safety_checker_model():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/stable-diffusion-safety-checker.bin',
model_dir=path_safety_checker,
file_name='stable-diffusion-safety-checker.bin'
)
return os.path.join(path_safety_checker, 'stable-diffusion-safety-checker.bin')
-update_all_model_names()
def download_sam_model(sam_model: str) -> str:
match sam_model:
case 'vit_b':
return downloading_sam_vit_b()
case 'vit_l':
return downloading_sam_vit_l()
case 'vit_h':
return downloading_sam_vit_h()
case _:
raise ValueError(f"sam model {sam_model} does not exist.")
def downloading_sam_vit_b():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/sam_vit_b_01ec64.pth',
model_dir=path_sam,
file_name='sam_vit_b_01ec64.pth'
)
return os.path.join(path_sam, 'sam_vit_b_01ec64.pth')
def downloading_sam_vit_l():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/sam_vit_l_0b3195.pth',
model_dir=path_sam,
file_name='sam_vit_l_0b3195.pth'
)
return os.path.join(path_sam, 'sam_vit_l_0b3195.pth')
def downloading_sam_vit_h():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/sam_vit_h_4b8939.pth',
model_dir=path_sam,
file_name='sam_vit_h_4b8939.pth'
)
return os.path.join(path_sam, 'sam_vit_h_4b8939.pth')
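All of these downloaders rely on load_file_from_url (a torch-hub-style helper) being idempotent: the file is fetched once and the cached path is reused afterwards. A rough sketch of that behavior, assuming the semantics rather than quoting the actual helper:

import os
import urllib.request

def load_file_from_url_sketch(url, model_dir, file_name):
    # Download only if the target is missing; otherwise return the cached path.
    os.makedirs(model_dir, exist_ok=True)
    cached = os.path.join(model_dir, file_name)
    if not os.path.exists(cached):
        urllib.request.urlretrieve(url, cached)
    return cached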
@@ -1,8 +1,3 @@
-from modules.patch import patch_all
-patch_all()
import os
import einops
import torch
@@ -16,7 +11,6 @@ import ldm_patched.modules.controlnet
import modules.sample_hijack
import ldm_patched.modules.samplers
import ldm_patched.modules.latent_formats
-import modules.advanced_parameters
from ldm_patched.modules.sd import load_checkpoint_guess_config
from ldm_patched.contrib.external import VAEDecode, EmptyLatentImage, VAEEncode, VAEEncodeTiled, VAEDecodeTiled, \
@@ -24,10 +18,10 @@ from ldm_patched.contrib.external import VAEDecode, EmptyLatentImage, VAEEncode,
from ldm_patched.contrib.external_freelunch import FreeU_V2
from ldm_patched.modules.sample import prepare_mask
from modules.lora import match_lora
+from modules.util import get_file_from_folder_list
from ldm_patched.modules.lora import model_lora_keys_unet, model_lora_keys_clip
from modules.config import path_embeddings
-from ldm_patched.contrib.external_model_advanced import ModelSamplingDiscrete
+from ldm_patched.contrib.external_model_advanced import ModelSamplingDiscrete, ModelSamplingContinuousEDM
opEmptyLatentImage = EmptyLatentImage()
opVAEDecode = VAEDecode()
@@ -37,15 +31,17 @@ opVAEEncodeTiled = VAEEncodeTiled()
opControlNetApplyAdvanced = ControlNetApplyAdvanced()
opFreeU = FreeU_V2()
opModelSamplingDiscrete = ModelSamplingDiscrete()
+opModelSamplingContinuousEDM = ModelSamplingContinuousEDM()

class StableDiffusionModel:
-    def __init__(self, unet=None, vae=None, clip=None, clip_vision=None, filename=None):
+    def __init__(self, unet=None, vae=None, clip=None, clip_vision=None, filename=None, vae_filename=None):
        self.unet = unet
        self.vae = vae
        self.clip = clip
        self.clip_vision = clip_vision
        self.filename = filename
+        self.vae_filename = vae_filename
        self.unet_with_lora = unet
        self.clip_with_lora = clip
        self.visited_loras = ''
@@ -78,14 +74,14 @@ class StableDiffusionModel:
        loras_to_load = []

-        for name, weight in loras:
-            if name == 'None':
+        for filename, weight in loras:
+            if filename == 'None':
                continue

-            if os.path.exists(name):
-                lora_filename = name
+            if os.path.exists(filename):
+                lora_filename = filename
            else:
-                lora_filename = os.path.join(modules.config.path_loras, name)
+                lora_filename = get_file_from_folder_list(filename, modules.config.paths_loras)

            if not os.path.exists(lora_filename):
                print(f'Lora file not found: {lora_filename}')
@@ -147,9 +143,10 @@ def apply_controlnet(positive, negative, control_net, image, strength, start_per
@torch.no_grad()
@torch.inference_mode()
-def load_model(ckpt_filename):
-    unet, clip, vae, clip_vision = load_checkpoint_guess_config(ckpt_filename, embedding_directory=path_embeddings)
-    return StableDiffusionModel(unet=unet, clip=clip, vae=vae, clip_vision=clip_vision, filename=ckpt_filename)
+def load_model(ckpt_filename, vae_filename=None):
+    unet, clip, vae, vae_filename, clip_vision = load_checkpoint_guess_config(ckpt_filename, embedding_directory=path_embeddings,
+                                                                              vae_filename_param=vae_filename)
+    return StableDiffusionModel(unet=unet, clip=clip, vae=vae, clip_vision=clip_vision, filename=ckpt_filename, vae_filename=vae_filename)
@torch.no_grad()
@@ -234,7 +231,7 @@ def get_previewer(model):
    if vae_approx_filename in VAE_approx_models:
        VAE_approx_model = VAE_approx_models[vae_approx_filename]
    else:
-        sd = torch.load(vae_approx_filename, map_location='cpu')
+        sd = torch.load(vae_approx_filename, map_location='cpu', weights_only=True)
        VAE_approx_model = VAEApprox()
        VAE_approx_model.load_state_dict(sd)
        del sd
@@ -268,7 +265,7 @@ def get_previewer(model):
def ksampler(model, positive, negative, latent, seed=None, steps=30, cfg=7.0, sampler_name='dpmpp_2m_sde_gpu',
             scheduler='karras', denoise=1.0, disable_noise=False, start_step=None, last_step=None,
             force_full_denoise=False, callback_function=None, refiner=None, refiner_switch=-1,
-             previewer_start=None, previewer_end=None, sigmas=None, noise_mean=None):
+             previewer_start=None, previewer_end=None, sigmas=None, noise_mean=None, disable_preview=False):
    if sigmas is not None:
        sigmas = sigmas.clone().to(ldm_patched.modules.model_management.get_torch_device())
@@ -299,7 +296,7 @@ def ksampler(model, positive, negative, latent, seed=None, steps=30, cfg=7.0, sa
    def callback(step, x0, x, total_steps):
        ldm_patched.modules.model_management.throw_exception_if_processing_interrupted()
        y = None
-        if previewer is not None and not modules.advanced_parameters.disable_preview:
+        if previewer is not None and not disable_preview:
            y = previewer(x0, previewer_start + step, previewer_end)
        if callback_function is not None:
            callback_function(previewer_start + step, x0, x, previewer_end, y)
@@ -3,6 +3,7 @@ import os
import torch
import modules.patch
import modules.config
+import modules.flags
import ldm_patched.modules.model_management
import ldm_patched.modules.latent_formats
import modules.inpaint_worker
@@ -11,6 +12,7 @@ from extras.expansion import FooocusExpansion
from ldm_patched.modules.model_base import SDXL, SDXLRefiner
from modules.sample_hijack import clip_separate
+from modules.util import get_file_from_folder_list, get_enabled_loras

model_base = core.StableDiffusionModel()
@@ -57,17 +59,21 @@ def assert_model_integrity():
@torch.no_grad()
@torch.inference_mode()
-def refresh_base_model(name):
+def refresh_base_model(name, vae_name=None):
    global model_base

-    filename = os.path.abspath(os.path.realpath(os.path.join(modules.config.path_checkpoints, name)))
+    filename = get_file_from_folder_list(name, modules.config.paths_checkpoints)

-    if model_base.filename == filename:
+    vae_filename = None
+    if vae_name is not None and vae_name != modules.flags.default_vae:
+        vae_filename = get_file_from_folder_list(vae_name, modules.config.path_vae)
+
+    if model_base.filename == filename and model_base.vae_filename == vae_filename:
        return

-    model_base = core.StableDiffusionModel()
-    model_base = core.load_model(filename)
+    model_base = core.load_model(filename, vae_filename)
    print(f'Base model loaded: {model_base.filename}')
+    print(f'VAE loaded: {model_base.vae_filename}')
    return
@@ -76,7 +82,7 @@ def refresh_base_model(name):
def refresh_refiner_model(name):
    global model_refiner

-    filename = os.path.abspath(os.path.realpath(os.path.join(modules.config.path_checkpoints, name)))
+    filename = get_file_from_folder_list(name, modules.config.paths_checkpoints)

    if model_refiner.filename == filename:
        return
@@ -195,6 +201,17 @@ def clip_encode(texts, pool_top_k=1):
    return [[torch.cat(cond_list, dim=1), {"pooled_output": pooled_acc}]]
@torch.no_grad()
@torch.inference_mode()
def set_clip_skip(clip_skip: int):
global final_clip
if final_clip is None:
return
final_clip.clip_layer(-abs(clip_skip))
return
@torch.no_grad()
@torch.inference_mode()
def clear_all_caches():
@@ -215,7 +232,7 @@ def prepare_text_encoder(async_call=True):
@torch.no_grad()
@torch.inference_mode()
def refresh_everything(refiner_model_name, base_model_name, loras,
-                       base_model_additional_loras=None, use_synthetic_refiner=False):
+                       base_model_additional_loras=None, use_synthetic_refiner=False, vae_name=None):
    global final_unet, final_clip, final_vae, final_refiner_unet, final_refiner_vae, final_expansion

    final_unet = None
if use_synthetic_refiner and refiner_model_name == 'None': if use_synthetic_refiner and refiner_model_name == 'None':
print('Synthetic Refiner Activated') print('Synthetic Refiner Activated')
refresh_base_model(base_model_name) refresh_base_model(base_model_name, vae_name)
synthesize_refiner_model() synthesize_refiner_model()
else: else:
refresh_refiner_model(refiner_model_name) refresh_refiner_model(refiner_model_name)
refresh_base_model(base_model_name) refresh_base_model(base_model_name, vae_name)
refresh_loras(loras, base_model_additional_loras=base_model_additional_loras) refresh_loras(loras, base_model_additional_loras=base_model_additional_loras)
assert_model_integrity() assert_model_integrity()
@@ -253,7 +270,8 @@ def refresh_everything(refiner_model_name, base_model_name, loras,
refresh_everything(
    refiner_model_name=modules.config.default_refiner_model_name,
    base_model_name=modules.config.default_base_model_name,
-    loras=modules.config.default_loras
+    loras=get_enabled_loras(modules.config.default_loras),
+    vae_name=modules.config.default_vae,
)
@@ -315,7 +333,7 @@ def get_candidate_vae(steps, switch, denoise=1.0, refiner_swap_method='joint'):
@torch.no_grad()
@torch.inference_mode()
-def process_diffusion(positive_cond, negative_cond, steps, switch, width, height, image_seed, callback, sampler_name, scheduler_name, latent=None, denoise=1.0, tiled=False, cfg_scale=7.0, refiner_swap_method='joint'):
+def process_diffusion(positive_cond, negative_cond, steps, switch, width, height, image_seed, callback, sampler_name, scheduler_name, latent=None, denoise=1.0, tiled=False, cfg_scale=7.0, refiner_swap_method='joint', disable_preview=False):
    target_unet, target_vae, target_refiner_unet, target_refiner_vae, target_clip \
        = final_unet, final_vae, final_refiner_unet, final_refiner_vae, final_clip
@@ -374,6 +392,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
            refiner_switch=switch,
            previewer_start=0,
            previewer_end=steps,
+            disable_preview=disable_preview
        )

    decoded_latent = core.decode_vae(vae=target_vae, latent_image=sampled_latent, tiled=tiled)
@@ -392,6 +411,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
            scheduler=scheduler_name,
            previewer_start=0,
            previewer_end=steps,
+            disable_preview=disable_preview
        )
        print('Refiner swapped by changing ksampler. Noise preserved.')
@@ -414,6 +434,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
            scheduler=scheduler_name,
            previewer_start=switch,
            previewer_end=steps,
+            disable_preview=disable_preview
        )

        target_model = target_refiner_vae
@@ -422,7 +443,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
        decoded_latent = core.decode_vae(vae=target_model, latent_image=sampled_latent, tiled=tiled)

    if refiner_swap_method == 'vae':
-        modules.patch.eps_record = 'vae'
+        modules.patch.patch_settings[os.getpid()].eps_record = 'vae'

        if modules.inpaint_worker.current_task is not None:
            modules.inpaint_worker.current_task.unswap()
@@ -440,7 +461,8 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
            sampler_name=sampler_name,
            scheduler=scheduler_name,
            previewer_start=0,
-            previewer_end=steps
+            previewer_end=steps,
+            disable_preview=disable_preview
        )
        print('Fooocus VAE-based swap.')
@@ -459,7 +481,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
            denoise=denoise)[switch:] * k_sigmas
        len_sigmas = len(sigmas) - 1

-        noise_mean = torch.mean(modules.patch.eps_record, dim=1, keepdim=True)
+        noise_mean = torch.mean(modules.patch.patch_settings[os.getpid()].eps_record, dim=1, keepdim=True)

        if modules.inpaint_worker.current_task is not None:
            modules.inpaint_worker.current_task.swap()
@@ -479,7 +501,8 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
            previewer_start=switch,
            previewer_end=steps,
            sigmas=sigmas,
-            noise_mean=noise_mean
+            noise_mean=noise_mean,
+            disable_preview=disable_preview
        )

        target_model = target_refiner_vae
@@ -488,5 +511,5 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
    decoded_latent = core.decode_vae(vae=target_model, latent_image=sampled_latent, tiled=tiled)

    images = core.pytorch_to_numpy(decoded_latent)
-    modules.patch.eps_record = None
+    modules.patch.patch_settings[os.getpid()].eps_record = None
    return images
modules/extra_utils.py Normal file
@@ -0,0 +1,41 @@
import os
from ast import literal_eval
def makedirs_with_log(path):
try:
os.makedirs(path, exist_ok=True)
except OSError as error:
print(f'Directory {path} could not be created, reason: {error}')
def get_files_from_folder(folder_path, extensions=None, name_filter=None):
if not os.path.isdir(folder_path):
raise ValueError("Folder path is not a valid directory.")
filenames = []
for root, _, files in os.walk(folder_path, topdown=False):
relative_path = os.path.relpath(root, folder_path)
if relative_path == ".":
relative_path = ""
for filename in sorted(files, key=lambda s: s.casefold()):
_, file_extension = os.path.splitext(filename)
if (extensions is None or file_extension.lower() in extensions) and (name_filter is None or name_filter in _):
path = os.path.join(relative_path, filename)
filenames.append(path)
return filenames
def try_eval_env_var(value: str, expected_type=None):
try:
value_eval = value
if expected_type is bool:
value_eval = value.title()
value_eval = literal_eval(value_eval)
if expected_type is not None and not isinstance(value_eval, expected_type):
return value
return value_eval
except:
return value
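Usage sketch for try_eval_env_var: environment variables always arrive as strings, and the helper coerces them back to the expected type, returning the original string whenever the parsed value has the wrong type.

print(try_eval_env_var('true', bool))    # True ('true'.title() -> 'True' -> literal_eval)
print(try_eval_env_var('8', int))        # 8
print(try_eval_env_var('8', str))        # '8' (parsed int rejected, original kept)
print(try_eval_env_var('[1, 2]', list))  # [1, 2]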
@@ -1,3 +1,5 @@
+from enum import IntEnum, Enum
+
disabled = 'Disabled'
enabled = 'Enabled'
subtle_variation = 'Vary (Subtle)'
@@ -6,20 +8,68 @@ upscale_15 = 'Upscale (1.5x)'
upscale_2 = 'Upscale (2x)'
upscale_fast = 'Upscale (Fast 2x)'
-uov_list = [
-    disabled, subtle_variation, strong_variation, upscale_15, upscale_2, upscale_fast
-]
+uov_list = [disabled, subtle_variation, strong_variation, upscale_15, upscale_2, upscale_fast]

-KSAMPLER_NAMES = ["euler", "euler_ancestral", "heun", "heunpp2","dpm_2", "dpm_2_ancestral",
-                  "lms", "dpm_fast", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_sde", "dpmpp_sde_gpu",
-                  "dpmpp_2m", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddpm", "lcm"]
-SCHEDULER_NAMES = ["normal", "karras", "exponential", "sgm_uniform", "simple", "ddim_uniform", "lcm", "turbo"]
-SAMPLER_NAMES = KSAMPLER_NAMES + ["ddim", "uni_pc", "uni_pc_bh2"]
+enhancement_uov_before = "Before First Enhancement"
+enhancement_uov_after = "After Last Enhancement"
+enhancement_uov_processing_order = [enhancement_uov_before, enhancement_uov_after]
+
+enhancement_uov_prompt_type_original = 'Original Prompts'
+enhancement_uov_prompt_type_last_filled = 'Last Filled Enhancement Prompts'
+enhancement_uov_prompt_types = [enhancement_uov_prompt_type_original, enhancement_uov_prompt_type_last_filled]
+
+CIVITAI_NO_KARRAS = ["euler", "euler_ancestral", "heun", "dpm_fast", "dpm_adaptive", "ddim", "uni_pc"]
# fooocus: a1111 (Civitai)
KSAMPLER = {
"euler": "Euler",
"euler_ancestral": "Euler a",
"heun": "Heun",
"heunpp2": "",
"dpm_2": "DPM2",
"dpm_2_ancestral": "DPM2 a",
"lms": "LMS",
"dpm_fast": "DPM fast",
"dpm_adaptive": "DPM adaptive",
"dpmpp_2s_ancestral": "DPM++ 2S a",
"dpmpp_sde": "DPM++ SDE",
"dpmpp_sde_gpu": "DPM++ SDE",
"dpmpp_2m": "DPM++ 2M",
"dpmpp_2m_sde": "DPM++ 2M SDE",
"dpmpp_2m_sde_gpu": "DPM++ 2M SDE",
"dpmpp_3m_sde": "",
"dpmpp_3m_sde_gpu": "",
"ddpm": "",
"lcm": "LCM",
"tcd": "TCD",
"restart": "Restart"
}
SAMPLER_EXTRA = {
"ddim": "DDIM",
"uni_pc": "UniPC",
"uni_pc_bh2": ""
}
SAMPLERS = KSAMPLER | SAMPLER_EXTRA
KSAMPLER_NAMES = list(KSAMPLER.keys())
SCHEDULER_NAMES = ["normal", "karras", "exponential", "sgm_uniform", "simple", "ddim_uniform", "lcm", "turbo", "align_your_steps", "tcd", "edm_playground_v2.5"]
SAMPLER_NAMES = KSAMPLER_NAMES + list(SAMPLER_EXTRA.keys())
sampler_list = SAMPLER_NAMES
scheduler_list = SCHEDULER_NAMES
clip_skip_max = 12
default_vae = 'Default (model)'
refiner_swap_method = 'joint'
default_input_image_tab = 'uov_tab'
input_image_tab_ids = ['uov_tab', 'ip_tab', 'inpaint_tab', 'describe_tab', 'enhance_tab', 'metadata_tab']
cn_ip = "ImagePrompt" cn_ip = "ImagePrompt"
cn_ip_face = "FaceSwap" cn_ip_face = "FaceSwap"
cn_canny = "PyraCanny" cn_canny = "PyraCanny"
@ -32,13 +82,110 @@ default_parameters = {
cn_ip: (0.5, 0.6), cn_ip_face: (0.9, 0.75), cn_canny: (0.5, 1.0), cn_cpds: (0.5, 1.0) cn_ip: (0.5, 0.6), cn_ip_face: (0.9, 0.75), cn_canny: (0.5, 1.0), cn_cpds: (0.5, 1.0)
} # stop, weight } # stop, weight
inpaint_engine_versions = ['None', 'v1', 'v2.5', 'v2.6'] output_formats = ['png', 'jpeg', 'webp']
performance_selections = ['Speed', 'Quality', 'Extreme Speed']
inpaint_mask_models = ['u2net', 'u2netp', 'u2net_human_seg', 'u2net_cloth_seg', 'silueta', 'isnet-general-use', 'isnet-anime', 'sam']
inpaint_mask_cloth_category = ['full', 'upper', 'lower']
inpaint_mask_sam_model = ['vit_b', 'vit_l', 'vit_h']
inpaint_engine_versions = ['None', 'v1', 'v2.5', 'v2.6']
inpaint_option_default = 'Inpaint or Outpaint (default)' inpaint_option_default = 'Inpaint or Outpaint (default)'
inpaint_option_detail = 'Improve Detail (face, hand, eyes, etc.)' inpaint_option_detail = 'Improve Detail (face, hand, eyes, etc.)'
inpaint_option_modify = 'Modify Content (add objects, change background, etc.)' inpaint_option_modify = 'Modify Content (add objects, change background, etc.)'
inpaint_options = [inpaint_option_default, inpaint_option_detail, inpaint_option_modify] inpaint_options = [inpaint_option_default, inpaint_option_detail, inpaint_option_modify]
desc_type_photo = 'Photograph' describe_type_photo = 'Photograph'
desc_type_anime = 'Art/Anime' describe_type_anime = 'Art/Anime'
describe_types = [describe_type_photo, describe_type_anime]
sdxl_aspect_ratios = [
'704*1408', '704*1344', '768*1344', '768*1280', '832*1216', '832*1152',
'896*1152', '896*1088', '960*1088', '960*1024', '1024*1024', '1024*960',
'1088*960', '1088*896', '1152*896', '1152*832', '1216*832', '1280*768',
'1344*768', '1344*704', '1408*704', '1472*704', '1536*640', '1600*640',
'1664*576', '1728*576'
]
class MetadataScheme(Enum):
FOOOCUS = 'fooocus'
A1111 = 'a1111'
metadata_scheme = [
(f'{MetadataScheme.FOOOCUS.value} (json)', MetadataScheme.FOOOCUS.value),
(f'{MetadataScheme.A1111.value} (plain text)', MetadataScheme.A1111.value),
]
class OutputFormat(Enum):
PNG = 'png'
JPEG = 'jpeg'
WEBP = 'webp'
@classmethod
def list(cls) -> list:
return list(map(lambda c: c.value, cls))
class PerformanceLoRA(Enum):
QUALITY = None
SPEED = None
EXTREME_SPEED = 'sdxl_lcm_lora.safetensors'
LIGHTNING = 'sdxl_lightning_4step_lora.safetensors'
HYPER_SD = 'sdxl_hyper_sd_4step_lora.safetensors'
class Steps(IntEnum):
QUALITY = 60
SPEED = 30
EXTREME_SPEED = 8
LIGHTNING = 4
HYPER_SD = 4
@classmethod
def keys(cls) -> list:
return list(map(lambda c: c, Steps.__members__))
class StepsUOV(IntEnum):
QUALITY = 36
SPEED = 18
EXTREME_SPEED = 8
LIGHTNING = 4
HYPER_SD = 4
class Performance(Enum):
QUALITY = 'Quality'
SPEED = 'Speed'
EXTREME_SPEED = 'Extreme Speed'
LIGHTNING = 'Lightning'
HYPER_SD = 'Hyper-SD'
@classmethod
def list(cls) -> list:
return list(map(lambda c: (c.name, c.value), cls))
@classmethod
def values(cls) -> list:
return list(map(lambda c: c.value, cls))
@classmethod
def by_steps(cls, steps: int | str):
return cls[Steps(int(steps)).name]
@classmethod
def has_restricted_features(cls, x) -> bool:
if isinstance(x, Performance):
x = x.value
return x in [cls.EXTREME_SPEED.value, cls.LIGHTNING.value, cls.HYPER_SD.value]
def steps(self) -> int | None:
return Steps[self.name].value if self.name in Steps.__members__ else None
def steps_uov(self) -> int | None:
return StepsUOV[self.name].value if self.name in StepsUOV.__members__ else None
def lora_filename(self) -> str | None:
return PerformanceLoRA[self.name].value if self.name in PerformanceLoRA.__members__ else None
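The three enums are keyed by the same member names, which is what lets a Performance value look up its step counts and LoRA file. A usage sketch, assuming modules.flags is importable:

p = Performance.by_steps(8)                 # Steps(8) is EXTREME_SPEED
print(p.value)                              # 'Extreme Speed'
print(p.steps(), p.steps_uov())             # 8 8
print(p.lora_filename())                    # 'sdxl_lcm_lora.safetensors'
print(Performance.has_restricted_features('Lightning'))  # True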
@@ -17,7 +17,7 @@ from gradio_client.documentation import document, set_documentation_group
from gradio_client.serializing import ImgSerializable
from PIL import Image as _Image  # using _ to minimize namespace pollution
-from gradio import processing_utils, utils
+from gradio import processing_utils, utils, Error
from gradio.components.base import IOComponent, _Keywords, Block
from gradio.deprecation import warn_style_method_deprecation
from gradio.events import (
@@ -275,7 +275,10 @@ class Image(
            x, mask = x["image"], x["mask"]

        assert isinstance(x, str)
-        im = processing_utils.decode_base64_to_image(x)
+        try:
+            im = processing_utils.decode_base64_to_image(x)
+        except PIL.UnidentifiedImageError:
+            raise Error("Unsupported image type in input")

        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            im = im.convert(self.image_mode)
modules/hash_cache.py Normal file
@@ -0,0 +1,83 @@
import json
import os
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import cpu_count
import args_manager
from modules.util import sha256, HASH_SHA256_LENGTH, get_file_from_folder_list
hash_cache_filename = 'hash_cache.txt'
hash_cache = {}
def sha256_from_cache(filepath):
global hash_cache
if filepath not in hash_cache:
print(f"[Cache] Calculating sha256 for {filepath}")
hash_value = sha256(filepath)
print(f"[Cache] sha256 for {filepath}: {hash_value}")
hash_cache[filepath] = hash_value
save_cache_to_file(filepath, hash_value)
return hash_cache[filepath]
def load_cache_from_file():
global hash_cache
try:
if os.path.exists(hash_cache_filename):
with open(hash_cache_filename, 'rt', encoding='utf-8') as fp:
for line in fp:
entry = json.loads(line)
for filepath, hash_value in entry.items():
if not os.path.exists(filepath) or not isinstance(hash_value, str) and len(hash_value) != HASH_SHA256_LENGTH:
print(f'[Cache] Skipping invalid cache entry: {filepath}')
continue
hash_cache[filepath] = hash_value
except Exception as e:
print(f'[Cache] Loading failed: {e}')
def save_cache_to_file(filename=None, hash_value=None):
global hash_cache
if filename is not None and hash_value is not None:
items = [(filename, hash_value)]
mode = 'at'
else:
items = sorted(hash_cache.items())
mode = 'wt'
try:
with open(hash_cache_filename, mode, encoding='utf-8') as fp:
for filepath, hash_value in items:
json.dump({filepath: hash_value}, fp)
fp.write('\n')
except Exception as e:
print(f'[Cache] Saving failed: {e}')
def init_cache(model_filenames, paths_checkpoints, lora_filenames, paths_loras):
load_cache_from_file()
if args_manager.args.rebuild_hash_cache:
max_workers = args_manager.args.rebuild_hash_cache if args_manager.args.rebuild_hash_cache > 0 else cpu_count()
rebuild_cache(lora_filenames, model_filenames, paths_checkpoints, paths_loras, max_workers)
# write cache to file again for sorting and cleanup of invalid cache entries
save_cache_to_file()
def rebuild_cache(lora_filenames, model_filenames, paths_checkpoints, paths_loras, max_workers=cpu_count()):
def thread(filename, paths):
filepath = get_file_from_folder_list(filename, paths)
sha256_from_cache(filepath)
print('[Cache] Rebuilding hash cache')
with ThreadPoolExecutor(max_workers=max_workers) as executor:
for model_filename in model_filenames:
executor.submit(thread, model_filename, paths_checkpoints)
for lora_filename in lora_filenames:
executor.submit(thread, lora_filename, paths_loras)
print('[Cache] Done')
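sha256 here is imported from modules.util, and the cache file is JSON-lines (one {filepath: hash} object per line), which is why a single entry can be appended with mode 'at' without rewriting the whole file. A plausible chunked implementation of such a file hasher, as a sketch rather than the module's actual code:

import hashlib

def sha256_sketch(filepath, chunk_size=1 << 20):
    # Hash in 1 MiB chunks so multi-GB checkpoints never load into memory at once.
    digest = hashlib.sha256()
    with open(filepath, 'rb') as fp:
        for block in iter(lambda: fp.read(chunk_size), b''):
            digest.update(block)
    return digest.hexdigest()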
@@ -1,118 +1,3 @@
css = '''
.loader-container {
display: flex; /* Use flex to align items horizontally */
align-items: center; /* Center items vertically within the container */
white-space: nowrap; /* Prevent line breaks within the container */
}
.loader {
border: 8px solid #f3f3f3; /* Light grey */
border-top: 8px solid #3498db; /* Blue */
border-radius: 50%;
width: 30px;
height: 30px;
animation: spin 2s linear infinite;
}
@keyframes spin {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
/* Style the progress bar */
progress {
appearance: none; /* Remove default styling */
height: 20px; /* Set the height of the progress bar */
border-radius: 5px; /* Round the corners of the progress bar */
background-color: #f3f3f3; /* Light grey background */
width: 100%;
}
/* Style the progress bar container */
.progress-container {
margin-left: 20px;
margin-right: 20px;
flex-grow: 1; /* Allow the progress container to take up remaining space */
}
/* Set the color of the progress bar fill */
progress::-webkit-progress-value {
background-color: #3498db; /* Blue color for the fill */
}
progress::-moz-progress-bar {
background-color: #3498db; /* Blue color for the fill in Firefox */
}
/* Style the text on the progress bar */
progress::after {
content: attr(value '%'); /* Display the progress value followed by '%' */
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
color: white; /* Set text color */
font-size: 14px; /* Set font size */
}
/* Style other texts */
.loader-container > span {
margin-left: 5px; /* Add spacing between the progress bar and the text */
}
.progress-bar > .generating {
display: none !important;
}
.progress-bar{
height: 30px !important;
}
.type_row{
height: 80px !important;
}
.type_row_half{
height: 32px !important;
}
.scroll-hide{
resize: none !important;
}
.refresh_button{
border: none !important;
background: none !important;
font-size: none !important;
box-shadow: none !important;
}
.advanced_check_row{
width: 250px !important;
}
.min_check{
min-width: min(1px, 100%) !important;
}
.resizable_area {
resize: vertical;
overflow: auto !important;
}
.aspect_ratios label {
width: 140px !important;
}
.aspect_ratios label span {
white-space: nowrap !important;
}
.aspect_ratios label input {
margin-left: -5px !important;
}
'''
progress_html = '''
<div class="loader-container">
  <div class="loader"></div>
@@ -196,7 +196,7 @@ class InpaintWorker:
        if inpaint_head_model is None:
            inpaint_head_model = InpaintHead()
-            sd = torch.load(inpaint_head_model_path, map_location='cpu')
+            sd = torch.load(inpaint_head_model_path, map_location='cpu', weights_only=True)
            inpaint_head_model.load_state_dict(sd)

        feed = torch.cat([
@@ -1,6 +1,7 @@
import os
import importlib
import importlib.util
+import shutil
import subprocess
import sys
import re
@@ -9,13 +10,10 @@ import importlib.metadata
import packaging.version
from packaging.requirements import Requirement

logging.getLogger("torch.distributed.nn").setLevel(logging.ERROR)  # sshh...
logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())

-re_requirement = re.compile(r"\s*([-_a-zA-Z0-9]+)\s*(?:==\s*([-+_.a-zA-Z0-9]+))?\s*")
+re_requirement = re.compile(r"\s*([-\w]+)\s*(?:==\s*([-+.\w]+))?\s*")

python = sys.executable
default_command_live = (os.environ.get('LAUNCH_LIVE_OUTPUT') == "1")
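The tightened regex is behavior-preserving for ordinary requirement lines, since \w already covers the letters, digits, and underscore the old character classes spelled out:

import re

re_requirement = re.compile(r"\s*([-\w]+)\s*(?:==\s*([-+.\w]+))?\s*")

print(re_requirement.match('torch==2.1.0').groups())  # ('torch', '2.1.0')
print(re_requirement.match('packaging').groups())     # ('packaging', None)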
@@ -101,3 +99,19 @@ def requirements_met(requirements_file):
    return True
def delete_folder_content(folder, prefix=None):
result = True
for filename in os.listdir(folder):
file_path = os.path.join(folder, filename)
try:
if os.path.isfile(file_path) or os.path.islink(file_path):
os.unlink(file_path)
elif os.path.isdir(file_path):
shutil.rmtree(file_path)
except Exception as e:
print(f'{prefix}Failed to delete {file_path}. Reason: {e}')
result = False
return result
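A hypothetical call (the path is invented): empty the folder while keeping the folder itself, and report whether everything could be removed.

ok = delete_folder_content('/tmp/fooocus_outputs', prefix='[Cleanup] ')
print('all removed' if ok else 'some entries could not be removed')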
@@ -1,78 +1,199 @@
import json
+import re
+from abc import ABC, abstractmethod
+from pathlib import Path

import gradio as gr
+from PIL import Image
+
+import fooocus_version
import modules.config
+import modules.sdxl_styles
+from modules.flags import MetadataScheme, Performance, Steps
+from modules.flags import SAMPLERS, CIVITAI_NO_KARRAS
+from modules.hash_cache import sha256_from_cache
+from modules.util import quote, unquote, extract_styles_from_prompt, is_json, get_file_from_folder_list
+
+re_param_code = r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)'
+re_param = re.compile(re_param_code)
+re_imagesize = re.compile(r"^(\d+)x(\d+)$")

-def load_parameter_button_click(raw_prompt_txt, is_generating):
-    loaded_parameter_dict = json.loads(raw_prompt_txt)
+def load_parameter_button_click(raw_metadata: dict | str, is_generating: bool, inpaint_mode: str):
+    loaded_parameter_dict = raw_metadata
+    if isinstance(raw_metadata, str):
+        loaded_parameter_dict = json.loads(raw_metadata)
    assert isinstance(loaded_parameter_dict, dict)

-    results = [True, 1]
+    results = [len(loaded_parameter_dict) > 0]
get_image_number('image_number', 'Image Number', loaded_parameter_dict, results)
get_str('prompt', 'Prompt', loaded_parameter_dict, results)
get_str('negative_prompt', 'Negative Prompt', loaded_parameter_dict, results)
get_list('styles', 'Styles', loaded_parameter_dict, results)
performance = get_str('performance', 'Performance', loaded_parameter_dict, results)
get_steps('steps', 'Steps', loaded_parameter_dict, results)
get_number('overwrite_switch', 'Overwrite Switch', loaded_parameter_dict, results)
get_resolution('resolution', 'Resolution', loaded_parameter_dict, results)
get_number('guidance_scale', 'Guidance Scale', loaded_parameter_dict, results)
get_number('sharpness', 'Sharpness', loaded_parameter_dict, results)
get_adm_guidance('adm_guidance', 'ADM Guidance', loaded_parameter_dict, results)
get_str('refiner_swap_method', 'Refiner Swap Method', loaded_parameter_dict, results)
get_number('adaptive_cfg', 'CFG Mimicking from TSNR', loaded_parameter_dict, results)
get_number('clip_skip', 'CLIP Skip', loaded_parameter_dict, results, cast_type=int)
get_str('base_model', 'Base Model', loaded_parameter_dict, results)
get_str('refiner_model', 'Refiner Model', loaded_parameter_dict, results)
get_number('refiner_switch', 'Refiner Switch', loaded_parameter_dict, results)
get_str('sampler', 'Sampler', loaded_parameter_dict, results)
get_str('scheduler', 'Scheduler', loaded_parameter_dict, results)
get_str('vae', 'VAE', loaded_parameter_dict, results)
get_seed('seed', 'Seed', loaded_parameter_dict, results)
get_inpaint_engine_version('inpaint_engine_version', 'Inpaint Engine Version', loaded_parameter_dict, results, inpaint_mode)
get_inpaint_method('inpaint_method', 'Inpaint Mode', loaded_parameter_dict, results)
if is_generating:
results.append(gr.update())
else:
results.append(gr.update(visible=True))
results.append(gr.update(visible=False))
get_freeu('freeu', 'FreeU', loaded_parameter_dict, results)
# prevent performance LoRAs to be added twice, by performance and by lora
performance_filename = None
if performance is not None and performance in Performance.values():
performance = Performance(performance)
performance_filename = performance.lora_filename()
for i in range(modules.config.default_max_lora_number):
get_lora(f'lora_combined_{i + 1}', f'LoRA {i + 1}', loaded_parameter_dict, results, performance_filename)
return results
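Every get_* helper appends a fixed number of entries to results, using gr.update() as a "leave the control unchanged" marker, so the list always lines up 1:1 with the Gradio outputs bound to this click handler. A self-contained sketch of that protocol, with a stand-in for gr.update():

NO_CHANGE = object()  # stand-in for gr.update()

def get_str_sketch(key, fallback, source_dict, results, default=None):
    try:
        h = source_dict.get(key, source_dict.get(fallback, default))
        assert isinstance(h, str)
        results.append(h)
        return h
    except AssertionError:
        results.append(NO_CHANGE)
        return None

results = []
meta = {'prompt': 'a cat'}
get_str_sketch('prompt', 'Prompt', meta, results)  # appends 'a cat'
get_str_sketch('vae', 'VAE', meta, results)        # missing -> appends NO_CHANGE
print(results[0], results[1] is NO_CHANGE)         # a cat True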
+def get_str(key: str, fallback: str | None, source_dict: dict, results: list, default=None) -> str | None:
    try:
-        h = loaded_parameter_dict.get('Prompt', None)
+        h = source_dict.get(key, source_dict.get(fallback, default))
        assert isinstance(h, str)
        results.append(h)
+        return h
    except:
        results.append(gr.update())
+        return None

-    try:
-        h = loaded_parameter_dict.get('Negative Prompt', None)
-        assert isinstance(h, str)
-        results.append(h)
-    except:
-        results.append(gr.update())
+def get_list(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
-        h = loaded_parameter_dict.get('Styles', None)
+        h = source_dict.get(key, source_dict.get(fallback, default))
        h = eval(h)
        assert isinstance(h, list)
        results.append(h)
    except:
        results.append(gr.update())
+def get_number(key: str, fallback: str | None, source_dict: dict, results: list, default=None, cast_type=float):
    try:
-        h = loaded_parameter_dict.get('Performance', None)
-        assert isinstance(h, str)
+        h = source_dict.get(key, source_dict.get(fallback, default))
+        assert h is not None
+        h = cast_type(h)
        results.append(h)
    except:
        results.append(gr.update())
+def get_image_number(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
-        h = loaded_parameter_dict.get('Resolution', None)
+        h = source_dict.get(key, source_dict.get(fallback, default))
+        assert h is not None
+        h = int(h)
+        h = min(h, modules.config.default_max_image_number)
+        results.append(h)
+    except:
+        results.append(1)
def get_steps(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try:
h = source_dict.get(key, source_dict.get(fallback, default))
assert h is not None
h = int(h)
# if not in steps or in steps and performance is not the same
performance_name = source_dict.get('performance', '').replace(' ', '_').replace('-', '_').casefold()
performance_candidates = [key for key in Steps.keys() if key.casefold() == performance_name and Steps[key] == h]
if len(performance_candidates) == 0:
results.append(h)
return
results.append(-1)
except:
results.append(-1)
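Worked example of the sentinel logic in get_steps (assuming this module and modules.flags are importable): a step count that matches the named performance preset collapses to -1, meaning "use the performance default", while a custom count is kept.

results = []
get_steps('steps', 'Steps', {'steps': '8', 'performance': 'Extreme Speed'}, results)
print(results)  # [-1]  (8 is EXTREME_SPEED's native step count)

results = []
get_steps('steps', 'Steps', {'steps': '25', 'performance': 'Extreme Speed'}, results)
print(results)  # [25]  (custom override survives)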
+def get_resolution(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
+    try:
+        h = source_dict.get(key, source_dict.get(fallback, default))
        width, height = eval(h)
        formatted = modules.config.add_ratio(f'{width}*{height}')
-        if formatted in modules.config.available_aspect_ratios:
+        if formatted in modules.config.available_aspect_ratios_labels:
            results.append(formatted)
            results.append(-1)
            results.append(-1)
        else:
            results.append(gr.update())
-            results.append(width)
-            results.append(height)
+            results.append(int(width))
+            results.append(int(height))
    except:
        results.append(gr.update())
        results.append(gr.update())
        results.append(gr.update())
+def get_seed(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
-        h = loaded_parameter_dict.get('Sharpness', None)
+        h = source_dict.get(key, source_dict.get(fallback, default))
        assert h is not None
-        h = float(h)
+        h = int(h)
+        results.append(False)
        results.append(h)
    except:
        results.append(gr.update())
        results.append(gr.update())

-    try:
-        h = loaded_parameter_dict.get('Guidance Scale', None)
-        assert h is not None
-        h = float(h)
-        results.append(h)
-    except:
-        results.append(gr.update())
+def get_inpaint_engine_version(key: str, fallback: str | None, source_dict: dict, results: list, inpaint_mode: str, default=None) -> str | None:
    try:
-        h = loaded_parameter_dict.get('ADM Guidance', None)
+        h = source_dict.get(key, source_dict.get(fallback, default))
+        assert isinstance(h, str) and h in modules.flags.inpaint_engine_versions
+        if inpaint_mode != modules.flags.inpaint_option_detail:
+            results.append(h)
+        else:
+            results.append(gr.update())
+        results.append(h)
+        return h
    except:
        results.append(gr.update())
+        results.append('empty')
+        return None
def get_inpaint_method(key: str, fallback: str | None, source_dict: dict, results: list, default=None) -> str | None:
try:
h = source_dict.get(key, source_dict.get(fallback, default))
assert isinstance(h, str) and h in modules.flags.inpaint_options
results.append(h)
for i in range(modules.config.default_enhance_tabs):
results.append(h)
return h
except:
results.append(gr.update())
for i in range(modules.config.default_enhance_tabs):
results.append(gr.update())
+def get_adm_guidance(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
+        h = source_dict.get(key, source_dict.get(fallback, default))
        p, n, e = eval(h)
        results.append(float(p))
        results.append(float(n))
@@ -82,67 +203,448 @@ def load_parameter_button_click(raw_prompt_txt, is_generating):
        results.append(gr.update())
        results.append(gr.update())
-    try:
-        h = loaded_parameter_dict.get('Base Model', None)
-        assert isinstance(h, str)
-        results.append(h)
-    except:
-        results.append(gr.update())

+def get_freeu(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
-        h = loaded_parameter_dict.get('Refiner Model', None)
-        assert isinstance(h, str)
-        results.append(h)
+        h = source_dict.get(key, source_dict.get(fallback, default))
+        b1, b2, s1, s2 = eval(h)
+        results.append(True)
+        results.append(float(b1))
+        results.append(float(b2))
+        results.append(float(s1))
+        results.append(float(s2))
    except:
-        results.append(gr.update())
-
-    try:
-        h = loaded_parameter_dict.get('Refiner Switch', None)
-        assert h is not None
-        h = float(h)
-        results.append(h)
-    except:
-        results.append(gr.update())
-
-    try:
-        h = loaded_parameter_dict.get('Sampler', None)
-        assert isinstance(h, str)
-        results.append(h)
-    except:
-        results.append(gr.update())
-
-    try:
-        h = loaded_parameter_dict.get('Scheduler', None)
-        assert isinstance(h, str)
-        results.append(h)
-    except:
-        results.append(gr.update())
-
-    try:
-        h = loaded_parameter_dict.get('Seed', None)
-        assert h is not None
-        h = int(h)
-        results.append(False)
-        results.append(h)
-    except:
-        results.append(gr.update())
-        results.append(gr.update())
+        results.append(False)
+        results.append(gr.update())
+        results.append(gr.update())
+        results.append(gr.update())
+        results.append(gr.update())

-    if is_generating:
-        results.append(gr.update())
-    else:
-        results.append(gr.update(visible=True))
-
-    results.append(gr.update(visible=False))

+def get_lora(key: str, fallback: str | None, source_dict: dict, results: list, performance_filename: str | None):
-    for i in range(1, 6):
    try:
-        n, w = loaded_parameter_dict.get(f'LoRA {i}').split(' : ')
-        w = float(w)
-        results.append(n)
-        results.append(w)
+        split_data = source_dict.get(key, source_dict.get(fallback)).split(' : ')
+        enabled = True
+        name = split_data[0]
+        weight = split_data[1]
+
+        if len(split_data) == 3:
+            enabled = split_data[0] == 'True'
+            name = split_data[1]
+            weight = split_data[2]
+
+        if name == performance_filename:
+            raise Exception
+
+        weight = float(weight)
+        results.append(enabled)
+        results.append(name)
+        results.append(weight)
    except:
-        results.append(gr.update())
-        results.append(gr.update())
-
-    return results
+        results.append(True)
+        results.append('None')
+        results.append(1)
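The two serialized LoRA formats this accepts, shown on hypothetical values: the legacy two-part form defaults to enabled, while the three-part form carries an explicit flag.

print('my_lora.safetensors : 0.5'.split(' : '))
# ['my_lora.safetensors', '0.5']           -> enabled=True, name, weight
print('False : my_lora.safetensors : 0.5'.split(' : '))
# ['False', 'my_lora.safetensors', '0.5']  -> enabled flag parsed from the string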
def parse_meta_from_preset(preset_content):
assert isinstance(preset_content, dict)
preset_prepared = {}
items = preset_content
for settings_key, meta_key in modules.config.possible_preset_keys.items():
if settings_key == "default_loras":
loras = getattr(modules.config, settings_key)
if settings_key in items:
loras = items[settings_key]
for index, lora in enumerate(loras[:modules.config.default_max_lora_number]):
preset_prepared[f'lora_combined_{index + 1}'] = ' : '.join(map(str, lora))
elif settings_key == "default_aspect_ratio":
if settings_key in items and items[settings_key] is not None:
default_aspect_ratio = items[settings_key]
width, height = default_aspect_ratio.split('*')
else:
default_aspect_ratio = getattr(modules.config, settings_key)
width, height = default_aspect_ratio.split('×')
height = height[:height.index(" ")]
preset_prepared[meta_key] = (width, height)
else:
preset_prepared[meta_key] = items[settings_key] if settings_key in items and items[settings_key] is not None else getattr(modules.config, settings_key)
if settings_key == "default_styles" or settings_key == "default_aspect_ratio":
preset_prepared[meta_key] = str(preset_prepared[meta_key])
return preset_prepared
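The two aspect-ratio spellings handled above, on hypothetical values: preset files store '1152*896', while the formatted config value uses '×' and embeds extra label text that has to be trimmed back at the first space. The exact formatted string is whatever add_ratio produces and is only approximated here:

width, height = '1152*896'.split('*')      # preset-file spelling
formatted = '1152×896 (9:7)'               # approximate add_ratio-style spelling
w2, h2 = formatted.split('×')
h2 = h2[:h2.index(' ')]
print((width, height), (w2, h2))           # ('1152', '896') ('1152', '896')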
class MetadataParser(ABC):
def __init__(self):
self.raw_prompt: str = ''
self.full_prompt: str = ''
self.raw_negative_prompt: str = ''
self.full_negative_prompt: str = ''
self.steps: int = Steps.SPEED.value
self.base_model_name: str = ''
self.base_model_hash: str = ''
self.refiner_model_name: str = ''
self.refiner_model_hash: str = ''
self.loras: list = []
self.vae_name: str = ''
@abstractmethod
def get_scheme(self) -> MetadataScheme:
raise NotImplementedError
@abstractmethod
def to_json(self, metadata: dict | str) -> dict:
raise NotImplementedError
@abstractmethod
def to_string(self, metadata: dict) -> str:
raise NotImplementedError
def set_data(self, raw_prompt, full_prompt, raw_negative_prompt, full_negative_prompt, steps, base_model_name,
refiner_model_name, loras, vae_name):
self.raw_prompt = raw_prompt
self.full_prompt = full_prompt
self.raw_negative_prompt = raw_negative_prompt
self.full_negative_prompt = full_negative_prompt
self.steps = steps
self.base_model_name = Path(base_model_name).stem
base_model_path = get_file_from_folder_list(base_model_name, modules.config.paths_checkpoints)
self.base_model_hash = sha256_from_cache(base_model_path)
if refiner_model_name not in ['', 'None']:
self.refiner_model_name = Path(refiner_model_name).stem
refiner_model_path = get_file_from_folder_list(refiner_model_name, modules.config.paths_checkpoints)
self.refiner_model_hash = sha256_from_cache(refiner_model_path)
self.loras = []
for (lora_name, lora_weight) in loras:
if lora_name != 'None':
lora_path = get_file_from_folder_list(lora_name, modules.config.paths_loras)
lora_hash = sha256_from_cache(lora_path)
self.loras.append((Path(lora_name).stem, lora_weight, lora_hash))
self.vae_name = Path(vae_name).stem
class A1111MetadataParser(MetadataParser):
def get_scheme(self) -> MetadataScheme:
return MetadataScheme.A1111
fooocus_to_a1111 = {
'raw_prompt': 'Raw prompt',
'raw_negative_prompt': 'Raw negative prompt',
'negative_prompt': 'Negative prompt',
'styles': 'Styles',
'performance': 'Performance',
'steps': 'Steps',
'sampler': 'Sampler',
'scheduler': 'Scheduler',
'vae': 'VAE',
'guidance_scale': 'CFG scale',
'seed': 'Seed',
'resolution': 'Size',
'sharpness': 'Sharpness',
'adm_guidance': 'ADM Guidance',
'refiner_swap_method': 'Refiner Swap Method',
'adaptive_cfg': 'Adaptive CFG',
'clip_skip': 'Clip skip',
'overwrite_switch': 'Overwrite Switch',
'freeu': 'FreeU',
'base_model': 'Model',
'base_model_hash': 'Model hash',
'refiner_model': 'Refiner',
'refiner_model_hash': 'Refiner hash',
'lora_hashes': 'Lora hashes',
'lora_weights': 'Lora weights',
'created_by': 'User',
'version': 'Version'
}
def to_json(self, metadata: str) -> dict:
metadata_prompt = ''
metadata_negative_prompt = ''
done_with_prompt = False
*lines, lastline = metadata.strip().split("\n")
if len(re_param.findall(lastline)) < 3:
lines.append(lastline)
lastline = ''
for line in lines:
line = line.strip()
if line.startswith(f"{self.fooocus_to_a1111['negative_prompt']}:"):
done_with_prompt = True
line = line[len(f"{self.fooocus_to_a1111['negative_prompt']}:"):].strip()
if done_with_prompt:
metadata_negative_prompt += ('' if metadata_negative_prompt == '' else "\n") + line
else:
metadata_prompt += ('' if metadata_prompt == '' else "\n") + line
found_styles, prompt, negative_prompt = extract_styles_from_prompt(metadata_prompt, metadata_negative_prompt)
data = {
'prompt': prompt,
'negative_prompt': negative_prompt
}
for k, v in re_param.findall(lastline):
try:
if v != '' and v[0] == '"' and v[-1] == '"':
v = unquote(v)
m = re_imagesize.match(v)
if m is not None:
data['resolution'] = str((m.group(1), m.group(2)))
else:
data[list(self.fooocus_to_a1111.keys())[list(self.fooocus_to_a1111.values()).index(k)]] = v
except Exception:
print(f"Error parsing \"{k}: {v}\"")
# workaround for multiline prompts
if 'raw_prompt' in data:
data['prompt'] = data['raw_prompt']
raw_prompt = data['raw_prompt'].replace("\n", ', ')
if metadata_prompt != raw_prompt and modules.sdxl_styles.fooocus_expansion not in found_styles:
found_styles.append(modules.sdxl_styles.fooocus_expansion)
if 'raw_negative_prompt' in data:
data['negative_prompt'] = data['raw_negative_prompt']
data['styles'] = str(found_styles)
# try to load performance based on steps, fallback for direct A1111 imports
if 'steps' in data and 'performance' not in data:
try:
data['performance'] = Performance.by_steps(data['steps']).value
except (ValueError, KeyError):
pass
if 'sampler' in data:
data['sampler'] = data['sampler'].replace(' Karras', '')
# get key
for k, v in SAMPLERS.items():
if v == data['sampler']:
data['sampler'] = k
break
for key in ['base_model', 'refiner_model', 'vae']:
if key in data:
if key == 'vae':
self.add_extension_to_filename(data, modules.config.vae_filenames, 'vae')
else:
self.add_extension_to_filename(data, modules.config.model_filenames, key)
lora_data = ''
if 'lora_weights' in data and data['lora_weights'] != '':
lora_data = data['lora_weights']
elif 'lora_hashes' in data and data['lora_hashes'] != '' and data['lora_hashes'].split(', ')[0].count(':') == 2:
lora_data = data['lora_hashes']
if lora_data != '':
for li, lora in enumerate(lora_data.split(', ')):
lora_split = lora.split(': ')
lora_name = lora_split[0]
lora_weight = lora_split[2] if len(lora_split) == 3 else lora_split[1]
for filename in modules.config.lora_filenames:
path = Path(filename)
if lora_name == path.stem:
data[f'lora_combined_{li + 1}'] = f'{filename} : {lora_weight}'
break
return data
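re_param (defined at the top of the module) is what slices the final infotext line into key/value pairs; a quick check on a hypothetical line:

import re

re_param = re.compile(r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)')

line = 'Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 4.0, Size: 1152x896'
print(re_param.findall(line))
# [('Steps', '30'), ('Sampler', 'DPM++ 2M SDE Karras'), ('CFG scale', '4.0'), ('Size', '1152x896')]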
    def to_string(self, metadata: list) -> str:
        data = {k: v for _, k, v in metadata}

        width, height = eval(data['resolution'])

        sampler = data['sampler']
        scheduler = data['scheduler']
        if sampler in SAMPLERS and SAMPLERS[sampler] != '':
            sampler = SAMPLERS[sampler]
            if sampler not in CIVITAI_NO_KARRAS and scheduler == 'karras':
                sampler += ' Karras'

        generation_params = {
            self.fooocus_to_a1111['steps']: self.steps,
            self.fooocus_to_a1111['sampler']: sampler,
            self.fooocus_to_a1111['seed']: data['seed'],
            self.fooocus_to_a1111['resolution']: f'{width}x{height}',
            self.fooocus_to_a1111['guidance_scale']: data['guidance_scale'],
            self.fooocus_to_a1111['sharpness']: data['sharpness'],
            self.fooocus_to_a1111['adm_guidance']: data['adm_guidance'],
            self.fooocus_to_a1111['base_model']: Path(data['base_model']).stem,
            self.fooocus_to_a1111['base_model_hash']: self.base_model_hash,
            self.fooocus_to_a1111['performance']: data['performance'],
            self.fooocus_to_a1111['scheduler']: scheduler,
            self.fooocus_to_a1111['vae']: Path(data['vae']).stem,
            # workaround for multiline prompts
            self.fooocus_to_a1111['raw_prompt']: self.raw_prompt,
            self.fooocus_to_a1111['raw_negative_prompt']: self.raw_negative_prompt,
        }

        if self.refiner_model_name not in ['', 'None']:
            generation_params |= {
                self.fooocus_to_a1111['refiner_model']: self.refiner_model_name,
                self.fooocus_to_a1111['refiner_model_hash']: self.refiner_model_hash
            }

        for key in ['adaptive_cfg', 'clip_skip', 'overwrite_switch', 'refiner_swap_method', 'freeu']:
            if key in data:
                generation_params[self.fooocus_to_a1111[key]] = data[key]

        if len(self.loras) > 0:
            lora_hashes = []
            lora_weights = []
            for index, (lora_name, lora_weight, lora_hash) in enumerate(self.loras):
                # workaround for Fooocus not knowing the LoRA name in LoRA metadata
                lora_hashes.append(f'{lora_name}: {lora_hash}')
                lora_weights.append(f'{lora_name}: {lora_weight}')
            lora_hashes_string = ', '.join(lora_hashes)
            lora_weights_string = ', '.join(lora_weights)
            generation_params[self.fooocus_to_a1111['lora_hashes']] = lora_hashes_string
            generation_params[self.fooocus_to_a1111['lora_weights']] = lora_weights_string

        generation_params[self.fooocus_to_a1111['version']] = data['version']

        if modules.config.metadata_created_by != '':
            generation_params[self.fooocus_to_a1111['created_by']] = modules.config.metadata_created_by

        generation_params_text = ", ".join(
            [k if k == v else f'{k}: {quote(v)}' for k, v in generation_params.items() if
             v is not None])
        positive_prompt_resolved = ', '.join(self.full_prompt)
        negative_prompt_resolved = ', '.join(self.full_negative_prompt)
        negative_prompt_text = f"\nNegative prompt: {negative_prompt_resolved}" if negative_prompt_resolved else ""
        return f"{positive_prompt_resolved}{negative_prompt_text}\n{generation_params_text}".strip()

    @staticmethod
    def add_extension_to_filename(data, filenames, key):
        for filename in filenames:
            path = Path(filename)
            if data[key] == path.stem:
                data[key] = filename
                break
class FooocusMetadataParser(MetadataParser):
    def get_scheme(self) -> MetadataScheme:
        return MetadataScheme.FOOOCUS

    def to_json(self, metadata: dict) -> dict:
        for key, value in metadata.items():
            if value in ['', 'None']:
                continue
            if key in ['base_model', 'refiner_model']:
                metadata[key] = self.replace_value_with_filename(key, value, modules.config.model_filenames)
            elif key.startswith('lora_combined_'):
                metadata[key] = self.replace_value_with_filename(key, value, modules.config.lora_filenames)
            elif key == 'vae':
                metadata[key] = self.replace_value_with_filename(key, value, modules.config.vae_filenames)
            else:
                continue

        return metadata

    def to_string(self, metadata: list) -> str:
        for li, (label, key, value) in enumerate(metadata):
            # remove model folder paths from metadata
            if key.startswith('lora_combined_'):
                name, weight = value.split(' : ')
                name = Path(name).stem
                value = f'{name} : {weight}'
                metadata[li] = (label, key, value)

        res = {k: v for _, k, v in metadata}

        res['full_prompt'] = self.full_prompt
        res['full_negative_prompt'] = self.full_negative_prompt
        res['steps'] = self.steps
        res['base_model'] = self.base_model_name
        res['base_model_hash'] = self.base_model_hash

        if self.refiner_model_name not in ['', 'None']:
            res['refiner_model'] = self.refiner_model_name
            res['refiner_model_hash'] = self.refiner_model_hash

        res['vae'] = self.vae_name
        res['loras'] = self.loras

        if modules.config.metadata_created_by != '':
            res['created_by'] = modules.config.metadata_created_by

        return json.dumps(dict(sorted(res.items())))

    @staticmethod
    def replace_value_with_filename(key, value, filenames):
        for filename in filenames:
            path = Path(filename)
            if key.startswith('lora_combined_'):
                name, weight = value.split(' : ')
                if name == path.stem:
                    return f'{filename} : {weight}'
            elif value == path.stem:
                return filename
        return None
def get_metadata_parser(metadata_scheme: MetadataScheme) -> MetadataParser:
    match metadata_scheme:
        case MetadataScheme.FOOOCUS:
            return FooocusMetadataParser()
        case MetadataScheme.A1111:
            return A1111MetadataParser()
        case _:
            raise NotImplementedError
def read_info_from_image(file) -> tuple[str | None, MetadataScheme | None]:
    items = (file.info or {}).copy()

    parameters = items.pop('parameters', None)
    metadata_scheme = items.pop('fooocus_scheme', None)
    exif = items.pop('exif', None)

    if parameters is not None and is_json(parameters):
        parameters = json.loads(parameters)
    elif exif is not None:
        exif = file.getexif()
        # 0x9286 = UserComment
        parameters = exif.get(0x9286, None)
        # 0x927C = MakerNote
        metadata_scheme = exif.get(0x927C, None)
        if is_json(parameters):
            parameters = json.loads(parameters)

    try:
        metadata_scheme = MetadataScheme(metadata_scheme)
    except ValueError:
        metadata_scheme = None

        # broad fallback
        if isinstance(parameters, dict):
            metadata_scheme = MetadataScheme.FOOOCUS

        if isinstance(parameters, str):
            metadata_scheme = MetadataScheme.A1111

    return parameters, metadata_scheme
def get_exif(metadata: str | None, metadata_scheme: str):
    exif = Image.Exif()
    # tags: see https://github.com/python-pillow/Pillow/blob/9.2.x/src/PIL/ExifTags.py
    # 0x9286 = UserComment
    exif[0x9286] = metadata
    # 0x0131 = Software
    exif[0x0131] = 'Fooocus v' + fooocus_version.version
    # 0x927C = MakerNote
    exif[0x927C] = metadata_scheme
    return exif
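Taken together, the parsers round-trip: `read_info_from_image` pulls the raw parameters and scheme out of a file, `get_metadata_parser` picks the matching parser, and `to_json` normalizes the values. A minimal sketch, assuming a generated image exists at the given (hypothetical) path:

```python
# Hedged sketch: round-trip embedded metadata back into a dict.
# The image path is an assumption for illustration.
from PIL import Image

from modules.meta_parser import get_metadata_parser, read_info_from_image

with Image.open('outputs/2024-08-11/example.png') as img:  # hypothetical file
    parameters, scheme = read_info_from_image(img)

if scheme is not None:
    parser = get_metadata_parser(scheme)
    # A1111 metadata arrives as a string, Fooocus metadata as a dict;
    # to_json normalizes either into plain key/value pairs.
    data = parser.to_json(parameters)
    print(data.get('base_model'), data.get('styles'))
```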

View File

@@ -14,6 +14,8 @@ def load_file_from_url(
     Returns the path to the downloaded file.
     """
+    domain = os.environ.get("HF_MIRROR", "https://huggingface.co").rstrip('/')
+    url = str.replace(url, "https://huggingface.co", domain, 1)
     os.makedirs(model_dir, exist_ok=True)
     if not file_name:
         parts = urlparse(url)
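The two added lines make every Hugging Face download mirror-aware. A hedged sketch of the substitution in isolation (the mirror domain is a made-up example, not a recommendation):

```python
# Standalone illustration of the HF_MIRROR rewrite above.
import os

os.environ['HF_MIRROR'] = 'https://example-hf-mirror.org/'
url = 'https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/model.safetensors'

domain = os.environ.get('HF_MIRROR', 'https://huggingface.co').rstrip('/')
print(url.replace('https://huggingface.co', domain, 1))
# -> https://example-hf-mirror.org/lllyasviel/fav_models/resolve/main/fav/model.safetensors
```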

View File

@@ -17,7 +17,6 @@ import ldm_patched.controlnet.cldm
 import ldm_patched.modules.model_patcher
 import ldm_patched.modules.samplers
 import ldm_patched.modules.args_parser
-import modules.advanced_parameters as advanced_parameters
 import warnings
 import safetensors.torch
 import modules.constants as constants

@@ -29,15 +28,25 @@ from modules.patch_precision import patch_all_precision
 from modules.patch_clip import patch_all_clip

-sharpness = 2.0
-adm_scaler_end = 0.3
-positive_adm_scale = 1.5
-negative_adm_scale = 0.8
-adaptive_cfg = 7.0
-global_diffusion_progress = 0
-eps_record = None
+class PatchSettings:
+    def __init__(self,
+                 sharpness=2.0,
+                 adm_scaler_end=0.3,
+                 positive_adm_scale=1.5,
+                 negative_adm_scale=0.8,
+                 controlnet_softness=0.25,
+                 adaptive_cfg=7.0):
+        self.sharpness = sharpness
+        self.adm_scaler_end = adm_scaler_end
+        self.positive_adm_scale = positive_adm_scale
+        self.negative_adm_scale = negative_adm_scale
+        self.controlnet_softness = controlnet_softness
+        self.adaptive_cfg = adaptive_cfg
+        self.global_diffusion_progress = 0
+        self.eps_record = None
+
+patch_settings = {}

 def calculate_weight_patched(self, patches, weight, key):

@@ -201,14 +210,13 @@ class BrownianTreeNoiseSamplerPatched:
 def compute_cfg(uncond, cond, cfg_scale, t):
-    global adaptive_cfg
-
-    mimic_cfg = float(adaptive_cfg)
+    pid = os.getpid()
+    mimic_cfg = float(patch_settings[pid].adaptive_cfg)
     real_cfg = float(cfg_scale)

     real_eps = uncond + real_cfg * (cond - uncond)

-    if cfg_scale > adaptive_cfg:
+    if cfg_scale > patch_settings[pid].adaptive_cfg:
         mimicked_eps = uncond + mimic_cfg * (cond - uncond)
         return real_eps * t + mimicked_eps * (1 - t)
     else:

@@ -216,13 +224,13 @@ def compute_cfg(uncond, cond, cfg_scale, t):
 def patched_sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options=None, seed=None):
-    global eps_record
+    pid = os.getpid()

     if math.isclose(cond_scale, 1.0) and not model_options.get("disable_cfg1_optimization", False):
         final_x0 = calc_cond_uncond_batch(model, cond, None, x, timestep, model_options)[0]

-        if eps_record is not None:
-            eps_record = ((x - final_x0) / timestep).cpu()
+        if patch_settings[pid].eps_record is not None:
+            patch_settings[pid].eps_record = ((x - final_x0) / timestep).cpu()

         return final_x0

@@ -231,16 +239,16 @@ def patched_sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options=None, seed=None):
     positive_eps = x - positive_x0
     negative_eps = x - negative_x0

-    alpha = 0.001 * sharpness * global_diffusion_progress
+    alpha = 0.001 * patch_settings[pid].sharpness * patch_settings[pid].global_diffusion_progress

     positive_eps_degraded = anisotropic.adaptive_anisotropic_filter(x=positive_eps, g=positive_x0)
     positive_eps_degraded_weighted = positive_eps_degraded * alpha + positive_eps * (1.0 - alpha)

     final_eps = compute_cfg(uncond=negative_eps, cond=positive_eps_degraded_weighted,
-                            cfg_scale=cond_scale, t=global_diffusion_progress)
+                            cfg_scale=cond_scale, t=patch_settings[pid].global_diffusion_progress)

-    if eps_record is not None:
-        eps_record = (final_eps / timestep).cpu()
+    if patch_settings[pid].eps_record is not None:
+        patch_settings[pid].eps_record = (final_eps / timestep).cpu()

     return x - final_eps

@@ -255,20 +263,19 @@ def round_to_64(x):
 def sdxl_encode_adm_patched(self, **kwargs):
-    global positive_adm_scale, negative_adm_scale
-
     clip_pooled = ldm_patched.modules.model_base.sdxl_pooled(kwargs, self.noise_augmentor)
     width = kwargs.get("width", 1024)
     height = kwargs.get("height", 1024)
     target_width = width
     target_height = height
+    pid = os.getpid()

     if kwargs.get("prompt_type", "") == "negative":
-        width = float(width) * negative_adm_scale
-        height = float(height) * negative_adm_scale
+        width = float(width) * patch_settings[pid].negative_adm_scale
+        height = float(height) * patch_settings[pid].negative_adm_scale
     elif kwargs.get("prompt_type", "") == "positive":
-        width = float(width) * positive_adm_scale
-        height = float(height) * positive_adm_scale
+        width = float(width) * patch_settings[pid].positive_adm_scale
+        height = float(height) * patch_settings[pid].positive_adm_scale

     def embedder(number_list):
         h = self.embedder(torch.tensor(number_list, dtype=torch.float32))

@@ -322,7 +329,7 @@ def patched_KSamplerX0Inpaint_forward(self, x, sigma, uncond, cond, cond_scale,
 def timed_adm(y, timesteps):
     if isinstance(y, torch.Tensor) and int(y.dim()) == 2 and int(y.shape[1]) == 5632:
-        y_mask = (timesteps > 999.0 * (1.0 - float(adm_scaler_end))).to(y)[..., None]
+        y_mask = (timesteps > 999.0 * (1.0 - float(patch_settings[os.getpid()].adm_scaler_end))).to(y)[..., None]
         y_with_adm = y[..., :2816].clone()
         y_without_adm = y[..., 2816:].clone()
         return y_with_adm * y_mask + y_without_adm * (1.0 - y_mask)

@@ -332,6 +339,7 @@ def timed_adm(y, timesteps):
 def patched_cldm_forward(self, x, hint, timesteps, context, y=None, **kwargs):
     t_emb = ldm_patched.ldm.modules.diffusionmodules.openaimodel.timestep_embedding(timesteps, self.model_channels, repeat_only=False).to(x.dtype)
     emb = self.time_embed(t_emb)
+    pid = os.getpid()

     guided_hint = self.input_hint_block(hint, emb, context)

@@ -357,19 +365,17 @@ def patched_cldm_forward(self, x, hint, timesteps, context, y=None, **kwargs):
         h = self.middle_block(h, emb, context)
         outs.append(self.middle_block_out(h, emb, context))

-    if advanced_parameters.controlnet_softness > 0:
+    if patch_settings[pid].controlnet_softness > 0:
         for i in range(10):
             k = 1.0 - float(i) / 9.0
-            outs[i] = outs[i] * (1.0 - advanced_parameters.controlnet_softness * k)
+            outs[i] = outs[i] * (1.0 - patch_settings[pid].controlnet_softness * k)

     return outs

 def patched_unet_forward(self, x, timesteps=None, context=None, y=None, control=None, transformer_options={}, **kwargs):
-    global global_diffusion_progress
-
     self.current_step = 1.0 - timesteps.to(x) / 999.0
-    global_diffusion_progress = float(self.current_step.detach().cpu().numpy().tolist()[0])
+    patch_settings[os.getpid()].global_diffusion_progress = float(self.current_step.detach().cpu().numpy().tolist()[0])

     y = timed_adm(y, timesteps)
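The refactor replaces module-level globals with a `patch_settings` dict keyed by process ID, so each worker process carries its own sampling state. A minimal sketch of the registry in use (values are illustrative):

```python
# Minimal sketch of the per-process registry introduced above;
# the sharpness value is illustrative only.
import os

from modules.patch import PatchSettings, patch_settings

pid = os.getpid()
patch_settings[pid] = PatchSettings(sharpness=4.0)

# Every patched function in this process now reads its own state:
print(patch_settings[pid].sharpness)   # 4.0
print(patch_settings[pid].eps_record)  # None until debugging assigns it
```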

View File

@@ -51,6 +51,8 @@ def patched_register_schedule(self, given_betas=None, beta_schedule="linear", ti
     self.linear_end = linear_end
     sigmas = torch.tensor(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5, dtype=torch.float32)
     self.set_sigmas(sigmas)
+    alphas_cumprod = torch.tensor(alphas_cumprod, dtype=torch.float32)
+    self.set_alphas_cumprod(alphas_cumprod)
     return

View File

@@ -5,26 +5,49 @@ import json
 import urllib.parse

 from PIL import Image
+from PIL.PngImagePlugin import PngInfo
+
+from modules.flags import OutputFormat
+from modules.meta_parser import MetadataParser, get_exif
 from modules.util import generate_temp_filename

 log_cache = {}

-def get_current_html_path():
+def get_current_html_path(output_format=None):
+    output_format = output_format if output_format else modules.config.default_output_format
     date_string, local_temp_filename, only_name = generate_temp_filename(folder=modules.config.path_outputs,
-                                                                         extension='png')
+                                                                         extension=output_format)
     html_name = os.path.join(os.path.dirname(local_temp_filename), 'log.html')
     return html_name

-def log(img, dic):
-    if args_manager.args.disable_image_log:
-        return
-
-    date_string, local_temp_filename, only_name = generate_temp_filename(folder=modules.config.path_outputs, extension='png')
+def log(img, metadata, metadata_parser: MetadataParser | None = None, output_format=None, task=None, persist_image=True) -> str:
+    path_outputs = modules.config.temp_path if args_manager.args.disable_image_log or not persist_image else modules.config.path_outputs
+    output_format = output_format if output_format else modules.config.default_output_format
+    date_string, local_temp_filename, only_name = generate_temp_filename(folder=path_outputs, extension=output_format)
     os.makedirs(os.path.dirname(local_temp_filename), exist_ok=True)
-    Image.fromarray(img).save(local_temp_filename)
+
+    parsed_parameters = metadata_parser.to_string(metadata.copy()) if metadata_parser is not None else ''
+    image = Image.fromarray(img)
+
+    if output_format == OutputFormat.PNG.value:
+        if parsed_parameters != '':
+            pnginfo = PngInfo()
+            pnginfo.add_text('parameters', parsed_parameters)
+            pnginfo.add_text('fooocus_scheme', metadata_parser.get_scheme().value)
+        else:
+            pnginfo = None
+        image.save(local_temp_filename, pnginfo=pnginfo)
+    elif output_format == OutputFormat.JPEG.value:
+        image.save(local_temp_filename, quality=95, optimize=True, progressive=True, exif=get_exif(parsed_parameters, metadata_parser.get_scheme().value) if metadata_parser else Image.Exif())
+    elif output_format == OutputFormat.WEBP.value:
+        image.save(local_temp_filename, quality=95, lossless=False, exif=get_exif(parsed_parameters, metadata_parser.get_scheme().value) if metadata_parser else Image.Exif())
+    else:
+        image.save(local_temp_filename)
+
+    if args_manager.args.disable_image_log:
+        return local_temp_filename

     html_name = os.path.join(os.path.dirname(local_temp_filename), 'log.html')

     css_styles = (

@@ -32,7 +55,7 @@ def log(img, dic):
     "body { background-color: #121212; color: #E0E0E0; } "
     "a { color: #BB86FC; } "
     ".metadata { border-collapse: collapse; width: 100%; } "
-    ".metadata .key { width: 15%; } "
+    ".metadata .label { width: 15%; } "
     ".metadata .value { width: 85%; font-weight: bold; } "
     ".metadata th, .metadata td { border: 1px solid #4d4d4d; padding: 4px; } "
     ".image-container img { height: auto; max-width: 512px; display: block; padding-right:10px; } "

@@ -68,7 +91,7 @@ def log(img, dic):
     </script>"""
     )

-    begin_part = f"<html><head><title>Fooocus Log {date_string}</title>{css_styles}</head><body>{js}<p>Fooocus Log {date_string} (private)</p>\n<p>All images are clean, without any hidden data/meta, and safe to share with others.</p><!--fooocus-log-split-->\n\n"
+    begin_part = f"<!DOCTYPE html><html><head><title>Fooocus Log {date_string}</title>{css_styles}</head><body>{js}<p>Fooocus Log {date_string} (private)</p>\n<p>Metadata is embedded if enabled in the config or developer debug mode. You can find the information for each image in line Metadata Scheme.</p><!--fooocus-log-split-->\n\n"
     end_part = f'\n<!--fooocus-log-split--></body></html>'

     middle_part = log_cache.get(html_name, "")

@@ -83,14 +106,20 @@ def log(img, dic):
     div_name = only_name.replace('.', '_')
     item = f"<div id=\"{div_name}\" class=\"image-container\"><hr><table><tr>\n"
-    item += f"<td><a href=\"{only_name}\" target=\"_blank\"><img src='{only_name}' onerror=\"this.closest('.image-container').style.display='none';\" loading='lazy'></img></a><div>{only_name}</div></td>"
+    item += f"<td><a href=\"{only_name}\" target=\"_blank\"><img src='{only_name}' onerror=\"this.closest('.image-container').style.display='none';\" loading='lazy'/></a><div>{only_name}</div></td>"
     item += "<td><table class='metadata'>"
-    for key, value in dic:
+    for label, key, value in metadata:
         value_txt = str(value).replace('\n', ' </br> ')
-        item += f"<tr><td class='key'>{key}</td><td class='value'>{value_txt}</td></tr>\n"
+        item += f"<tr><td class='label'>{label}</td><td class='value'>{value_txt}</td></tr>\n"
+
+    if task is not None and 'positive' in task and 'negative' in task:
+        full_prompt_details = f"""<details><summary>Positive</summary>{', '.join(task['positive'])}</details>
+                                  <details><summary>Negative</summary>{', '.join(task['negative'])}</details>"""
+        item += f"<tr><td class='label'>Full raw prompt</td><td class='value'>{full_prompt_details}</td></tr>\n"
+
     item += "</table>"

-    js_txt = urllib.parse.quote(json.dumps({k: v for k, v in dic}, indent=0), safe='')
+    js_txt = urllib.parse.quote(json.dumps({k: v for _, k, v in metadata}, indent=0), safe='')
     item += f"</br><button onclick=\"to_clipboard('{js_txt}')\">Copy to Clipboard</button>"

     item += "</td>"

@@ -105,4 +134,4 @@ def log(img, dic):

     log_cache[html_name] = middle_part

-    return
+    return local_temp_filename
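With the new signature, `log` both persists the image (embedding metadata for PNG/JPEG/WEBP) and returns the saved path. A hedged sketch without a metadata parser, so nothing is embedded (the image array and metadata triples are invented):

```python
# Hedged sketch of the new log() call; array contents and metadata
# triples are invented for illustration.
import numpy as np

from modules.private_logger import log

img = np.zeros((1152, 896, 3), dtype=np.uint8)  # placeholder image
metadata = [
    ('Prompt', 'prompt', 'forest elf'),          # (label, key, value)
    ('Resolution', 'resolution', '(896, 1152)'),
]
saved_path = log(img, metadata, metadata_parser=None, output_format='png')
print(saved_path)  # absolute path of the written image
```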

View File

@@ -3,6 +3,7 @@ import ldm_patched.modules.samplers
 import ldm_patched.modules.model_management

 from collections import namedtuple
+from ldm_patched.contrib.external_align_your_steps import AlignYourStepsScheduler
 from ldm_patched.contrib.external_custom_sampler import SDTurboScheduler
 from ldm_patched.k_diffusion import sampling as k_diffusion_sampling
 from ldm_patched.modules.samplers import normal_scheduler, simple_scheduler, ddim_scheduler

@@ -174,7 +175,10 @@ def calculate_sigmas_scheduler_hacked(model, scheduler_name, steps):
     elif scheduler_name == "sgm_uniform":
         sigmas = normal_scheduler(model, steps, sgm=True)
     elif scheduler_name == "turbo":
-        sigmas = SDTurboScheduler().get_sigmas(namedtuple('Patcher', ['model'])(model=model), steps=steps, denoise=1.0)[0]
+        sigmas = SDTurboScheduler().get_sigmas(model=model, steps=steps, denoise=1.0)[0]
+    elif scheduler_name == "align_your_steps":
+        model_type = 'SDXL' if isinstance(model.latent_format, ldm_patched.modules.latent_formats.SDXL) else 'SD1'
+        sigmas = AlignYourStepsScheduler().get_sigmas(model_type=model_type, steps=steps, denoise=1.0)[0]
     else:
         raise TypeError("error invalid scheduler")
     return sigmas

View File

@@ -1,14 +1,13 @@
 import os
 import re
 import json
+import math

-from modules.util import get_files_from_folder
+from modules.extra_utils import get_files_from_folder
+from random import Random

 # cannot use modules.config - validators causing circular imports

 styles_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../sdxl_styles/'))
-wildcards_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../wildcards/'))
-wildcards_max_bfs_depth = 64

 def normalize_key(k):

@@ -24,7 +23,6 @@ def normalize_key(k):

 styles = {}

 styles_files = get_files_from_folder(styles_path, ['.json'])

 for x in ['sdxl_styles_fooocus.json',

@@ -50,33 +48,50 @@ for styles_file in styles_files:
         print(f'Failed to load style file {styles_file}')

 style_keys = list(styles.keys())
-fooocus_expansion = "Fooocus V2"
-legal_style_names = [fooocus_expansion] + style_keys
+fooocus_expansion = 'Fooocus V2'
+random_style_name = 'Random Style'
+legal_style_names = [fooocus_expansion, random_style_name] + style_keys
+
+def get_random_style(rng: Random) -> str:
+    return rng.choice(list(styles.items()))[0]

 def apply_style(style, positive):
     p, n = styles[style]
-    return p.replace('{prompt}', positive).splitlines(), n.splitlines()
+    return p.replace('{prompt}', positive).splitlines(), n.splitlines(), '{prompt}' in p

-def apply_wildcards(wildcard_text, rng, directory=wildcards_path):
-    for _ in range(wildcards_max_bfs_depth):
-        placeholders = re.findall(r'__([\w-]+)__', wildcard_text)
-        if len(placeholders) == 0:
-            return wildcard_text
-
-        print(f'[Wildcards] processing: {wildcard_text}')
-        for placeholder in placeholders:
-            try:
-                words = open(os.path.join(directory, f'{placeholder}.txt'), encoding='utf-8').read().splitlines()
-                words = [x for x in words if x != '']
-                assert len(words) > 0
-                wildcard_text = wildcard_text.replace(f'__{placeholder}__', rng.choice(words), 1)
-            except:
-                print(f'[Wildcards] Warning: {placeholder}.txt missing or empty. '
-                      f'Using "{placeholder}" as a normal word.')
-                wildcard_text = wildcard_text.replace(f'__{placeholder}__', placeholder)
-        print(f'[Wildcards] {wildcard_text}')
-
-    print(f'[Wildcards] BFS stack overflow. Current text: {wildcard_text}')
-    return wildcard_text
+def get_words(arrays, total_mult, index):
+    if len(arrays) == 1:
+        return [arrays[0].split(',')[index]]
+    else:
+        words = arrays[0].split(',')
+        word = words[index % len(words)]
+        index -= index % len(words)
+        index /= len(words)
+        index = math.floor(index)
+        return [word] + get_words(arrays[1:], math.floor(total_mult / len(words)), index)
+
+def apply_arrays(text, index):
+    arrays = re.findall(r'\[\[(.*?)\]\]', text)
+    if len(arrays) == 0:
+        return text
+
+    print(f'[Arrays] processing: {text}')
+    mult = 1
+    for arr in arrays:
+        words = arr.split(',')
+        mult *= len(words)
+
+    index %= mult
+    chosen_words = get_words(arrays, mult, index)
+
+    i = 0
+    for arr in arrays:
+        text = text.replace(f'[[{arr}]]', chosen_words[i], 1)
+        i += 1
+
+    return text
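`get_words` treats the `[[...]]` arrays as a mixed-radix counter over the image index, so a batch enumerates every combination in order. An illustrative walk-through (the prompt text is invented):

```python
# Illustrative walk-through of the array expansion defined above: two
# arrays of sizes 2 and 3 give 6 combinations, indexed 0..5.
from modules.sdxl_styles import apply_arrays

prompt = 'a [[red,blue]] car, [[day,night,rain]]'
for i in range(6):
    print(i, apply_arrays(prompt, i))
# 0 a red car, day
# 1 a blue car, day
# 2 a red car, night
# ... up to 5: a blue car, rain
```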

View File

@ -39,7 +39,7 @@ def javascript_html():
head += f'<script type="text/javascript" src="{edit_attention_js_path}"></script>\n' head += f'<script type="text/javascript" src="{edit_attention_js_path}"></script>\n'
head += f'<script type="text/javascript" src="{viewer_js_path}"></script>\n' head += f'<script type="text/javascript" src="{viewer_js_path}"></script>\n'
head += f'<script type="text/javascript" src="{image_viewer_js_path}"></script>\n' head += f'<script type="text/javascript" src="{image_viewer_js_path}"></script>\n'
head += f'<meta name="samples-path" content="{samples_path}"></meta>\n' head += f'<meta name="samples-path" content="{samples_path}">\n'
if args_manager.args.theme: if args_manager.args.theme:
head += f'<script type="text/javascript">set_theme(\"{args_manager.args.theme}\");</script>\n' head += f'<script type="text/javascript">set_theme(\"{args_manager.args.theme}\");</script>\n'

View File

@@ -1,13 +1,11 @@
-import os
-import torch
-import modules.core as core
-from ldm_patched.pfn.architecture.RRDB import RRDBNet as ESRGAN
-from ldm_patched.contrib.external_upscale_model import ImageUpscaleWithModel
 from collections import OrderedDict
-from modules.config import path_upscale_models

-model_filename = os.path.join(path_upscale_models, 'fooocus_upscaler_s409985e5.bin')
+import modules.core as core
+import torch
+from ldm_patched.contrib.external_upscale_model import ImageUpscaleWithModel
+from ldm_patched.pfn.architecture.RRDB import RRDBNet as ESRGAN
+from modules.config import downloading_upscale_model

 opImageUpscaleWithModel = ImageUpscaleWithModel()
 model = None

@@ -18,7 +16,8 @@ def perform_upscale(img):
     print(f'Upscaling image with shape {str(img.shape)} ...')

     if model is None:
-        sd = torch.load(model_filename)
+        model_filename = downloading_upscale_model()
+        sd = torch.load(model_filename, weights_only=True)
         sdo = OrderedDict()
         for k, v in sd.items():
             sdo[k.replace('residual_block_', 'RDB')] = v

View File

@@ -1,15 +1,32 @@
+from pathlib import Path
+
 import numpy as np
 import datetime
 import random
 import math
 import os
 import cv2
+import re
+from typing import List, Tuple, AnyStr, NamedTuple
+import json
+import hashlib

 from PIL import Image
+
+import modules.config
+import modules.sdxl_styles
+from modules.flags import Performance

 LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)

+# Regexp compiled once. Matches entries with the following pattern:
+# <lora:some_lora:1>
+# <lora:aNotherLora:-1.6>
+LORAS_PROMPT_PATTERN = re.compile(r"(<lora:([^:]+):([+-]?(?:\d+(?:\.\d*)?|\.\d+))>)", re.X)
+
+HASH_SHA256_LENGTH = 10

 def erode_or_dilate(x, k):
     k = int(k)
@@ -155,23 +172,344 @@ def generate_temp_filename(folder='./outputs/', extension='png'):
     random_number = random.randint(1000, 9999)
     filename = f"{time_string}_{random_number}.{extension}"
     result = os.path.join(folder, date_string, filename)
-    return date_string, os.path.abspath(os.path.realpath(result)), filename
+    return date_string, os.path.abspath(result), filename

-def get_files_from_folder(folder_path, exensions=None, name_filter=None):
-    if not os.path.isdir(folder_path):
-        raise ValueError("Folder path is not a valid directory.")
-
-    filenames = []
-
-    for root, dirs, files in os.walk(folder_path):
-        relative_path = os.path.relpath(root, folder_path)
-        if relative_path == ".":
-            relative_path = ""
-        for filename in files:
-            _, file_extension = os.path.splitext(filename)
-            if (exensions == None or file_extension.lower() in exensions) and (name_filter == None or name_filter in _):
-                path = os.path.join(relative_path, filename)
-                filenames.append(path)
-
-    return sorted(filenames, key=lambda x: -1 if os.sep in x else 1)
+def sha256(filename, use_addnet_hash=False, length=HASH_SHA256_LENGTH):
+    if use_addnet_hash:
+        with open(filename, "rb") as file:
+            sha256_value = addnet_hash_safetensors(file)
+    else:
+        sha256_value = calculate_sha256(filename)
+    return sha256_value[:length] if length is not None else sha256_value
+
+def addnet_hash_safetensors(b):
+    """kohya-ss hash for safetensors from https://github.com/kohya-ss/sd-scripts/blob/main/library/train_util.py"""
+    hash_sha256 = hashlib.sha256()
+    blksize = 1024 * 1024
+
+    b.seek(0)
+    header = b.read(8)
+    n = int.from_bytes(header, "little")
+    offset = n + 8
+    b.seek(offset)
+
+    for chunk in iter(lambda: b.read(blksize), b""):
+        hash_sha256.update(chunk)
+
+    return hash_sha256.hexdigest()
+
+def calculate_sha256(filename) -> str:
+    hash_sha256 = hashlib.sha256()
+    blksize = 1024 * 1024
+
+    with open(filename, "rb") as f:
+        for chunk in iter(lambda: f.read(blksize), b""):
+            hash_sha256.update(chunk)
+
+    return hash_sha256.hexdigest()
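A hedged usage sketch for the hash helpers above; the file path is an assumption, and the calls only succeed if such a file exists:

```python
# Assumed-path usage sketch for the hashing helpers above.
from modules.util import sha256

path = 'models/loras/example_lora.safetensors'  # hypothetical file

print(sha256(path))                        # short hash, HASH_SHA256_LENGTH chars
print(sha256(path, use_addnet_hash=True))  # kohya-ss hash over the tensor payload
print(sha256(path, length=None))           # full 64-character hex digest
```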
def quote(text):
    if ',' not in str(text) and '\n' not in str(text) and ':' not in str(text):
        return text

    return json.dumps(text, ensure_ascii=False)

def unquote(text):
    if len(text) == 0 or text[0] != '"' or text[-1] != '"':
        return text

    try:
        return json.loads(text)
    except Exception:
        return text
def unwrap_style_text_from_prompt(style_text, prompt):
    """
    Checks the prompt to see if the style text is wrapped around it. If so,
    returns True plus the prompt text without the style text. Otherwise, returns
    False with the original prompt.

    Note that the "cleaned" version of the style text is only used for matching
    purposes here. It isn't returned; the original style text is not modified.
    """
    stripped_prompt = prompt
    stripped_style_text = style_text
    if "{prompt}" in stripped_style_text:
        # Work out whether the prompt is wrapped in the style text. If so, we
        # return True and the "inner" prompt text that isn't part of the style.
        try:
            left, right = stripped_style_text.split("{prompt}", 2)
        except ValueError as e:
            # If the style text has multiple "{prompt}"s, we can't split it into
            # two parts. This is an error, but we can't do anything about it.
            print(f"Unable to compare style text to prompt:\n{style_text}")
            print(f"Error: {e}")
            return False, prompt, ''

        left_pos = stripped_prompt.find(left)
        right_pos = stripped_prompt.find(right)
        if 0 <= left_pos < right_pos:
            real_prompt = stripped_prompt[left_pos + len(left):right_pos]
            prompt = stripped_prompt.replace(left + real_prompt + right, '', 1)
            if prompt.startswith(", "):
                prompt = prompt[2:]
            if prompt.endswith(", "):
                prompt = prompt[:-2]
            return True, prompt, real_prompt
    else:
        # Work out whether the given prompt ends with the style text. If so, we
        # return True and the prompt text up to where the style text starts.
        if stripped_prompt.endswith(stripped_style_text):
            prompt = stripped_prompt[: len(stripped_prompt) - len(stripped_style_text)]
            if prompt.endswith(", "):
                prompt = prompt[:-2]
            return True, prompt, prompt

    return False, prompt, ''
def extract_original_prompts(style, prompt, negative_prompt):
    """
    Takes a style and compares it to the prompt and negative prompt. If the style
    matches, returns True plus the prompt and negative prompt with the style text
    removed. Otherwise, returns False with the original prompt and negative prompt.
    """
    if not style.prompt and not style.negative_prompt:
        return False, prompt, negative_prompt, ''

    match_positive, extracted_positive, real_prompt = unwrap_style_text_from_prompt(
        style.prompt, prompt
    )
    if not match_positive:
        return False, prompt, negative_prompt, ''

    match_negative, extracted_negative, _ = unwrap_style_text_from_prompt(
        style.negative_prompt, negative_prompt
    )
    if not match_negative:
        return False, prompt, negative_prompt, ''

    return True, extracted_positive, extracted_negative, real_prompt
def extract_styles_from_prompt(prompt, negative_prompt):
    extracted = []
    applicable_styles = []

    for style_name, (style_prompt, style_negative_prompt) in modules.sdxl_styles.styles.items():
        applicable_styles.append(PromptStyle(name=style_name, prompt=style_prompt, negative_prompt=style_negative_prompt))

    real_prompt = ''

    while True:
        found_style = None

        for style in applicable_styles:
            is_match, new_prompt, new_neg_prompt, new_real_prompt = extract_original_prompts(
                style, prompt, negative_prompt
            )
            if is_match:
                found_style = style
                prompt = new_prompt
                negative_prompt = new_neg_prompt
                if real_prompt == '' and new_real_prompt != '' and new_real_prompt != prompt:
                    real_prompt = new_real_prompt
                break

        if not found_style:
            break

        applicable_styles.remove(found_style)
        extracted.append(found_style.name)

    # add prompt expansion if not all styles could be resolved
    if prompt != '':
        if real_prompt != '':
            extracted.append(modules.sdxl_styles.fooocus_expansion)
        else:
            # find real_prompt when only prompt expansion is selected
            first_word = prompt.split(', ')[0]
            first_word_positions = [i for i in range(len(prompt)) if prompt.startswith(first_word, i)]
            if len(first_word_positions) > 1:
                real_prompt = prompt[:first_word_positions[-1]]
                extracted.append(modules.sdxl_styles.fooocus_expansion)
                if real_prompt.endswith(', '):
                    real_prompt = real_prompt[:-2]

    return list(reversed(extracted)), real_prompt, negative_prompt
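A hedged sketch of style extraction in the opposite direction of `apply_style`; the exact style names returned depend on the loaded style files:

```python
# Hedged sketch: recover style names from a flattened prompt pair.
# The prompt strings are invented; results depend on the styles loaded
# from the sdxl_styles JSON files.
from modules.util import extract_styles_from_prompt

styles, real_prompt, negative = extract_styles_from_prompt(
    'cinematic still of a forest elf, emotional, harmonious, vignette',
    'anime, cartoon, graphic, painting'
)
print(styles, real_prompt, negative)
```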
class PromptStyle(NamedTuple):
    name: str
    prompt: str
    negative_prompt: str

def is_json(data: str) -> bool:
    try:
        loaded_json = json.loads(data)
        assert isinstance(loaded_json, dict)
    except (ValueError, AssertionError):
        return False

    return True
def get_filname_by_stem(lora_name, filenames: List[str]) -> str | None:
    for filename in filenames:
        path = Path(filename)
        if lora_name == path.stem:
            return filename
    return None

def get_file_from_folder_list(name, folders):
    if not isinstance(folders, list):
        folders = [folders]

    for folder in folders:
        filename = os.path.abspath(os.path.realpath(os.path.join(folder, name)))
        if os.path.isfile(filename):
            return filename

    return os.path.abspath(os.path.realpath(os.path.join(folders[0], name)))

def get_enabled_loras(loras: list, remove_none=True) -> list:
    return [(lora[1], lora[2]) for lora in loras if lora[0] and (lora[1] != 'None' if remove_none else True)]
def parse_lora_references_from_prompt(prompt: str, loras: List[Tuple[AnyStr, float]], loras_limit: int = 5,
                                      skip_file_check=False, prompt_cleanup=True, deduplicate_loras=True,
                                      lora_filenames=None) -> tuple[List[Tuple[AnyStr, float]], str]:
    # prevent unintended side effects when returning without detection
    loras = loras.copy()

    if lora_filenames is None:
        lora_filenames = []

    found_loras = []
    prompt_without_loras = ''
    cleaned_prompt = ''

    for token in prompt.split(','):
        matches = LORAS_PROMPT_PATTERN.findall(token)

        if len(matches) == 0:
            prompt_without_loras += token + ', '
            continue

        for match in matches:
            lora_name = match[1] + '.safetensors'
            if not skip_file_check:
                lora_name = get_filname_by_stem(match[1], lora_filenames)

            if lora_name is not None:
                found_loras.append((lora_name, float(match[2])))
            token = token.replace(match[0], '')

        prompt_without_loras += token + ', '

    if prompt_without_loras != '':
        cleaned_prompt = prompt_without_loras[:-2]

    if prompt_cleanup:
        cleaned_prompt = cleanup_prompt(prompt_without_loras)

    new_loras = []
    lora_names = [lora[0] for lora in loras]
    for found_lora in found_loras:
        if deduplicate_loras and (found_lora[0] in lora_names or found_lora in new_loras):
            continue
        new_loras.append(found_lora)

    if len(new_loras) == 0:
        return loras, cleaned_prompt

    updated_loras = []
    for lora in loras + new_loras:
        if lora[0] != "None":
            updated_loras.append(lora)

    return updated_loras[:loras_limit], cleaned_prompt
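A hedged usage sketch for the `<lora:...>` parser; `skip_file_check=True` keeps it off the filesystem, and the LoRA name is invented:

```python
# Usage sketch for the parser above; skip_file_check=True avoids looking
# the (invented) LoRA up on disk.
from modules.util import parse_lora_references_from_prompt

loras, cleaned = parse_lora_references_from_prompt(
    'forest elf <lora:detail_tweaker:0.8>, cinematic', loras=[],
    skip_file_check=True
)
print(loras)    # [('detail_tweaker.safetensors', 0.8)]
print(cleaned)  # forest elf, cinematic
```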
def remove_performance_lora(filenames: list, performance: Performance | None):
    loras_without_performance = filenames.copy()

    if performance is None:
        return loras_without_performance

    performance_lora = performance.lora_filename()

    for filename in filenames:
        path = Path(filename)
        if performance_lora == path.name:
            loras_without_performance.remove(filename)

    return loras_without_performance

def cleanup_prompt(prompt):
    prompt = re.sub(' +', ' ', prompt)
    prompt = re.sub(',+', ',', prompt)
    cleaned_prompt = ''

    for token in prompt.split(','):
        token = token.strip()
        if token == '':
            continue
        cleaned_prompt += token + ', '

    return cleaned_prompt[:-2]
def apply_wildcards(wildcard_text, rng, i, read_wildcards_in_order) -> str:
    for _ in range(modules.config.wildcards_max_bfs_depth):
        placeholders = re.findall(r'__([\w-]+)__', wildcard_text)
        if len(placeholders) == 0:
            return wildcard_text

        print(f'[Wildcards] processing: {wildcard_text}')
        for placeholder in placeholders:
            try:
                matches = [x for x in modules.config.wildcard_filenames if os.path.splitext(os.path.basename(x))[0] == placeholder]
                words = open(os.path.join(modules.config.path_wildcards, matches[0]), encoding='utf-8').read().splitlines()
                words = [x for x in words if x != '']
                assert len(words) > 0
                if read_wildcards_in_order:
                    wildcard_text = wildcard_text.replace(f'__{placeholder}__', words[i % len(words)], 1)
                else:
                    wildcard_text = wildcard_text.replace(f'__{placeholder}__', rng.choice(words), 1)
            except Exception:
                print(f'[Wildcards] Warning: {placeholder}.txt missing or empty. '
                      f'Using "{placeholder}" as a normal word.')
                wildcard_text = wildcard_text.replace(f'__{placeholder}__', placeholder)
        print(f'[Wildcards] {wildcard_text}')

    print(f'[Wildcards] BFS stack overflow. Current text: {wildcard_text}')
    return wildcard_text
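A hedged sketch of wildcard expansion, assuming a `color.txt` wildcard file is present; with `read_wildcards_in_order=True`, image `i` deterministically picks line `i % len(words)`:

```python
# Hedged sketch; assumes wildcards/color.txt exists with one word per
# line (both the file and its contents are assumptions).
from random import Random

from modules.util import apply_wildcards

text = apply_wildcards('a __color__ car', rng=Random(42), i=0,
                       read_wildcards_in_order=True)
print(text)  # e.g. 'a red car' if the first line of color.txt is 'red'
```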
def get_image_size_info(image: np.ndarray, aspect_ratios: list) -> str:
    try:
        image = Image.fromarray(np.uint8(image))
        width, height = image.size
        ratio = round(width / height, 2)
        gcd = math.gcd(width, height)
        lcm_ratio = f'{width // gcd}:{height // gcd}'
        size_info = f'Image Size: {width} x {height}, Ratio: {ratio}, {lcm_ratio}'

        closest_ratio = min(aspect_ratios, key=lambda x: abs(ratio - float(x.split('*')[0]) / float(x.split('*')[1])))
        recommended_width, recommended_height = map(int, closest_ratio.split('*'))
        recommended_ratio = round(recommended_width / recommended_height, 2)
        recommended_gcd = math.gcd(recommended_width, recommended_height)
        recommended_lcm_ratio = f'{recommended_width // recommended_gcd}:{recommended_height // recommended_gcd}'

        size_info = f'{width} x {height}, {ratio}, {lcm_ratio}'
        size_info += f'\n{recommended_width} x {recommended_height}, {recommended_ratio}, {recommended_lcm_ratio}'

        return size_info
    except Exception as e:
        return f'Error reading image: {e}'

BIN
notification-example.mp3 Normal file

Binary file not shown.

Binary file not shown.

8
presets/.gitignore vendored Normal file
View File

@@ -0,0 +1,8 @@
*.json
!anime.json
!default.json
!lcm.json
!playground_v2.5.json
!pony_v6.json
!realistic.json
!sai.json

View File

@@ -1,45 +1,60 @@
 {
-    "default_model": "animaPencilXL_v100.safetensors",
+    "default_model": "animaPencilXL_v500.safetensors",
     "default_refiner": "None",
     "default_refiner_switch": 0.5,
     "default_loras": [
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ]
     ],
-    "default_cfg_scale": 7.0,
+    "default_cfg_scale": 6.0,
     "default_sample_sharpness": 2.0,
     "default_sampler": "dpmpp_2m_sde_gpu",
     "default_scheduler": "karras",
     "default_performance": "Speed",
-    "default_prompt": "1girl, ",
+    "default_prompt": "",
     "default_prompt_negative": "",
     "default_styles": [
         "Fooocus V2",
-        "Fooocus Negative",
+        "Fooocus Semi Realistic",
         "Fooocus Masterpiece"
     ],
     "default_aspect_ratio": "896*1152",
+    "default_overwrite_step": -1,
     "checkpoint_downloads": {
-        "animaPencilXL_v100.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/animaPencilXL_v100.safetensors"
+        "animaPencilXL_v500.safetensors": "https://huggingface.co/mashb1t/fav_models/resolve/main/fav/animaPencilXL_v500.safetensors"
     },
     "embeddings_downloads": {},
-    "lora_downloads": {}
+    "lora_downloads": {},
+    "previous_default_models": [
+        "animaPencilXL_v400.safetensors",
+        "animaPencilXL_v310.safetensors",
+        "animaPencilXL_v300.safetensors",
+        "animaPencilXL_v260.safetensors",
+        "animaPencilXL_v210.safetensors",
+        "animaPencilXL_v200.safetensors",
+        "animaPencilXL_v100.safetensors"
+    ]
 }

View File

@@ -4,22 +4,27 @@
     "default_refiner_switch": 0.5,
     "default_loras": [
         [
+            true,
             "sd_xl_offset_example-lora_1.0.safetensors",
             0.1
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ]

@@ -37,6 +42,7 @@
         "Fooocus Sharp"
     ],
     "default_aspect_ratio": "1152*896",
+    "default_overwrite_step": -1,
     "checkpoint_downloads": {
         "juggernautXL_v8Rundiffusion.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors"
     },

View File

@@ -1,25 +1,30 @@
 {
-    "default_model": "juggernautXL_version6Rundiffusion.safetensors",
+    "default_model": "juggernautXL_v8Rundiffusion.safetensors",
     "default_refiner": "None",
     "default_refiner_switch": 0.5,
     "default_loras": [
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ]

@@ -37,9 +42,17 @@
         "Fooocus Sharp"
     ],
     "default_aspect_ratio": "1152*896",
+    "default_overwrite_step": -1,
     "checkpoint_downloads": {
-        "juggernautXL_version6Rundiffusion.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_version6Rundiffusion.safetensors"
+        "juggernautXL_v8Rundiffusion.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors"
     },
     "embeddings_downloads": {},
-    "lora_downloads": {}
+    "lora_downloads": {},
+    "previous_default_models": [
+        "juggernautXL_version8Rundiffusion.safetensors",
+        "juggernautXL_version7Rundiffusion.safetensors",
+        "juggernautXL_v7Rundiffusion.safetensors",
+        "juggernautXL_version6Rundiffusion.safetensors",
+        "juggernautXL_v6Rundiffusion.safetensors"
+    ]
 }

57
presets/lightning.json Normal file
View File

@@ -0,0 +1,57 @@
{
    "default_model": "juggernautXL_v8Rundiffusion.safetensors",
    "default_refiner": "None",
    "default_refiner_switch": 0.5,
    "default_loras": [
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ]
    ],
    "default_cfg_scale": 4.0,
    "default_sample_sharpness": 2.0,
    "default_sampler": "dpmpp_2m_sde_gpu",
    "default_scheduler": "karras",
    "default_performance": "Lightning",
    "default_prompt": "",
    "default_prompt_negative": "",
    "default_styles": [
        "Fooocus V2",
        "Fooocus Enhance",
        "Fooocus Sharp"
    ],
    "default_aspect_ratio": "1152*896",
    "checkpoint_downloads": {
        "juggernautXL_v8Rundiffusion.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors"
    },
    "embeddings_downloads": {},
    "lora_downloads": {},
    "previous_default_models": [
        "juggernautXL_version8Rundiffusion.safetensors",
        "juggernautXL_version7Rundiffusion.safetensors",
        "juggernautXL_v7Rundiffusion.safetensors",
        "juggernautXL_version6Rundiffusion.safetensors",
        "juggernautXL_v6Rundiffusion.safetensors"
    ]
}

View File

@@ -0,0 +1,51 @@
{
    "default_model": "playground-v2.5-1024px-aesthetic.fp16.safetensors",
    "default_refiner": "None",
    "default_refiner_switch": 0.5,
    "default_loras": [
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ]
    ],
    "default_cfg_scale": 2.0,
    "default_sample_sharpness": 2.0,
    "default_sampler": "dpmpp_2m",
    "default_scheduler": "edm_playground_v2.5",
    "default_performance": "Speed",
    "default_prompt": "",
    "default_prompt_negative": "",
    "default_styles": [
        "Fooocus V2"
    ],
    "default_aspect_ratio": "1024*1024",
    "default_overwrite_step": -1,
    "default_inpaint_engine_version": "None",
    "checkpoint_downloads": {
        "playground-v2.5-1024px-aesthetic.fp16.safetensors": "https://huggingface.co/mashb1t/fav_models/resolve/main/fav/playground-v2.5-1024px-aesthetic.fp16.safetensors"
    },
    "embeddings_downloads": {},
    "lora_downloads": {},
    "previous_default_models": []
}

54
presets/pony_v6.json Normal file
View File

@@ -0,0 +1,54 @@
{
    "default_model": "ponyDiffusionV6XL.safetensors",
    "default_refiner": "None",
    "default_refiner_switch": 0.5,
    "default_vae": "ponyDiffusionV6XL_vae.safetensors",
    "default_loras": [
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ],
        [
            true,
            "None",
            1.0
        ]
    ],
    "default_cfg_scale": 7.0,
    "default_sample_sharpness": 2.0,
    "default_sampler": "dpmpp_2m_sde_gpu",
    "default_scheduler": "karras",
    "default_performance": "Speed",
    "default_prompt": "",
    "default_prompt_negative": "",
    "default_styles": [
        "Fooocus Pony"
    ],
    "default_aspect_ratio": "896*1152",
    "default_overwrite_step": -1,
    "default_inpaint_engine_version": "None",
    "checkpoint_downloads": {
        "ponyDiffusionV6XL.safetensors": "https://huggingface.co/mashb1t/fav_models/resolve/main/fav/ponyDiffusionV6XL.safetensors"
    },
    "embeddings_downloads": {},
    "lora_downloads": {},
    "vae_downloads": {
        "ponyDiffusionV6XL_vae.safetensors": "https://huggingface.co/mashb1t/fav_models/resolve/main/fav/ponyDiffusionV6XL_vae.safetensors"
    }
}

View File

@@ -1,25 +1,30 @@
 {
     "default_model": "realisticStockPhoto_v20.safetensors",
-    "default_refiner": "",
+    "default_refiner": "None",
     "default_refiner_switch": 0.5,
     "default_loras": [
         [
-            "SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors",
+            true,
+            "SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors",
             0.25
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ]

@@ -37,12 +42,13 @@
         "Fooocus Negative"
     ],
     "default_aspect_ratio": "896*1152",
+    "default_overwrite_step": -1,
     "checkpoint_downloads": {
         "realisticStockPhoto_v20.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/realisticStockPhoto_v20.safetensors"
     },
     "embeddings_downloads": {},
     "lora_downloads": {
-        "SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors"
+        "SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors": "https://huggingface.co/mashb1t/fav_models/resolve/main/fav/SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors"
     },
     "previous_default_models": ["realisticStockPhoto_v10.safetensors"]
 }

View File

@@ -4,22 +4,27 @@
     "default_refiner_switch": 0.75,
     "default_loras": [
         [
+            true,
             "sd_xl_offset_example-lora_1.0.safetensors",
             0.5
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ],
         [
+            true,
             "None",
             1.0
         ]

@@ -36,6 +41,7 @@
         "Fooocus Cinematic"
     ],
     "default_aspect_ratio": "1152*896",
+    "default_overwrite_step": -1,
     "checkpoint_downloads": {
         "sd_xl_base_1.0_0.9vae.safetensors": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0_0.9vae.safetensors",
         "sd_xl_refiner_1.0_0.9vae.safetensors": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0_0.9vae.safetensors"

@@ -43,5 +49,6 @@
     "embeddings_downloads": {},
     "lora_downloads": {
         "sd_xl_offset_example-lora_1.0.safetensors": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors"
-    }
+    },
+    "previous_default_models": []
 }

167
readme.md
View File

@ -1,40 +1,30 @@
<div align=center> <div align=center>
<img src="https://github.com/lllyasviel/Fooocus/assets/19834515/483fb86d-c9a2-4c20-997c-46dafc124f25"> <img src="https://github.com/lllyasviel/Fooocus/assets/19834515/483fb86d-c9a2-4c20-997c-46dafc124f25">
**Non-cherry-picked** random batch by just typing two words "forest elf",
without any parameter tweaking, without any strange prompt tags.
See also **non-cherry-picked** generalization and diversity tests [here](https://github.com/lllyasviel/Fooocus/discussions/2067) and [here](https://github.com/lllyasviel/Fooocus/discussions/808) and [here](https://github.com/lllyasviel/Fooocus/discussions/679) and [here](https://github.com/lllyasviel/Fooocus/discussions/679#realistic).
In the entire open source community, only Fooocus can achieve this level of **non-cherry-picked** quality.
</div> </div>
# Fooocus # Fooocus
Fooocus is an image generating software (based on [Gradio](https://www.gradio.app/)). [>>> Click Here to Install Fooocus <<<](#download)
Fooocus is a rethinking of Stable Diffusion and Midjourneys designs: Fooocus is an image generating software (based on [Gradio](https://www.gradio.app/) <a href='https://github.com/gradio-app/gradio'><img src='https://img.shields.io/github/stars/gradio-app/gradio'></a>).
* Learned from Stable Diffusion, the software is offline, open source, and free. Fooocus presents a rethinking of image generator designs. The software is offline, open source, and free, while at the same time, similar to many online image generators like Midjourney, the manual tweaking is not needed, and users only need to focus on the prompts and images. Fooocus has also simplified the installation: between pressing "download" and generating the first image, the number of needed mouse clicks is strictly limited to less than 3. Minimal GPU memory requirement is 4GB (Nvidia).
* Learned from Midjourney, the manual tweaking is not needed, and users only need to focus on the prompts and images.
Fooocus has included and automated [lots of inner optimizations and quality improvements](#tech_list). Users can forget all those difficult technical parameters, and just enjoy the interaction between human and computer to "explore new mediums of thought and expanding the imaginative powers of the human species" `[1]`.
Fooocus has simplified the installation. Between pressing "download" and generating the first image, the number of needed mouse clicks is strictly limited to less than 3. Minimal GPU memory requirement is 4GB (Nvidia).
`[1]` David Holz, 2019.
**Recently many fake websites exist on Google when you search “fooocus”. Do not trust those here is the only official source of Fooocus.** **Recently many fake websites exist on Google when you search “fooocus”. Do not trust those here is the only official source of Fooocus.**
## [Installing Fooocus](#download) # Project Status: Limited Long-Term Support (LTS) with Bug Fixes Only
# Moving from Midjourney to Fooocus The Fooocus project, built entirely on the **Stable Diffusion XL** architecture, is now in a state of limited long-term support (LTS) with bug fixes only. As the existing functionalities are considered as nearly free of programmartic issues (Thanks to [mashb1t](https://github.com/mashb1t)'s huge efforts), future updates will focus exclusively on addressing any bugs that may arise.
Using Fooocus is as easy as (probably easier than) Midjourney but this does not mean we lack functionality. Below are the details. **There are no current plans to migrate to or incorporate newer model architectures.** However, this may change during time with the development of open-source community. For example, if the community converge to one single dominant method for image generation (which may really happen in half or one years given the current status), Fooocus may also migrate to that exact method.
For those interested in utilizing newer models such as **Flux**, we recommend exploring alternative platforms such as [WebUI Forge](https://github.com/lllyasviel/stable-diffusion-webui-forge) (also from us), [ComfyUI/SwarmUI](https://github.com/comfyanonymous/ComfyUI). Additionally, several [excellent forks of Fooocus](https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#forks) are available for experimentation.
Again, recently many fake websites exist on Google when you search “fooocus”. Do **NOT** get Fooocus from those websites this page is the only official source of Fooocus. We never have any website like such as “fooocus.com”, “fooocus.net”, “fooocus.co”, “fooocus.ai”, “fooocus.org”, “fooocus.pro”, “fooocus.one”. Those websites are ALL FAKE. **They have ABSOLUTLY no relationship to us. Fooocus is a 100% non-commercial offline open-source software.**
# Features
Below is a quick list using Midjourney's examples:
| Midjourney | Fooocus | | Midjourney | Fooocus |
| - | - | | - | - |
@ -55,7 +45,7 @@ Using Fooocus is as easy as (probably easier than) Midjourney but this does
| InsightFace | Input Image -> Image Prompt -> Advanced -> FaceSwap | | InsightFace | Input Image -> Image Prompt -> Advanced -> FaceSwap |
| Describe | Input Image -> Describe | | Describe | Input Image -> Describe |
We also have a few things borrowed from the best parts of LeonardoAI: Below is a quick list using LeonardoAI's examples:
| LeonardoAI | Fooocus | | LeonardoAI | Fooocus |
| - | - | | - | - |
@ -63,7 +53,7 @@ We also have a few things borrowed from the best parts of LeonardoAI:
| Advanced Sampler Parameters (like Contrast/Sharpness/etc) | Advanced -> Advanced -> Sampling Sharpness / etc |
| User-friendly ControlNets | Input Image -> Image Prompt -> Advanced |
Also, [click here to browse the advanced features.](https://github.com/lllyasviel/Fooocus/discussions/117)
# Download
You can directly download Fooocus with:
**[>>> Click here to download <<<](https://github.com/lllyasviel/Fooocus/releases/download/v2.5.0/Fooocus_win64_2-5-0.7z)**
After you download the file, please uncompress it and then run "run.bat".
After Fooocus 2.1.60, you will also have `run_anime.bat` and `run_realistic.bat`. They are different model presets (and require different models, but they will be automatically downloaded). [Check here for more details](https://github.com/lllyasviel/Fooocus/discussions/679).
After Fooocus 2.3.0 you can also switch presets directly in the browser. Keep in mind to add these arguments if you want to change the default behavior (see the example after this list):
* Use `--disable-preset-selection` to disable preset selection in the browser.
* Use `--always-download-new-model` to download missing models when switching presets. By default, Fooocus falls back to the `previous_default_models` defined in the corresponding preset; see also the terminal output.
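For example, to launch the realistic preset while keeping the in-browser preset selector disabled, the two documented flags can be combined like this (an illustrative invocation; adjust to your setup):

```
# hypothetical example combining the documented flags above
python entry_with_update.py --preset realistic --disable-preset-selection
```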
![image](https://github.com/lllyasviel/Fooocus/assets/19834515/d386f817-4bd7-490c-ad89-c1e228c23447)
If you already have these files, you can copy them to the above locations to speed up installation.
### Colab
(Last tested - 2024 Aug 12 by [mashb1t](https://github.com/mashb1t))
| Colab | Info |
| --- | --- |
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/lllyasviel/Fooocus/blob/main/fooocus_colab.ipynb) | Fooocus Official
In Colab, you can modify the last line to `!python entry_with_update.py --share --always-high-vram` or `!python entry_with_update.py --share --always-high-vram --preset anime` or `!python entry_with_update.py --share --always-high-vram --preset realistic` for Fooocus Default/Anime/Realistic Edition.
You can also change the preset in the UI. Please be aware that this may lead to timeouts after 60 seconds. If this is the case, please wait until the download has finished, then change the preset back to the initial one and again to the one you've selected, or reload the page.
Note that this Colab will disable refiner by default because Colab free's resources are relatively limited (and some "big" features like image prompt may cause free-tier Colab to disconnect). We make sure that basic text-to-image is always working on free-tier Colab.
Using `--always-high-vram` shifts resource allocation from RAM to VRAM and achieves the overall best balance between performance, flexibility and stability on the default T4 instance. Please find more information [here](https://github.com/lllyasviel/Fooocus/pull/1710#issuecomment-1989185346).
Thanks to [camenduru](https://github.com/camenduru) for the template!
### Linux (Using Anaconda)
AMD is not intensively tested, however. The AMD support is in beta.
For AMD, use `.\python_embeded\python.exe Fooocus\entry_with_update.py --directml --preset anime` or `.\python_embeded\python.exe Fooocus\entry_with_update.py --directml --preset realistic` for Fooocus Anime/Realistic Edition.
### Mac
Use `python entry_with_update.py --preset anime` or `python entry_with_update.py --preset realistic` for Fooocus Anime/Realistic Edition.
### Docker
See [docker.md](docker.md)
### Download Previous Version
See the guidelines [here](https://github.com/lllyasviel/Fooocus/discussions/1405).
Given different goals, the default models and configs of Fooocus are different:
| Task | Windows | Linux args | Main Model | Refiner | Config |
| --- | --- | --- | --- | --- | --- |
| General | run.bat | | juggernautXL_v8Rundiffusion | not used | [here](https://github.com/lllyasviel/Fooocus/blob/main/presets/default.json) |
| Realistic | run_realistic.bat | --preset realistic | realisticStockPhoto_v20 | not used | [here](https://github.com/lllyasviel/Fooocus/blob/main/presets/realistic.json) |
| Anime | run_anime.bat | --preset anime | animaPencilXL_v500 | not used | [here](https://github.com/lllyasviel/Fooocus/blob/main/presets/anime.json) |
Note that the download is **automatic** - you do not need to do anything if the internet connection is okay. However, you can also download the models manually (or move them from somewhere else) if you have your own preparation.
## UI Access and Authentication
In addition to running on localhost, Fooocus can also expose its UI in two ways:
* Local UI listener: use `--listen` (specify port e.g. with `--port 8888`).
* API access: use `--share` (registers an endpoint at `.gradio.live`).
In both ways the access is unauthenticated by default. You can add basic authentication by creating a file called `auth.json` in the main directory, which contains a list of JSON objects with the keys `user` and `pass` (see example in [auth-example.json](./auth-example.json)).
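A minimal sketch of such an `auth.json` (the credentials below are placeholders, not values shipped with Fooocus):

```json
[
    {"user": "fooocus", "pass": "replace-me"}
]
```

With this file in place, starting Fooocus with `--listen` or `--share` should then ask for these credentials before the UI loads.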
## List of "Hidden" Tricks
<a name="tech_list"></a>
<details>
<summary>Click to see a list of tricks. These are based on SDXL and are not very up to date with the latest models.</summary>

1. GPT2-based [prompt expansion as a dynamic style "Fooocus V2".](https://github.com/lllyasviel/Fooocus/discussions/117#raw) (similar to Midjourney's hidden pre-processing and "raw" mode, or LeonardoAI's Prompt Magic).
2. Native refiner swap inside one single k-sampler. The advantage is that the refiner model can now reuse the base model's momentum (or ODE's history parameters) collected from k-sampling to achieve more coherent sampling. In Automatic1111's high-res fix and ComfyUI's node system, the base model and refiner use two independent k-samplers, which means the momentum is largely wasted, and the sampling continuity is broken. Fooocus uses its own advanced k-diffusion sampling that ensures seamless, native, and continuous swap in a refiner setup. (Update Aug 13: Actually, I discussed this with Automatic1111 several days ago, and it seems that the “native refiner swap inside one single k-sampler” is [merged](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12371) into the dev branch of webui. Great!)
3. Negative ADM guidance. Because the highest resolution level of XL Base does not have cross attentions, the positive and negative signals for XL's highest resolution level cannot receive enough contrast during CFG sampling, causing the results to look a bit plastic or overly smooth in certain cases. Fortunately, since XL's highest resolution level is still conditioned on image aspect ratios (ADM), we can modify the ADM on the positive/negative side to compensate for the lack of CFG contrast in the highest resolution level. (Update Aug 16: the iOS app [Draw Things](https://apps.apple.com/us/app/draw-things-ai-generation/id6444050820) will support Negative ADM Guidance. Great!)
4. We implemented a carefully tuned variation of Section 5.1 of ["Improving Sample Quality of Diffusion Models Using Self-Attention Guidance"](https://arxiv.org/pdf/2210.00939.pdf). The weight is set to very low, but this is Fooocus's final guarantee to make sure that XL will never yield an overly smooth or plastic appearance (examples [here](https://github.com/lllyasviel/Fooocus/discussions/117#sharpness)). This can almost eliminate all cases for which XL still occasionally produces overly smooth results, even with negative ADM guidance. (Update 2023 Aug 18: the Gaussian kernel of SAG is changed to an anisotropic kernel for better structure preservation and fewer artifacts.)
5. We modified the style templates a bit and added the "cinematic-default".
6. We tested the "sd_xl_offset_example-lora_1.0.safetensors" and it seems that when the lora weight is below 0.5, the results are always better than XL without lora.
12. Using automatic1111's method to normalize prompt emphasizing. This significantly improves results when users directly copy prompts from civitai.
13. The joint swap system of the refiner now also supports img2img and upscale in a seamless way.
14. CFG Scale and TSNR correction (tuned for SDXL) when CFG is bigger than 10.
</details>
## Customization
```
entry_with_update.py [-h] [--listen [IP]] [--port PORT]
                     [--disable-header-check [ORIGIN]]
                     [--web-upload-size WEB_UPLOAD_SIZE]
                     [--hf-mirror HF_MIRROR]
                     [--external-working-path PATH [PATH ...]]
                     [--output-path OUTPUT_PATH]
                     [--temp-path TEMP_PATH] [--cache-path CACHE_PATH]
                     [--in-browser] [--disable-in-browser]
                     [--gpu-device-id DEVICE_ID]
                     [--async-cuda-allocation | --disable-async-cuda-allocation]
                     [--disable-attention-upcast]
                     [--all-in-fp32 | --all-in-fp16]
                     [--unet-in-bf16 | --unet-in-fp16 | --unet-in-fp8-e4m3fn | --unet-in-fp8-e5m2]
                     [--vae-in-fp16 | --vae-in-fp32 | --vae-in-bf16]
                     [--vae-in-cpu]
                     [--clip-in-fp8-e4m3fn | --clip-in-fp8-e5m2 | --clip-in-fp16 | --clip-in-fp32]
                     [--directml [DIRECTML_DEVICE]]
                     [--disable-ipex-hijack]
                     [--preview-option [none,auto,fast,taesd]]
                     [--attention-split | --attention-quad | --attention-pytorch]
                     [--disable-xformers]
                     [--always-gpu | --always-high-vram | --always-normal-vram | --always-low-vram | --always-no-vram | --always-cpu [CPU_NUM_THREADS]]
                     [--always-offload-from-vram]
                     [--pytorch-deterministic] [--disable-server-log]
                     [--debug-mode] [--is-windows-embedded-python]
                     [--disable-server-info] [--multi-user] [--share]
                     [--preset PRESET] [--disable-preset-selection]
                     [--language LANGUAGE]
                     [--disable-offload-from-vram] [--theme THEME]
                     [--disable-image-log] [--disable-analytics]
                     [--disable-metadata] [--disable-preset-download]
                     [--disable-enhance-output-sorting]
                     [--enable-auto-describe-image]
                     [--always-download-new-model]
                     [--rebuild-hash-cache [CPU_NUM_THREADS]]
```
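As an illustrative combination of the documented flags above (not the only valid one), a LAN-exposed instance on port 8888 with image logging disabled could be started with:

```
# illustrative invocation using flags from the list above
python entry_with_update.py --listen --port 8888 --disable-image-log
```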
## Inline Prompt Features
### Wildcards
Example prompt: `__color__ flower`
Processed for both the positive and the negative prompt.

Selects a random option from a predefined wildcard list, in this case from the `wildcards/color.txt` file.
The wildcard will be replaced with a random color (randomness based on seed).
You can also disable randomness and process a wildcard file from top to bottom by enabling the checkbox `Read wildcards in order` in Developer Debug Mode.
Wildcards can be nested and combined, and multiple wildcards can be used in the same prompt (example see `wildcards/color_flower.txt`).
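To illustrate, a wildcard file is a plain-text list with one option per line. A hypothetical `wildcards/color.txt` might contain:

```
red
green
blue
golden
```

With the prompt `__color__ flower`, one of these lines is then substituted per generation (randomness based on seed), e.g. yielding `golden flower`.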
### Array Processing
Example prompt: `[[red, green, blue]] flower`
Processed only for the positive prompt.
Processes the array from left to right, generating a separate image for each element in the array. In this case 3 images would be generated, one for each color.
Increase the image number to 3 to generate all 3 variants.
Arrays cannot be nested, but multiple arrays can be used in the same prompt.

Inline LoRAs are also supported as array elements!
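To make the expansion concrete: the prompt `[[red, green, blue]] flower` is processed into one prompt per array element, so with an image number of 3 the generated prompts are:

```
red flower
green flower
blue flower
```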
### Inline LoRAs
Example prompt: `flower <lora:sunflowers:1.2>`
Processed only for the positive prompt.
Applies a LoRA to the prompt. The LoRA file must be located in the `models/loras` directory.
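Since these inline features can be combined, a (hypothetical) prompt mixing an array, a wildcard and an inline LoRA could look like this; it would generate two images, each with a randomly chosen color and the `sunflowers` LoRA applied at weight 1.2:

```
[[watercolor, photo]] __color__ flower <lora:sunflowers:1.2>
```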
## Advanced Features
[Click here to browse the advanced features.](https://github.com/lllyasviel/Fooocus/discussions/117)
## Forks
Below are some forks of Fooocus:
| Fooocus' forks |
| - |
| [fenneishi/Fooocus-Control](https://github.com/fenneishi/Fooocus-Control) </br>[runew0lf/RuinedFooocus](https://github.com/runew0lf/RuinedFooocus) </br> [MoonRide303/Fooocus-MRE](https://github.com/MoonRide303/Fooocus-MRE) </br> [mashb1t/Fooocus](https://github.com/mashb1t/Fooocus) </br> and so on ... |
See also [About Forking and Promotion of Forks](https://github.com/lllyasviel/Fooocus/discussions/699).
## Thanks
Many thanks to [twri](https://github.com/twri) and [3Diva](https://github.com/3Diva) and [Marc K3nt3L](https://github.com/K3nt3L) for creating additional SDXL styles available in Fooocus.
The project started from a mixture of the [Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) and [ComfyUI](https://github.com/comfyanonymous/ComfyUI) codebases.
Also, thanks to [daswer123](https://github.com/daswer123) for contributing the Canvas Zoom!
## Update Log
The log is [here](update_log.md).
## Localization/Translation/I18N
**We need your help!** Please help translate Fooocus into international languages.
You can put json files in the `language` folder to translate the user interface.
For example, a translation file such as `Fooocus/language/example.json` uses the following format:
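The real file ships with the repository; as an illustrative sketch only (the entries below are placeholders, not the actual file contents), each key is an original UI string and each value its translation:

```json
{
    "Generate": "生成",
    "Input Image": "输入图像"
}
```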
`requirements_docker.txt` (new file):

```
torch==2.1.0
torchvision==0.16.0
```

The pinned requirements were also updated; the new versions are:

```
torchsde==0.2.6
einops==0.8.0
transformers==4.42.4
safetensors==0.4.3
accelerate==0.32.1
pyyaml==6.0.1
pillow==10.4.0
scipy==1.14.0
tqdm==4.66.4
psutil==6.0.0
pytorch_lightning==2.3.3
omegaconf==2.3.0
gradio==3.41.2
pygit2==1.15.1
opencv-contrib-python-headless==4.10.0.84
httpx==0.27.0
onnxruntime==1.18.1
timm==1.0.7
numpy==1.26.4
tokenizers==0.19.1
packaging==24.1
rembg==2.0.57
groundingdino-py==0.4.0
segment_anything==1.0
```