Commit Graph

301 Commits

Author SHA1 Message Date
Manuel Schmid c32bc5e199
feat: add optional model VAE select (#2867)
* Revert "fix: use LF as line breaks for Docker entrypoint.sh (#2843)" (#2865)

False alarm, it worked as intended before. Sorry for the fuss.
This reverts commit d16a54edd6.

* feat: add VAE select

* feat: use different default label, add translation

* fix: do not reload model when VAE stays the same

* refactor: code cleanup

* feat: add metadata handling
2024-05-09 18:59:35 +02:00
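
Note: the "do not reload model when VAE stays the same" change amounts to comparing the requested selection against what is already loaded before triggering a reload. A minimal sketch of such a guard, with hypothetical names (refresh_model, _loaded_vae) that are not the repository's actual identifiers:

```python
# Illustrative only: hypothetical module-level cache, not the actual Fooocus code.
_loaded_base_model = None
_loaded_vae = None

def refresh_model(base_model_name: str, vae_name: str) -> None:
    """Reload checkpoint/VAE only when the selection actually changed."""
    global _loaded_base_model, _loaded_vae
    if (base_model_name, vae_name) == (_loaded_base_model, _loaded_vae):
        return  # nothing changed, skip the expensive reload
    # ... load the checkpoint and the selected VAE here ...
    _loaded_base_model, _loaded_vae = base_model_name, vae_name
```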
Manuel Schmid a6207a949a
Merge branch 'feature/add-vae-select' 2024-05-05 01:07:39 +02:00
Manuel Schmid b0d3a3abb0
Merge branch 'feature/add-random-style'
# Conflicts:
#	modules/async_worker.py
2024-05-05 01:07:34 +02:00
Manuel Schmid ab76a26806
feat: add metadata handling 2024-05-05 00:33:24 +02:00
Manuel Schmid 8e6299b898
feat: add VAE select 2024-05-04 20:36:47 +02:00
Manuel Schmid 0db3c10c60
feat: add random style 2024-05-02 22:20:18 +02:00
Manuel Schmid 2bd9936d5b
Merge branch 'feature/hyper-sd-performance'
# Conflicts:
#	modules/async_worker.py
2024-04-30 15:24:28 +02:00
Manuel Schmid 27df5df20b
feat: use LoRA weight 0.8, sampler dpmpp_sde_gpu and scheduler_name karras
as suggested in https://github.com/lllyasviel/Fooocus/discussions/2813#discussioncomment-9245251
for results see https://github.com/lllyasviel/Fooocus/discussions/2813#discussioncomment-9275251
2024-04-30 15:06:33 +02:00
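
For reference, the Hyper-SD settings named in the commit above could be collected as a per-performance default set; the values come from the commit message, but the dict itself is only an illustration, not repository code:

```python
# Values from the commit message; the structure is hypothetical.
HYPER_SD_DEFAULTS = {
    "lora_weight": 0.8,
    "sampler_name": "dpmpp_sde_gpu",
    "scheduler_name": "karras",
}
```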
Manuel Schmid fa74a0c7fe
feat: add performance hyper-sd based on 4step LoRA 2024-04-26 22:22:09 +02:00
delta_lt_0 5ada070d88
feat: support download of huggingface files from a mirror website (#2637)
* fix: load image number from preset (#2611)

* fix: add default_image_number to preset handling

* fix: use minimum image number of preset and config to prevent UI overflow

* fix: use correct base dimensions for outpaint mask padding (#2612)

* fix: add Civitai compatibility for LoRAs in a1111 metadata scheme by switching schema (#2615)

* feat: update sha256 generation functions

29be1da7cf/modules/hashes.py

* feat: add compatibility for LoRAs in a1111 metadata scheme

* feat: add backwards compatibility

* refactor: extract remove_special_loras

* fix: correctly apply LoRA weight for legacy schema

* docs: bump version number to 2.3.1, add changelog (#2616)

* feat: support download of huggingface files from a mirror site

---------

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
2024-04-06 15:25:19 +02:00
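
The mirror-download feature boils down to rewriting huggingface.co URLs to a configured mirror host before downloading. A hedged sketch, assuming an environment variable (here called HF_MIRROR) carries the mirror base URL; the actual mechanism in the PR may differ:

```python
import os

def resolve_download_url(url: str) -> str:
    """Rewrite huggingface.co URLs to a mirror host if one is configured.

    HF_MIRROR is an assumed variable name for this sketch.
    """
    mirror = os.environ.get("HF_MIRROR")
    if mirror and url.startswith("https://huggingface.co"):
        return url.replace("https://huggingface.co", mirror.rstrip("/"), 1)
    return url
```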
Manuel Schmid 8fff0476ce
Merge commit 'e2f9bcb11d06216d6800676c48d8d74d6fd77a4b'
# Conflicts:
#	fooocus_version.py
#	modules/meta_parser.py
2024-03-23 17:04:47 +01:00
Manuel Schmid 9aaa400553
fix: use correct base dimensions for outpaint mask padding (#2612) 2024-03-23 13:10:21 +01:00
Manuel Schmid 679c02a09f
Merge branch 'main_upstream'
# Conflicts:
#	css/style.css
#	fooocus_colab.ipynb
#	fooocus_version.py
#	launch.py
#	modules/async_worker.py
#	modules/config.py
#	modules/flags.py
#	modules/meta_parser.py
#	webui.py
2024-03-18 21:27:56 +01:00
Manuel Schmid 8baafcd79c
Merge branch 'main_upstream' into develop 2024-03-15 20:52:06 +01:00
Manuel Schmid 9cd0366d30
fix: parse seed as string to display correctly in metadata preview (#2536) 2024-03-15 20:38:21 +01:00
Manuel Schmid 57a01865b9
refactor: only use LoRA activate on handover to async worker, extract method 2024-03-11 23:49:45 +01:00
xhoxye ead24c9361
feat: read wildcards in order (wildcard enhancement, sequential reading) (#1761)
* Wildcard enhancement: switch to sequential reading

Wildcard enhancement: a checkbox toggles the wildcard reading method. When unchecked (default), a random line is read; when checked, lines are read in order and the same seed is reused.

* Code contributed by 刁璐璐

* update

* Update async_worker.py

* refactor: rename read_wildcard_in_order_checkbox to read_wildcard_in_order

* fix: use correct method call for interrupt_current_processing

actually achieves the same result, stopping the task

* refactor: move checkbox to developer debug mode, rename to plural

below disable seed increment

* refactor: code cleanup, separate code for disable_seed_increment

* i18n: add translation for checkbox text

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-03-10 23:18:36 +01:00
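
The in-order wildcard reading described above can be pictured as follows: unchecked keeps the old behaviour (a seeded random line), checked steps through the file sequentially while reusing the same seed. A minimal sketch with hypothetical names, not the actual async_worker code:

```python
import random

def pick_wildcard_line(lines, seed, image_index, read_wildcards_in_order):
    """Pick one line from a wildcard file (illustrative sketch)."""
    if read_wildcards_in_order:
        return lines[image_index % len(lines)]  # sequential, wraps around
    return random.Random(seed).choice(lines)    # default: seeded random line
```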
Manuel Schmid 5c7dc12470
Merge branch 'main_upstream' into develop 2024-03-10 23:14:52 +01:00
Manuel Schmid bc9c586082
fix: use correct method call for interrupt_current_processing (#2506)
actually achieves the same result, stopping the task
2024-03-10 23:13:09 +01:00
xhoxye db7d2018ca
fix: change synthetic refiner switch from 0.5 to 0.8 (#2165)
* Fix two problems:

1. In inpainting (partial redraw), use_synthetic_refiner is enabled when the refiner is empty; the default switch timing of 0.5 is too early and is now changed to the SDXL default of 0.8.
2. When using custom steps, the switch timing was calculated incorrectly; it is now computed as "steps x timing" after the custom step count is applied.

* fix: parse width and height as int when applying metadata (#2452)

fixes an issue with A1111 metadata scheme where width and height are strings after splitting resolution

* fix: do not attempt to remove non-existing image grid file (#2456)

image grid is actually not an image here but a numpy array, as the grid isn't saved by default

* feat: add troubleshooting guide to bug report template again (#2489)

---------

Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-03-10 14:42:03 +01:00
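
The corrected switch-timing calculation from the commit above is simply "steps x timing" applied after any custom step count; a sketch under that assumption (function name hypothetical):

```python
def refiner_switch_step(steps: int, refiner_switch: float = 0.8) -> int:
    """Step at which generation hands over to the (synthetic) refiner.

    0.8 is the SDXL default named in the commit; with 30 custom steps this
    yields a switch at step 24.
    """
    return int(round(steps * refiner_switch))
```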
Manuel Schmid 25650b4bc4
feat: add performance lightning with 4 step LoRA (#2415)
* feat: add performance sdxl lightning

based on https://huggingface.co/ByteDance/SDXL-Lightning/blob/main/sdxl_lightning_4step_lora.safetensors

* feat: add method for centralized restriction of features for specific performance modes

* feat: add lightning preset
2024-03-10 14:34:48 +01:00
Manuel Schmid bb7156ff55
Merge branch 'main_upstream'
# Conflicts:
#	fooocus_version.py
#	modules/async_worker.py
2024-03-04 11:49:51 +01:00
Manuel Schmid c3fd57acb9
feat: add metadata flag and steps override to history log (#2425)
* feat: add metadata hint to history log

* feat: add actual metadata_scheme to log instead of only boolean

* feat: add steps to log if they were overridden

* fix: pass copy of metadata

prevents LoRA file extension removal in history log caused by passing reference to meta_parser fooocus scheme
2024-03-03 19:34:38 +01:00
Manuel Schmid 5c1886b9bd
Merge branch 'feature/add-metadata-hint-to-history-log' 2024-03-03 15:26:37 +01:00
Manuel Schmid 7651c13b37
feat: add actual metadata_scheme to log instead of only boolean 2024-03-03 15:26:21 +01:00
Manuel Schmid 8e999764a8
Merge branch 'feature/add-inpaint-mask-generation'
# Conflicts:
#	language/en.json
#	modules/config.py
2024-03-03 15:14:20 +01:00
Manuel Schmid bf63506e89
fix: merge full array shapes, not only single channel of mask 2024-03-03 15:02:03 +01:00
Manuel Schmid 83c24ff2e5
feat: add inpaint functionality for mask upload
allows quickly adjusting the mask after automated generation
2024-03-03 14:50:07 +01:00
Manuel Schmid 57c9c5a95e
Merge branch 'feature/add-metadata-hint-to-history-log' 2024-03-03 13:25:23 +01:00
Manuel Schmid 7ef65ca94c
feat: add metadata hint to history log 2024-03-03 13:12:04 +01:00
Manuel Schmid ac841f1f4c
Merge branch 'feature/add-performance-sdxl-lightning'
# Conflicts:
#	modules/async_worker.py
2024-03-03 01:26:44 +01:00
Manuel Schmid 0768357136
Merge branch 'main_upstream'
# Conflicts:
#	modules/async_worker.py
2024-03-03 00:46:54 +01:00
Manuel Schmid 0f6e912fde
feat: add performance sdxl lightning
based on https://huggingface.co/ByteDance/SDXL-Lightning/blob/main/sdxl_lightning_4step_lora.safetensors
2024-03-03 00:40:02 +01:00
Manuel Schmid 4ea3baff50
fix: add handling for filepaths to image grid (#2414)
previously skipped because the value was a string rather than an np.ndarray
2024-03-03 00:21:59 +01:00
Manuel Schmid 90839430da
fix: adjust parameters for upscale fast 2x (#2411) 2024-03-02 19:05:11 +01:00
Manuel Schmid 056840c513
Merge commit '4945fc99624afc661aae2d3c5c5d73a32ba21897'
# Conflicts:
#	fooocus_version.py
#	language/en.json
#	launch.py
#	modules/async_worker.py
#	modules/config.py
#	modules/flags.py
#	modules/meta_parser.py
#	modules/util.py
#	webui.py
2024-03-02 17:24:53 +01:00
Manuel Schmid b6d23670d8
feat: add jpg and webp support, add exif data handling for metadata (#1863)
* feature: added flag, config and ui update for image extension change #1789

* moved function to config module

* moved image extension handling to webui via async worker, passing it as a parameter to the log and get_current_html_path functions per feedback

* check flag before displaying image extension radio button

* disabled if image log flag is passed in

* fix: add missing image_extension parameter to log call

* refactor: change label

* feat: add webp to image_extensions

supported image extensions: see https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html

* feat: use consistent file name in gradio

returns and uses filepaths instead of numpy images by saving to the temp dir
uses double the temp-dir file storage on disk, as it saves to both the temp dir and the Gradio temp dir when displaying the image, but reuses the logged output image

* feat: delete temp images after yielding to gradio

* feat: use args temp path if given

* chore: code cleanup, remove redundant if statement

* feat: always show image_extension element

this is now possible due to image extension support in gradio via https://github.com/lllyasviel/Fooocus/pull/1932

* refactor: rename image_extension to image_file_extension

* feat: use optimized jpg parameters when saving the image

quality=95
optimize=True
progressive=True

* refactor: rename image_file_extension to output_format

* feat: add exif handling

* refactor: code cleanup, remove items from metadata output

---------

Co-authored-by: Manuel Schmid <dev@mash1t.de>
Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
Co-authored-by: eddyizm <wtfisup@hotmail.com>
2024-02-26 15:31:32 +01:00
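
The optimized JPEG parameters listed above (quality=95, optimize=True, progressive=True) map directly onto Pillow's save options; a minimal sketch, with a hypothetical helper name rather than the repository's actual function:

```python
from PIL import Image

def save_output(image: Image.Image, path: str, output_format: str = "jpg") -> None:
    """Save the output image in the selected format (illustrative sketch)."""
    if output_format == "jpg":
        image.save(path, quality=95, optimize=True, progressive=True)
    else:
        # png / webp: Pillow infers the format from the file extension
        image.save(path)
```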
Manuel Schmid ba9eadbcda
feat: add metadata to images (#1940)
* feat: add metadata logging for images

inspired by https://github.com/MoonRide303/Fooocus-MRE

* feat: add config and checkbox for save_metadata_to_images

* feat: add argument disable_metadata

* feat: add support for A1111 metadata schema

cf2772fab0/modules/processing.py (L672)

* feat: add model hash support for a1111

* feat: use resolved prompts with included expansion and styles for a1111 metadata

* fix: code cleanup and resolved prompt fixes

* feat: add config metadata_created_by

* fix: use string instead of quote wrap for A1111 created_by

* fix: correctly hide/show metadata schema on app start

* fix: do not generate hashes when arg --disable-metadata is used

* refactor: rename metadata_schema to metadata_scheme

* fix: use pnginfo "parameters" instead of "Comments"

see https://github.com/RupertAvery/DiffusionToolkit/issues/202 and cf2772fab0/modules/processing.py (L939)

* feat: add resolved prompts to metadata

* fix: use correct default value in metadata check for created_by

* wip: add metadata mapping, reading and writing

applying data after reading currently not functional for A1111

* feat: rename metadata tab and import button label

* feat: map basic information for scheme A1111

* wip: optimize handling for metadata in Gradio calls

* feat: add enums for Performance, Steps and StepsUOV

also move MetadataSchema enum to prevent circular dependency

* fix: correctly map resolution, use empty styles for A1111

* chore: code cleanup

* feat: add A1111 prompt style detection

only detects one style as Fooocus doesn't wrap {prompt} with the whole style, but has a separate prompt string for each style

* wip: add prompt style extraction for A1111 scheme

* feat: sort styles after metadata import

* refactor: use central flag for LoRA count

* refactor: use central flag for ControlNet image count

* fix: use correct LoRA mapping, add fallback for backwards compatibility

* feat: add created_by again

* feat: add prefix "Fooocus" to version

* wip: code cleanup, update todos

* fix: use correct order to read LoRA in meta parser

* wip: code cleanup, update todos

* feat: make sha256 with length 10 default

* feat: add lora handling to A1111 scheme

* feat: override existing LoRA values when importing; keeping them would cause images to differ

* fix: correctly extract prompt style when only prompt expansion is selected

* feat: allow model / LoRA loading from subfolders

* feat: code cleanup, do not queue metadata preview on image upload

* refactor: add flag for refiner_swap_method

* feat: add metadata handling for all non-img2img parameters

* refactor: code cleanup

* chore: use str as return type in calculate_sha256

* feat: add hash cache to metadata

* chore: code cleanup

* feat: add method get_scheme to Metadata

* fix: align handling for scheme Fooocus by removing lcm lora from json parsing

* refactor: add step before parsing to set data in parser

- add constructor for MetadataSchema class
- remove showable and copyable from log output
- add functional hash cache (model hashing takes about 5 seconds, only required once per model, using hash lazy loading)

* feat: sort metadata attributes before writing to image

* feat: add translations and hint for image prompt parameters

* chore: check and remove ToDo's

* refactor: merge metadata.py into meta_parser.py

* fix: add missing refiner in A1111 parse_json

* wip: add TODO for multiline prompt style resolution

* fix: remove sorting for A1111, change performance key position

fixes https://github.com/lllyasviel/Fooocus/pull/1940#issuecomment-1924444633

* fix: add workaround for multiline prompts

* feat: add sampler mapping

* feat: prevent config reset by renaming metadata_scheme to match config options

* chore: remove remaining todos after analysis

refiner is added when set
restoring multiline prompts has been resolved by using separate parameters "raw_prompt" and "raw_negative_prompt"

* chore: specify too broad exception types

* feat: add mapping for _gpu samplers to cpu samplers

gpu samplers are less deterministic than cpu but in general similar, see https://www.reddit.com/r/comfyui/comments/15hayzo/comment/juqcpep/

* feat: add better handling for image import with empty metadata

* fix: parse adaptive_cfg as float instead of string

* chore: loosen strict type for parse_json, fix indent

* chore: make steps enums more strict

* feat: only override steps if metadata value is not in steps enum or in steps enum and performance is not the same

* fix: handle empty strings in metadata

e.g. raw negative prompt when none is set
2024-02-26 14:27:57 +01:00
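
Two building blocks of the metadata work above are the shortened model hashes ("sha256 with length 10") and writing the A1111-compatible "parameters" PNG text chunk instead of "Comments". A hedged sketch of both, with hypothetical helper names; only standard hashlib and Pillow calls are used:

```python
import hashlib
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def calculate_sha256(filename: str, length: int = 10) -> str:
    """Hash a model file and return the first `length` hex characters (sketch)."""
    sha = hashlib.sha256()
    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return sha.hexdigest()[:length]

def save_png_with_metadata(image: Image.Image, path: str, parameters: str) -> None:
    """Embed generation data in the 'parameters' text chunk that A1111-style
    tools read (see the linked DiffusionToolkit issue)."""
    info = PngInfo()
    info.add_text("parameters", parameters)
    image.save(path, pnginfo=info)
```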
Manuel Schmid d3113f5c3f
feat: use consistent file name in gradio (#1932)
* feat: use consistent file name in gradio

returns and uses filepaths instead of numpy images by saving to the temp dir
uses double the temp-dir file storage on disk, as it saves to both the temp dir and the Gradio temp dir when displaying the image, but reuses the logged output image

* feat: delete temp images after yielding to gradio

* feat: use args temp path if given

* chore: code cleanup, remove redundant if statement
2024-02-25 22:56:38 +01:00
Brian Flannery c898e6a4dc
feat: add array support on main prompt (#1503)
* prompt array support

* update change log

* update change log

* docs: remove 2.1.847 change log

* refactor: rename freeze_seed to disable_seed_increment, move to developer debug mode

* feat: add translation for new labels

* fix: use task_rng based on task_seed, not initial seed

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-25 22:22:49 +01:00
MindOfMatter 18f9f7dc31
feat: make lora number editable in config (#2215)
* Initial commit

* Update README.md

* sync with original main Fooocus repo

* update with my gitignore setup

* add max lora config feature

* Revert "add max lora config feature"

This reverts commit cfe7463fe2.

* add max loras config feature

* Update README.md

* Update .gitignore

* update

* merge

* revert

* refactor: rename default_loras_max_number to default_max_lora_number, validate config for int

* fix: add missing patch_all call and imports again

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-25 21:12:26 +01:00
MindOfMatter 468d704b29
feat: add button to enable LoRAs (#2210)
* Initial commit

* Update README.md

* sync with original main Fooocus repo

* update with my gitignore setup

* add max lora config feature

* Revert "add max lora config feature"

This reverts commit cfe7463fe2.

* add lora enabler feature

* Update README.md

* Update .gitignore

* update

* merge

* revert changes

* revert

* feat: change width of LoRA columns

* refactor: rename lora_enable to lora_enabled, optimize code

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-25 19:59:28 +01:00
Manuel Schmid 7cfb5e742d
feat: add advanced parameter for disable_intermediate_results (progress_gallery) (#1013)
* add advanced parameter for disable_intermediate_results

prevents gradio frontend process from clogging image output and updates in high throughput scenarios such as LCM with image number >= 4

* update disable_intermediate_results correctly

based on default and selected performance

* chore: add missing translations
2024-02-25 11:31:00 +01:00
Manuel Schmid 5b7ddf8b22
feat: advanced params refactoring + prevent users from skipping/stopping other users' tasks in queue (#981)
* only make stop_button and skip_button interactive when rendering process starts

fix inconsistency in behaviour of stop_button and skip_button, as it was possible to skip or stop other users' processes while still being in the queue

* use AsyncTask for last_stop handling instead of shared

* Revert "only make stop_button and skip_button interactive when rendering process starts"

This reverts commit d3f9156854.

* introduce state for task skipping/stopping

* fix return parameters of stop_clicked

* code cleanup, do not disable skip/stop on stop_clicked

* reset last_stop when skipping for further processing

* fix: replace fcbh with ldm_patched

* fix: use currentTask instead of ctrls after merging upstream

* feat: extract attribute disable_preview

* feat: extract attribute adm_scaler_positive

* feat: extract attribute adm_scaler_negative

* feat: extract attribute adm_scaler_end

* feat: extract attribute adaptive_cfg

* feat: extract attribute sampler_name

* feat: extract attribute scheduler_name

* feat: extract attribute generate_image_grid

* feat: extract attribute overwrite_step

* feat: extract attribute overwrite_switch

* feat: extract attribute overwrite_width

* feat: extract attribute overwrite_height

* feat: extract attribute overwrite_vary_strength

* feat: extract attribute overwrite_upscale_strength

* feat: extract attribute mixing_image_prompt_and_vary_upscale

* feat: extract attribute mixing_image_prompt_and_inpaint

* feat: extract attribute debugging_cn_preprocessor

* feat: extract attribute skipping_cn_preprocessor

* feat: extract attribute canny_low_threshold

* feat: extract attribute canny_high_threshold

* feat: extract attribute refiner_swap_method

* feat: extract freeu_ctrls attributes

freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2

* feat: extract inpaint_ctrls attributes

debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field, inpaint_mask_upload_checkbox, invert_mask_checkbox, inpaint_erode_or_dilate

* wip: add TODOs

* chore: cleanup code

* feat: extract attribute controlnet_softness

* feat: extract remaining attributes, do not use globals in patch

* fix: resolve circular import, patch_all now in async_worker

* chore: cleanup pid code
2024-02-24 19:01:06 +01:00
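
The long "extract attribute ..." series above replaces module-level globals in advanced_parameters with per-task attributes on AsyncTask, read once from the Gradio argument list. A rough sketch of the pattern; the constructor signature and argument order are hypothetical:

```python
class AsyncTask:
    """Illustrative sketch: per-task attributes instead of shared globals."""

    def __init__(self, args: list):
        args = args.copy()
        self.disable_preview = args.pop(0)
        self.adm_scaler_positive = args.pop(0)
        self.adm_scaler_negative = args.pop(0)
        self.adm_scaler_end = args.pop(0)
        self.adaptive_cfg = args.pop(0)
        self.sampler_name = args.pop(0)
        self.scheduler_name = args.pop(0)
        # ... remaining advanced parameters are extracted the same way ...
```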
Manuel Schmid 1d606ecb7e
feat: optimize image censoring
No longer saves the image to file twice (log and yield), but only once (log).
2024-02-24 18:09:45 +01:00
Manuel Schmid 4905f3f2fa
fix: use correct format for upscale metadata 2024-02-24 17:50:40 +01:00
Manuel Schmid 741a693083
feat: add ByteDance lightning preset, code cleanup 2024-02-22 23:02:58 +01:00
Manuel Schmid 5e3816a8b3
fix: add nsfw filter support again
accidentally deleted when merging
2024-02-19 23:12:33 +01:00
Manuel Schmid 26601a99d1
Merge branch 'feature/add-metadata-to-files' 2024-02-18 16:27:29 +01:00
Manuel Schmid 267d5eee7d
Merge commit '1c999be8c8134fe01a75723ea933858435856950'
# Conflicts:
#	.github/ISSUE_TEMPLATE/bug_report.md
#	launch.py
#	modules/async_worker.py
#	modules/config.py
#	modules/private_logger.py
#	modules/util.py
#	webui.py
2024-02-12 21:13:36 +01:00
Manuel Schmid f4a8bf24cf
fix: correctly calculate refiner switch when overwrite_switch is > 0 (#2165)
When using custom steps, the switch timing was calculated incorrectly; it is now computed as "steps x timing" after the custom step count is applied.
By @xhoxye
2024-02-11 15:13:20 +01:00
hisk2323 eb3f4d745c
feat: add suffix ordinals (#845)
* add suffix ordinals with lambda

* delay importing of modules.config (#2195)

* refactor: use easier to read version to find matching ordinal suffix

---------

Co-authored-by: rsl8 <138326583+rsl8@users.noreply.github.com>
Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
Co-authored-by: Manuel Schmid <9307310+mashb1t@users.noreply.github.com>
2024-02-10 21:49:23 +01:00
Manuel Schmid ceefba9b69
Merge branch 'feature/add-metadata-to-files'
# Conflicts:
#	language/en.json
#	modules/async_worker.py
#	modules/config.py
#	modules/flags.py
#	modules/meta_parser.py
#	modules/private_logger.py
#	modules/util.py
#	webui.py
2024-02-04 21:09:24 +01:00
Manuel Schmid 8af73e622f
chore: remove remaining todos after analysis
refiner is added when set
restoring multiline prompts has been resolved by using separate parameters "raw_prompt" and "raw_negative_prompt"
2024-02-04 00:44:26 +01:00
Manuel Schmid ed4a958da8
fix: add workaround for multiline prompts 2024-02-02 22:04:28 +01:00
Manuel Schmid f745d40687
refactor: merge metadata.py into meta_parser.py 2024-02-02 01:55:32 +01:00
Manuel Schmid e55870124b
refactor: add step before parsing to set data in parser
- add constructor for MetadataSchema class
- remove showable and copyable from log output
- add functional hash cache (model hashing takes about 5 seconds, only required once per model, using hash lazy loading)
2024-02-02 01:25:47 +01:00
Manuel Schmid 9bdb65ec5d
feat: add metadata handling for all non-img2img parameters 2024-01-31 01:18:09 +01:00
Manuel Schmid 89c8e3a812
feat: make sha256 with length 10 default 2024-01-29 21:54:39 +01:00
Manuel Schmid e541097451
wip: code cleanup, update todos 2024-01-29 21:54:22 +01:00
Manuel Schmid 33d644f4a5
feat: add prefix "Fooocus" to version 2024-01-29 16:29:40 +01:00
Manuel Schmid 7fefe3a3c2
feat: add created_by again 2024-01-29 16:28:47 +01:00
Manuel Schmid c80011b1d1
fix: use correct LoRA mapping, add fallback for backwards compatibility 2024-01-29 15:45:55 +01:00
Manuel Schmid 20e53028a4
refactor: use central flag for ControlNet image count 2024-01-29 14:27:51 +01:00
Manuel Schmid c3ab9f1f30
refactor: use central flag for LoRA count 2024-01-29 14:26:56 +01:00
Manuel Schmid cbc63ebba3
feat: add enums for Performance, Steps and StepsUOV
also move MetadataSchema enum to prevent circular dependency
2024-01-28 20:01:33 +01:00
Manuel Schmid e19596c2df
feat: map basic information for scheme A1111 2024-01-28 18:04:40 +01:00
Manuel Schmid f3010313fc
wip: add metadata mapping, reading and writing
applying data after reading currently not functional for A1111
2024-01-28 05:35:44 +01:00
Manuel Schmid 051faf78b8
fix: use correct default value in metadata check for created_by 2024-01-25 23:49:25 +01:00
Manuel Schmid 20b79788a0
feat: add resolved prompts to metadata 2024-01-25 23:48:47 +01:00
Manuel Schmid d7c1f4a6aa
Merge branch 'hotfix/prevent-skipping-and-stopping-by-other-users'
# Conflicts:
#	modules/advanced_parameters.py
#	modules/async_worker.py
#	webui.py
2024-01-22 23:15:18 +01:00
Manuel Schmid 148eddf48d
Merge branch 'feature/extract-advanced-parameters' into hotfix/prevent-skipping-and-stopping-by-other-users
# Conflicts:
#	webui.py
2024-01-22 21:31:24 +01:00
Manuel Schmid 031b1f8b11
chore: cleanup pid code 2024-01-22 21:20:17 +01:00
Manuel Schmid 21f4767c65
fix: resolve circular import, patch_all now in async_worker 2024-01-22 21:14:54 +01:00
Manuel Schmid 177075ff7b
feat: extract remaining attributes, do not use globals in patch 2024-01-22 21:13:44 +01:00
Manuel Schmid f3222b0f27
feat: extract attribute controlnet_softness 2024-01-22 20:09:24 +01:00
Manuel Schmid 78d2ec8d77
chore: cleanup code 2024-01-22 20:01:43 +01:00
Manuel Schmid 4ce27aeb0f
feat: extract inpaint_ctrls attributes
debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field, inpaint_mask_upload_checkbox, invert_mask_checkbox, inpaint_erode_or_dilate
2024-01-22 19:20:04 +01:00
Manuel Schmid eb1d3938fe
feat: extract freeu_ctrls attributes
freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2
2024-01-22 19:19:40 +01:00
Manuel Schmid cfb70c0278
feat: extract attribute refiner_swap_method 2024-01-22 19:10:32 +01:00
Manuel Schmid 2d8ca41ce5
feat: extract attribute canny_high_threshold 2024-01-22 19:08:54 +01:00
Manuel Schmid ec486443ea
feat: extract attribute canny_low_threshold 2024-01-22 19:06:10 +01:00
Manuel Schmid 9f194a91fa
feat: extract attribute skipping_cn_preprocessor 2024-01-22 18:54:25 +01:00
Manuel Schmid 0bf41591a6
feat: extract attribute debugging_cn_preprocessor 2024-01-22 18:52:41 +01:00
Manuel Schmid 6289e5daea
feat: extract attribute mixing_image_prompt_and_inpaint 2024-01-22 18:51:12 +01:00
Manuel Schmid cce9871cc5
feat: extract attribute mixing_image_prompt_and_vary_upscale 2024-01-22 18:49:29 +01:00
Manuel Schmid 2ab5593d71
feat: extract attribute overwrite_upscale_strength 2024-01-22 18:46:47 +01:00
Manuel Schmid 22af976c51
feat: extract attribute overwrite_vary_strength 2024-01-22 18:45:01 +01:00
Manuel Schmid 9f4a00e868
feat: extract attribute overwrite_height 2024-01-22 18:42:54 +01:00
Manuel Schmid 2eed5a28f2
feat: extract attribute overwrite_width 2024-01-22 18:41:02 +01:00
Manuel Schmid 2b1f501462
feat: extract attribute overwrite_switch 2024-01-22 18:38:20 +01:00
Manuel Schmid df35033cc9
feat: extract attribute overwrite_step 2024-01-22 18:34:40 +01:00
Manuel Schmid 217be190bb
feat: extract attribute generate_image_grid 2024-01-22 18:23:10 +01:00
Manuel Schmid d72573aca4
feat: extract attribute scheduler_name 2024-01-22 18:06:11 +01:00
Manuel Schmid e54bad87f1
feat: extract attribute sampler_name 2024-01-22 18:00:27 +01:00
Manuel Schmid fc3da75baf
feat: extract attribute adaptive_cfg 2024-01-22 17:31:18 +01:00
Manuel Schmid 618b01764c
feat: extract attribute adm_scaler_end 2024-01-22 17:11:27 +01:00
Manuel Schmid 64dcdbbef3
feat: extract attribute adm_scaler_negative 2024-01-22 17:03:50 +01:00
Manuel Schmid 3607059224
feat: extract attribute adm_scaler_positive 2024-01-22 17:01:32 +01:00
Manuel Schmid 79a63491fe
feat: extract attribute disable_preview 2024-01-22 16:58:46 +01:00