Commit Graph

87 Commits

Manuel Schmid ba9eadbcda
feat: add metadata to images (#1940)
* feat: add metadata logging for images

inspired by https://github.com/MoonRide303/Fooocus-MRE

* feat: add config and checkbox for save_metadata_to_images

* feat: add argument disable_metadata

* feat: add support for A1111 metadata schema

cf2772fab0/modules/processing.py (L672)

* feat: add model hash support for a1111

* feat: use resolved prompts with included expansion and styles for a1111 metadata

* fix: code cleanup and resolved prompt fixes

* feat: add config metadata_created_by

* fix: use string instead of quote wrap for A1111 created_by

* fix: correctly hide/show metadata schema on app start

* fix: do not generate hashes when arg --disable-metadata is used

* refactor: rename metadata_schema to metadata_scheme

* fix: use pnginfo "parameters" instead of "Comments"

see https://github.com/RupertAvery/DiffusionToolkit/issues/202 and cf2772fab0/modules/processing.py (L939)

* feat: add resolved prompts to metadata

* fix: use correct default value in metadata check for created_by

* wip: add metadata mapping, reading and writing

applying data after reading is currently not functional for A1111

* feat: rename metadata tab and import button label

* feat: map basic information for scheme A1111

* wip: optimize handling for metadata in Gradio calls

* feat: add enums for Performance, Steps and StepsUOV

also move MetadataSchema enum to prevent circular dependency

* fix: correctly map resolution, use empty styles for A1111

* chore: code cleanup

* feat: add A1111 prompt style detection

only detects one style as Fooocus doesn't wrap {prompt} with the whole style, but has a separate prompt string for each style

* wip: add prompt style extraction for A1111 scheme

* feat: sort styles after metadata import

* refactor: use central flag for LoRA count

* refactor: use central flag for ControlNet image count

* fix: use correct LoRA mapping, add fallback for backwards compatibility

* feat: add created_by again

* feat: add prefix "Fooocus" to version

* wip: code cleanup, update todos

* fix: use correct order to read LoRA in meta parser

* wip: code cleanup, update todos

* feat: make sha256 with length 10 default

* feat: add lora handling to A1111 scheme

* feat: override existing LoRA values when importing, as keeping them would cause images to differ

* fix: correctly extract prompt style when only prompt expansion is selected

* feat: allow model / LoRA loading from subfolders

* feat: code cleanup, do not queue metadata preview on image upload

* refactor: add flag for refiner_swap_method

* feat: add metadata handling for all non-img2img parameters

* refactor: code cleanup

* chore: use str as return type in calculate_sha256

* feat: add hash cache to metadata

* chore: code cleanup

* feat: add method get_scheme to Metadata

* fix: align handling for scheme Fooocus by removing lcm lora from json parsing

* refactor: add step before parsing to set data in parser

- add constructor for MetadataSchema class
- remove showable and copyable from log output
- add functional hash cache (model hashing takes about 5 seconds, only required once per model, using hash lazy loading)

* feat: sort metadata attributes before writing to image

* feat: add translations and hint for image prompt parameters

* chore: check and remove ToDo's

* refactor: merge metadata.py into meta_parser.py

* fix: add missing refiner in A1111 parse_json

* wip: add TODO for multiline prompt style resolution

* fix: remove sorting for A1111, change performance key position

fixes https://github.com/lllyasviel/Fooocus/pull/1940#issuecomment-1924444633

* fix: add workaround for multiline prompts

* feat: add sampler mapping

* feat: prevent config reset by renaming metadata_scheme to match config options

* chore: remove remaining todos after analysis

refiner is added when set
restoring multiline prompts has been resolved by using separate parameters "raw_prompt" and "raw_negative_prompt"

* chore: specify too broad exception types

* feat: add mapping for _gpu samplers to cpu samplers

gpu samplers are less deterministic than cpu but in general similar, see https://www.reddit.com/r/comfyui/comments/15hayzo/comment/juqcpep/

* feat: add better handling for image import with empty metadata

* fix: parse adaptive_cfg as float instead of string

* chore: loosen strict type for parse_json, fix indent

* chore: make steps enums more strict

* feat: only override steps if the metadata value is not in the Steps enum, or is in the enum but the performance does not match

* fix: handle empty strings in metadata

e.g. raw negative prompt when none is set
2024-02-26 14:27:57 +01:00
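The core of the change above is a truncated sha256 model hash plus an A1111-compatible "parameters" text block in the PNG info. A minimal sketch of both mechanisms, assuming illustrative function names and a simple module-level cache (the actual Fooocus code differs in detail):

```python
# Minimal sketch (not the actual Fooocus implementation) of the behaviour
# described in the commit above: a sha256 model hash, cached per file and
# truncated to 10 characters, embedded in an A1111-style "parameters" block.
import hashlib
from PIL import Image
from PIL.PngImagePlugin import PngInfo

_hash_cache: dict[str, str] = {}  # hashing a checkpoint takes seconds, so do it once per model

def calculate_sha256(filepath: str, length: int = 10) -> str:
    if filepath not in _hash_cache:
        sha256 = hashlib.sha256()
        with open(filepath, 'rb') as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b''):
                sha256.update(chunk)
        _hash_cache[filepath] = sha256.hexdigest()
    return _hash_cache[filepath][:length]

def save_with_a1111_metadata(image: Image.Image, path: str, positive: str,
                             negative: str, steps: int, sampler: str,
                             cfg: float, seed: int, model_path: str) -> None:
    # A1111 stores one newline-separated blob under the PNG key "parameters"
    # (not "Comments"), which is what tools like DiffusionToolkit expect.
    parameters = (
        f"{positive}\n"
        f"Negative prompt: {negative}\n"
        f"Steps: {steps}, Sampler: {sampler}, CFG scale: {cfg}, Seed: {seed}, "
        f"Size: {image.width}x{image.height}, "
        f"Model hash: {calculate_sha256(model_path)}"
    )
    info = PngInfo()
    info.add_text('parameters', parameters)
    image.save(path, pnginfo=info)
```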
MindOfMatter 18f9f7dc31
feat: make lora number editable in config (#2215)
* Initial commit

* Update README.md

* sync with original main Fooocus repo

* update with my gitignore setup

* add max lora config feature

* Revert "add max lora config feature"

This reverts commit cfe7463fe2.

* add max loras config feature

* Update README.md

* Update .gitignore

* update

* merge

* revert

* refactor: rename default_loras_max_number to default_max_lora_number, validate config for int

* fix: add missing patch_all call and imports again

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-25 21:12:26 +01:00
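A rough sketch of the config handling this PR adds: the LoRA slot count is read from the key default_max_lora_number and validated as a positive integer, falling back otherwise. Function name and fallback value are illustrative assumptions:

```python
# Illustrative sketch only: read the configurable LoRA slot count and fall
# back to a default when the configured value is not a usable integer.
DEFAULT_MAX_LORA_NUMBER = 5  # assumed fallback

def get_max_lora_number(config: dict) -> int:
    value = config.get('default_max_lora_number', DEFAULT_MAX_LORA_NUMBER)
    if isinstance(value, int) and value >= 1:
        return value
    print(f'Invalid value {value!r} for default_max_lora_number, '
          f'falling back to {DEFAULT_MAX_LORA_NUMBER}.')
    return DEFAULT_MAX_LORA_NUMBER
```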
MindOfMatter 468d704b29
feat: add button to enable LoRAs (#2210)
* Initial commit

* Update README.md

* sync with original main Fooocus repo

* update with my gitignore setup

* add max lora config feature

* Revert "add max lora config feature"

This reverts commit cfe7463fe2.

* add lora enabler feature

* Update README.md

* Update .gitignore

* update

* merge

* revert changes

* revert

* feat: change width of LoRA columns

* refactor: rename lora_enable to lora_enabled, optimize code

---------

Co-authored-by: Manuel Schmid <manuel.schmid@odt.net>
2024-02-25 19:59:28 +01:00
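What the enable toggle boils down to, sketched under the assumption that each LoRA row travels as an (enabled, filename, weight) tuple; only ticked rows reach the pipeline:

```python
# Illustrative only: keep just the LoRA rows whose enable checkbox is ticked
# before handing them to the generation pipeline.
def filter_enabled_loras(loras: list[tuple[bool, str, float]]) -> list[tuple[str, float]]:
    return [(name, weight) for enabled, name, weight in loras
            if enabled and name != 'None']
```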
Manuel Schmid 26601a99d1
Merge branch 'feature/add-metadata-to-files' 2024-02-18 16:27:29 +01:00
Manuel Schmid 692a2e4369
Merge branch 'feature/add-metadata-to-files' of github.com:mashb1t/Fooocus into feature/add-metadata-to-files 2024-02-18 16:15:56 +01:00
Manuel Schmid f93dd6edcc
feat: only override steps if the metadata value is not in the Steps enum, or is in the enum but the performance does not match 2024-02-18 16:15:39 +01:00
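A hedged sketch of that rule, assuming a Steps enum that maps each performance preset to its default step count: imported steps only override the preset when they are non-standard, or standard but belonging to a different performance:

```python
# Sketch under assumed enum values; the real Fooocus enums may differ.
from enum import IntEnum

class Steps(IntEnum):
    QUALITY = 60        # assumed default step counts per performance preset
    SPEED = 30
    EXTREME_SPEED = 8

def should_override_steps(imported_steps: int, imported_performance: str) -> bool:
    if imported_steps not in [s.value for s in Steps]:
        return True  # custom step count: always keep it
    # standard value: only keep it when it belongs to a different performance
    return Steps(imported_steps).name != imported_performance.upper().replace(' ', '_')
```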
Manuel Schmid b281375ce2
feat: add exif data processing (saving and loading) 2024-02-04 23:29:48 +01:00
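For formats without PNG text chunks, the same "parameters" string can travel via EXIF. A minimal sketch of saving and loading it through the UserComment tag, which is an illustrative choice rather than necessarily the tag Fooocus writes:

```python
# Illustrative EXIF round trip for the metadata string; the tag choice is an assumption.
from PIL import Image

USER_COMMENT = 0x9286  # standard EXIF UserComment tag

def save_with_exif(image: Image.Image, path: str, parameters: str) -> None:
    exif = image.getexif()
    exif[USER_COMMENT] = parameters
    image.save(path, exif=exif)

def load_parameters(path: str) -> str | None:
    with Image.open(path) as image:
        return image.getexif().get(USER_COMMENT)
```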
Manuel Schmid ceefba9b69
Merge branch 'feature/add-metadata-to-files'
# Conflicts:
#	language/en.json
#	modules/async_worker.py
#	modules/config.py
#	modules/flags.py
#	modules/meta_parser.py
#	modules/private_logger.py
#	modules/util.py
#	webui.py
2024-02-04 21:09:24 +01:00
Manuel Schmid 832441e86d
chore: loosen strict type for parse_json, fix indent 2024-02-04 19:26:10 +01:00
Manuel Schmid c104d58f76
fix: parse adaptive_cfg as float instead of string 2024-02-04 19:25:20 +01:00
Manuel Schmid dfb48fd754
feat: add better handling for image import with empty metadata 2024-02-04 19:24:45 +01:00
Manuel Schmid c668228fe8
chore: specify too broad exception types 2024-02-04 01:31:24 +01:00
Manuel Schmid 8af73e622f
chore: remove remaining todos after analysis
refiner is added when set
restoring multiline prompts has been resolved by using separate parameters "raw_prompt" and "raw_negative_prompt"
2024-02-04 00:44:26 +01:00
Manuel Schmid 63403d614e
feat: add sampler mapping 2024-02-02 23:44:47 +01:00
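A sketch of what such a sampler mapping can look like: A1111 sampler labels translated to ComfyUI-style sampler names, with the less deterministic *_gpu variants folded into their cpu counterparts. The entries are examples, not the complete table shipped with Fooocus:

```python
# Example entries only; the full mapping in Fooocus is larger.
A1111_TO_FOOOCUS_SAMPLER = {
    'DPM++ 2M Karras': 'dpmpp_2m',
    'DPM++ 2M SDE Karras': 'dpmpp_2m_sde',
    'Euler': 'euler',
    'Euler a': 'euler_ancestral',
}

GPU_TO_CPU_SAMPLER = {
    'dpmpp_sde_gpu': 'dpmpp_sde',
    'dpmpp_2m_sde_gpu': 'dpmpp_2m_sde',
    'dpmpp_3m_sde_gpu': 'dpmpp_3m_sde',
}

def map_sampler(name: str) -> str:
    name = A1111_TO_FOOOCUS_SAMPLER.get(name, name)
    return GPU_TO_CPU_SAMPLER.get(name, name)
```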
Manuel Schmid ed4a958da8
fix: add workaround for multiline prompts 2024-02-02 22:04:28 +01:00
Manuel Schmid 349556bfa6
fix: remove sorting for A1111, change performance key position
fixes https://github.com/lllyasviel/Fooocus/pull/1940#issuecomment-1924444633
2024-02-02 20:58:16 +01:00
Manuel Schmid 9aa82aa80a
fix: add missing refiner in A1111 parse_json 2024-02-02 01:57:33 +01:00
Manuel Schmid f745d40687
refactor: merge metadata.py into meta_parser.py 2024-02-02 01:55:32 +01:00
Manuel Schmid 6b9c0bd448
refactor: code cleanup 2024-01-31 01:35:51 +01:00
Manuel Schmid 9bdb65ec5d
feat: add metadata handling for all non-img2img parameters 2024-01-31 01:18:09 +01:00
Manuel Schmid dcc4874455
feat: override existing LoRA values when importing, as keeping them would cause images to differ 2024-01-29 21:57:02 +01:00
Manuel Schmid 2656356206
fix: use correct order to read LoRA in meta parser 2024-01-29 18:17:51 +01:00
Manuel Schmid c80011b1d1
fix: use correct LoRA mapping, add fallback for backwards compatibility 2024-01-29 15:45:55 +01:00
Manuel Schmid c3ab9f1f30
refactor: use central flag for LoRA count 2024-01-29 14:26:56 +01:00
Manuel Schmid f3010313fc
wip: add metadata mapping, reading and writing
applying data after reading is currently not functional for A1111
2024-01-28 05:35:44 +01:00
Manuel Schmid 7185abb8ba
Merge branch 'main_upstream'
# Conflicts:
#	launch.py
#	ldm_patched/modules/args_parser.py
#	modules/config.py
#	presets/anime.json
#	presets/default.json
#	presets/lcm.json
#	presets/realistic.json
2024-01-27 21:09:08 +01:00
Manuel Schmid c7a5638f54
Merge remote-tracking branch 'upstream/main'
# Conflicts:
#	webui.py
2023-12-30 14:40:49 +01:00
lllyasviel 2f6ebbf876 some fix for previous PRs 2023-12-28 08:07:43 -08:00
Manuel Schmid 3ba59df559
add default_overwrite_step handling for metadata and Gradio
allows turbo preset switching to set default_overwrite_step correctly
2023-12-24 23:31:46 +01:00
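A hedged sketch of that overwrite handling, assuming the usual convention that -1 means "no overwrite, use the preset default":

```python
# Illustrative only: resolve the effective step count from an overwrite value.
def resolve_steps(default_overwrite_step: int, preset_default_steps: int) -> int:
    # -1 (assumed sentinel) keeps the preset default; any positive value wins.
    return default_overwrite_step if default_overwrite_step > 0 else preset_default_steps
```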
Manuel Schmid e72850de70
download missing models from preset 2023-12-24 13:09:20 +01:00
Manuel Schmid 2e23e2b7b1
code cleanup 2023-12-24 11:53:06 +01:00
Manuel Schmid f1bee4b103
add special handling for default_styles and default_aspect_ratio 2023-12-24 11:25:07 +01:00
Manuel Schmid 2770a40dc1
use default config as fallback value 2023-12-23 21:38:22 +01:00
Manuel Schmid 891a1acb62
add LoRA handling 2023-12-23 19:57:56 +01:00
Manuel Schmid f56e3eb3b0
add preset selection
uses meta parsing to set presets in user session (UI elements only)
2023-12-23 19:30:49 +01:00
lllyasviel 81107298a8
minor fix (#1532) 2023-12-20 19:58:53 -08:00
lllyasviel f7bb578a14
2.1.854
* Add a button to copy parameters to clipboard in log.
* Allow users to load parameters directly by pasting parameters to prompt.
2023-12-20 19:52:38 -08:00