Merge pull request #2406 from lllyasviel/develop

release 2.2.0
This commit is contained in:
Manuel Schmid 2024-03-02 16:27:54 +01:00 committed by GitHub
commit 4945fc9962
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
37 changed files with 1857 additions and 474 deletions

1
.dockerignore Normal file

@ -0,0 +1 @@
.idea


@ -1,18 +0,0 @@
---
name: Bug report
about: Describe a problem
title: ''
labels: ''
assignees: ''
---
**Read Troubleshoot**
[x] I confirm that I have read the [Troubleshoot](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md) guide before making this issue.
**Describe the problem**
A clear and concise description of what the bug is.
**Full Console Log**
Paste the **full** console log here. You will make our job easier if you give a **full** log.

106
.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file

@ -0,0 +1,106 @@
name: Bug Report
description: You think something is broken in Fooocus
title: "[Bug]: "
labels: ["bug", "triage"]
body:
- type: markdown
attributes:
value: |
> The title of the bug report should be short and descriptive.
> Use relevant keywords for searchability.
> Do not leave it blank, but also do not put an entire error log in it.
- type: checkboxes
attributes:
label: Checklist
description: |
Please perform basic debugging to see if your configuration is the cause of the issue.
Basic debug procedure:
 1. Update Fooocus - sometimes things just need to be updated
 2. Back up and remove your config.txt - check if the issue is caused by a bad configuration
 3. Try a fresh installation of Fooocus in a different directory - see if a clean installation solves the issue
Before making an issue report, please check that the issue hasn't been reported recently.
options:
- label: The issue exists on a clean installation of Fooocus
- label: The issue exists in the current version of Fooocus
- label: The issue has not been reported before recently
- label: The issue has been reported before but has not been fixed yet
- type: markdown
attributes:
value: |
> Please fill out this form with as much information as possible. Don't forget to answer the "What browsers" question, and provide screenshots if possible
- type: textarea
id: what-did
attributes:
label: What happened?
description: Tell us what happened in a very clear and simple way
placeholder: |
image generation is not working as intended.
validations:
required: true
- type: textarea
id: steps
attributes:
label: Steps to reproduce the problem
description: Please provide us with precise step by step instructions on how to reproduce the bug
placeholder: |
1. Go to ...
2. Press ...
3. ...
validations:
required: true
- type: textarea
id: what-should
attributes:
label: What should have happened?
description: Tell us what you think the normal behavior should be
placeholder: |
Fooocus should ...
validations:
required: true
- type: dropdown
id: browsers
attributes:
label: What browsers do you use to access Fooocus?
multiple: true
options:
- Mozilla Firefox
- Google Chrome
- Brave
- Apple Safari
- Microsoft Edge
- Android
- iOS
- Other
- type: dropdown
id: hosting
attributes:
label: Where are you running Fooocus?
multiple: false
options:
- Locally
- Locally with virtualization (e.g. Docker)
- Cloud (Google Colab)
- Cloud (other)
- type: input
id: operating-system
attributes:
label: What operating system are you using?
placeholder: |
Windows 10
- type: textarea
id: logs
attributes:
label: Console logs
description: Please provide **full** cmd/terminal logs from the moment you started the UI until after the bug occurred. If it's very long, provide a link to pastebin or a similar service.
render: Shell
validations:
required: true
- type: textarea
id: misc
attributes:
label: Additional information
description: |
Please provide us with any relevant additional info or context.
Examples:
 I have updated my GPU driver recently.

5
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Ask a question
url: https://github.com/lllyasviel/Fooocus/discussions/new?category=q-a
about: Ask the community for help


@ -1,14 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the idea you'd like**
A clear and concise description of what you want to happen.


@ -0,0 +1,40 @@
name: Feature request
description: Suggest an idea for this project
title: "[Feature Request]: "
labels: ["enhancement", "triage"]
body:
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an issue already exists for the feature you want, and check that it hasn't already been implemented in a recent build/commit.
options:
- label: I have searched the existing issues and checked the recent builds/commits
required: true
- type: markdown
attributes:
value: |
*Please fill this form with as much information as possible, provide screenshots and/or illustrations of the feature if possible*
- type: textarea
id: feature
attributes:
label: What would your feature do?
description: Tell us about your feature in a very clear and simple way, and what problem it would solve
validations:
required: true
- type: textarea
id: workflow
attributes:
label: Proposed workflow
description: Please provide us with step by step information on how you'd like the feature to be accessed and used
value: |
1. Go to ....
2. Press ....
3. ...
validations:
required: true
- type: textarea
id: misc
attributes:
label: Additional information
description: Add any other context or screenshots about the feature request here.

1
.gitignore vendored

@ -51,3 +51,4 @@ user_path_config-deprecated.txt
/package-lock.json
/.coverage*
/auth.json
.DS_Store

29
Dockerfile Normal file

@ -0,0 +1,29 @@
FROM nvidia/cuda:12.3.1-base-ubuntu22.04
ENV DEBIAN_FRONTEND noninteractive
ENV CMDARGS --listen
RUN apt-get update -y && \
apt-get install -y curl libgl1 libglib2.0-0 python3-pip python-is-python3 git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY requirements_docker.txt requirements_versions.txt /tmp/
RUN pip install --no-cache-dir -r /tmp/requirements_docker.txt -r /tmp/requirements_versions.txt && \
rm -f /tmp/requirements_docker.txt /tmp/requirements_versions.txt
RUN pip install --no-cache-dir xformers==0.0.22 --no-dependencies
RUN curl -fsL -o /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2 https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64 && \
chmod +x /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2
RUN adduser --disabled-password --gecos '' user && \
mkdir -p /content/app /content/data
COPY entrypoint.sh /content/
RUN chown -R user:user /content
WORKDIR /content
USER user
RUN git clone https://github.com/lllyasviel/Fooocus /content/app
RUN mv /content/app/models /content/app/models.org
CMD [ "sh", "-c", "/content/entrypoint.sh ${CMDARGS}" ]


@ -1,5 +1,7 @@
import ldm_patched.modules.args_parser as args_parser
import os
from tempfile import gettempdir
args_parser.parser.add_argument("--share", action='store_true', help="Set whether to share on Gradio.")
args_parser.parser.add_argument("--preset", type=str, default=None, help="Apply specified UI preset.")
@ -18,7 +20,10 @@ args_parser.parser.add_argument("--disable-image-log", action='store_true',
help="Prevent writing images and logs to hard drive.")
args_parser.parser.add_argument("--disable-analytics", action='store_true',
help="Disables analytics for Gradio", default=False)
help="Disables analytics for Gradio.")
args_parser.parser.add_argument("--disable-metadata", action='store_true',
help="Disables saving metadata to images.")
args_parser.parser.add_argument("--disable-preset-download", action='store_true',
help="Disables downloading models for presets", default=False)
@ -40,7 +45,11 @@ args_parser.args.always_offload_from_vram = not args_parser.args.disable_offload
if args_parser.args.disable_analytics:
import os
os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"
if args_parser.args.disable_in_browser:
args_parser.args.in_browser = False
if args_parser.args.temp_path is None:
args_parser.args.temp_path = os.path.join(gettempdir(), 'Fooocus')
args = args_parser.args

38
docker-compose.yml Normal file

@ -0,0 +1,38 @@
version: '3.9'
volumes:
fooocus-data:
services:
app:
build: .
image: fooocus
ports:
- "7865:7865"
environment:
- CMDARGS=--listen # Arguments for launch.py.
- DATADIR=/content/data # Directory which stores the models and outputs dirs
- config_path=/content/data/config.txt
- config_example_path=/content/data/config_modification_tutorial.txt
- path_checkpoints=/content/data/models/checkpoints/
- path_loras=/content/data/models/loras/
- path_embeddings=/content/data/models/embeddings/
- path_vae_approx=/content/data/models/vae_approx/
- path_upscale_models=/content/data/models/upscale_models/
- path_inpaint=/content/data/models/inpaint/
- path_controlnet=/content/data/models/controlnet/
- path_clip_vision=/content/data/models/clip_vision/
- path_fooocus_expansion=/content/data/models/prompt_expansion/fooocus_expansion/
- path_outputs=/content/app/outputs/ # Warning: If it is not located under '/content/app', you can't see the history log!
volumes:
- fooocus-data:/content/data
#- ./models:/import/models # Once you import files, you don't need to mount again.
#- ./outputs:/import/outputs # Once you import files, you don't need to mount again.
tty: true
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['0']
capabilities: [compute, utility]

66
docker.md Normal file

@ -0,0 +1,66 @@
# Fooocus on Docker
The docker image is based on NVIDIA CUDA 12.3 and PyTorch 2.0, see [Dockerfile](Dockerfile) and [requirements_docker.txt](requirements_docker.txt) for details.
## Quick start
**This is just an easy way to test Fooocus. Please find more information in the [notes](#notes).**
1. Clone this repository
2. Build the image with `docker compose build` (this takes some time)
3. Run the container with `docker compose up`
When you see the message `Use the app with http://0.0.0.0:7865/` in the console, you can access the URL in your browser.
Your models and outputs are stored in the `fooocus-data` volume, which, depending on the OS, is typically stored under `/var/lib/docker/volumes/`.
## Details
### Update the container manually
When you keep using `docker compose up`, the container is not updated to the latest version of Fooocus automatically.
Run `git pull` before executing `docker compose build --no-cache` to build an image with the latest Fooocus version.
You can then start it with `docker compose up`.
### Import models, outputs
If you want to import files from models or the outputs folder, you can uncomment the following settings in the [docker-compose.yml](docker-compose.yml):
```
#- ./models:/import/models # Once you import files, you don't need to mount again.
#- ./outputs:/import/outputs # Once you import files, you don't need to mount again.
```
After running `docker compose up`, your files will be copied into `/content/data/models` and `/content/data/outputs`.
Since `/content/data` is a persistent volume folder, your files will persist even when you re-run `docker compose up --build` without the above volume settings.
### Paths inside the container
|Path|Details|
|-|-|
|/content/app|The folder where the application is stored|
|/content/app/models.org|Original 'models' folder.<br>Its files are copied to '/content/app/models', which is symlinked to '/content/data/models', every time the container boots. (Existing files are not overwritten.)|
|/content/data|Persistent volume mount point|
|/content/data/models|'/content/app/models' is symlinked to this folder|
|/content/data/outputs|'/content/app/outputs' is symlinked to this folder|
### Environments
You can change `config.txt` parameters by using environment variables.
**Environment variables take priority over the values defined in `config.txt`, and the resulting values are written to `config_modification_tutorial.txt`.**
The following Docker-specific environment variables are used by 'entrypoint.sh':
|Environment|Details|
|-|-|
|DATADIR|Location of the persistent data directory ('/content/data' by default)|
|CMDARGS|Arguments for [launch.py](launch.py), which is called by [entrypoint.sh](entrypoint.sh)|
|config_path|'config.txt' location|
|config_example_path|'config_modification_tutorial.txt' location|
You can also set the same JSON key names and values explained in 'config_modification_tutorial.txt' as environment variables.
See the examples in [docker-compose.yml](docker-compose.yml).
## Notes
- Please keep 'path_outputs' under '/content/app'. Otherwise, you may get an error when you open the history log.
- Docker on Mac/Windows still suffers from slow volume access when you use "bind mount" volumes. Please refer to [this article](https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose) and avoid "bind mount" volumes.
- The MPS backend (Metal Performance Shaders, Apple Silicon M1/M2/etc.) is not yet supported in Docker, see https://github.com/pytorch/pytorch/issues/81224
- You can also use `docker compose up -d` to start the container detached and connect to the logs with `docker compose logs -f`. This way you can also close the terminal and keep the container running.

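The environment-variable precedence described in the Environments section above can be illustrated with a small sketch. This is a minimal illustration of the rule (environment over `config.txt`), not the actual Fooocus implementation; the key name `default_cfg_scale` is just an example.
```
import os

def read_config_value(key, default, config_dict):
    # Environment variables take priority over values from config.txt.
    env = os.getenv(key)
    if env is not None:
        return env  # note: arrives as a string; type coercion is out of scope here
    return config_dict.get(key, default)

config = {"default_cfg_scale": 4.0}
os.environ["default_cfg_scale"] = "7.0"
print(read_config_value("default_cfg_scale", 7.0, config))  # -> '7.0' (env wins)
```
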
33
entrypoint.sh Executable file

@ -0,0 +1,33 @@
#!/bin/bash
ORIGINALDIR=/content/app
# Use predefined DATADIR if it is defined, otherwise default to /content/data
[[ x"${DATADIR}" == "x" ]] && DATADIR=/content/data
# Make persistent dir from original dir
function mklink () {
mkdir -p $DATADIR/$1
ln -s $DATADIR/$1 $ORIGINALDIR
}
# Copy old files from import dir
function import () {
(test -d /import/$1 && cd /import/$1 && cp -Rpn . $DATADIR/$1/)
}
cd $ORIGINALDIR
# models
mklink models
# Copy original files
(cd $ORIGINALDIR/models.org && cp -Rpn . $ORIGINALDIR/models/)
# Import old files
import models
# outputs
mklink outputs
# Import old files
import outputs
# Start application
python launch.py $*

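For readers less comfortable with shell, here is a rough Python equivalent of the `mklink`/`import` helpers above, a sketch only; the container actually runs the bash script. `cp -Rpn` copies recursively, preserving metadata and never overwriting existing files, which the sketch mimics.
```
import os
import shutil

DATADIR = os.environ.get("DATADIR", "/content/data")
ORIGINALDIR = "/content/app"

def mklink(name):
    # Create the persistent dir and expose it inside the app dir via a symlink
    # (raises if the link already exists, like a bare 'ln -s').
    os.makedirs(os.path.join(DATADIR, name), exist_ok=True)
    os.symlink(os.path.join(DATADIR, name), os.path.join(ORIGINALDIR, name))

def import_dir(name):
    # Copy files from /import/<name> without overwriting existing ones (cp -Rpn).
    src = os.path.join("/import", name)
    if not os.path.isdir(src):
        return
    for root, _, files in os.walk(src):
        rel = os.path.relpath(root, src)
        dst_root = os.path.join(DATADIR, name, rel)
        os.makedirs(dst_root, exist_ok=True)
        for f in files:
            dst = os.path.join(dst_root, f)
            if not os.path.exists(dst):
                shutil.copy2(os.path.join(root, f), dst)
```
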

@ -112,6 +112,9 @@ class FooocusExpansion:
max_token_length = 75 * int(math.ceil(float(current_token_length) / 75.0))
max_new_tokens = max_token_length - current_token_length
if max_new_tokens == 0:
return prompt[:-1]
# https://huggingface.co/blog/introducing-csearch
# https://huggingface.co/docs/transformers/generation_strategies
features = self.model.generate(**tokenized_kwargs,

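The hunk above (apparently from `extras/expansion.py`) rounds the prompt length up to the next multiple of 75 tokens and now bails out early when there is no room left to generate. A quick sketch of that arithmetic, with made-up token counts:
```
import math

def remaining_budget(current_token_length):
    # Round the budget up to the next multiple of 75 (CLIP-style 75-token chunks).
    max_token_length = 75 * int(math.ceil(float(current_token_length) / 75.0))
    return max_token_length - current_token_length

print(remaining_budget(70))  # -> 5   (fits in the current 75-token chunk)
print(remaining_budget(75))  # -> 0   (no room: generate() is skipped)
print(remaining_budget(80))  # -> 70  (a new chunk was opened)
```
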

@ -1,27 +1,26 @@
import cv2
import numpy as np
import modules.advanced_parameters as advanced_parameters
def centered_canny(x: np.ndarray):
def centered_canny(x: np.ndarray, canny_low_threshold, canny_high_threshold):
assert isinstance(x, np.ndarray)
assert x.ndim == 2 and x.dtype == np.uint8
y = cv2.Canny(x, int(advanced_parameters.canny_low_threshold), int(advanced_parameters.canny_high_threshold))
y = cv2.Canny(x, int(canny_low_threshold), int(canny_high_threshold))
y = y.astype(np.float32) / 255.0
return y
def centered_canny_color(x: np.ndarray):
def centered_canny_color(x: np.ndarray, canny_low_threshold, canny_high_threshold):
assert isinstance(x, np.ndarray)
assert x.ndim == 3 and x.shape[2] == 3
result = [centered_canny(x[..., i]) for i in range(3)]
result = [centered_canny(x[..., i], canny_low_threshold, canny_high_threshold) for i in range(3)]
result = np.stack(result, axis=2)
return result
def pyramid_canny_color(x: np.ndarray):
def pyramid_canny_color(x: np.ndarray, canny_low_threshold, canny_high_threshold):
assert isinstance(x, np.ndarray)
assert x.ndim == 3 and x.shape[2] == 3
@ -31,7 +30,7 @@ def pyramid_canny_color(x: np.ndarray):
for k in [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
Hs, Ws = int(H * k), int(W * k)
small = cv2.resize(x, (Ws, Hs), interpolation=cv2.INTER_AREA)
edge = centered_canny_color(small)
edge = centered_canny_color(small, canny_low_threshold, canny_high_threshold)
if acc_edge is None:
acc_edge = edge
else:
@ -54,11 +53,11 @@ def norm255(x, low=4, high=96):
return x * 255.0
def canny_pyramid(x):
def canny_pyramid(x, canny_low_threshold, canny_high_threshold):
# For some reason, SAI's Control-LoRA Canny seems to be trained on canny maps with non-standard resolutions.
# So we use a pyramid over all resolutions to avoid missing structures at any specific resolution.
color_canny = pyramid_canny_color(x)
color_canny = pyramid_canny_color(x, canny_low_threshold, canny_high_threshold)
result = np.sum(color_canny, axis=2)
return norm255(result, low=1, high=99).clip(0, 255).astype(np.uint8)

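After this refactor the canny thresholds travel as explicit arguments instead of being read from the removed `modules.advanced_parameters` global, so the functions are self-contained. A usage sketch follows; the threshold values are illustrative, not mandated by the source.
```
import numpy as np
from extras.preprocessors import canny_pyramid

# A dummy RGB image; in practice this is the user's control image.
image = np.zeros((512, 512, 3), dtype=np.uint8)

# Thresholds are now passed per call instead of via a module-level global.
edges = canny_pyramid(image, canny_low_threshold=64, canny_high_threshold=128)
print(edges.shape, edges.dtype)  # (512, 512) uint8
```
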

@ -1 +1 @@
version = '2.1.865'
version = '2.2.0'


@ -48,6 +48,8 @@
"Describing what you do not want to see.": "Describing what you do not want to see.",
"Random": "Random",
"Seed": "Seed",
"Disable seed increment": "Disable seed increment",
"Disable automatic seed increment when image number is > 1.": "Disable automatic seed increment when image number is > 1.",
"\ud83d\udcda History Log": "\uD83D\uDCDA History Log",
"Image Style": "Image Style",
"Fooocus V2": "Fooocus V2",
@ -342,6 +344,10 @@
"Forced Overwrite of Denoising Strength of \"Vary\"": "Forced Overwrite of Denoising Strength of \"Vary\"",
"Set as negative number to disable. For developer debugging.": "Set as negative number to disable. For developer debugging.",
"Forced Overwrite of Denoising Strength of \"Upscale\"": "Forced Overwrite of Denoising Strength of \"Upscale\"",
"Disable Preview": "Disable Preview",
"Disable preview during generation.": "Disable preview during generation.",
"Disable Intermediate Results": "Disable Intermediate Results",
"Disable intermediate results during generation, only show final gallery.": "Disable intermediate results during generation, only show final gallery.",
"Inpaint Engine": "Inpaint Engine",
"v1": "v1",
"Version of Fooocus inpaint model": "Version of Fooocus inpaint model",
@ -368,5 +374,12 @@
"* Powered by Fooocus Inpaint Engine (beta)": "* Powered by Fooocus Inpaint Engine (beta)",
"Fooocus Enhance": "Fooocus Enhance",
"Fooocus Cinematic": "Fooocus Cinematic",
"Fooocus Sharp": "Fooocus Sharp"
"Fooocus Sharp": "Fooocus Sharp",
"Drag any image generated by Fooocus here": "Drag any image generated by Fooocus here",
"Metadata": "Metadata",
"Apply Metadata": "Apply Metadata",
"Metadata Scheme": "Metadata Scheme",
"Image Prompt parameters are not included. Use a1111 for compatibility with Civitai.": "Image Prompt parameters are not included. Use a1111 for compatibility with Civitai.",
"fooocus (json)": "fooocus (json)",
"a1111 (plain text)": "a1111 (plain text)"
}


@ -68,7 +68,6 @@ vae_approx_filenames = [
'https://huggingface.co/lllyasviel/misc/resolve/main/xl-to-v1_interposer-v3.1.safetensors')
]
def ini_args():
from args_manager import args
return args
@ -101,9 +100,9 @@ def download_models():
return
if not args.always_download_new_model:
if not os.path.exists(os.path.join(config.path_checkpoints, config.default_base_model_name)):
if not os.path.exists(os.path.join(config.paths_checkpoints[0], config.default_base_model_name)):
for alternative_model_name in config.previous_default_models:
if os.path.exists(os.path.join(config.path_checkpoints, alternative_model_name)):
if os.path.exists(os.path.join(config.paths_checkpoints[0], alternative_model_name)):
print(f'You do not have [{config.default_base_model_name}] but you have [{alternative_model_name}].')
print(f'Fooocus will use [{alternative_model_name}] to avoid downloading new models, '
f'but you are not using latest models.')
@ -113,11 +112,11 @@ def download_models():
break
for file_name, url in config.checkpoint_downloads.items():
load_file_from_url(url=url, model_dir=config.path_checkpoints, file_name=file_name)
load_file_from_url(url=url, model_dir=config.paths_checkpoints[0], file_name=file_name)
for file_name, url in config.embeddings_downloads.items():
load_file_from_url(url=url, model_dir=config.path_embeddings, file_name=file_name)
for file_name, url in config.lora_downloads.items():
load_file_from_url(url=url, model_dir=config.path_loras, file_name=file_name)
load_file_from_url(url=url, model_dir=config.paths_loras[0], file_name=file_name)
return


@ -100,8 +100,7 @@ vram_group.add_argument("--always-high-vram", action="store_true")
vram_group.add_argument("--always-normal-vram", action="store_true")
vram_group.add_argument("--always-low-vram", action="store_true")
vram_group.add_argument("--always-no-vram", action="store_true")
vram_group.add_argument("--always-cpu", action="store_true")
vram_group.add_argument("--always-cpu", type=int, nargs="?", metavar="CPU_NUM_THREADS", const=-1)
parser.add_argument("--always-offload-from-vram", action="store_true")
parser.add_argument("--pytorch-deterministic", action="store_true")

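`--always-cpu` changes here from a boolean flag to an optional integer: a bare `--always-cpu` yields the sentinel `-1` via `const`, while `--always-cpu 8` requests 8 threads. A minimal argparse sketch of the same pattern:
```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--always-cpu", type=int, nargs="?", metavar="CPU_NUM_THREADS", const=-1)

print(parser.parse_args([]).always_cpu)                     # None (flag absent)
print(parser.parse_args(["--always-cpu"]).always_cpu)       # -1   (bare flag, const)
print(parser.parse_args(["--always-cpu", "8"]).always_cpu)  # 8    (explicit thread count)
```
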

@ -60,6 +60,9 @@ except:
pass
if args.always_cpu:
if args.always_cpu > 0:
torch.set_num_threads(args.always_cpu)
print(f"Running on {torch.get_num_threads()} CPU threads")
cpu_state = CPUState.CPU
def is_intel_xpu():

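On the consuming side, a positive value caps PyTorch's intra-op CPU parallelism, while `-1` (the bare-flag sentinel) leaves the default alone because of the `> 0` guard. A sketch, assuming torch is installed:
```
import torch

always_cpu = 8  # e.g. parsed from '--always-cpu 8'
if always_cpu > 0:
    torch.set_num_threads(always_cpu)  # cap intra-op CPU parallelism
print(f"Running on {torch.get_num_threads()} CPU threads")
```
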

@ -1,33 +0,0 @@
disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name, \
scheduler_name, generate_image_grid, overwrite_step, overwrite_switch, overwrite_width, overwrite_height, \
overwrite_vary_strength, overwrite_upscale_strength, \
mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint, \
debugging_cn_preprocessor, skipping_cn_preprocessor, controlnet_softness, canny_low_threshold, canny_high_threshold, \
refiner_swap_method, \
freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2, \
debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field, \
inpaint_mask_upload_checkbox, invert_mask_checkbox, inpaint_erode_or_dilate = [None] * 35
def set_all_advanced_parameters(*args):
global disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name, \
scheduler_name, generate_image_grid, overwrite_step, overwrite_switch, overwrite_width, overwrite_height, \
overwrite_vary_strength, overwrite_upscale_strength, \
mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint, \
debugging_cn_preprocessor, skipping_cn_preprocessor, controlnet_softness, canny_low_threshold, canny_high_threshold, \
refiner_swap_method, \
freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2, \
debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field, \
inpaint_mask_upload_checkbox, invert_mask_checkbox, inpaint_erode_or_dilate
disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name, \
scheduler_name, generate_image_grid, overwrite_step, overwrite_switch, overwrite_width, overwrite_height, \
overwrite_vary_strength, overwrite_upscale_strength, \
mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint, \
debugging_cn_preprocessor, skipping_cn_preprocessor, controlnet_softness, canny_low_threshold, canny_high_threshold, \
refiner_swap_method, \
freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2, \
debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field, \
inpaint_mask_upload_checkbox, invert_mask_checkbox, inpaint_erode_or_dilate = args
return


@ -1,11 +1,15 @@
import threading
from modules.patch import PatchSettings, patch_settings, patch_all
patch_all()
class AsyncTask:
def __init__(self, args):
self.args = args
self.yields = []
self.results = []
self.last_stop = False
self.processing = False
async_tasks = []
@ -14,6 +18,7 @@ async_tasks = []
def worker():
global async_tasks
import os
import traceback
import math
import numpy as np
@ -31,17 +36,22 @@ def worker():
import extras.preprocessors as preprocessors
import modules.inpaint_worker as inpaint_worker
import modules.constants as constants
import modules.advanced_parameters as advanced_parameters
import extras.ip_adapter as ip_adapter
import extras.face_crop
import fooocus_version
import args_manager
from modules.sdxl_styles import apply_style, apply_wildcards, fooocus_expansion
from modules.sdxl_styles import apply_style, apply_wildcards, fooocus_expansion, apply_arrays
from modules.private_logger import log
from extras.expansion import safe_str
from modules.util import remove_empty_str, HWC3, resize_image, \
get_image_shape_ceil, set_image_shape_ceil, get_shape_ceil, resample_image, erode_or_dilate, ordinal_suffix
from modules.upscaler import perform_upscale
from modules.flags import Performance
from modules.meta_parser import get_metadata_parser, MetadataScheme
pid = os.getpid()
print(f'Started worker with PID {pid}')
try:
async_gradio_app = shared.gradio_root
@ -69,9 +79,6 @@ def worker():
return
def build_image_wall(async_task):
if not advanced_parameters.generate_image_grid:
return
results = async_task.results
if len(results) < 2:
@ -111,10 +118,19 @@ def worker():
async_task.results = async_task.results + [wall]
return
def apply_enabled_loras(loras):
enabled_loras = []
for lora_enabled, lora_model, lora_weight in loras:
if lora_enabled:
enabled_loras.append([lora_model, lora_weight])
return enabled_loras
@torch.no_grad()
@torch.inference_mode()
def handler(async_task):
execution_start_time = time.perf_counter()
async_task.processing = True
args = async_task.args
args.reverse()
@ -122,16 +138,17 @@ def worker():
prompt = args.pop()
negative_prompt = args.pop()
style_selections = args.pop()
performance_selection = args.pop()
performance_selection = Performance(args.pop())
aspect_ratios_selection = args.pop()
image_number = args.pop()
output_format = args.pop()
image_seed = args.pop()
sharpness = args.pop()
guidance_scale = args.pop()
base_model_name = args.pop()
refiner_model_name = args.pop()
refiner_switch = args.pop()
loras = [[str(args.pop()), float(args.pop())] for _ in range(5)]
loras = apply_enabled_loras([[bool(args.pop()), str(args.pop()), float(args.pop()), ] for _ in range(modules.config.default_max_lora_number)])
input_image_checkbox = args.pop()
current_tab = args.pop()
uov_method = args.pop()
@ -141,8 +158,48 @@ def worker():
inpaint_additional_prompt = args.pop()
inpaint_mask_image_upload = args.pop()
disable_preview = args.pop()
disable_intermediate_results = args.pop()
disable_seed_increment = args.pop()
adm_scaler_positive = args.pop()
adm_scaler_negative = args.pop()
adm_scaler_end = args.pop()
adaptive_cfg = args.pop()
sampler_name = args.pop()
scheduler_name = args.pop()
overwrite_step = args.pop()
overwrite_switch = args.pop()
overwrite_width = args.pop()
overwrite_height = args.pop()
overwrite_vary_strength = args.pop()
overwrite_upscale_strength = args.pop()
mixing_image_prompt_and_vary_upscale = args.pop()
mixing_image_prompt_and_inpaint = args.pop()
debugging_cn_preprocessor = args.pop()
skipping_cn_preprocessor = args.pop()
canny_low_threshold = args.pop()
canny_high_threshold = args.pop()
refiner_swap_method = args.pop()
controlnet_softness = args.pop()
freeu_enabled = args.pop()
freeu_b1 = args.pop()
freeu_b2 = args.pop()
freeu_s1 = args.pop()
freeu_s2 = args.pop()
debugging_inpaint_preprocessor = args.pop()
inpaint_disable_initial_latent = args.pop()
inpaint_engine = args.pop()
inpaint_strength = args.pop()
inpaint_respective_field = args.pop()
inpaint_mask_upload_checkbox = args.pop()
invert_mask_checkbox = args.pop()
inpaint_erode_or_dilate = args.pop()
save_metadata_to_images = args.pop() if not args_manager.args.disable_metadata else False
metadata_scheme = MetadataScheme(args.pop()) if not args_manager.args.disable_metadata else MetadataScheme.FOOOCUS
cn_tasks = {x: [] for x in flags.ip_list}
for _ in range(4):
for _ in range(flags.controlnet_image_count):
cn_img = args.pop()
cn_stop = args.pop()
cn_weight = args.pop()
@ -167,17 +224,9 @@ def worker():
print(f'Refiner disabled because base model and refiner are same.')
refiner_model_name = 'None'
assert performance_selection in ['Speed', 'Quality', 'Extreme Speed']
steps = performance_selection.steps()
steps = 30
if performance_selection == 'Speed':
steps = 30
if performance_selection == 'Quality':
steps = 60
if performance_selection == 'Extreme Speed':
if performance_selection == Performance.EXTREME_SPEED:
print('Enter LCM mode.')
progressbar(async_task, 1, 'Downloading LCM components ...')
loras += [(modules.config.downloading_sdxl_lcm_lora(), 1.0)]
@ -186,30 +235,32 @@ def worker():
print(f'Refiner disabled in LCM mode.')
refiner_model_name = 'None'
sampler_name = advanced_parameters.sampler_name = 'lcm'
scheduler_name = advanced_parameters.scheduler_name = 'lcm'
modules.patch.sharpness = sharpness = 0.0
cfg_scale = guidance_scale = 1.0
modules.patch.adaptive_cfg = advanced_parameters.adaptive_cfg = 1.0
sampler_name = 'lcm'
scheduler_name = 'lcm'
sharpness = 0.0
guidance_scale = 1.0
adaptive_cfg = 1.0
refiner_switch = 1.0
modules.patch.positive_adm_scale = advanced_parameters.adm_scaler_positive = 1.0
modules.patch.negative_adm_scale = advanced_parameters.adm_scaler_negative = 1.0
modules.patch.adm_scaler_end = advanced_parameters.adm_scaler_end = 0.0
steps = 8
adm_scaler_positive = 1.0
adm_scaler_negative = 1.0
adm_scaler_end = 0.0
modules.patch.adaptive_cfg = advanced_parameters.adaptive_cfg
print(f'[Parameters] Adaptive CFG = {modules.patch.adaptive_cfg}')
modules.patch.sharpness = sharpness
print(f'[Parameters] Sharpness = {modules.patch.sharpness}')
modules.patch.positive_adm_scale = advanced_parameters.adm_scaler_positive
modules.patch.negative_adm_scale = advanced_parameters.adm_scaler_negative
modules.patch.adm_scaler_end = advanced_parameters.adm_scaler_end
print(f'[Parameters] Adaptive CFG = {adaptive_cfg}')
print(f'[Parameters] Sharpness = {sharpness}')
print(f'[Parameters] ControlNet Softness = {controlnet_softness}')
print(f'[Parameters] ADM Scale = '
f'{modules.patch.positive_adm_scale} : '
f'{modules.patch.negative_adm_scale} : '
f'{modules.patch.adm_scaler_end}')
f'{adm_scaler_positive} : '
f'{adm_scaler_negative} : '
f'{adm_scaler_end}')
patch_settings[pid] = PatchSettings(
sharpness,
adm_scaler_end,
adm_scaler_positive,
adm_scaler_negative,
controlnet_softness,
adaptive_cfg
)
cfg_scale = float(guidance_scale)
print(f'[Parameters] CFG = {cfg_scale}')
@ -222,10 +273,9 @@ def worker():
width, height = int(width), int(height)
skip_prompt_processing = False
refiner_swap_method = advanced_parameters.refiner_swap_method
inpaint_worker.current_task = None
inpaint_parameterized = advanced_parameters.inpaint_engine != 'None'
inpaint_parameterized = inpaint_engine != 'None'
inpaint_image = None
inpaint_mask = None
inpaint_head_model_path = None
@ -239,15 +289,12 @@ def worker():
seed = int(image_seed)
print(f'[Parameters] Seed = {seed}')
sampler_name = advanced_parameters.sampler_name
scheduler_name = advanced_parameters.scheduler_name
goals = []
tasks = []
if input_image_checkbox:
if (current_tab == 'uov' or (
current_tab == 'ip' and advanced_parameters.mixing_image_prompt_and_vary_upscale)) \
current_tab == 'ip' and mixing_image_prompt_and_vary_upscale)) \
and uov_method != flags.disabled and uov_input_image is not None:
uov_input_image = HWC3(uov_input_image)
if 'vary' in uov_method:
@ -257,26 +304,17 @@ def worker():
if 'fast' in uov_method:
skip_prompt_processing = True
else:
steps = 18
if performance_selection == 'Speed':
steps = 18
if performance_selection == 'Quality':
steps = 36
if performance_selection == 'Extreme Speed':
steps = 8
steps = performance_selection.steps_uov()
progressbar(async_task, 1, 'Downloading upscale models ...')
modules.config.downloading_upscale_model()
if (current_tab == 'inpaint' or (
current_tab == 'ip' and advanced_parameters.mixing_image_prompt_and_inpaint)) \
current_tab == 'ip' and mixing_image_prompt_and_inpaint)) \
and isinstance(inpaint_input_image, dict):
inpaint_image = inpaint_input_image['image']
inpaint_mask = inpaint_input_image['mask'][:, :, 0]
if advanced_parameters.inpaint_mask_upload_checkbox:
if inpaint_mask_upload_checkbox:
if isinstance(inpaint_mask_image_upload, np.ndarray):
if inpaint_mask_image_upload.ndim == 3:
H, W, C = inpaint_image.shape
@ -285,10 +323,10 @@ def worker():
inpaint_mask_image_upload = (inpaint_mask_image_upload > 127).astype(np.uint8) * 255
inpaint_mask = np.maximum(inpaint_mask, inpaint_mask_image_upload)
if int(advanced_parameters.inpaint_erode_or_dilate) != 0:
inpaint_mask = erode_or_dilate(inpaint_mask, advanced_parameters.inpaint_erode_or_dilate)
if int(inpaint_erode_or_dilate) != 0:
inpaint_mask = erode_or_dilate(inpaint_mask, inpaint_erode_or_dilate)
if advanced_parameters.invert_mask_checkbox:
if invert_mask_checkbox:
inpaint_mask = 255 - inpaint_mask
inpaint_image = HWC3(inpaint_image)
@ -299,7 +337,7 @@ def worker():
if inpaint_parameterized:
progressbar(async_task, 1, 'Downloading inpainter ...')
inpaint_head_model_path, inpaint_patch_model_path = modules.config.downloading_inpaint_models(
advanced_parameters.inpaint_engine)
inpaint_engine)
base_model_additional_loras += [(inpaint_patch_model_path, 1.0)]
print(f'[Inpaint] Current inpaint model is {inpaint_patch_model_path}')
if refiner_model_name == 'None':
@ -315,8 +353,8 @@ def worker():
prompt = inpaint_additional_prompt + '\n' + prompt
goals.append('inpaint')
if current_tab == 'ip' or \
advanced_parameters.mixing_image_prompt_and_inpaint or \
advanced_parameters.mixing_image_prompt_and_vary_upscale:
mixing_image_prompt_and_vary_upscale or \
mixing_image_prompt_and_inpaint:
goals.append('cn')
progressbar(async_task, 1, 'Downloading control models ...')
if len(cn_tasks[flags.cn_canny]) > 0:
@ -335,19 +373,19 @@ def worker():
ip_adapter.load_ip_adapter(clip_vision_path, ip_negative_path, ip_adapter_path)
ip_adapter.load_ip_adapter(clip_vision_path, ip_negative_path, ip_adapter_face_path)
if advanced_parameters.overwrite_step > 0:
steps = advanced_parameters.overwrite_step
if overwrite_step > 0:
steps = overwrite_step
switch = int(round(steps * refiner_switch))
if advanced_parameters.overwrite_switch > 0:
switch = advanced_parameters.overwrite_switch
if overwrite_switch > 0:
switch = overwrite_switch
if advanced_parameters.overwrite_width > 0:
width = advanced_parameters.overwrite_width
if overwrite_width > 0:
width = overwrite_width
if advanced_parameters.overwrite_height > 0:
height = advanced_parameters.overwrite_height
if overwrite_height > 0:
height = overwrite_height
print(f'[Parameters] Sampler = {sampler_name} - {scheduler_name}')
print(f'[Parameters] Steps = {steps} - {switch}')
@ -376,11 +414,16 @@ def worker():
progressbar(async_task, 3, 'Processing prompts ...')
tasks = []
for i in range(image_number):
task_seed = (seed + i) % (constants.MAX_SEED + 1) # randint is inclusive, % is not
task_rng = random.Random(task_seed) # may bind to inpaint noise in the future
if disable_seed_increment:
task_seed = seed
else:
task_seed = (seed + i) % (constants.MAX_SEED + 1) # randint is inclusive, % is not
task_rng = random.Random(task_seed) # may bind to inpaint noise in the future
task_prompt = apply_wildcards(prompt, task_rng)
task_prompt = apply_arrays(task_prompt, i)
task_negative_prompt = apply_wildcards(negative_prompt, task_rng)
task_extra_positive_prompts = [apply_wildcards(pmt, task_rng) for pmt in extra_positive_prompts]
task_extra_negative_prompts = [apply_wildcards(pmt, task_rng) for pmt in extra_negative_prompts]
@ -446,8 +489,8 @@ def worker():
denoising_strength = 0.5
if 'strong' in uov_method:
denoising_strength = 0.85
if advanced_parameters.overwrite_vary_strength > 0:
denoising_strength = advanced_parameters.overwrite_vary_strength
if overwrite_vary_strength > 0:
denoising_strength = overwrite_vary_strength
shape_ceil = get_image_shape_ceil(uov_input_image)
if shape_ceil < 1024:
@ -511,15 +554,15 @@ def worker():
if direct_return:
d = [('Upscale (Fast)', '2x')]
log(uov_input_image, d)
yield_result(async_task, uov_input_image, do_not_show_finished_images=True)
uov_input_image_path = log(uov_input_image, d, output_format)
yield_result(async_task, uov_input_image_path, do_not_show_finished_images=True)
return
tiled = True
denoising_strength = 0.382
if advanced_parameters.overwrite_upscale_strength > 0:
denoising_strength = advanced_parameters.overwrite_upscale_strength
if overwrite_upscale_strength > 0:
denoising_strength = overwrite_upscale_strength
initial_pixels = core.numpy_to_pytorch(uov_input_image)
progressbar(async_task, 13, 'VAE encoding ...')
@ -563,19 +606,19 @@ def worker():
inpaint_image = np.ascontiguousarray(inpaint_image.copy())
inpaint_mask = np.ascontiguousarray(inpaint_mask.copy())
advanced_parameters.inpaint_strength = 1.0
advanced_parameters.inpaint_respective_field = 1.0
inpaint_strength = 1.0
inpaint_respective_field = 1.0
denoising_strength = advanced_parameters.inpaint_strength
denoising_strength = inpaint_strength
inpaint_worker.current_task = inpaint_worker.InpaintWorker(
image=inpaint_image,
mask=inpaint_mask,
use_fill=denoising_strength > 0.99,
k=advanced_parameters.inpaint_respective_field
k=inpaint_respective_field
)
if advanced_parameters.debugging_inpaint_preprocessor:
if debugging_inpaint_preprocessor:
yield_result(async_task, inpaint_worker.current_task.visualize_mask_processing(),
do_not_show_finished_images=True)
return
@ -621,7 +664,7 @@ def worker():
model=pipeline.final_unet
)
if not advanced_parameters.inpaint_disable_initial_latent:
if not inpaint_disable_initial_latent:
initial_latent = {'samples': latent_fill}
B, C, H, W = latent_fill.shape
@ -634,24 +677,24 @@ def worker():
cn_img, cn_stop, cn_weight = task
cn_img = resize_image(HWC3(cn_img), width=width, height=height)
if not advanced_parameters.skipping_cn_preprocessor:
cn_img = preprocessors.canny_pyramid(cn_img)
if not skipping_cn_preprocessor:
cn_img = preprocessors.canny_pyramid(cn_img, canny_low_threshold, canny_high_threshold)
cn_img = HWC3(cn_img)
task[0] = core.numpy_to_pytorch(cn_img)
if advanced_parameters.debugging_cn_preprocessor:
if debugging_cn_preprocessor:
yield_result(async_task, cn_img, do_not_show_finished_images=True)
return
for task in cn_tasks[flags.cn_cpds]:
cn_img, cn_stop, cn_weight = task
cn_img = resize_image(HWC3(cn_img), width=width, height=height)
if not advanced_parameters.skipping_cn_preprocessor:
if not skipping_cn_preprocessor:
cn_img = preprocessors.cpds(cn_img)
cn_img = HWC3(cn_img)
task[0] = core.numpy_to_pytorch(cn_img)
if advanced_parameters.debugging_cn_preprocessor:
if debugging_cn_preprocessor:
yield_result(async_task, cn_img, do_not_show_finished_images=True)
return
for task in cn_tasks[flags.cn_ip]:
@ -662,21 +705,21 @@ def worker():
cn_img = resize_image(cn_img, width=224, height=224, resize_mode=0)
task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_path)
if advanced_parameters.debugging_cn_preprocessor:
if debugging_cn_preprocessor:
yield_result(async_task, cn_img, do_not_show_finished_images=True)
return
for task in cn_tasks[flags.cn_ip_face]:
cn_img, cn_stop, cn_weight = task
cn_img = HWC3(cn_img)
if not advanced_parameters.skipping_cn_preprocessor:
if not skipping_cn_preprocessor:
cn_img = extras.face_crop.crop_image(cn_img)
# https://github.com/tencent-ailab/IP-Adapter/blob/d580c50a291566bbf9fc7ac0f760506607297e6d/README.md?plain=1#L75
cn_img = resize_image(cn_img, width=224, height=224, resize_mode=0)
task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_face_path)
if advanced_parameters.debugging_cn_preprocessor:
if debugging_cn_preprocessor:
yield_result(async_task, cn_img, do_not_show_finished_images=True)
return
@ -685,14 +728,14 @@ def worker():
if len(all_ip_tasks) > 0:
pipeline.final_unet = ip_adapter.patch_model(pipeline.final_unet, all_ip_tasks)
if advanced_parameters.freeu_enabled:
if freeu_enabled:
print(f'FreeU is enabled!')
pipeline.final_unet = core.apply_freeu(
pipeline.final_unet,
advanced_parameters.freeu_b1,
advanced_parameters.freeu_b2,
advanced_parameters.freeu_s1,
advanced_parameters.freeu_s2
freeu_b1,
freeu_b2,
freeu_s1,
freeu_s2
)
all_steps = steps * image_number
@ -738,6 +781,8 @@ def worker():
execution_start_time = time.perf_counter()
try:
if async_task.last_stop is not False:
ldm_patched.model_management.interrupt_current_processing()
positive_cond, negative_cond = task['c'], task['uc']
if 'cn' in goals:
@ -765,7 +810,8 @@ def worker():
denoise=denoising_strength,
tiled=tiled,
cfg_scale=cfg_scale,
refiner_swap_method=refiner_swap_method
refiner_swap_method=refiner_swap_method,
disable_preview=disable_preview
)
del task['c'], task['uc'], positive_cond, negative_cond # Save memory
@ -773,37 +819,58 @@ def worker():
if inpaint_worker.current_task is not None:
imgs = [inpaint_worker.current_task.post_process(x) for x in imgs]
img_paths = []
for x in imgs:
d = [
('Prompt', task['log_positive_prompt']),
('Negative Prompt', task['log_negative_prompt']),
('Fooocus V2 Expansion', task['expansion']),
('Styles', str(raw_style_selections)),
('Performance', performance_selection),
('Resolution', str((width, height))),
('Sharpness', sharpness),
('Guidance Scale', guidance_scale),
('ADM Guidance', str((
modules.patch.positive_adm_scale,
modules.patch.negative_adm_scale,
modules.patch.adm_scaler_end))),
('Base Model', base_model_name),
('Refiner Model', refiner_model_name),
('Refiner Switch', refiner_switch),
('Sampler', sampler_name),
('Scheduler', scheduler_name),
('Seed', task['task_seed']),
]
d = [('Prompt', 'prompt', task['log_positive_prompt']),
('Negative Prompt', 'negative_prompt', task['log_negative_prompt']),
('Fooocus V2 Expansion', 'prompt_expansion', task['expansion']),
('Styles', 'styles', str(raw_style_selections)),
('Performance', 'performance', performance_selection.value),
('Resolution', 'resolution', str((width, height))),
('Guidance Scale', 'guidance_scale', guidance_scale),
('Sharpness', 'sharpness', sharpness),
('ADM Guidance', 'adm_guidance', str((
modules.patch.patch_settings[pid].positive_adm_scale,
modules.patch.patch_settings[pid].negative_adm_scale,
modules.patch.patch_settings[pid].adm_scaler_end))),
('Base Model', 'base_model', base_model_name),
('Refiner Model', 'refiner_model', refiner_model_name),
('Refiner Switch', 'refiner_switch', refiner_switch)]
if refiner_model_name != 'None':
if overwrite_switch > 0:
d.append(('Overwrite Switch', 'overwrite_switch', overwrite_switch))
if refiner_swap_method != flags.refiner_swap_method:
d.append(('Refiner Swap Method', 'refiner_swap_method', refiner_swap_method))
if modules.patch.patch_settings[pid].adaptive_cfg != modules.config.default_cfg_tsnr:
d.append(('CFG Mimicking from TSNR', 'adaptive_cfg', modules.patch.patch_settings[pid].adaptive_cfg))
d.append(('Sampler', 'sampler', sampler_name))
d.append(('Scheduler', 'scheduler', scheduler_name))
d.append(('Seed', 'seed', task['task_seed']))
if freeu_enabled:
d.append(('FreeU', 'freeu', str((freeu_b1, freeu_b2, freeu_s1, freeu_s2))))
metadata_parser = None
if save_metadata_to_images:
metadata_parser = modules.meta_parser.get_metadata_parser(metadata_scheme)
metadata_parser.set_data(task['log_positive_prompt'], task['positive'],
task['log_negative_prompt'], task['negative'],
steps, base_model_name, refiner_model_name, loras)
for li, (n, w) in enumerate(loras):
if n != 'None':
d.append((f'LoRA {li + 1}', f'{n} : {w}'))
d.append(('Version', 'v' + fooocus_version.version))
log(x, d)
d.append((f'LoRA {li + 1}', f'lora_combined_{li + 1}', f'{n} : {w}'))
yield_result(async_task, imgs, do_not_show_finished_images=len(tasks) == 1)
d.append(('Version', 'version', 'Fooocus v' + fooocus_version.version))
img_paths.append(log(x, d, metadata_parser, output_format))
yield_result(async_task, img_paths, do_not_show_finished_images=len(tasks) == 1 or disable_intermediate_results)
except ldm_patched.modules.model_management.InterruptProcessingException as e:
if shared.last_stop == 'skip':
if async_task.last_stop == 'skip':
print('User skipped')
async_task.last_stop = False
continue
else:
print('User stopped')
@ -811,21 +878,27 @@ def worker():
execution_time = time.perf_counter() - execution_start_time
print(f'Generating and saving time: {execution_time:.2f} seconds')
async_task.processing = False
return
while True:
time.sleep(0.01)
if len(async_tasks) > 0:
task = async_tasks.pop(0)
generate_image_grid = task.args.pop(0)
try:
handler(task)
build_image_wall(task)
if generate_image_grid:
build_image_wall(task)
task.yields.append(['finish', task.results])
pipeline.prepare_text_encoder(async_call=True)
except:
traceback.print_exc()
task.yields.append(['finish', task.results])
finally:
if pid in modules.patch.patch_settings:
del modules.patch.patch_settings[pid]
pass

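The handler above consumes the flat Gradio argument list by reversing it once and then `pop()`-ing from the end, which reads values in declaration order at O(1) per pop. A toy sketch of that pattern together with the new `apply_enabled_loras` filter; the argument values are made up:
```
def apply_enabled_loras(loras):
    # Keep only the rows whose 'enabled' checkbox is ticked.
    return [[name, weight] for enabled, name, weight in loras if enabled]

# Toy stand-in for the flat args list Gradio passes to the worker.
args = ["a photo of a cat", "blurry", 2]  # prompt, negative_prompt, image_number
args.reverse()
prompt = args.pop()           # 'a photo of a cat'
negative_prompt = args.pop()  # 'blurry'
image_number = args.pop()     # 2

loras = apply_enabled_loras([[True, "sdxl_lcm_lora.safetensors", 1.0],
                             [False, "None", 1.0]])
print(prompt, loras)  # a photo of a cat [['sdxl_lcm_lora.safetensors', 1.0]]
```
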

@ -7,11 +7,19 @@ import modules.flags
import modules.sdxl_styles
from modules.model_loader import load_file_from_url
from modules.util import get_files_from_folder
from modules.util import get_files_from_folder, makedirs_with_log
from modules.flags import Performance, MetadataScheme
def get_config_path(key, default_value):
env = os.getenv(key)
if env is not None and isinstance(env, str):
print(f"Environment: {key} = {env}")
return env
else:
return os.path.abspath(default_value)
config_path = os.path.abspath("./config.txt")
config_example_path = os.path.abspath("config_modification_tutorial.txt")
config_path = get_config_path('config_path', "./config.txt")
config_example_path = get_config_path('config_example_path', "config_modification_tutorial.txt")
config_dict = {}
always_save_keys = []
visited_keys = []
@ -107,14 +115,14 @@ def get_path_output() -> str:
Checking output path argument and overriding default path.
"""
global config_dict
path_output = get_dir_or_set_default('path_outputs', '../outputs/')
path_output = get_dir_or_set_default('path_outputs', '../outputs/', make_directory=True)
if args_manager.args.output_path:
print(f'[CONFIG] Overriding config value path_outputs with {args_manager.args.output_path}')
config_dict['path_outputs'] = path_output = args_manager.args.output_path
return path_output
def get_dir_or_set_default(key, default_value):
def get_dir_or_set_default(key, default_value, as_array=False, make_directory=False):
global config_dict, visited_keys, always_save_keys
if key not in visited_keys:
@ -123,20 +131,44 @@ def get_dir_or_set_default(key, default_value):
if key not in always_save_keys:
always_save_keys.append(key)
v = config_dict.get(key, None)
if isinstance(v, str) and os.path.exists(v) and os.path.isdir(v):
return v
v = os.getenv(key)
if v is not None:
print(f"Environment: {key} = {v}")
config_dict[key] = v
else:
v = config_dict.get(key, None)
if isinstance(v, str):
if make_directory:
makedirs_with_log(v)
if os.path.exists(v) and os.path.isdir(v):
return v if not as_array else [v]
elif isinstance(v, list):
if make_directory:
for d in v:
makedirs_with_log(d)
if all([os.path.exists(d) and os.path.isdir(d) for d in v]):
return v
if v is not None:
print(f'Failed to load config key: {json.dumps({key:v})} is invalid or does not exist; will use {json.dumps({key:default_value})} instead.')
if isinstance(default_value, list):
dp = []
for path in default_value:
abs_path = os.path.abspath(os.path.join(os.path.dirname(__file__), path))
dp.append(abs_path)
os.makedirs(abs_path, exist_ok=True)
else:
if v is not None:
print(f'Failed to load config key: {json.dumps({key:v})} is invalid or does not exist; will use {json.dumps({key:default_value})} instead.')
dp = os.path.abspath(os.path.join(os.path.dirname(__file__), default_value))
os.makedirs(dp, exist_ok=True)
config_dict[key] = dp
return dp
if as_array:
dp = [dp]
config_dict[key] = dp
return dp
path_checkpoints = get_dir_or_set_default('path_checkpoints', '../models/checkpoints/')
path_loras = get_dir_or_set_default('path_loras', '../models/loras/')
paths_checkpoints = get_dir_or_set_default('path_checkpoints', ['../models/checkpoints/'], True)
paths_loras = get_dir_or_set_default('path_loras', ['../models/loras/'], True)
path_embeddings = get_dir_or_set_default('path_embeddings', '../models/embeddings/')
path_vae_approx = get_dir_or_set_default('path_vae_approx', '../models/vae_approx/')
path_upscale_models = get_dir_or_set_default('path_upscale_models', '../models/upscale_models/')
@ -146,13 +178,17 @@ path_clip_vision = get_dir_or_set_default('path_clip_vision', '../models/clip_vi
path_fooocus_expansion = get_dir_or_set_default('path_fooocus_expansion', '../models/prompt_expansion/fooocus_expansion')
path_outputs = get_path_output()
def get_config_item_or_set_default(key, default_value, validator, disable_empty_as_none=False):
global config_dict, visited_keys
if key not in visited_keys:
visited_keys.append(key)
v = os.getenv(key)
if v is not None:
print(f"Environment: {key} = {v}")
config_dict[key] = v
if key not in config_dict:
config_dict[key] = default_value
return default_value
@ -190,6 +226,16 @@ default_refiner_switch = get_config_item_or_set_default(
default_value=0.8,
validator=lambda x: isinstance(x, numbers.Number) and 0 <= x <= 1
)
default_loras_min_weight = get_config_item_or_set_default(
key='default_loras_min_weight',
default_value=-2,
validator=lambda x: isinstance(x, numbers.Number) and -10 <= x <= 10
)
default_loras_max_weight = get_config_item_or_set_default(
key='default_loras_max_weight',
default_value=2,
validator=lambda x: isinstance(x, numbers.Number) and -10 <= x <= 10
)
default_loras = get_config_item_or_set_default(
key='default_loras',
default_value=[
@ -216,6 +262,11 @@ default_loras = get_config_item_or_set_default(
],
validator=lambda x: isinstance(x, list) and all(len(y) == 2 and isinstance(y[0], str) and isinstance(y[1], numbers.Number) for y in x)
)
default_max_lora_number = get_config_item_or_set_default(
key='default_max_lora_number',
default_value=len(default_loras),
validator=lambda x: isinstance(x, int) and x >= 1
)
default_cfg_scale = get_config_item_or_set_default(
key='default_cfg_scale',
default_value=7.0,
@ -259,8 +310,8 @@ default_prompt = get_config_item_or_set_default(
)
default_performance = get_config_item_or_set_default(
key='default_performance',
default_value='Speed',
validator=lambda x: x in modules.flags.performance_selections
default_value=Performance.SPEED.value,
validator=lambda x: x in Performance.list()
)
default_advanced_checkbox = get_config_item_or_set_default(
key='default_advanced_checkbox',
@ -272,6 +323,11 @@ default_max_image_number = get_config_item_or_set_default(
default_value=32,
validator=lambda x: isinstance(x, int) and x >= 1
)
default_output_format = get_config_item_or_set_default(
key='default_output_format',
default_value='png',
validator=lambda x: x in modules.flags.output_formats
)
default_image_number = get_config_item_or_set_default(
key='default_image_number',
default_value=2,
@ -335,16 +391,34 @@ example_inpaint_prompts = get_config_item_or_set_default(
],
validator=lambda x: isinstance(x, list) and all(isinstance(v, str) for v in x)
)
default_save_metadata_to_images = get_config_item_or_set_default(
key='default_save_metadata_to_images',
default_value=False,
validator=lambda x: isinstance(x, bool)
)
default_metadata_scheme = get_config_item_or_set_default(
key='default_metadata_scheme',
default_value=MetadataScheme.FOOOCUS.value,
validator=lambda x: x in [y[1] for y in modules.flags.metadata_scheme if y[1] == x]
)
metadata_created_by = get_config_item_or_set_default(
key='metadata_created_by',
default_value='',
validator=lambda x: isinstance(x, str)
)
example_inpaint_prompts = [[x] for x in example_inpaint_prompts]
config_dict["default_loras"] = default_loras = default_loras[:5] + [['None', 1.0] for _ in range(5 - len(default_loras))]
config_dict["default_loras"] = default_loras = default_loras[:default_max_lora_number] + [['None', 1.0] for _ in range(default_max_lora_number - len(default_loras))]
possible_preset_keys = [
"default_model",
"default_refiner",
"default_refiner_switch",
"default_loras_min_weight",
"default_loras_max_weight",
"default_loras",
"default_max_lora_number",
"default_cfg_scale",
"default_sample_sharpness",
"default_sampler",
@ -354,6 +428,7 @@ possible_preset_keys = [
"default_prompt_negative",
"default_styles",
"default_aspect_ratio",
"default_save_metadata_to_images",
"checkpoint_downloads",
"embeddings_downloads",
"lora_downloads",
@ -397,21 +472,23 @@ with open(config_example_path, "w", encoding="utf-8") as json_file:
'and there is no "," before the last "}". \n\n\n')
json.dump({k: config_dict[k] for k in visited_keys}, json_file, indent=4)
os.makedirs(path_outputs, exist_ok=True)
model_filenames = []
lora_filenames = []
sdxl_lcm_lora = 'sdxl_lcm_lora.safetensors'
def get_model_filenames(folder_path, name_filter=None):
return get_files_from_folder(folder_path, ['.pth', '.ckpt', '.bin', '.safetensors', '.fooocus.patch'], name_filter)
def get_model_filenames(folder_paths, name_filter=None):
extensions = ['.pth', '.ckpt', '.bin', '.safetensors', '.fooocus.patch']
files = []
for folder in folder_paths:
files += get_files_from_folder(folder, extensions, name_filter)
return files
def update_all_model_names():
global model_filenames, lora_filenames
model_filenames = get_model_filenames(path_checkpoints)
lora_filenames = get_model_filenames(path_loras)
model_filenames = get_model_filenames(paths_checkpoints)
lora_filenames = get_model_filenames(paths_loras)
return
@ -456,10 +533,10 @@ def downloading_inpaint_models(v):
def downloading_sdxl_lcm_lora():
load_file_from_url(
url='https://huggingface.co/lllyasviel/misc/resolve/main/sdxl_lcm_lora.safetensors',
model_dir=path_loras,
file_name='sdxl_lcm_lora.safetensors'
model_dir=paths_loras[0],
file_name=sdxl_lcm_lora
)
return 'sdxl_lcm_lora.safetensors'
return sdxl_lcm_lora
def downloading_controlnet_canny():

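With this change, `path_checkpoints` and `path_loras` may hold either a single directory or a list of directories; `get_dir_or_set_default(..., as_array=True)` normalizes both forms to a list, and downloads always target index 0 (`paths_checkpoints[0]`, `paths_loras[0]`). A sketch of just the normalization rule, ignoring the directory-creation and validation logic:
```
def as_path_list(value):
    # config.txt may hold a single path or a list of paths; normalize to a list.
    if isinstance(value, str):
        return [value]
    if isinstance(value, list):
        return value
    raise TypeError(f"expected str or list, got {type(value).__name__}")

paths_checkpoints = as_path_list("../models/checkpoints/")
print(paths_checkpoints)     # ['../models/checkpoints/']
print(paths_checkpoints[0])  # downloads always target the first entry
```
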

@ -1,8 +1,3 @@
from modules.patch import patch_all
patch_all()
import os
import einops
import torch
@ -16,7 +11,6 @@ import ldm_patched.modules.controlnet
import modules.sample_hijack
import ldm_patched.modules.samplers
import ldm_patched.modules.latent_formats
import modules.advanced_parameters
from ldm_patched.modules.sd import load_checkpoint_guess_config
from ldm_patched.contrib.external import VAEDecode, EmptyLatentImage, VAEEncode, VAEEncodeTiled, VAEDecodeTiled, \
@ -24,6 +18,7 @@ from ldm_patched.contrib.external import VAEDecode, EmptyLatentImage, VAEEncode,
from ldm_patched.contrib.external_freelunch import FreeU_V2
from ldm_patched.modules.sample import prepare_mask
from modules.lora import match_lora
from modules.util import get_file_from_folder_list
from ldm_patched.modules.lora import model_lora_keys_unet, model_lora_keys_clip
from modules.config import path_embeddings
from ldm_patched.contrib.external_model_advanced import ModelSamplingDiscrete
@ -85,7 +80,7 @@ class StableDiffusionModel:
if os.path.exists(name):
lora_filename = name
else:
lora_filename = os.path.join(modules.config.path_loras, name)
lora_filename = get_file_from_folder_list(name, modules.config.paths_loras)
if not os.path.exists(lora_filename):
print(f'Lora file not found: {lora_filename}')
@ -268,7 +263,7 @@ def get_previewer(model):
def ksampler(model, positive, negative, latent, seed=None, steps=30, cfg=7.0, sampler_name='dpmpp_2m_sde_gpu',
scheduler='karras', denoise=1.0, disable_noise=False, start_step=None, last_step=None,
force_full_denoise=False, callback_function=None, refiner=None, refiner_switch=-1,
previewer_start=None, previewer_end=None, sigmas=None, noise_mean=None):
previewer_start=None, previewer_end=None, sigmas=None, noise_mean=None, disable_preview=False):
if sigmas is not None:
sigmas = sigmas.clone().to(ldm_patched.modules.model_management.get_torch_device())
@ -299,7 +294,7 @@ def ksampler(model, positive, negative, latent, seed=None, steps=30, cfg=7.0, sa
def callback(step, x0, x, total_steps):
ldm_patched.modules.model_management.throw_exception_if_processing_interrupted()
y = None
if previewer is not None and not modules.advanced_parameters.disable_preview:
if previewer is not None and not disable_preview:
y = previewer(x0, previewer_start + step, previewer_end)
if callback_function is not None:
callback_function(previewer_start + step, x0, x, previewer_end, y)

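`disable_preview` is now threaded into `ksampler` as a parameter and gates the previewer inside the per-step callback, instead of being read from the removed `modules.advanced_parameters` global. A stripped-down sketch of that gating; the previewer and callback below are stand-ins, not the real objects:
```
def make_callback(previewer, callback_function, previewer_start, previewer_end,
                  disable_preview):
    def callback(step, x0, x, total_steps):
        y = None
        # Decode a preview image only when previews are enabled.
        if previewer is not None and not disable_preview:
            y = previewer(x0, previewer_start + step, previewer_end)
        if callback_function is not None:
            callback_function(previewer_start + step, x0, x, previewer_end, y)
    return callback

# With disable_preview=True the (possibly expensive) previewer is never called.
cb = make_callback(previewer=lambda x0, s, e: "preview-img",
                   callback_function=lambda s, x0, x, e, y: print(s, y),
                   previewer_start=0, previewer_end=30, disable_preview=True)
cb(0, None, None, 30)  # prints: 0 None
```
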

@ -11,6 +11,7 @@ from extras.expansion import FooocusExpansion
from ldm_patched.modules.model_base import SDXL, SDXLRefiner
from modules.sample_hijack import clip_separate
from modules.util import get_file_from_folder_list
model_base = core.StableDiffusionModel()
@ -60,7 +61,7 @@ def assert_model_integrity():
def refresh_base_model(name):
global model_base
filename = os.path.abspath(os.path.realpath(os.path.join(modules.config.path_checkpoints, name)))
filename = get_file_from_folder_list(name, modules.config.paths_checkpoints)
if model_base.filename == filename:
return
@ -76,7 +77,7 @@ def refresh_base_model(name):
def refresh_refiner_model(name):
global model_refiner
filename = os.path.abspath(os.path.realpath(os.path.join(modules.config.path_checkpoints, name)))
filename = get_file_from_folder_list(name, modules.config.paths_checkpoints)
if model_refiner.filename == filename:
return
@ -315,7 +316,7 @@ def get_candidate_vae(steps, switch, denoise=1.0, refiner_swap_method='joint'):
@torch.no_grad()
@torch.inference_mode()
def process_diffusion(positive_cond, negative_cond, steps, switch, width, height, image_seed, callback, sampler_name, scheduler_name, latent=None, denoise=1.0, tiled=False, cfg_scale=7.0, refiner_swap_method='joint'):
def process_diffusion(positive_cond, negative_cond, steps, switch, width, height, image_seed, callback, sampler_name, scheduler_name, latent=None, denoise=1.0, tiled=False, cfg_scale=7.0, refiner_swap_method='joint', disable_preview=False):
target_unet, target_vae, target_refiner_unet, target_refiner_vae, target_clip \
= final_unet, final_vae, final_refiner_unet, final_refiner_vae, final_clip
@ -374,6 +375,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
refiner_switch=switch,
previewer_start=0,
previewer_end=steps,
disable_preview=disable_preview
)
decoded_latent = core.decode_vae(vae=target_vae, latent_image=sampled_latent, tiled=tiled)
@ -392,6 +394,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
scheduler=scheduler_name,
previewer_start=0,
previewer_end=steps,
disable_preview=disable_preview
)
print('Refiner swapped by changing ksampler. Noise preserved.')
@ -414,6 +417,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
scheduler=scheduler_name,
previewer_start=switch,
previewer_end=steps,
disable_preview=disable_preview
)
target_model = target_refiner_vae
@ -422,7 +426,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
decoded_latent = core.decode_vae(vae=target_model, latent_image=sampled_latent, tiled=tiled)
if refiner_swap_method == 'vae':
modules.patch.eps_record = 'vae'
modules.patch.patch_settings[os.getpid()].eps_record = 'vae'
if modules.inpaint_worker.current_task is not None:
modules.inpaint_worker.current_task.unswap()
@ -440,7 +444,8 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
sampler_name=sampler_name,
scheduler=scheduler_name,
previewer_start=0,
previewer_end=steps
previewer_end=steps,
disable_preview=disable_preview
)
print('Fooocus VAE-based swap.')
@ -459,7 +464,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
denoise=denoise)[switch:] * k_sigmas
len_sigmas = len(sigmas) - 1
noise_mean = torch.mean(modules.patch.eps_record, dim=1, keepdim=True)
noise_mean = torch.mean(modules.patch.patch_settings[os.getpid()].eps_record, dim=1, keepdim=True)
if modules.inpaint_worker.current_task is not None:
modules.inpaint_worker.current_task.swap()
@ -479,7 +484,8 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
previewer_start=switch,
previewer_end=steps,
sigmas=sigmas,
noise_mean=noise_mean
noise_mean=noise_mean,
disable_preview=disable_preview
)
target_model = target_refiner_vae
@ -488,5 +494,5 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
decoded_latent = core.decode_vae(vae=target_model, latent_image=sampled_latent, tiled=tiled)
images = core.pytorch_to_numpy(decoded_latent)
modules.patch.eps_record = None
modules.patch.patch_settings[os.getpid()].eps_record = None
return images

View File

@ -1,3 +1,5 @@
from enum import IntEnum, Enum
disabled = 'Disabled'
enabled = 'Enabled'
subtle_variation = 'Vary (Subtle)'
@ -10,16 +12,49 @@ uov_list = [
disabled, subtle_variation, strong_variation, upscale_15, upscale_2, upscale_fast
]
KSAMPLER_NAMES = ["euler", "euler_ancestral", "heun", "heunpp2","dpm_2", "dpm_2_ancestral",
"lms", "dpm_fast", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_sde", "dpmpp_sde_gpu",
"dpmpp_2m", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddpm", "lcm"]
CIVITAI_NO_KARRAS = ["euler", "euler_ancestral", "heun", "dpm_fast", "dpm_adaptive", "ddim", "uni_pc"]
# fooocus: a1111 (Civitai)
KSAMPLER = {
"euler": "Euler",
"euler_ancestral": "Euler a",
"heun": "Heun",
"heunpp2": "",
"dpm_2": "DPM2",
"dpm_2_ancestral": "DPM2 a",
"lms": "LMS",
"dpm_fast": "DPM fast",
"dpm_adaptive": "DPM adaptive",
"dpmpp_2s_ancestral": "DPM++ 2S a",
"dpmpp_sde": "DPM++ SDE",
"dpmpp_sde_gpu": "DPM++ SDE",
"dpmpp_2m": "DPM++ 2M",
"dpmpp_2m_sde": "DPM++ 2M SDE",
"dpmpp_2m_sde_gpu": "DPM++ 2M SDE",
"dpmpp_3m_sde": "",
"dpmpp_3m_sde_gpu": "",
"ddpm": "",
"lcm": "LCM"
}
SAMPLER_EXTRA = {
"ddim": "DDIM",
"uni_pc": "UniPC",
"uni_pc_bh2": ""
}
SAMPLERS = KSAMPLER | SAMPLER_EXTRA
KSAMPLER_NAMES = list(KSAMPLER.keys())
SCHEDULER_NAMES = ["normal", "karras", "exponential", "sgm_uniform", "simple", "ddim_uniform", "lcm", "turbo"]
SAMPLER_NAMES = KSAMPLER_NAMES + ["ddim", "uni_pc", "uni_pc_bh2"]
SAMPLER_NAMES = KSAMPLER_NAMES + list(SAMPLER_EXTRA.keys())
sampler_list = SAMPLER_NAMES
scheduler_list = SCHEDULER_NAMES
refiner_swap_method = 'joint'
cn_ip = "ImagePrompt"
cn_ip_face = "FaceSwap"
cn_canny = "PyraCanny"
@ -32,9 +67,9 @@ default_parameters = {
cn_ip: (0.5, 0.6), cn_ip_face: (0.9, 0.75), cn_canny: (0.5, 1.0), cn_cpds: (0.5, 1.0)
} # stop, weight
inpaint_engine_versions = ['None', 'v1', 'v2.5', 'v2.6']
performance_selections = ['Speed', 'Quality', 'Extreme Speed']
output_formats = ['png', 'jpg', 'webp']
inpaint_engine_versions = ['None', 'v1', 'v2.5', 'v2.6']
inpaint_option_default = 'Inpaint or Outpaint (default)'
inpaint_option_detail = 'Improve Detail (face, hand, eyes, etc.)'
inpaint_option_modify = 'Modify Content (add objects, change background, etc.)'
@ -42,3 +77,49 @@ inpaint_options = [inpaint_option_default, inpaint_option_detail, inpaint_option
desc_type_photo = 'Photograph'
desc_type_anime = 'Art/Anime'
class MetadataScheme(Enum):
FOOOCUS = 'fooocus'
A1111 = 'a1111'
metadata_scheme = [
(f'{MetadataScheme.FOOOCUS.value} (json)', MetadataScheme.FOOOCUS.value),
(f'{MetadataScheme.A1111.value} (plain text)', MetadataScheme.A1111.value),
]
lora_count = 5
controlnet_image_count = 4
class Steps(IntEnum):
QUALITY = 60
SPEED = 30
EXTREME_SPEED = 8
class StepsUOV(IntEnum):
QUALITY = 36
SPEED = 18
EXTREME_SPEED = 8
class Performance(Enum):
QUALITY = 'Quality'
SPEED = 'Speed'
EXTREME_SPEED = 'Extreme Speed'
@classmethod
def list(cls) -> list:
return list(map(lambda c: c.value, cls))
def steps(self) -> int | None:
return Steps[self.name].value if Steps[self.name] else None
def steps_uov(self) -> int | None:
return StepsUOV[self.name].value if StepsUOV[self.name] else None
performance_selections = Performance.list()
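A quick sanity check of the new enum plumbing, assuming a Fooocus checkout on PYTHONPATH (modules/flags.py has no heavy imports; all values are taken from the definitions above):

from modules.flags import Performance, Steps, SAMPLERS

assert SAMPLERS['dpmpp_2m_sde_gpu'] == 'DPM++ 2M SDE'  # fooocus -> a1111 name
assert Performance.list() == ['Quality', 'Speed', 'Extreme Speed']
assert Performance('Speed').steps() == Steps.SPEED.value == 30
assert Performance.EXTREME_SPEED.steps_uov() == 8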

View File

@ -112,6 +112,30 @@ progress::after {
margin-left: -5px !important;
}
.lora_enable {
flex-grow: 1 !important;
}
.lora_enable label {
height: 100%;
}
.lora_enable label input {
margin: auto;
}
.lora_enable label span {
display: none;
}
.lora_model {
flex-grow: 5 !important;
}
.lora_weight {
flex-grow: 5 !important;
}
'''
progress_html = '''
<div class="loader-container">

View File

@ -1,45 +1,114 @@
import json
import os
import re
from abc import ABC, abstractmethod
from pathlib import Path
import gradio as gr
from PIL import Image
import fooocus_version
import modules.config
import modules.sdxl_styles
from modules.flags import MetadataScheme, Performance, Steps
from modules.flags import SAMPLERS, CIVITAI_NO_KARRAS
from modules.util import quote, unquote, extract_styles_from_prompt, is_json, get_file_from_folder_list, calculate_sha256
re_param_code = r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)'
re_param = re.compile(re_param_code)
re_imagesize = re.compile(r"^(\d+)x(\d+)$")
hash_cache = {}
def load_parameter_button_click(raw_prompt_txt, is_generating):
loaded_parameter_dict = json.loads(raw_prompt_txt)
def load_parameter_button_click(raw_metadata: dict | str, is_generating: bool):
loaded_parameter_dict = raw_metadata
if isinstance(raw_metadata, str):
loaded_parameter_dict = json.loads(raw_metadata)
assert isinstance(loaded_parameter_dict, dict)
results = [True, 1]
results = [len(loaded_parameter_dict) > 0, 1]
get_str('prompt', 'Prompt', loaded_parameter_dict, results)
get_str('negative_prompt', 'Negative Prompt', loaded_parameter_dict, results)
get_list('styles', 'Styles', loaded_parameter_dict, results)
get_str('performance', 'Performance', loaded_parameter_dict, results)
get_steps('steps', 'Steps', loaded_parameter_dict, results)
get_float('overwrite_switch', 'Overwrite Switch', loaded_parameter_dict, results)
get_resolution('resolution', 'Resolution', loaded_parameter_dict, results)
get_float('guidance_scale', 'Guidance Scale', loaded_parameter_dict, results)
get_float('sharpness', 'Sharpness', loaded_parameter_dict, results)
get_adm_guidance('adm_guidance', 'ADM Guidance', loaded_parameter_dict, results)
get_str('refiner_swap_method', 'Refiner Swap Method', loaded_parameter_dict, results)
get_float('adaptive_cfg', 'CFG Mimicking from TSNR', loaded_parameter_dict, results)
get_str('base_model', 'Base Model', loaded_parameter_dict, results)
get_str('refiner_model', 'Refiner Model', loaded_parameter_dict, results)
get_float('refiner_switch', 'Refiner Switch', loaded_parameter_dict, results)
get_str('sampler', 'Sampler', loaded_parameter_dict, results)
get_str('scheduler', 'Scheduler', loaded_parameter_dict, results)
get_seed('seed', 'Seed', loaded_parameter_dict, results)
if is_generating:
results.append(gr.update())
else:
results.append(gr.update(visible=True))
results.append(gr.update(visible=False))
get_freeu('freeu', 'FreeU', loaded_parameter_dict, results)
for i in range(modules.config.default_max_lora_number):
get_lora(f'lora_combined_{i + 1}', f'LoRA {i + 1}', loaded_parameter_dict, results)
return results
def get_str(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try:
h = loaded_parameter_dict.get('Prompt', None)
h = source_dict.get(key, source_dict.get(fallback, default))
assert isinstance(h, str)
results.append(h)
except:
results.append(gr.update())
try:
h = loaded_parameter_dict.get('Negative Prompt', None)
assert isinstance(h, str)
results.append(h)
except:
results.append(gr.update())
def get_list(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try:
h = loaded_parameter_dict.get('Styles', None)
h = source_dict.get(key, source_dict.get(fallback, default))
h = eval(h)
assert isinstance(h, list)
results.append(h)
except:
results.append(gr.update())
def get_float(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try:
h = loaded_parameter_dict.get('Performance', None)
assert isinstance(h, str)
h = source_dict.get(key, source_dict.get(fallback, default))
assert h is not None
h = float(h)
results.append(h)
except:
results.append(gr.update())
def get_steps(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try:
h = loaded_parameter_dict.get('Resolution', None)
h = source_dict.get(key, source_dict.get(fallback, default))
assert h is not None
h = int(h)
# keep explicit step counts; reset to -1 only when the steps match the preset implied by 'performance'
if h not in iter(Steps) or Steps(h).name.casefold() != source_dict.get('performance', '').replace(' ', '_').casefold():
results.append(h)
return
results.append(-1)
except:
results.append(-1)
def get_resolution(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try:
h = source_dict.get(key, source_dict.get(fallback, default))
width, height = eval(h)
formatted = modules.config.add_ratio(f'{width}*{height}')
if formatted in modules.config.available_aspect_ratios:
@ -55,24 +124,22 @@ def load_parameter_button_click(raw_prompt_txt, is_generating):
results.append(gr.update())
results.append(gr.update())
def get_seed(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try:
h = loaded_parameter_dict.get('Sharpness', None)
h = source_dict.get(key, source_dict.get(fallback, default))
assert h is not None
h = float(h)
h = int(h)
results.append(False)
results.append(h)
except:
results.append(gr.update())
try:
h = loaded_parameter_dict.get('Guidance Scale', None)
assert h is not None
h = float(h)
results.append(h)
except:
results.append(gr.update())
def get_adm_guidance(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try:
h = loaded_parameter_dict.get('ADM Guidance', None)
h = source_dict.get(key, source_dict.get(fallback, default))
p, n, e = eval(h)
results.append(float(p))
results.append(float(n))
@ -82,67 +149,425 @@ def load_parameter_button_click(raw_prompt_txt, is_generating):
results.append(gr.update())
results.append(gr.update())
try:
h = loaded_parameter_dict.get('Base Model', None)
assert isinstance(h, str)
results.append(h)
except:
results.append(gr.update())
def get_freeu(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try:
h = loaded_parameter_dict.get('Refiner Model', None)
assert isinstance(h, str)
results.append(h)
h = source_dict.get(key, source_dict.get(fallback, default))
b1, b2, s1, s2 = eval(h)
results.append(True)
results.append(float(b1))
results.append(float(b2))
results.append(float(s1))
results.append(float(s2))
except:
results.append(gr.update())
try:
h = loaded_parameter_dict.get('Refiner Switch', None)
assert h is not None
h = float(h)
results.append(h)
except:
results.append(gr.update())
try:
h = loaded_parameter_dict.get('Sampler', None)
assert isinstance(h, str)
results.append(h)
except:
results.append(gr.update())
try:
h = loaded_parameter_dict.get('Scheduler', None)
assert isinstance(h, str)
results.append(h)
except:
results.append(gr.update())
try:
h = loaded_parameter_dict.get('Seed', None)
assert h is not None
h = int(h)
results.append(False)
results.append(h)
results.append(gr.update())
results.append(gr.update())
results.append(gr.update())
results.append(gr.update())
def get_lora(key: str, fallback: str | None, source_dict: dict, results: list):
try:
n, w = source_dict.get(key, source_dict.get(fallback)).split(' : ')
w = float(w)
results.append(True)
results.append(n)
results.append(w)
except:
results.append(gr.update())
results.append(gr.update())
results.append(True)
results.append('None')
results.append(1)
if is_generating:
results.append(gr.update())
else:
results.append(gr.update(visible=True))
results.append(gr.update(visible=False))
for i in range(1, 6):
try:
n, w = loaded_parameter_dict.get(f'LoRA {i}').split(' : ')
w = float(w)
results.append(n)
results.append(w)
except:
results.append(gr.update())
results.append(gr.update())
def get_sha256(filepath):
global hash_cache
if filepath not in hash_cache:
hash_cache[filepath] = calculate_sha256(filepath)
return results
return hash_cache[filepath]
def parse_meta_from_preset(preset_content):
assert isinstance(preset_content, dict)
preset_prepared = {}
items = preset_content
for settings_key, meta_key in modules.config.possible_preset_keys.items():
if settings_key == "default_loras":
loras = getattr(modules.config, settings_key)
if settings_key in items:
loras = items[settings_key]
for index, lora in enumerate(loras[:5]):
preset_prepared[f'lora_combined_{index + 1}'] = ' : '.join(map(str, lora))
elif settings_key == "default_aspect_ratio":
if settings_key in items and items[settings_key] is not None:
default_aspect_ratio = items[settings_key]
width, height = default_aspect_ratio.split('*')
else:
default_aspect_ratio = getattr(modules.config, settings_key)
width, height = default_aspect_ratio.split('×')
height = height[:height.index(" ")]
preset_prepared[meta_key] = (width, height)
else:
preset_prepared[meta_key] = items[settings_key] if settings_key in items and items[
settings_key] is not None else getattr(modules.config, settings_key)
if settings_key == "default_styles" or settings_key == "default_aspect_ratio":
preset_prepared[meta_key] = str(preset_prepared[meta_key])
return preset_prepared
class MetadataParser(ABC):
def __init__(self):
self.raw_prompt: str = ''
self.full_prompt: str = ''
self.raw_negative_prompt: str = ''
self.full_negative_prompt: str = ''
self.steps: int = 30
self.base_model_name: str = ''
self.base_model_hash: str = ''
self.refiner_model_name: str = ''
self.refiner_model_hash: str = ''
self.loras: list = []
@abstractmethod
def get_scheme(self) -> MetadataScheme:
raise NotImplementedError
@abstractmethod
def parse_json(self, metadata: dict | str) -> dict:
raise NotImplementedError
@abstractmethod
def parse_string(self, metadata: dict) -> str:
raise NotImplementedError
def set_data(self, raw_prompt, full_prompt, raw_negative_prompt, full_negative_prompt, steps, base_model_name,
refiner_model_name, loras):
self.raw_prompt = raw_prompt
self.full_prompt = full_prompt
self.raw_negative_prompt = raw_negative_prompt
self.full_negative_prompt = full_negative_prompt
self.steps = steps
self.base_model_name = Path(base_model_name).stem
base_model_path = get_file_from_folder_list(base_model_name, modules.config.paths_checkpoints)
self.base_model_hash = get_sha256(base_model_path)
if refiner_model_name not in ['', 'None']:
self.refiner_model_name = Path(refiner_model_name).stem
refiner_model_path = get_file_from_folder_list(refiner_model_name, modules.config.paths_checkpoints)
self.refiner_model_hash = get_sha256(refiner_model_path)
self.loras = []
for (lora_name, lora_weight) in loras:
if lora_name != 'None':
lora_path = get_file_from_folder_list(lora_name, modules.config.paths_loras)
lora_hash = get_sha256(lora_path)
self.loras.append((Path(lora_name).stem, lora_weight, lora_hash))
class A1111MetadataParser(MetadataParser):
def get_scheme(self) -> MetadataScheme:
return MetadataScheme.A1111
fooocus_to_a1111 = {
'raw_prompt': 'Raw prompt',
'raw_negative_prompt': 'Raw negative prompt',
'negative_prompt': 'Negative prompt',
'styles': 'Styles',
'performance': 'Performance',
'steps': 'Steps',
'sampler': 'Sampler',
'scheduler': 'Scheduler',
'guidance_scale': 'CFG scale',
'seed': 'Seed',
'resolution': 'Size',
'sharpness': 'Sharpness',
'adm_guidance': 'ADM Guidance',
'refiner_swap_method': 'Refiner Swap Method',
'adaptive_cfg': 'Adaptive CFG',
'overwrite_switch': 'Overwrite Switch',
'freeu': 'FreeU',
'base_model': 'Model',
'base_model_hash': 'Model hash',
'refiner_model': 'Refiner',
'refiner_model_hash': 'Refiner hash',
'lora_hashes': 'Lora hashes',
'lora_weights': 'Lora weights',
'created_by': 'User',
'version': 'Version'
}
def parse_json(self, metadata: str) -> dict:
metadata_prompt = ''
metadata_negative_prompt = ''
done_with_prompt = False
*lines, lastline = metadata.strip().split("\n")
if len(re_param.findall(lastline)) < 3:
lines.append(lastline)
lastline = ''
for line in lines:
line = line.strip()
if line.startswith(f"{self.fooocus_to_a1111['negative_prompt']}:"):
done_with_prompt = True
line = line[len(f"{self.fooocus_to_a1111['negative_prompt']}:"):].strip()
if done_with_prompt:
metadata_negative_prompt += ('' if metadata_negative_prompt == '' else "\n") + line
else:
metadata_prompt += ('' if metadata_prompt == '' else "\n") + line
found_styles, prompt, negative_prompt = extract_styles_from_prompt(metadata_prompt, metadata_negative_prompt)
data = {
'prompt': prompt,
'negative_prompt': negative_prompt
}
for k, v in re_param.findall(lastline):
try:
if v != '' and v[0] == '"' and v[-1] == '"':
v = unquote(v)
m = re_imagesize.match(v)
if m is not None:
data['resolution'] = str((m.group(1), m.group(2)))
else:
data[list(self.fooocus_to_a1111.keys())[list(self.fooocus_to_a1111.values()).index(k)]] = v
except Exception:
print(f"Error parsing \"{k}: {v}\"")
# workaround for multiline prompts
if 'raw_prompt' in data:
data['prompt'] = data['raw_prompt']
raw_prompt = data['raw_prompt'].replace("\n", ', ')
if metadata_prompt != raw_prompt and modules.sdxl_styles.fooocus_expansion not in found_styles:
found_styles.append(modules.sdxl_styles.fooocus_expansion)
if 'raw_negative_prompt' in data:
data['negative_prompt'] = data['raw_negative_prompt']
data['styles'] = str(found_styles)
# try to load performance based on steps, fallback for direct A1111 imports
if 'steps' in data and 'performance' not in data:
try:
data['performance'] = Performance[Steps(int(data['steps'])).name].value
except (ValueError, KeyError):
pass
if 'sampler' in data:
data['sampler'] = data['sampler'].replace(' Karras', '')
# get key
for k, v in SAMPLERS.items():
if v == data['sampler']:
data['sampler'] = k
break
for key in ['base_model', 'refiner_model']:
if key in data:
for filename in modules.config.model_filenames:
path = Path(filename)
if data[key] == path.stem:
data[key] = filename
break
if 'lora_hashes' in data:
lora_filenames = modules.config.lora_filenames.copy()
if modules.config.sdxl_lcm_lora in lora_filenames:
lora_filenames.remove(modules.config.sdxl_lcm_lora)
for li, lora in enumerate(data['lora_hashes'].split(', ')):
lora_name, lora_hash, lora_weight = lora.split(': ')
for filename in lora_filenames:
path = Path(filename)
if lora_name == path.stem:
data[f'lora_combined_{li + 1}'] = f'{filename} : {lora_weight}'
break
return data
def parse_string(self, metadata: dict) -> str:
data = {k: v for _, k, v in metadata}
width, height = eval(data['resolution'])
sampler = data['sampler']
scheduler = data['scheduler']
if sampler in SAMPLERS and SAMPLERS[sampler] != '':
sampler = SAMPLERS[sampler]
if sampler not in CIVITAI_NO_KARRAS and scheduler == 'karras':
sampler += ' Karras'
generation_params = {
self.fooocus_to_a1111['steps']: self.steps,
self.fooocus_to_a1111['sampler']: sampler,
self.fooocus_to_a1111['seed']: data['seed'],
self.fooocus_to_a1111['resolution']: f'{width}x{height}',
self.fooocus_to_a1111['guidance_scale']: data['guidance_scale'],
self.fooocus_to_a1111['sharpness']: data['sharpness'],
self.fooocus_to_a1111['adm_guidance']: data['adm_guidance'],
self.fooocus_to_a1111['base_model']: Path(data['base_model']).stem,
self.fooocus_to_a1111['base_model_hash']: self.base_model_hash,
self.fooocus_to_a1111['performance']: data['performance'],
self.fooocus_to_a1111['scheduler']: scheduler,
# workaround for multiline prompts
self.fooocus_to_a1111['raw_prompt']: self.raw_prompt,
self.fooocus_to_a1111['raw_negative_prompt']: self.raw_negative_prompt,
}
if self.refiner_model_name not in ['', 'None']:
generation_params |= {
self.fooocus_to_a1111['refiner_model']: self.refiner_model_name,
self.fooocus_to_a1111['refiner_model_hash']: self.refiner_model_hash
}
for key in ['adaptive_cfg', 'overwrite_switch', 'refiner_swap_method', 'freeu']:
if key in data:
generation_params[self.fooocus_to_a1111[key]] = data[key]
lora_hashes = []
for index, (lora_name, lora_weight, lora_hash) in enumerate(self.loras):
# workaround for Fooocus not knowing LoRA name in LoRA metadata
lora_hashes.append(f'{lora_name}: {lora_hash}: {lora_weight}')
lora_hashes_string = ', '.join(lora_hashes)
generation_params |= {
self.fooocus_to_a1111['lora_hashes']: lora_hashes_string,
self.fooocus_to_a1111['version']: data['version']
}
if modules.config.metadata_created_by != '':
generation_params[self.fooocus_to_a1111['created_by']] = modules.config.metadata_created_by
generation_params_text = ", ".join(
[k if k == v else f'{k}: {quote(v)}' for k, v in generation_params.items() if
v is not None])
positive_prompt_resolved = ', '.join(self.full_prompt)
negative_prompt_resolved = ', '.join(self.full_negative_prompt)
negative_prompt_text = f"\nNegative prompt: {negative_prompt_resolved}" if negative_prompt_resolved else ""
return f"{positive_prompt_resolved}{negative_prompt_text}\n{generation_params_text}".strip()
class FooocusMetadataParser(MetadataParser):
def get_scheme(self) -> MetadataScheme:
return MetadataScheme.FOOOCUS
def parse_json(self, metadata: dict) -> dict:
model_filenames = modules.config.model_filenames.copy()
lora_filenames = modules.config.lora_filenames.copy()
if modules.config.sdxl_lcm_lora in lora_filenames:
lora_filenames.remove(modules.config.sdxl_lcm_lora)
for key, value in metadata.items():
if value in ['', 'None']:
continue
if key in ['base_model', 'refiner_model']:
metadata[key] = self.replace_value_with_filename(key, value, model_filenames)
elif key.startswith('lora_combined_'):
metadata[key] = self.replace_value_with_filename(key, value, lora_filenames)
else:
continue
return metadata
def parse_string(self, metadata: list) -> str:
for li, (label, key, value) in enumerate(metadata):
# remove model folder paths from metadata
if key.startswith('lora_combined_'):
name, weight = value.split(' : ')
name = Path(name).stem
value = f'{name} : {weight}'
metadata[li] = (label, key, value)
res = {k: v for _, k, v in metadata}
res['full_prompt'] = self.full_prompt
res['full_negative_prompt'] = self.full_negative_prompt
res['steps'] = self.steps
res['base_model'] = self.base_model_name
res['base_model_hash'] = self.base_model_hash
if self.refiner_model_name not in ['', 'None']:
res['refiner_model'] = self.refiner_model_name
res['refiner_model_hash'] = self.refiner_model_hash
res['loras'] = self.loras
if modules.config.metadata_created_by != '':
res['created_by'] = modules.config.metadata_created_by
return json.dumps(dict(sorted(res.items())))
@staticmethod
def replace_value_with_filename(key, value, filenames):
for filename in filenames:
path = Path(filename)
if key.startswith('lora_combined_'):
name, weight = value.split(' : ')
if name == path.stem:
return f'{filename} : {weight}'
elif value == path.stem:
return filename
def get_metadata_parser(metadata_scheme: MetadataScheme) -> MetadataParser:
match metadata_scheme:
case MetadataScheme.FOOOCUS:
return FooocusMetadataParser()
case MetadataScheme.A1111:
return A1111MetadataParser()
case _:
raise NotImplementedError
def read_info_from_image(filepath) -> tuple[str | None, MetadataScheme | None]:
with Image.open(filepath) as image:
items = (image.info or {}).copy()
parameters = items.pop('parameters', None)
metadata_scheme = items.pop('fooocus_scheme', None)
exif = items.pop('exif', None)
if parameters is not None and is_json(parameters):
parameters = json.loads(parameters)
elif exif is not None:
exif = image.getexif()
# 0x9286 = UserComment
parameters = exif.get(0x9286, None)
# 0x927C = MakerNote
metadata_scheme = exif.get(0x927C, None)
if is_json(parameters):
parameters = json.loads(parameters)
try:
metadata_scheme = MetadataScheme(metadata_scheme)
except ValueError:
metadata_scheme = None
# broad fallback
if isinstance(parameters, dict):
metadata_scheme = MetadataScheme.FOOOCUS
if isinstance(parameters, str):
metadata_scheme = MetadataScheme.A1111
return parameters, metadata_scheme
def get_exif(metadata: str | None, metadata_scheme: str):
exif = Image.Exif()
# tags: see https://github.com/python-pillow/Pillow/blob/9.2.x/src/PIL/ExifTags.py
# 0x9286 = UserComment
exif[0x9286] = metadata
# 0x0131 = Software
exif[0x0131] = 'Fooocus v' + fooocus_version.version
# 0x927C = MakerNote
exif[0x927C] = metadata_scheme
return exif
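A standalone illustration of how the re_param expression above tokenizes the final A1111 parameter line (the sample line is invented):

import re

re_param = re.compile(r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)')
lastline = 'Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 4.0, Seed: 12345, Size: 1152x896'
print(re_param.findall(lastline))
# [('Steps', '30'), ('Sampler', 'DPM++ 2M SDE Karras'),
#  ('CFG scale', '4.0'), ('Seed', '12345'), ('Size', '1152x896')]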

View File

@ -17,7 +17,6 @@ import ldm_patched.controlnet.cldm
import ldm_patched.modules.model_patcher
import ldm_patched.modules.samplers
import ldm_patched.modules.args_parser
import modules.advanced_parameters as advanced_parameters
import warnings
import safetensors.torch
import modules.constants as constants
@ -29,15 +28,25 @@ from modules.patch_precision import patch_all_precision
from modules.patch_clip import patch_all_clip
sharpness = 2.0
class PatchSettings:
def __init__(self,
sharpness=2.0,
adm_scaler_end=0.3,
positive_adm_scale=1.5,
negative_adm_scale=0.8,
controlnet_softness=0.25,
adaptive_cfg=7.0):
self.sharpness = sharpness
self.adm_scaler_end = adm_scaler_end
self.positive_adm_scale = positive_adm_scale
self.negative_adm_scale = negative_adm_scale
self.controlnet_softness = controlnet_softness
self.adaptive_cfg = adaptive_cfg
self.global_diffusion_progress = 0
self.eps_record = None
adm_scaler_end = 0.3
positive_adm_scale = 1.5
negative_adm_scale = 0.8
adaptive_cfg = 7.0
global_diffusion_progress = 0
eps_record = None
patch_settings = {}
def calculate_weight_patched(self, patches, weight, key):
@ -201,14 +210,13 @@ class BrownianTreeNoiseSamplerPatched:
def compute_cfg(uncond, cond, cfg_scale, t):
global adaptive_cfg
mimic_cfg = float(adaptive_cfg)
pid = os.getpid()
mimic_cfg = float(patch_settings[pid].adaptive_cfg)
real_cfg = float(cfg_scale)
real_eps = uncond + real_cfg * (cond - uncond)
if cfg_scale > adaptive_cfg:
if cfg_scale > patch_settings[pid].adaptive_cfg:
mimicked_eps = uncond + mimic_cfg * (cond - uncond)
return real_eps * t + mimicked_eps * (1 - t)
else:
@ -216,13 +224,13 @@ def compute_cfg(uncond, cond, cfg_scale, t):
def patched_sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options=None, seed=None):
global eps_record
pid = os.getpid()
if math.isclose(cond_scale, 1.0) and not model_options.get("disable_cfg1_optimization", False):
final_x0 = calc_cond_uncond_batch(model, cond, None, x, timestep, model_options)[0]
if eps_record is not None:
eps_record = ((x - final_x0) / timestep).cpu()
if patch_settings[pid].eps_record is not None:
patch_settings[pid].eps_record = ((x - final_x0) / timestep).cpu()
return final_x0
@ -231,16 +239,16 @@ def patched_sampling_function(model, x, timestep, uncond, cond, cond_scale, mode
positive_eps = x - positive_x0
negative_eps = x - negative_x0
alpha = 0.001 * sharpness * global_diffusion_progress
alpha = 0.001 * patch_settings[pid].sharpness * patch_settings[pid].global_diffusion_progress
positive_eps_degraded = anisotropic.adaptive_anisotropic_filter(x=positive_eps, g=positive_x0)
positive_eps_degraded_weighted = positive_eps_degraded * alpha + positive_eps * (1.0 - alpha)
final_eps = compute_cfg(uncond=negative_eps, cond=positive_eps_degraded_weighted,
cfg_scale=cond_scale, t=global_diffusion_progress)
cfg_scale=cond_scale, t=patch_settings[pid].global_diffusion_progress)
if eps_record is not None:
eps_record = (final_eps / timestep).cpu()
if patch_settings[pid].eps_record is not None:
patch_settings[pid].eps_record = (final_eps / timestep).cpu()
return x - final_eps
@ -255,20 +263,19 @@ def round_to_64(x):
def sdxl_encode_adm_patched(self, **kwargs):
global positive_adm_scale, negative_adm_scale
clip_pooled = ldm_patched.modules.model_base.sdxl_pooled(kwargs, self.noise_augmentor)
width = kwargs.get("width", 1024)
height = kwargs.get("height", 1024)
target_width = width
target_height = height
pid = os.getpid()
if kwargs.get("prompt_type", "") == "negative":
width = float(width) * negative_adm_scale
height = float(height) * negative_adm_scale
width = float(width) * patch_settings[pid].negative_adm_scale
height = float(height) * patch_settings[pid].negative_adm_scale
elif kwargs.get("prompt_type", "") == "positive":
width = float(width) * positive_adm_scale
height = float(height) * positive_adm_scale
width = float(width) * patch_settings[pid].positive_adm_scale
height = float(height) * patch_settings[pid].positive_adm_scale
def embedder(number_list):
h = self.embedder(torch.tensor(number_list, dtype=torch.float32))
@ -322,7 +329,7 @@ def patched_KSamplerX0Inpaint_forward(self, x, sigma, uncond, cond, cond_scale,
def timed_adm(y, timesteps):
if isinstance(y, torch.Tensor) and int(y.dim()) == 2 and int(y.shape[1]) == 5632:
y_mask = (timesteps > 999.0 * (1.0 - float(adm_scaler_end))).to(y)[..., None]
y_mask = (timesteps > 999.0 * (1.0 - float(patch_settings[os.getpid()].adm_scaler_end))).to(y)[..., None]
y_with_adm = y[..., :2816].clone()
y_without_adm = y[..., 2816:].clone()
return y_with_adm * y_mask + y_without_adm * (1.0 - y_mask)
@ -332,6 +339,7 @@ def timed_adm(y, timesteps):
def patched_cldm_forward(self, x, hint, timesteps, context, y=None, **kwargs):
t_emb = ldm_patched.ldm.modules.diffusionmodules.openaimodel.timestep_embedding(timesteps, self.model_channels, repeat_only=False).to(x.dtype)
emb = self.time_embed(t_emb)
pid = os.getpid()
guided_hint = self.input_hint_block(hint, emb, context)
@ -357,19 +365,17 @@ def patched_cldm_forward(self, x, hint, timesteps, context, y=None, **kwargs):
h = self.middle_block(h, emb, context)
outs.append(self.middle_block_out(h, emb, context))
if advanced_parameters.controlnet_softness > 0:
if patch_settings[pid].controlnet_softness > 0:
for i in range(10):
k = 1.0 - float(i) / 9.0
outs[i] = outs[i] * (1.0 - advanced_parameters.controlnet_softness * k)
outs[i] = outs[i] * (1.0 - patch_settings[pid].controlnet_softness * k)
return outs
def patched_unet_forward(self, x, timesteps=None, context=None, y=None, control=None, transformer_options={}, **kwargs):
global global_diffusion_progress
self.current_step = 1.0 - timesteps.to(x) / 999.0
global_diffusion_progress = float(self.current_step.detach().cpu().numpy().tolist()[0])
patch_settings[os.getpid()].global_diffusion_progress = float(self.current_step.detach().cpu().numpy().tolist()[0])
y = timed_adm(y, timesteps)
@ -483,7 +489,7 @@ def patch_all():
if ldm_patched.modules.model_management.directml_enabled:
ldm_patched.modules.model_management.lowvram_available = True
ldm_patched.modules.model_management.OOM_EXCEPTION = Exception
patch_all_precision()
patch_all_clip()
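A minimal sketch of the per-process isolation this file now relies on: each generation process registers its own PatchSettings under its PID before sampling begins. This mirrors what the async worker does; the defaults match the former module-level globals:

import os
from modules.patch import PatchSettings, patch_settings  # heavy import; needs a Fooocus env

patch_settings[os.getpid()] = PatchSettings(
    sharpness=2.0,
    adm_scaler_end=0.3,
    positive_adm_scale=1.5,
    negative_adm_scale=0.8,
    controlnet_softness=0.25,
    adaptive_cfg=7.0,
)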

View File

@ -5,26 +5,48 @@ import json
import urllib.parse
from PIL import Image
from PIL.PngImagePlugin import PngInfo
from modules.util import generate_temp_filename
from modules.meta_parser import MetadataParser, get_exif
log_cache = {}
def get_current_html_path():
def get_current_html_path(output_format=None):
output_format = output_format if output_format else modules.config.default_output_format
date_string, local_temp_filename, only_name = generate_temp_filename(folder=modules.config.path_outputs,
extension='png')
extension=output_format)
html_name = os.path.join(os.path.dirname(local_temp_filename), 'log.html')
return html_name
def log(img, dic):
if args_manager.args.disable_image_log:
return
date_string, local_temp_filename, only_name = generate_temp_filename(folder=modules.config.path_outputs, extension='png')
def log(img, metadata, metadata_parser: MetadataParser | None = None, output_format=None) -> str:
path_outputs = args_manager.args.temp_path if args_manager.args.disable_image_log else modules.config.path_outputs
output_format = output_format if output_format else modules.config.default_output_format
date_string, local_temp_filename, only_name = generate_temp_filename(folder=path_outputs, extension=output_format)
os.makedirs(os.path.dirname(local_temp_filename), exist_ok=True)
Image.fromarray(img).save(local_temp_filename)
parsed_parameters = metadata_parser.parse_string(metadata) if metadata_parser is not None else ''
image = Image.fromarray(img)
if output_format == 'png':
if parsed_parameters != '':
pnginfo = PngInfo()
pnginfo.add_text('parameters', parsed_parameters)
pnginfo.add_text('fooocus_scheme', metadata_parser.get_scheme().value)
else:
pnginfo = None
image.save(local_temp_filename, pnginfo=pnginfo)
elif output_format == 'jpg':
image.save(local_temp_filename, quality=95, optimize=True, progressive=True, exif=get_exif(parsed_parameters, metadata_parser.get_scheme().value) if metadata_parser else Image.Exif())
elif output_format == 'webp':
image.save(local_temp_filename, quality=95, lossless=False, exif=get_exif(parsed_parameters, metadata_parser.get_scheme().value) if metadata_parser else Image.Exif())
else:
image.save(local_temp_filename)
if args_manager.args.disable_image_log:
return local_temp_filename
html_name = os.path.join(os.path.dirname(local_temp_filename), 'log.html')
css_styles = (
@ -32,7 +54,7 @@ def log(img, dic):
"body { background-color: #121212; color: #E0E0E0; } "
"a { color: #BB86FC; } "
".metadata { border-collapse: collapse; width: 100%; } "
".metadata .key { width: 15%; } "
".metadata .label { width: 15%; } "
".metadata .value { width: 85%; font-weight: bold; } "
".metadata th, .metadata td { border: 1px solid #4d4d4d; padding: 4px; } "
".image-container img { height: auto; max-width: 512px; display: block; padding-right:10px; } "
@ -85,13 +107,13 @@ def log(img, dic):
item = f"<div id=\"{div_name}\" class=\"image-container\"><hr><table><tr>\n"
item += f"<td><a href=\"{only_name}\" target=\"_blank\"><img src='{only_name}' onerror=\"this.closest('.image-container').style.display='none';\" loading='lazy'/></a><div>{only_name}</div></td>"
item += "<td><table class='metadata'>"
for key, value in dic:
value_txt = str(value).replace('\n', ' <br/> ')
item += f"<tr><td class='key'>{key}</td><td class='value'>{value_txt}</td></tr>\n"
for label, key, value in metadata:
value_txt = str(value).replace('\n', ' <br/> ')
item += f"<tr><td class='label'>{label}</td><td class='value'>{value_txt}</td></tr>\n"
item += "</table>"
js_txt = urllib.parse.quote(json.dumps({k: v for k, v in dic}, indent=0), safe='')
item += f"<br/><button onclick=\"to_clipboard('{js_txt}')\">Copy to Clipboard</button>"
js_txt = urllib.parse.quote(json.dumps({k: v for _, k, v in metadata}, indent=0), safe='')
item += f"</br><button onclick=\"to_clipboard('{js_txt}')\">Copy to Clipboard</button>"
item += "</td>"
item += "</tr></table></div>\n\n"
@ -105,4 +127,4 @@ def log(img, dic):
log_cache[html_name] = middle_part
return
return local_temp_filename
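A self-contained round trip of the EXIF layout that log writes for JPG/WEBP outputs (tag numbers from get_exif above; the file name is illustrative):

from PIL import Image

exif = Image.Exif()
exif[0x9286] = '{"prompt": "a cat"}'  # UserComment: serialized generation parameters
exif[0x0131] = 'Fooocus v2.2.0'       # Software
exif[0x927C] = 'fooocus'              # MakerNote: metadata scheme name

Image.new('RGB', (64, 64)).save('exif_demo.jpg', quality=95, exif=exif)

with Image.open('exif_demo.jpg') as im:
    tags = im.getexif()
    print(tags.get(0x9286), tags.get(0x927C))  # -> {"prompt": "a cat"} fooocus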

View File

@ -1,6 +1,7 @@
import os
import re
import json
import math
from modules.util import get_files_from_folder
@ -80,3 +81,38 @@ def apply_wildcards(wildcard_text, rng, directory=wildcards_path):
print(f'[Wildcards] BFS stack overflow. Current text: {wildcard_text}')
return wildcard_text
def get_words(arrays, totalMult, index):
if len(arrays) == 1:
return [arrays[0].split(',')[index]]
else:
words = arrays[0].split(',')
word = words[index % len(words)]
index -= index % len(words)
index /= len(words)
index = math.floor(index)
return [word] + get_words(arrays[1:], math.floor(totalMult/len(words)), index)
def apply_arrays(text, index):
arrays = re.findall(r'\[\[([\s,\w-]+)\]\]', text)
if len(arrays) == 0:
return text
print(f'[Arrays] processing: {text}')
mult = 1
for arr in arrays:
words = arr.split(',')
mult *= len(words)
index %= mult
chosen_words = get_words(arrays, mult, index)
i = 0
for arr in arrays:
text = text.replace(f'[[{arr}]]', chosen_words[i], 1)
i += 1
return text
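A standalone walk through the selection order get_words implements for the new [[...]] syntax (whitespace handling simplified here; note the first array varies fastest as the image index grows):

arrays = ['red, green', 'cat, dog']  # two [[...]] groups found in a prompt

total = 1
for arr in arrays:
    total *= len(arr.split(','))

for index in range(total):  # one index per generated image
    i, chosen = index, []
    for arr in arrays:
        words = [w.strip() for w in arr.split(',')]
        chosen.append(words[i % len(words)])
        i //= len(words)
    print(index, chosen)
# 0 ['red', 'cat'] / 1 ['green', 'cat'] / 2 ['red', 'dog'] / 3 ['green', 'dog']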

View File

@ -1,15 +1,20 @@
import typing
import numpy as np
import datetime
import random
import math
import os
import cv2
import json
from PIL import Image
from hashlib import sha256
import modules.sdxl_styles
LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)
HASH_SHA256_LENGTH = 10
def erode_or_dilate(x, k):
k = int(k)
@ -155,7 +160,7 @@ def generate_temp_filename(folder='./outputs/', extension='png'):
random_number = random.randint(1000, 9999)
filename = f"{time_string}_{random_number}.{extension}"
result = os.path.join(folder, date_string, filename)
return date_string, os.path.abspath(os.path.realpath(result)), filename
return date_string, os.path.abspath(result), filename
def get_files_from_folder(folder_path, exensions=None, name_filter=None):
@ -168,14 +173,190 @@ def get_files_from_folder(folder_path, exensions=None, name_filter=None):
relative_path = os.path.relpath(root, folder_path)
if relative_path == ".":
relative_path = ""
for filename in sorted(files):
for filename in sorted(files, key=lambda s: s.casefold()):
_, file_extension = os.path.splitext(filename)
if (exensions == None or file_extension.lower() in exensions) and (name_filter == None or name_filter in _):
if (exensions is None or file_extension.lower() in exensions) and (name_filter is None or name_filter in _):
path = os.path.join(relative_path, filename)
filenames.append(path)
return filenames
def calculate_sha256(filename, length=HASH_SHA256_LENGTH) -> str:
hash_sha256 = sha256()
blksize = 1024 * 1024
with open(filename, "rb") as f:
for chunk in iter(lambda: f.read(blksize), b""):
hash_sha256.update(chunk)
res = hash_sha256.hexdigest()
return res[:length] if length else res
def quote(text):
if ',' not in str(text) and '\n' not in str(text) and ':' not in str(text):
return text
return json.dumps(text, ensure_ascii=False)
def unquote(text):
if len(text) == 0 or text[0] != '"' or text[-1] != '"':
return text
try:
return json.loads(text)
except Exception:
return text
def unwrap_style_text_from_prompt(style_text, prompt):
"""
Checks the prompt to see if the style text is wrapped around it. If so,
returns True plus the prompt text without the style text. Otherwise, returns
False with the original prompt.
Note that the "cleaned" version of the style text is only used for matching
purposes here. It isn't returned; the original style text is not modified.
"""
stripped_prompt = prompt
stripped_style_text = style_text
if "{prompt}" in stripped_style_text:
# Work out whether the prompt is wrapped in the style text. If so, we
# return True and the "inner" prompt text that isn't part of the style.
try:
left, right = stripped_style_text.split("{prompt}", 2)
except ValueError as e:
# If the style text has multiple "{prompt}"s, we can't split it into
# two parts. This is an error, but we can't do anything about it.
print(f"Unable to compare style text to prompt:\n{style_text}")
print(f"Error: {e}")
return False, prompt, ''
left_pos = stripped_prompt.find(left)
right_pos = stripped_prompt.find(right)
if 0 <= left_pos < right_pos:
real_prompt = stripped_prompt[left_pos + len(left):right_pos]
prompt = stripped_prompt.replace(left + real_prompt + right, '', 1)
if prompt.startswith(", "):
prompt = prompt[2:]
if prompt.endswith(", "):
prompt = prompt[:-2]
return True, prompt, real_prompt
else:
# Work out whether the given prompt starts with the style text. If so, we
# return True and the prompt text up to where the style text starts.
if stripped_prompt.endswith(stripped_style_text):
prompt = stripped_prompt[: len(stripped_prompt) - len(stripped_style_text)]
if prompt.endswith(", "):
prompt = prompt[:-2]
return True, prompt, prompt
return False, prompt, ''
def extract_original_prompts(style, prompt, negative_prompt):
"""
Takes a style and compares it to the prompt and negative prompt. If the style
matches, returns True plus the prompt and negative prompt with the style text
removed. Otherwise, returns False with the original prompt and negative prompt.
"""
if not style.prompt and not style.negative_prompt:
return False, prompt, negative_prompt
match_positive, extracted_positive, real_prompt = unwrap_style_text_from_prompt(
style.prompt, prompt
)
if not match_positive:
return False, prompt, negative_prompt, ''
match_negative, extracted_negative, _ = unwrap_style_text_from_prompt(
style.negative_prompt, negative_prompt
)
if not match_negative:
return False, prompt, negative_prompt, ''
return True, extracted_positive, extracted_negative, real_prompt
def extract_styles_from_prompt(prompt, negative_prompt):
extracted = []
applicable_styles = []
for style_name, (style_prompt, style_negative_prompt) in modules.sdxl_styles.styles.items():
applicable_styles.append(PromptStyle(name=style_name, prompt=style_prompt, negative_prompt=style_negative_prompt))
real_prompt = ''
while True:
found_style = None
for style in applicable_styles:
is_match, new_prompt, new_neg_prompt, new_real_prompt = extract_original_prompts(
style, prompt, negative_prompt
)
if is_match:
found_style = style
prompt = new_prompt
negative_prompt = new_neg_prompt
if real_prompt == '' and new_real_prompt != '' and new_real_prompt != prompt:
real_prompt = new_real_prompt
break
if not found_style:
break
applicable_styles.remove(found_style)
extracted.append(found_style.name)
# add prompt expansion if not all styles could be resolved
if prompt != '':
if real_prompt != '':
extracted.append(modules.sdxl_styles.fooocus_expansion)
else:
# find real_prompt when only prompt expansion is selected
first_word = prompt.split(', ')[0]
first_word_positions = [i for i in range(len(prompt)) if prompt.startswith(first_word, i)]
if len(first_word_positions) > 1:
real_prompt = prompt[:first_word_positions[-1]]
extracted.append(modules.sdxl_styles.fooocus_expansion)
if real_prompt.endswith(', '):
real_prompt = real_prompt[:-2]
return list(reversed(extracted)), real_prompt, negative_prompt
class PromptStyle(typing.NamedTuple):
name: str
prompt: str
negative_prompt: str
def is_json(data: str) -> bool:
try:
loaded_json = json.loads(data)
assert isinstance(loaded_json, dict)
except (ValueError, AssertionError):
return False
return True
def get_file_from_folder_list(name, folders):
for folder in folders:
filename = os.path.abspath(os.path.realpath(os.path.join(folder, name)))
if os.path.isfile(filename):
return filename
return os.path.abspath(os.path.realpath(os.path.join(folders[0], name)))
def ordinal_suffix(number: int) -> str:
return 'th' if 10 <= number % 100 <= 20 else {1: 'st', 2: 'nd', 3: 'rd'}.get(number % 10, 'th')
def makedirs_with_log(path):
try:
os.makedirs(path, exist_ok=True)
except OSError as error:
print(f'Directory {path} could not be created, reason: {error}')
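Two quick usage sketches for the new helpers (run from a Fooocus checkout; the model file name is illustrative):

from modules.util import get_file_from_folder_list, quote, unquote

# Multi-folder resolution: the first folder containing the file wins; if none
# does, a path inside the first folder is returned for uniform error reporting.
print(get_file_from_folder_list('sd_xl_base_1.0_0.9vae.safetensors',
                                ['./models/checkpoints', './models/checkpoints_extra']))

# quote/unquote keep comma-containing values intact on the A1111 parameter line.
s = 'DPM++ 2M SDE, Karras'
assert unquote(quote(s)) == s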

View File

@ -237,6 +237,10 @@ You can install Fooocus on Apple Mac silicon (M1 or M2) with macOS 'Catalina' or
Use `python entry_with_update.py --preset anime` or `python entry_with_update.py --preset realistic` for Fooocus Anime/Realistic Edition.
### Docker
See [docker.md](docker.md)
### Download Previous Version
See the guidelines [here](https://github.com/lllyasviel/Fooocus/discussions/1405).
@ -293,7 +297,7 @@ In both ways the access is unauthenticated by default. You can add basic authent
The below things are already inside the software, and **users do not need to do anything about these**.
1. GPT2-based [prompt expansion as a dynamic style "Fooocus V2".](https://github.com/lllyasviel/Fooocus/discussions/117#raw) (similar to Midjourney's hidden pre-processsing and "raw" mode, or the LeonardoAI's Prompt Magic).
1. GPT2-based [prompt expansion as a dynamic style "Fooocus V2".](https://github.com/lllyasviel/Fooocus/discussions/117#raw) (similar to Midjourney's hidden pre-processing and "raw" mode, or the LeonardoAI's Prompt Magic).
2. Native refiner swap inside one single k-sampler. The advantage is that the refiner model can now reuse the base model's momentum (or ODE's history parameters) collected from k-sampling to achieve more coherent sampling. In Automatic1111's high-res fix and ComfyUI's node system, the base model and refiner use two independent k-samplers, which means the momentum is largely wasted, and the sampling continuity is broken. Fooocus uses its own advanced k-diffusion sampling that ensures seamless, native, and continuous swap in a refiner setup. (Update Aug 13: Actually, I discussed this with Automatic1111 several days ago, and it seems that the “native refiner swap inside one single k-sampler” is [merged]( https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12371) into the dev branch of webui. Great!)
3. Negative ADM guidance. Because the highest resolution level of XL Base does not have cross attentions, the positive and negative signals for XL's highest resolution level cannot receive enough contrasts during the CFG sampling, causing the results to look a bit plastic or overly smooth in certain cases. Fortunately, since the XL's highest resolution level is still conditioned on image aspect ratios (ADM), we can modify the adm on the positive/negative side to compensate for the lack of CFG contrast in the highest resolution level. (Update Aug 16, the IOS App [Draw Things](https://apps.apple.com/us/app/draw-things-ai-generation/id6444050820) will support Negative ADM Guidance. Great!)
4. We implemented a carefully tuned variation of Section 5.1 of ["Improving Sample Quality of Diffusion Models Using Self-Attention Guidance"](https://arxiv.org/pdf/2210.00939.pdf). The weight is set to very low, but this is Fooocus's final guarantee to make sure that the XL will never yield an overly smooth or plastic appearance (examples [here](https://github.com/lllyasviel/Fooocus/discussions/117#sharpness)). This can almost eliminate all cases for which XL still occasionally produces overly smooth results, even with negative ADM guidance. (Update 2023 Aug 18, the Gaussian kernel of SAG is changed to an anisotropic kernel for better structure preservation and fewer artifacts.)
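To make item 3 concrete, a tiny sketch of the ADM size-conditioning scales applied in `modules/patch.py` (the helper name is ours; the default scale values are from this release):

def scaled_adm(width, height, prompt_type,
               positive_adm_scale=1.5, negative_adm_scale=0.8):
    # Negative prompts are conditioned on a smaller virtual resolution and
    # positive prompts on a larger one, restoring CFG contrast at XL's
    # highest resolution level.
    scale = {'positive': positive_adm_scale, 'negative': negative_adm_scale}.get(prompt_type, 1.0)
    return width * scale, height * scale

print(scaled_adm(1024, 1024, 'positive'))  # (1536.0, 1536.0)
print(scaled_adm(1024, 1024, 'negative'))  # (819.2, 819.2)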
@ -370,7 +374,7 @@ entry_with_update.py [-h] [--listen [IP]] [--port PORT]
[--attention-split | --attention-quad | --attention-pytorch]
[--disable-xformers]
[--always-gpu | --always-high-vram | --always-normal-vram |
--always-low-vram | --always-no-vram | --always-cpu]
--always-low-vram | --always-no-vram | --always-cpu [CPU_NUM_THREADS]]
[--always-offload-from-vram] [--disable-server-log]
[--debug-mode] [--is-windows-embedded-python]
[--disable-server-info] [--share] [--preset PRESET]

5
requirements_docker.txt Normal file
View File

@ -0,0 +1,5 @@
torch==2.0.1
torchvision==0.15.2
torchaudio==2.0.2
torchtext==0.15.2
torchdata==0.6.1

View File

@ -1,2 +1 @@
gradio_root = None
last_stop = None
gradio_root = None

View File

@ -1,3 +1,16 @@
# [2.2.0](https://github.com/lllyasviel/Fooocus/releases/tag/2.2.0)
* Isolate every image generation task, truly allowing multi-user usage
* Add prompt array support: arrays change the main prompt as the image number increases. Syntax: `[[red, green, blue]] flower`
* Add optional metadata to images, allowing you to regenerate and modify them later with the same parameters
* Now supports native PNG, JPG and WEBP image generation
* Add Docker support
# [2.1.865](https://github.com/lllyasviel/Fooocus/releases/tag/2.1.865)
* Various bugfixes
* Add authentication to --listen
# 2.1.864
* New model list. See also discussions.

209
webui.py
View File

@ -11,7 +11,6 @@ import modules.async_worker as worker
import modules.constants as constants
import modules.flags as flags
import modules.gradio_hijack as grh
import modules.advanced_parameters as advanced_parameters
import modules.style_sorter as style_sorter
import modules.meta_parser
import args_manager
@ -21,18 +20,21 @@ from modules.sdxl_styles import legal_style_names
from modules.private_logger import get_current_html_path
from modules.ui_gradio_extensions import reload_javascript
from modules.auth import auth_enabled, check_auth
from modules.util import is_json
def get_task(*args):
args = list(args)
args.pop(0)
def generate_clicked(*args):
return worker.AsyncTask(args=args)
def generate_clicked(task):
import ldm_patched.modules.model_management as model_management
with model_management.interrupt_processing_mutex:
model_management.interrupt_processing = False
# outputs=[progress_html, progress_window, progress_gallery, gallery]
execution_start_time = time.perf_counter()
task = worker.AsyncTask(args=list(args))
finished = False
yield gr.update(visible=True, value=modules.html.make_progress_html(1, 'Waiting for task to start ...')), \
@ -71,6 +73,11 @@ def generate_clicked(*args):
gr.update(visible=True, value=product)
finished = True
# delete Fooocus temp images, only keep gradio temp images
if args_manager.args.disable_image_log:
for filepath in product:
os.remove(filepath)
execution_time = time.perf_counter() - execution_start_time
print(f'Total time: {execution_time:.2f} seconds')
return
@ -88,6 +95,7 @@ shared.gradio_root = gr.Blocks(
css=modules.html.css).queue()
with shared.gradio_root:
currentTask = gr.State(worker.AsyncTask(args=[]))
with gr.Row():
with gr.Column(scale=2):
with gr.Row():
@ -115,21 +123,22 @@ with shared.gradio_root:
skip_button = gr.Button(label="Skip", value="Skip", elem_classes='type_row_half', visible=False)
stop_button = gr.Button(label="Stop", value="Stop", elem_classes='type_row_half', elem_id='stop_button', visible=False)
def stop_clicked():
def stop_clicked(currentTask):
import ldm_patched.modules.model_management as model_management
shared.last_stop = 'stop'
model_management.interrupt_current_processing()
return [gr.update(interactive=False)] * 2
currentTask.last_stop = 'stop'
if currentTask.processing:
model_management.interrupt_current_processing()
return currentTask
def skip_clicked():
def skip_clicked(currentTask):
import ldm_patched.modules.model_management as model_management
shared.last_stop = 'skip'
model_management.interrupt_current_processing()
return
currentTask.last_stop = 'skip'
if currentTask.processing:
model_management.interrupt_current_processing()
return currentTask
stop_button.click(stop_clicked, outputs=[skip_button, stop_button],
queue=False, show_progress=False, _js='cancelGenerateForever')
skip_button.click(skip_clicked, queue=False, show_progress=False)
stop_button.click(stop_clicked, inputs=currentTask, outputs=currentTask, queue=False, show_progress=False, _js='cancelGenerateForever')
skip_button.click(skip_clicked, inputs=currentTask, outputs=currentTask, queue=False, show_progress=False)
with gr.Row(elem_classes='advanced_check_row'):
input_image_checkbox = gr.Checkbox(label='Input Image', value=False, container=False, elem_classes='min_check')
advanced_checkbox = gr.Checkbox(label='Advanced', value=modules.config.default_advanced_checkbox, container=False, elem_classes='min_check')
@ -150,7 +159,7 @@ with shared.gradio_root:
ip_weights = []
ip_ctrls = []
ip_ad_cols = []
for _ in range(4):
for _ in range(flags.controlnet_image_count):
with gr.Column():
ip_image = grh.Image(label='Image', source='upload', type='numpy', show_label=False, height=300)
ip_images.append(ip_image)
@ -208,6 +217,27 @@ with shared.gradio_root:
value=flags.desc_type_photo)
desc_btn = gr.Button(value='Describe this Image into Prompt')
gr.HTML('<a href="https://github.com/lllyasviel/Fooocus/discussions/1363" target="_blank">\U0001F4D4 Document</a>')
with gr.TabItem(label='Metadata') as load_tab:
with gr.Column():
metadata_input_image = grh.Image(label='Drag any image generated by Fooocus here', source='upload', type='filepath')
metadata_json = gr.JSON(label='Metadata')
metadata_import_button = gr.Button(value='Apply Metadata')
def trigger_metadata_preview(filepath):
parameters, metadata_scheme = modules.meta_parser.read_info_from_image(filepath)
results = {}
if parameters is not None:
results['parameters'] = parameters
if isinstance(metadata_scheme, flags.MetadataScheme):
results['metadata_scheme'] = metadata_scheme.value
return results
metadata_input_image.upload(trigger_metadata_preview, inputs=metadata_input_image,
outputs=metadata_json, queue=False, show_progress=True)
switch_js = "(x) => {if(x){viewer_to_bottom(100);viewer_to_bottom(500);}else{viewer_to_top();} return x;}"
down_js = "() => {viewer_to_bottom();}"
@ -230,6 +260,11 @@ with shared.gradio_root:
value=modules.config.default_aspect_ratio, info='width × height',
elem_classes='aspect_ratios')
image_number = gr.Slider(label='Image Number', minimum=1, maximum=modules.config.default_max_image_number, step=1, value=modules.config.default_image_number)
output_format = gr.Radio(label='Output Format',
choices=modules.flags.output_formats,
value=modules.config.default_output_format)
negative_prompt = gr.Textbox(label='Negative Prompt', show_label=True, placeholder="Type prompt here.",
info='Describing what you do not want to see.', lines=2,
elem_id='negative_prompt',
@ -259,7 +294,7 @@ with shared.gradio_root:
if args_manager.args.disable_image_log:
return gr.update(value='')
return gr.update(value=f'<a href="file={get_current_html_path()}" target="_blank">\U0001F4DA History Log</a>')
return gr.update(value=f'<a href="file={get_current_html_path(output_format)}" target="_blank">\U0001F4DA History Log</a>')
history_link = gr.HTML()
shared.gradio_root.load(update_history_link, outputs=history_link, queue=False, show_progress=False)
@ -319,11 +354,15 @@ with shared.gradio_root:
for i, (n, v) in enumerate(modules.config.default_loras):
with gr.Row():
lora_enabled = gr.Checkbox(label='Enable', value=True,
elem_classes=['lora_enable', 'min_check'])
lora_model = gr.Dropdown(label=f'LoRA {i + 1}',
choices=['None'] + modules.config.lora_filenames, value=n)
lora_weight = gr.Slider(label='Weight', minimum=-2, maximum=2, step=0.01, value=v,
choices=['None'] + modules.config.lora_filenames, value=n,
elem_classes='lora_model')
lora_weight = gr.Slider(label='Weight', minimum=modules.config.default_loras_min_weight,
maximum=modules.config.default_loras_max_weight, step=0.01, value=v,
elem_classes='lora_weight')
lora_ctrls += [lora_model, lora_weight]
lora_ctrls += [lora_enabled, lora_model, lora_weight]
with gr.Row():
model_refresh = gr.Button(label='Refresh', value='\U0001f504 Refresh All Files', variant='secondary', elem_classes='refresh_button')
@ -347,7 +386,7 @@ with shared.gradio_root:
step=0.001, value=0.3,
info='When to end the guidance from positive/negative ADM. ')
refiner_swap_method = gr.Dropdown(label='Refiner swap method', value='joint',
refiner_swap_method = gr.Dropdown(label='Refiner swap method', value=flags.refiner_swap_method,
choices=['joint', 'separate', 'vae'])
adaptive_cfg = gr.Slider(label='CFG Mimicking from TSNR', minimum=1.0, maximum=30.0, step=0.01,
@ -387,6 +426,23 @@ with shared.gradio_root:
info='Set as negative number to disable. For developer debugging.')
disable_preview = gr.Checkbox(label='Disable Preview', value=False,
info='Disable preview during generation.')
disable_intermediate_results = gr.Checkbox(label='Disable Intermediate Results',
value=modules.config.default_performance == 'Extreme Speed',
interactive=modules.config.default_performance != 'Extreme Speed',
info='Disable intermediate results during generation, only show final gallery.')
disable_seed_increment = gr.Checkbox(label='Disable seed increment',
info='Disable automatic seed increment when image number is > 1.',
value=False)
if not args_manager.args.disable_metadata:
save_metadata_to_images = gr.Checkbox(label='Save Metadata to Images', value=modules.config.default_save_metadata_to_images,
info='Adds parameters to generated images allowing manual regeneration.')
metadata_scheme = gr.Radio(label='Metadata Scheme', choices=flags.metadata_scheme, value=modules.config.default_metadata_scheme,
info='Image Prompt parameters are not included. Use a1111 for compatibility with Civitai.',
visible=modules.config.default_save_metadata_to_images)
save_metadata_to_images.change(lambda x: gr.update(visible=x), inputs=[save_metadata_to_images], outputs=[metadata_scheme],
queue=False, show_progress=False)
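The two scheme options serialize parameters differently: 'fooocus' stores a JSON dict, while 'a1111' stores the plaintext block that Civitai and A1111-compatible tools parse. A rough, abridged illustration of both forms (field names are illustrative):

import json

# 'fooocus' scheme: structured JSON
fooocus_style = json.dumps({'prompt': 'a cat', 'steps': 30, 'seed': 12345})

# 'a1111' scheme: plaintext key-value block understood by Civitai
a1111_style = 'a cat\nNegative prompt: \nSteps: 30, Seed: 12345'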
with gr.Tab(label='Control'):
debugging_cn_preprocessor = gr.Checkbox(label='Debug Preprocessors', value=False,
@@ -435,7 +491,7 @@ with shared.gradio_root:
'(default is 0, always process before any mask invert)')
inpaint_mask_upload_checkbox = gr.Checkbox(label='Enable Mask Upload', value=False)
invert_mask_checkbox = gr.Checkbox(label='Invert Mask', value=False)
inpaint_ctrls = [debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine,
inpaint_strength, inpaint_respective_field,
inpaint_mask_upload_checkbox, invert_mask_checkbox, inpaint_erode_or_dilate]
@@ -452,15 +508,6 @@ with shared.gradio_root:
freeu_s2 = gr.Slider(label='S2', minimum=0, maximum=4, step=0.01, value=0.95)
freeu_ctrls = [freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2]
adps = [disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name,
scheduler_name, generate_image_grid, overwrite_step, overwrite_switch, overwrite_width, overwrite_height,
overwrite_vary_strength, overwrite_upscale_strength,
mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint,
debugging_cn_preprocessor, skipping_cn_preprocessor, controlnet_softness,
canny_low_threshold, canny_high_threshold, refiner_swap_method]
adps += freeu_ctrls
adps += inpaint_ctrls
def dev_mode_checked(r):
return gr.update(visible=r)
@@ -470,24 +517,27 @@ with shared.gradio_root:
def model_refresh_clicked():
modules.config.update_all_model_names()
results = []
results += [gr.update(choices=modules.config.model_filenames), gr.update(choices=['None'] + modules.config.model_filenames)]
for i in range(5):
results += [gr.update(choices=['None'] + modules.config.lora_filenames), gr.update()]
results = [gr.update(choices=modules.config.model_filenames)]
results += [gr.update(choices=['None'] + modules.config.model_filenames)]
for i in range(modules.config.default_max_lora_number):
results += [gr.update(interactive=True), gr.update(choices=['None'] + modules.config.lora_filenames), gr.update()]
return results
model_refresh.click(model_refresh_clicked, [], [base_model, refiner_model] + lora_ctrls,
queue=False, show_progress=False)
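The updates returned by model_refresh_clicked must line up one-to-one with the outputs list [base_model, refiner_model] + lora_ctrls: two dropdown updates, then three updates per LoRA row. A standalone sketch of that contract (function and argument names are illustrative, not Fooocus's):

import gradio as gr

def build_refresh_updates(model_names, lora_names, max_loras):
    updates = [gr.update(choices=model_names),             # base_model
               gr.update(choices=['None'] + model_names)]  # refiner_model
    for _ in range(max_loras):
        updates += [gr.update(interactive=True),               # enable checkbox
                    gr.update(choices=['None'] + lora_names),  # model dropdown
                    gr.update()]                               # weight slider
    return updates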
performance_selection.change(lambda x: [gr.update(interactive=x != 'Extreme Speed')] * 11 +
[gr.update(visible=x != 'Extreme Speed')] * 1,
[gr.update(visible=x != 'Extreme Speed')] * 1 +
[gr.update(interactive=x != 'Extreme Speed', value=x == 'Extreme Speed', )] * 1,
inputs=performance_selection,
outputs=[
guidance_scale, sharpness, adm_scaler_end, adm_scaler_positive,
adm_scaler_negative, refiner_switch, refiner_model, sampler_name,
scheduler_name, adaptive_cfg, refiner_swap_method, negative_prompt
scheduler_name, adaptive_cfg, refiner_swap_method, negative_prompt, disable_intermediate_results
], queue=False, show_progress=False)
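The list arithmetic in this handler encodes three behaviors: eleven advanced controls are locked while 'Extreme Speed' is active, the negative prompt is hidden, and disable_intermediate_results is both locked and forced on. The same fan-out written long-hand, as a sketch:

import gradio as gr

def performance_updates(selection):
    fast = selection == 'Extreme Speed'
    return ([gr.update(interactive=not fast)] * 11            # lock advanced controls
            + [gr.update(visible=not fast)]                   # hide negative prompt
            + [gr.update(interactive=not fast, value=fast)])  # force the checkbox on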
output_format.input(lambda x: gr.update(output_format=x), inputs=output_format)
advanced_checkbox.change(lambda x: gr.update(visible=x), advanced_checkbox, advanced_column,
queue=False, show_progress=False) \
.then(fn=lambda: None, _js='refresh_grid_delayed', queue=False, show_progress=False)
@@ -525,29 +575,37 @@ with shared.gradio_root:
inpaint_strength, inpaint_respective_field
], show_progress=False, queue=False)
ctrls = [
ctrls = [currentTask, generate_image_grid]
ctrls += [
prompt, negative_prompt, style_selections,
performance_selection, aspect_ratios_selection, image_number, image_seed, sharpness, guidance_scale
performance_selection, aspect_ratios_selection, image_number, output_format, image_seed, sharpness, guidance_scale
]
ctrls += [base_model, refiner_model, refiner_switch] + lora_ctrls
ctrls += [input_image_checkbox, current_tab]
ctrls += [uov_method, uov_input_image]
ctrls += [outpaint_selections, inpaint_input_image, inpaint_additional_prompt, inpaint_mask_image]
ctrls += [disable_preview, disable_intermediate_results, disable_seed_increment]
ctrls += [adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg]
ctrls += [sampler_name, scheduler_name]
ctrls += [overwrite_step, overwrite_switch, overwrite_width, overwrite_height, overwrite_vary_strength]
ctrls += [overwrite_upscale_strength, mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint]
ctrls += [debugging_cn_preprocessor, skipping_cn_preprocessor, canny_low_threshold, canny_high_threshold]
ctrls += [refiner_swap_method, controlnet_softness]
ctrls += freeu_ctrls
ctrls += inpaint_ctrls
if not args_manager.args.disable_metadata:
ctrls += [save_metadata_to_images, metadata_scheme]
ctrls += ip_ctrls
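The order of ctrls is a positional contract: get_task receives these values as a flat argument list and the worker consumes them in exactly this sequence, so inserting a control mid-list silently shifts everything after it. A hedged sketch of the consumer side (names hypothetical, not the worker's actual code):

def unpack_task_args(args):
    # Consume the flat argument list in the same order ctrls was built.
    args = list(args)
    current_task = args.pop(0)
    generate_image_grid = args.pop(0)
    prompt = args.pop(0)
    negative_prompt = args.pop(0)
    style_selections = args.pop(0)
    # ... and so on, mirroring each ctrls += [...] line above
    return current_task, generate_image_grid, prompt, negative_prompt, style_selections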
state_is_generating = gr.State(False)
def parse_meta(raw_prompt_txt, is_generating):
loaded_json = None
try:
if '{' in raw_prompt_txt:
if '}' in raw_prompt_txt:
if ':' in raw_prompt_txt:
loaded_json = json.loads(raw_prompt_txt)
assert isinstance(loaded_json, dict)
except:
loaded_json = None
if is_json(raw_prompt_txt):
loaded_json = json.loads(raw_prompt_txt)
if loaded_json is None:
if is_generating:
@@ -559,37 +617,35 @@ with shared.gradio_root:
prompt.input(parse_meta, inputs=[prompt, state_is_generating], outputs=[prompt, generate_button, load_parameter_button], queue=False, show_progress=False)
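parse_meta now delegates the dict check to an is_json helper instead of the removed substring heuristics. A minimal sketch of what such a helper can look like (not necessarily Fooocus's exact implementation):

import json

def is_json(text):
    # True only when the string parses to a JSON object (dict).
    try:
        return isinstance(json.loads(text), dict)
    except (TypeError, ValueError):
        return False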
load_parameter_button.click(modules.meta_parser.load_parameter_button_click, inputs=[prompt, state_is_generating], outputs=[
advanced_checkbox,
image_number,
prompt,
negative_prompt,
style_selections,
performance_selection,
aspect_ratios_selection,
overwrite_width,
overwrite_height,
sharpness,
guidance_scale,
adm_scaler_positive,
adm_scaler_negative,
adm_scaler_end,
base_model,
refiner_model,
refiner_switch,
sampler_name,
scheduler_name,
seed_random,
image_seed,
generate_button,
load_parameter_button
] + lora_ctrls, queue=False, show_progress=False)
load_data_outputs = [advanced_checkbox, image_number, prompt, negative_prompt, style_selections,
performance_selection, overwrite_step, overwrite_switch, aspect_ratios_selection,
overwrite_width, overwrite_height, guidance_scale, sharpness, adm_scaler_positive,
adm_scaler_negative, adm_scaler_end, refiner_swap_method, adaptive_cfg, base_model,
refiner_model, refiner_switch, sampler_name, scheduler_name, seed_random, image_seed,
generate_button, load_parameter_button] + freeu_ctrls + lora_ctrls
load_parameter_button.click(modules.meta_parser.load_parameter_button_click, inputs=[prompt, state_is_generating], outputs=load_data_outputs, queue=False, show_progress=False)
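load_parameter_button_click is expected to return one update per component in load_data_outputs, in the same order. A hypothetical helper showing that mapping, leaving components untouched when the parsed metadata has no matching key:

import gradio as gr

def to_updates(parsed, keys):
    # One gr.update per output component; missing keys fall through unchanged.
    return [gr.update(value=parsed[k]) if k in parsed else gr.update()
            for k in keys]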
def trigger_metadata_import(filepath, state_is_generating):
parameters, metadata_scheme = modules.meta_parser.read_info_from_image(filepath)
if parameters is None:
print('Could not find metadata in the image!')
parsed_parameters = {}
else:
metadata_parser = modules.meta_parser.get_metadata_parser(metadata_scheme)
parsed_parameters = metadata_parser.parse_json(parameters)
return modules.meta_parser.load_parameter_button_click(parsed_parameters, state_is_generating)
metadata_import_button.click(trigger_metadata_import, inputs=[metadata_input_image, state_is_generating], outputs=load_data_outputs, queue=False, show_progress=True) \
.then(style_sorter.sort_styles, inputs=style_selections, outputs=style_selections, queue=False, show_progress=False)
generate_button.click(lambda: (gr.update(visible=True, interactive=True), gr.update(visible=True, interactive=True), gr.update(visible=False, interactive=False), [], True),
outputs=[stop_button, skip_button, generate_button, gallery, state_is_generating]) \
.then(fn=refresh_seed, inputs=[seed_random, image_seed], outputs=image_seed) \
.then(advanced_parameters.set_all_advanced_parameters, inputs=adps) \
.then(fn=generate_clicked, inputs=ctrls, outputs=[progress_html, progress_window, progress_gallery, gallery]) \
.then(fn=get_task, inputs=ctrls, outputs=currentTask) \
.then(fn=generate_clicked, inputs=currentTask, outputs=[progress_html, progress_window, progress_gallery, gallery]) \
.then(lambda: (gr.update(visible=True, interactive=True), gr.update(visible=False, interactive=False), gr.update(visible=False, interactive=False), False),
outputs=[generate_button, stop_button, skip_button, state_is_generating]) \
.then(fn=update_history_link, outputs=history_link) \
@@ -626,5 +682,6 @@ shared.gradio_root.launch(
server_port=args_manager.args.port,
share=args_manager.args.share,
auth=check_auth if (args_manager.args.share or args_manager.args.listen) and auth_enabled else None,
allowed_paths=[modules.config.path_outputs],
blocked_paths=[constants.AUTH_FILENAME]
)
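allowed_paths whitelists the outputs directory so that the file= History Log link above can actually be served, while blocked_paths keeps the auth file unreachable even if it lives under an allowed directory. A minimal standalone illustration of the two launch arguments (paths hypothetical):

import gradio as gr

with gr.Blocks() as demo:
    demo_link = gr.HTML('<a href="file=outputs/log.html">History</a>')

demo.launch(allowed_paths=['outputs'],    # serve generated files
            blocked_paths=['auth.json'])  # never serve credentials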

100
wildcards/animal.txt Normal file

@@ -0,0 +1,100 @@
Alligator
Ant
Antelope
Armadillo
Badger
Bat
Bear
Beaver
Bison
Boar
Bobcat
Bull
Camel
Chameleon
Cheetah
Chicken
Chihuahua
Chimpanzee
Chinchilla
Chipmunk
Comodo Dragon
Cow
Coyote
Crocodile
Crow
Deer
Dinosaur
Dolphin
Donkey
Duck
Eagle
Eel
Elephant
Elk
Emu
Falcon
Ferret
Flamingo
Flying Squirrel
Giraffe
Goose
Guinea pig
Hawk
Hedgehog
Hippopotamus
Horse
Hummingbird
Hyena
Jackal
Jaguar
Jellyfish
Kangaroo
King Cobra
Koala bear
Leopard
Lion
Lizard
Magpie
Marten
Meerkat
Mole
Monkey
Moose
Mouse
Octopus
Okapi
Orangutan
Ostrich
Otter
Owl
Panda
Pangolin
Panther
Penguin
Pig
Porcupine
Possum
Puma
Quokka
Rabbit
Raccoon
Raven
Reindeer
Rhinoceros
Seal
Shark
Sheep
Snail
Snake
Sparrow
Spider
Squirrel
Swallow
Tiger
Walrus
Whale
Wolf
Wombat
Yak
Zebra
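This list backs the prompt wildcard feature: a __animal__ token in a prompt is replaced at generation time with a random line from wildcards/animal.txt. A minimal sketch of that expansion, not Fooocus's exact implementation:

import random
from pathlib import Path

def apply_wildcards(prompt, wildcards_dir='wildcards', seed=None):
    # Replace each __name__ token with a random line from wildcards/name.txt.
    rng = random.Random(seed)
    for path in Path(wildcards_dir).glob('*.txt'):
        token = f'__{path.stem}__'
        if token in prompt:
            options = [line.strip() for line in
                       path.read_text(encoding='utf-8').splitlines()
                       if line.strip()]
            prompt = prompt.replace(token, rng.choice(options))
    return prompt

For example, apply_wildcards('a photo of a __animal__ in the snow', seed=42) resolves the token to one of the hundred entries above.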