Compare commits


No commits in common. "main" and "2.2.0-rc1" have entirely different histories.

96 changed files with 1498 additions and 5723 deletions


@@ -1,54 +1 @@
__pycache__
.idea
*.ckpt
*.safetensors
*.pth
*.pt
*.bin
*.patch
*.backup
*.corrupted
*.partial
*.onnx
sorted_styles.json
/input
/cache
/language/default.json
/test_imgs
config.txt
config_modification_tutorial.txt
user_path_config.txt
user_path_config-deprecated.txt
/modules/*.png
/repositories
/fooocus_env
/venv
/tmp
/ui-config.json
/outputs
/config.json
/log
/webui.settings.bat
/embeddings
/styles.csv
/params.txt
/styles.csv.bak
/webui-user.bat
/webui-user.sh
/interrogate
/user.css
/.idea
/notification.ogg
/notification.mp3
/SwinIR
/textual_inversion
.vscode
/extensions
/test/stdout.txt
/test/stderr.txt
/cache.json*
/config_states/
/node_modules
/package-lock.json
/.coverage*
/auth.json
.DS_Store

.gitattributes (vendored): 3 lines changed

@@ -1,3 +0,0 @@
# Ensure that shell scripts always use lf line endings, e.g. entrypoint.sh for docker
* text=auto
*.sh text eol=lf


@@ -16,12 +16,11 @@ body:
 description: |
 Please perform basic debugging to see if your configuration is the cause of the issue.
 Basic debug procedure
- 1. Update Fooocus - sometimes things just need to be updated
- 2. Backup and remove your config.txt - check if the issue is caused by bad configuration
- 3. Try a fresh installation of Fooocus in a different directory - see if a clean installation solves the issue
+ 2. Update Fooocus - sometimes things just need to be updated
+ 3. Backup and remove your config.txt - check if the issue is caused by bad configuration
+ 5. Try a fresh installation of Fooocus in a different directory - see if a clean installation solves the issue
 Before making a issue report please, check that the issue hasn't been reported recently.
 options:
-- label: The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)
 - label: The issue exists on a clean installation of Fooocus
 - label: The issue exists in the current version of Fooocus
 - label: The issue has not been reported before recently
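The config backup step in the debug procedure above is just a file rename; a minimal sketch (run from the Fooocus root, backup file name chosen arbitrarily):

```sh
# Set the user config aside so Fooocus falls back to defaults on the next launch
mv config.txt config.txt.backup
```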


@@ -1,6 +0,0 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "monthly"


@@ -1,47 +0,0 @@
name: Docker image build
on:
push:
branches:
- main
tags:
- v*
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v5
- name: Log in to the Container registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ghcr.io/${{ github.repository_owner }}/${{ github.event.repository.name }}
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=edge,branch=main
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: .
file: ./Dockerfile
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

.gitignore (vendored): 1 line changed

@@ -10,7 +10,6 @@ __pycache__
 *.partial
 *.onnx
 sorted_styles.json
-hash_cache.txt
 /input
 /cache
 /language/default.json


@@ -1,4 +1,4 @@
-FROM nvidia/cuda:12.4.1-base-ubuntu22.04
+FROM nvidia/cuda:12.3.1-base-ubuntu22.04
 ENV DEBIAN_FRONTEND noninteractive
 ENV CMDARGS --listen
@@ -10,7 +10,7 @@ RUN apt-get update -y && \
 COPY requirements_docker.txt requirements_versions.txt /tmp/
 RUN pip install --no-cache-dir -r /tmp/requirements_docker.txt -r /tmp/requirements_versions.txt && \
     rm -f /tmp/requirements_docker.txt /tmp/requirements_versions.txt
-RUN pip install --no-cache-dir xformers==0.0.23 --no-dependencies
+RUN pip install --no-cache-dir xformers==0.0.22 --no-dependencies
 RUN curl -fsL -o /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2 https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64 && \
     chmod +x /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2
@@ -23,7 +23,7 @@ RUN chown -R user:user /content
 WORKDIR /content
 USER user
-COPY --chown=user:user . /content/app
+RUN git clone https://github.com/lllyasviel/Fooocus /content/app
 RUN mv /content/app/models /content/app/models.org
 CMD [ "sh", "-c", "/content/entrypoint.sh ${CMDARGS}" ]


@@ -1,10 +1,10 @@
 import ldm_patched.modules.args_parser as args_parser
+import os
+
+from tempfile import gettempdir

 args_parser.parser.add_argument("--share", action='store_true', help="Set whether to share on Gradio.")
 args_parser.parser.add_argument("--preset", type=str, default=None, help="Apply specified UI preset.")
-args_parser.parser.add_argument("--disable-preset-selection", action='store_true',
-                                help="Disables preset selection in Gradio.")

 args_parser.parser.add_argument("--language", type=str, default='default',
                                 help="Translate UI using json files in [language] folder. "
@@ -17,7 +17,7 @@ args_parser.parser.add_argument("--disable-offload-from-vram", action="store_tru
 args_parser.parser.add_argument("--theme", type=str, help="launches the UI with light or dark theme", default=None)
 args_parser.parser.add_argument("--disable-image-log", action='store_true',
-                                help="Prevent writing images and logs to the outputs folder.")
+                                help="Prevent writing images and logs to hard drive.")
 args_parser.parser.add_argument("--disable-analytics", action='store_true',
                                 help="Disables analytics for Gradio.")
@@ -28,17 +28,8 @@ args_parser.parser.add_argument("--disable-metadata", action='store_true',
 args_parser.parser.add_argument("--disable-preset-download", action='store_true',
                                 help="Disables downloading models for presets", default=False)
-args_parser.parser.add_argument("--disable-enhance-output-sorting", action='store_true',
-                                help="Disables enhance output sorting for final image gallery.")
-args_parser.parser.add_argument("--enable-auto-describe-image", action='store_true',
-                                help="Enables automatic description of uov and enhance image when prompt is empty", default=False)
 args_parser.parser.add_argument("--always-download-new-model", action='store_true',
-                                help="Always download newer models", default=False)
+                                help="Always download newer models ", default=False)
-args_parser.parser.add_argument("--rebuild-hash-cache", help="Generates missing model and LoRA hashes.",
-                                type=int, nargs="?", metavar="CPU_NUM_THREADS", const=-1)

 args_parser.parser.set_defaults(
     disable_cuda_malloc=True,
@@ -58,4 +49,7 @@ if args_parser.args.disable_analytics:
 if args_parser.args.disable_in_browser:
     args_parser.args.in_browser = False

+if args_parser.args.temp_path is None:
+    args_parser.args.temp_path = os.path.join(gettempdir(), 'Fooocus')
+
 args = args_parser.args


@@ -1,150 +1,5 @@
/* based on https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/v1.6.0/style.css */
.loader-container {
display: flex; /* Use flex to align items horizontally */
align-items: center; /* Center items vertically within the container */
white-space: nowrap; /* Prevent line breaks within the container */
}
.loader {
border: 8px solid #f3f3f3; /* Light grey */
border-top: 8px solid #3498db; /* Blue */
border-radius: 50%;
width: 30px;
height: 30px;
animation: spin 2s linear infinite;
}
@keyframes spin {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
/* Style the progress bar */
progress {
appearance: none; /* Remove default styling */
height: 20px; /* Set the height of the progress bar */
border-radius: 5px; /* Round the corners of the progress bar */
background-color: #f3f3f3; /* Light grey background */
width: 100%;
vertical-align: middle !important;
}
/* Style the progress bar container */
.progress-container {
margin-left: 20px;
margin-right: 20px;
flex-grow: 1; /* Allow the progress container to take up remaining space */
}
/* Set the color of the progress bar fill */
progress::-webkit-progress-value {
background-color: #3498db; /* Blue color for the fill */
}
progress::-moz-progress-bar {
background-color: #3498db; /* Blue color for the fill in Firefox */
}
/* Style the text on the progress bar */
progress::after {
content: attr(value '%'); /* Display the progress value followed by '%' */
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
color: white; /* Set text color */
font-size: 14px; /* Set font size */
}
/* Style other texts */
.loader-container > span {
margin-left: 5px; /* Add spacing between the progress bar and the text */
}
.progress-bar > .generating {
display: none !important;
}
.progress-bar{
height: 30px !important;
}
.progress-bar span {
text-align: right;
width: 215px;
}
div:has(> #positive_prompt) {
border: none;
}
#positive_prompt {
padding: 1px;
background: var(--background-fill-primary);
}
.type_row {
height: 84px !important;
}
.type_row_half {
height: 34px !important;
}
.refresh_button {
border: none !important;
background: none !important;
font-size: none !important;
box-shadow: none !important;
}
.advanced_check_row {
width: 330px !important;
}
.min_check {
min-width: min(1px, 100%) !important;
}
.resizable_area {
resize: vertical;
overflow: auto !important;
}
.performance_selection label {
width: 140px !important;
}
.aspect_ratios label {
flex: calc(50% - 5px) !important;
}
.aspect_ratios label span {
white-space: nowrap !important;
}
.aspect_ratios label input {
margin-left: -5px !important;
}
.lora_enable label {
height: 100%;
}
.lora_enable label input {
margin: auto;
}
.lora_enable label span {
display: none;
}
@-moz-document url-prefix() {
.lora_weight input[type=number] {
width: 80px;
}
}
#context-menu{
z-index:9999;
position:absolute;
@@ -363,56 +218,3 @@ div:has(> #positive_prompt) {
#stylePreviewOverlay.lower-half {
transform: translate(-140px, -140px);
}
/* scrollable box for style selections */
.contain .tabs {
height: 100%;
}
.contain .tabs .tabitem.style_selections_tab {
height: 100%;
}
.contain .tabs .tabitem.style_selections_tab > div:first-child {
height: 100%;
}
.contain .tabs .tabitem.style_selections_tab .style_selections {
min-height: 200px;
height: 100%;
}
.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] {
position: absolute; /* remove this to disable scrolling within the checkbox-group */
overflow: auto;
padding-right: 2px;
max-height: 100%;
}
.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] label {
/* max-width: calc(35% - 15px) !important; */ /* add this to enable 3 columns layout */
flex: calc(50% - 5px) !important;
}
.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] label span {
/* white-space:nowrap; */ /* add this to disable text wrapping (better choice for 3 columns layout) */
overflow: hidden;
text-overflow: ellipsis;
}
/* styles preview tooltip */
.preview-tooltip {
background-color: #fff8;
font-family: monospace;
text-align: center;
border-radius: 5px 5px 0px 0px;
display: none; /* remove this to enable tooltip in preview image */
}
#inpaint_canvas .canvas-tooltip-info {
top: 2px;
}
#inpaint_brush_color input[type=color]{
background: none;
}


@@ -1,11 +0,0 @@
## Running unit tests
Native python:
```
python -m unittest tests/
```
Embedded python (Windows zip file installation method):
```
..\python_embeded\python.exe -m unittest
```


@@ -1,10 +1,12 @@
+version: '3.9'
 volumes:
   fooocus-data:

 services:
   app:
     build: .
-    image: ghcr.io/lllyasviel/fooocus
+    image: fooocus
     ports:
       - "7865:7865"
     environment:


@@ -1,99 +1,35 @@
 # Fooocus on Docker

-The docker image is based on NVIDIA CUDA 12.4 and PyTorch 2.1, see [Dockerfile](Dockerfile) and [requirements_docker.txt](requirements_docker.txt) for details.
-
-## Requirements
-
-- A computer with specs good enough to run Fooocus, and proprietary Nvidia drivers
-- Docker, Docker Compose, or Podman
+The docker image is based on NVIDIA CUDA 12.3 and PyTorch 2.0, see [Dockerfile](Dockerfile) and [requirements_docker.txt](requirements_docker.txt) for details.

 ## Quick start

-**More information in the [notes](#notes).**
-
-### Running with Docker Compose
+**This is just an easy way for testing. Please find more information in the [notes](#notes).**

 1. Clone this repository
-2. Run the docker container with `docker compose up`.
-
-### Running with Docker
-
-```sh
-docker run -p 7865:7865 -v fooocus-data:/content/data -it \
-    --gpus all \
-    -e CMDARGS=--listen \
-    -e DATADIR=/content/data \
-    -e config_path=/content/data/config.txt \
-    -e config_example_path=/content/data/config_modification_tutorial.txt \
-    -e path_checkpoints=/content/data/models/checkpoints/ \
-    -e path_loras=/content/data/models/loras/ \
-    -e path_embeddings=/content/data/models/embeddings/ \
-    -e path_vae_approx=/content/data/models/vae_approx/ \
-    -e path_upscale_models=/content/data/models/upscale_models/ \
-    -e path_inpaint=/content/data/models/inpaint/ \
-    -e path_controlnet=/content/data/models/controlnet/ \
-    -e path_clip_vision=/content/data/models/clip_vision/ \
-    -e path_fooocus_expansion=/content/data/models/prompt_expansion/fooocus_expansion/ \
-    -e path_outputs=/content/app/outputs/ \
-    ghcr.io/lllyasviel/fooocus
-```
-
-### Running with Podman
-
-```sh
-podman run -p 7865:7865 -v fooocus-data:/content/data -it \
-    --security-opt=no-new-privileges --cap-drop=ALL --security-opt label=type:nvidia_container_t --device=nvidia.com/gpu=all \
-    -e CMDARGS=--listen \
-    -e DATADIR=/content/data \
-    -e config_path=/content/data/config.txt \
-    -e config_example_path=/content/data/config_modification_tutorial.txt \
-    -e path_checkpoints=/content/data/models/checkpoints/ \
-    -e path_loras=/content/data/models/loras/ \
-    -e path_embeddings=/content/data/models/embeddings/ \
-    -e path_vae_approx=/content/data/models/vae_approx/ \
-    -e path_upscale_models=/content/data/models/upscale_models/ \
-    -e path_inpaint=/content/data/models/inpaint/ \
-    -e path_controlnet=/content/data/models/controlnet/ \
-    -e path_clip_vision=/content/data/models/clip_vision/ \
-    -e path_fooocus_expansion=/content/data/models/prompt_expansion/fooocus_expansion/ \
-    -e path_outputs=/content/app/outputs/ \
-    ghcr.io/lllyasviel/fooocus
-```
+2. Build the image with `docker compose build`
+3. Run the docker container with `docker compose up`. Building the image takes some time.

 When you see the message `Use the app with http://0.0.0.0:7865/` in the console, you can access the URL in your browser.

-Your models and outputs are stored in the `fooocus-data` volume, which, depending on OS, is stored in `/var/lib/docker/volumes/` (or `~/.local/share/containers/storage/volumes/` when using `podman`).
-
-## Building the container locally
-
-Clone the repository first, and open a terminal in the folder.
-
-Build with `docker`:
-```sh
-docker build . -t fooocus
-```
-Build with `podman`:
-```sh
-podman build . -t fooocus
-```
+Your models and outputs are stored in the `fooocus-data` volume, which, depending on OS, is stored in `/var/lib/docker/volumes`.

 ## Details

-### Update the container manually (`docker compose`)
+### Update the container manually

 When you are using `docker compose up` continuously, the container is not updated to the latest version of Fooocus automatically.
 Run `git pull` before executing `docker compose build --no-cache` to build an image with the latest Fooocus version.
 You can then start it with `docker compose up`

 ### Import models, outputs
-If you want to import files from models or the outputs folder, you can add the following bind mounts in the [docker-compose.yml](docker-compose.yml) or your preferred method of running the container:
+If you want to import files from models or the outputs folder, you can uncomment the following settings in the [docker-compose.yml](docker-compose.yml):
 ```
 #- ./models:/import/models # Once you import files, you don't need to mount again.
 #- ./outputs:/import/outputs # Once you import files, you don't need to mount again.
 ```
-After running the container, your files will be copied into `/content/data/models` and `/content/data/outputs`
+After running `docker compose up`, your files will be copied into `/content/data/models` and `/content/data/outputs`
-Since `/content/data` is a persistent volume folder, your files will be persisted even when you re-run the container without the above mounts.
+Since `/content/data` is a persistent volume folder, your files will be persisted even when you re-run `docker compose up --build` without above volume settings.

@@ -118,7 +54,6 @@ Docker specified environments are there. They are used by 'entrypoint.sh'
 |CMDARGS|Arguments for [entry_with_update.py](entry_with_update.py) which is called by [entrypoint.sh](entrypoint.sh)|
 |config_path|'config.txt' location|
 |config_example_path|'config_modification_tutorial.txt' location|
-|HF_MIRROR| huggingface mirror site domain|

 You can also use the same json key names and values explained in the 'config_modification_tutorial.txt' as the environments.
 See examples in the [docker-compose.yml](docker-compose.yml)


@@ -1,24 +0,0 @@
# https://github.com/sail-sg/EditAnything/blob/main/sam2groundingdino_edit.py
import numpy as np
from PIL import Image
from extras.inpaint_mask import SAMOptions, generate_mask_from_image
original_image = Image.open('cat.webp')
image = np.array(original_image, dtype=np.uint8)
sam_options = SAMOptions(
dino_prompt='eye',
dino_box_threshold=0.3,
dino_text_threshold=0.25,
dino_erode_or_dilate=0,
dino_debug=False,
max_detections=2,
model_type='vit_b'
)
mask_image, _, _, _ = generate_mask_from_image(image, sam_options=sam_options)
merged_masks_img = Image.fromarray(mask_image)
merged_masks_img.show()


@@ -216,9 +216,9 @@ def is_url(url_or_filename):
 def load_checkpoint(model,url_or_filename):
     if is_url(url_or_filename):
         cached_file = download_cached_file(url_or_filename, check_hash=False, progress=True)
-        checkpoint = torch.load(cached_file, map_location='cpu', weights_only=True)
+        checkpoint = torch.load(cached_file, map_location='cpu')
     elif os.path.isfile(url_or_filename):
-        checkpoint = torch.load(url_or_filename, map_location='cpu', weights_only=True)
+        checkpoint = torch.load(url_or_filename, map_location='cpu')
     else:
         raise RuntimeError('checkpoint url or path is invalid')


@@ -78,9 +78,9 @@ def blip_nlvr(pretrained='',**kwargs):
 def load_checkpoint(model,url_or_filename):
     if is_url(url_or_filename):
         cached_file = download_cached_file(url_or_filename, check_hash=False, progress=True)
-        checkpoint = torch.load(cached_file, map_location='cpu', weights_only=True)
+        checkpoint = torch.load(cached_file, map_location='cpu')
     elif os.path.isfile(url_or_filename):
-        checkpoint = torch.load(url_or_filename, map_location='cpu', weights_only=True)
+        checkpoint = torch.load(url_or_filename, map_location='cpu')
     else:
         raise RuntimeError('checkpoint url or path is invalid')
     state_dict = checkpoint['model']


@@ -1,43 +0,0 @@
batch_size = 1
modelname = "groundingdino"
backbone = "swin_T_224_1k"
position_embedding = "sine"
pe_temperatureH = 20
pe_temperatureW = 20
return_interm_indices = [1, 2, 3]
backbone_freeze_keywords = None
enc_layers = 6
dec_layers = 6
pre_norm = False
dim_feedforward = 2048
hidden_dim = 256
dropout = 0.0
nheads = 8
num_queries = 900
query_dim = 4
num_patterns = 0
num_feature_levels = 4
enc_n_points = 4
dec_n_points = 4
two_stage_type = "standard"
two_stage_bbox_embed_share = False
two_stage_class_embed_share = False
transformer_activation = "relu"
dec_pred_bbox_embed_share = True
dn_box_noise_scale = 1.0
dn_label_noise_ratio = 0.5
dn_label_coef = 1.0
dn_bbox_coef = 1.0
embed_init_tgt = True
dn_labelbook_size = 2000
max_text_len = 256
text_encoder_type = "bert-base-uncased"
use_text_enhancer = True
use_fusion_layer = True
use_checkpoint = True
use_transformer_ckpt = True
use_text_cross_attention = True
text_dropout = 0.0
fusion_dropout = 0.0
fusion_droppath = 0.1
sub_sentence_present = True


@@ -1,100 +0,0 @@
from typing import Tuple, List
import ldm_patched.modules.model_management as model_management
from ldm_patched.modules.model_patcher import ModelPatcher
from modules.config import path_inpaint
from modules.model_loader import load_file_from_url
import numpy as np
import supervision as sv
import torch
from groundingdino.util.inference import Model
from groundingdino.util.inference import load_model, preprocess_caption, get_phrases_from_posmap
class GroundingDinoModel(Model):
def __init__(self):
self.config_file = 'extras/GroundingDINO/config/GroundingDINO_SwinT_OGC.py'
self.model = None
self.load_device = torch.device('cpu')
self.offload_device = torch.device('cpu')
@torch.no_grad()
@torch.inference_mode()
def predict_with_caption(
self,
image: np.ndarray,
caption: str,
box_threshold: float = 0.35,
text_threshold: float = 0.25
) -> Tuple[sv.Detections, torch.Tensor, torch.Tensor, List[str]]:
if self.model is None:
filename = load_file_from_url(
url="https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth",
file_name='groundingdino_swint_ogc.pth',
model_dir=path_inpaint)
model = load_model(model_config_path=self.config_file, model_checkpoint_path=filename)
self.load_device = model_management.text_encoder_device()
self.offload_device = model_management.text_encoder_offload_device()
model.to(self.offload_device)
self.model = ModelPatcher(model, load_device=self.load_device, offload_device=self.offload_device)
model_management.load_model_gpu(self.model)
processed_image = GroundingDinoModel.preprocess_image(image_bgr=image).to(self.load_device)
boxes, logits, phrases = predict(
model=self.model,
image=processed_image,
caption=caption,
box_threshold=box_threshold,
text_threshold=text_threshold,
device=self.load_device)
source_h, source_w, _ = image.shape
detections = GroundingDinoModel.post_process_result(
source_h=source_h,
source_w=source_w,
boxes=boxes,
logits=logits)
return detections, boxes, logits, phrases
def predict(
model,
image: torch.Tensor,
caption: str,
box_threshold: float,
text_threshold: float,
device: str = "cuda"
) -> Tuple[torch.Tensor, torch.Tensor, List[str]]:
caption = preprocess_caption(caption=caption)
# override to use model wrapped by patcher
model = model.model.to(device)
image = image.to(device)
with torch.no_grad():
outputs = model(image[None], captions=[caption])
prediction_logits = outputs["pred_logits"].cpu().sigmoid()[0] # prediction_logits.shape = (nq, 256)
prediction_boxes = outputs["pred_boxes"].cpu()[0] # prediction_boxes.shape = (nq, 4)
mask = prediction_logits.max(dim=1)[0] > box_threshold
logits = prediction_logits[mask] # logits.shape = (n, 256)
boxes = prediction_boxes[mask] # boxes.shape = (n, 4)
tokenizer = model.tokenizer
tokenized = tokenizer(caption)
phrases = [
get_phrases_from_posmap(logit > text_threshold, tokenized, tokenizer).replace('.', '')
for logit
in logits
]
return boxes, logits.max(dim=1)[0], phrases
default_groundingdino = GroundingDinoModel().predict_with_caption


@@ -1,60 +0,0 @@
import os
import numpy as np
import torch
from transformers import CLIPConfig, CLIPImageProcessor
import ldm_patched.modules.model_management as model_management
import modules.config
from extras.safety_checker.models.safety_checker import StableDiffusionSafetyChecker
from ldm_patched.modules.model_patcher import ModelPatcher
safety_checker_repo_root = os.path.join(os.path.dirname(__file__), 'safety_checker')
config_path = os.path.join(safety_checker_repo_root, "configs", "config.json")
preprocessor_config_path = os.path.join(safety_checker_repo_root, "configs", "preprocessor_config.json")
class Censor:
def __init__(self):
self.safety_checker_model: ModelPatcher | None = None
self.clip_image_processor: CLIPImageProcessor | None = None
self.load_device = torch.device('cpu')
self.offload_device = torch.device('cpu')
def init(self):
if self.safety_checker_model is None and self.clip_image_processor is None:
safety_checker_model = modules.config.downloading_safety_checker_model()
self.clip_image_processor = CLIPImageProcessor.from_json_file(preprocessor_config_path)
clip_config = CLIPConfig.from_json_file(config_path)
model = StableDiffusionSafetyChecker.from_pretrained(safety_checker_model, config=clip_config)
model.eval()
self.load_device = model_management.text_encoder_device()
self.offload_device = model_management.text_encoder_offload_device()
model.to(self.offload_device)
self.safety_checker_model = ModelPatcher(model, load_device=self.load_device, offload_device=self.offload_device)
def censor(self, images: list | np.ndarray) -> list | np.ndarray:
self.init()
model_management.load_model_gpu(self.safety_checker_model)
single = False
if not isinstance(images, (list, np.ndarray)):
images = [images]
single = True
safety_checker_input = self.clip_image_processor(images, return_tensors="pt")
safety_checker_input.to(device=self.load_device)
checked_images, has_nsfw_concept = self.safety_checker_model.model(images=images,
clip_input=safety_checker_input.pixel_values)
checked_images = [image.astype(np.uint8) for image in checked_images]
if single:
checked_images = checked_images[0]
return checked_images
default_censor = Censor().censor


@@ -19,7 +19,7 @@ def init_detection_model(model_name, half=False, device='cuda', model_rootpath=N
         url=model_url, model_dir='facexlib/weights', progress=True, file_name=None, save_dir=model_rootpath)
     # TODO: clean pretrained model
-    load_net = torch.load(model_path, map_location=lambda storage, loc: storage, weights_only=True)
+    load_net = torch.load(model_path, map_location=lambda storage, loc: storage)
     # remove unnecessary 'module.'
     for k, v in deepcopy(load_net).items():
         if k.startswith('module.'):


@@ -17,7 +17,7 @@ def init_parsing_model(model_name='bisenet', half=False, device='cuda', model_ro
     model_path = load_file_from_url(
         url=model_url, model_dir='facexlib/weights', progress=True, file_name=None, save_dir=model_rootpath)
-    load_net = torch.load(model_path, map_location=lambda storage, loc: storage, weights_only=True)
+    load_net = torch.load(model_path, map_location=lambda storage, loc: storage)
     model.load_state_dict(load_net, strict=True)
     model.eval()
     model = model.to(device)


@@ -1,130 +0,0 @@
import sys
import modules.config
import numpy as np
import torch
from extras.GroundingDINO.util.inference import default_groundingdino
from extras.sam.predictor import SamPredictor
from rembg import remove, new_session
from segment_anything import sam_model_registry
from segment_anything.utils.amg import remove_small_regions
class SAMOptions:
def __init__(self,
# GroundingDINO
dino_prompt: str = '',
dino_box_threshold=0.3,
dino_text_threshold=0.25,
dino_erode_or_dilate=0,
dino_debug=False,
# SAM
max_detections=2,
model_type='vit_b'
):
self.dino_prompt = dino_prompt
self.dino_box_threshold = dino_box_threshold
self.dino_text_threshold = dino_text_threshold
self.dino_erode_or_dilate = dino_erode_or_dilate
self.dino_debug = dino_debug
self.max_detections = max_detections
self.model_type = model_type
def optimize_masks(masks: torch.Tensor) -> torch.Tensor:
"""
removes small disconnected regions and holes
"""
fine_masks = []
for mask in masks.to('cpu').numpy(): # masks: [num_masks, 1, h, w]
fine_masks.append(remove_small_regions(mask[0], 400, mode="holes")[0])
masks = np.stack(fine_masks, axis=0)[:, np.newaxis]
return torch.from_numpy(masks)
def generate_mask_from_image(image: np.ndarray, mask_model: str = 'sam', extras=None,
sam_options: SAMOptions | None = SAMOptions) -> tuple[np.ndarray | None, int | None, int | None, int | None]:
dino_detection_count = 0
sam_detection_count = 0
sam_detection_on_mask_count = 0
if image is None:
return None, dino_detection_count, sam_detection_count, sam_detection_on_mask_count
if extras is None:
extras = {}
if 'image' in image:
image = image['image']
if mask_model != 'sam' or sam_options is None:
result = remove(
image,
session=new_session(mask_model, **extras),
only_mask=True,
**extras
)
return result, dino_detection_count, sam_detection_count, sam_detection_on_mask_count
detections, boxes, logits, phrases = default_groundingdino(
image=image,
caption=sam_options.dino_prompt,
box_threshold=sam_options.dino_box_threshold,
text_threshold=sam_options.dino_text_threshold
)
H, W = image.shape[0], image.shape[1]
boxes = boxes * torch.Tensor([W, H, W, H])
boxes[:, :2] = boxes[:, :2] - boxes[:, 2:] / 2
boxes[:, 2:] = boxes[:, 2:] + boxes[:, :2]
sam_checkpoint = modules.config.download_sam_model(sam_options.model_type)
sam = sam_model_registry[sam_options.model_type](checkpoint=sam_checkpoint)
sam_predictor = SamPredictor(sam)
final_mask_tensor = torch.zeros((image.shape[0], image.shape[1]))
dino_detection_count = boxes.size(0)
if dino_detection_count > 0:
sam_predictor.set_image(image)
if sam_options.dino_erode_or_dilate != 0:
for index in range(boxes.size(0)):
assert boxes.size(1) == 4
boxes[index][0] -= sam_options.dino_erode_or_dilate
boxes[index][1] -= sam_options.dino_erode_or_dilate
boxes[index][2] += sam_options.dino_erode_or_dilate
boxes[index][3] += sam_options.dino_erode_or_dilate
if sam_options.dino_debug:
from PIL import ImageDraw, Image
debug_dino_image = Image.new("RGB", (image.shape[1], image.shape[0]), color="black")
draw = ImageDraw.Draw(debug_dino_image)
for box in boxes.numpy():
draw.rectangle(box.tolist(), fill="white")
return np.array(debug_dino_image), dino_detection_count, sam_detection_count, sam_detection_on_mask_count
transformed_boxes = sam_predictor.transform.apply_boxes_torch(boxes, image.shape[:2])
masks, _, _ = sam_predictor.predict_torch(
point_coords=None,
point_labels=None,
boxes=transformed_boxes,
multimask_output=False,
)
masks = optimize_masks(masks)
sam_detection_count = len(masks)
if sam_options.max_detections == 0:
sam_options.max_detections = sys.maxsize
sam_objects = min(len(logits), sam_options.max_detections)
for obj_ind in range(sam_objects):
mask_tensor = masks[obj_ind][0]
final_mask_tensor += mask_tensor
sam_detection_on_mask_count += 1
final_mask_tensor = (final_mask_tensor > 0).to('cpu').numpy()
mask_image = np.dstack((final_mask_tensor, final_mask_tensor, final_mask_tensor)) * 255
mask_image = np.array(mask_image, dtype=np.uint8)
return mask_image, dino_detection_count, sam_detection_count, sam_detection_on_mask_count


@@ -104,7 +104,7 @@ def load_ip_adapter(clip_vision_path, ip_negative_path, ip_adapter_path):
     offload_device = torch.device('cpu')
     use_fp16 = model_management.should_use_fp16(device=load_device)
-    ip_state_dict = torch.load(ip_adapter_path, map_location="cpu", weights_only=True)
+    ip_state_dict = torch.load(ip_adapter_path, map_location="cpu")
     plus = "latents" in ip_state_dict["image_proj"]
     cross_attention_dim = ip_state_dict["ip_adapter"]["1.to_k_ip.weight"].shape[1]
     sdxl = cross_attention_dim == 2048


@@ -1,171 +0,0 @@
{
"_name_or_path": "clip-vit-large-patch14/",
"architectures": [
"SafetyChecker"
],
"initializer_factor": 1.0,
"logit_scale_init_value": 2.6592,
"model_type": "clip",
"projection_dim": 768,
"text_config": {
"_name_or_path": "",
"add_cross_attention": false,
"architectures": null,
"attention_dropout": 0.0,
"bad_words_ids": null,
"bos_token_id": 0,
"chunk_size_feed_forward": 0,
"cross_attention_hidden_size": null,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout": 0.0,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 2,
"exponential_decay_length_penalty": null,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"hidden_act": "quick_gelu",
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_factor": 1.0,
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 77,
"min_length": 0,
"model_type": "clip_text_model",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beam_groups": 1,
"num_beams": 1,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 1,
"prefix": null,
"problem_type": null,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.21.0.dev0",
"typical_p": 1.0,
"use_bfloat16": false,
"vocab_size": 49408
},
"text_config_dict": {
"hidden_size": 768,
"intermediate_size": 3072,
"num_attention_heads": 12,
"num_hidden_layers": 12
},
"torch_dtype": "float32",
"transformers_version": null,
"vision_config": {
"_name_or_path": "",
"add_cross_attention": false,
"architectures": null,
"attention_dropout": 0.0,
"bad_words_ids": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"cross_attention_hidden_size": null,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout": 0.0,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": null,
"exponential_decay_length_penalty": null,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"hidden_act": "quick_gelu",
"hidden_size": 1024,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"image_size": 224,
"initializer_factor": 1.0,
"initializer_range": 0.02,
"intermediate_size": 4096,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "clip_vision_model",
"no_repeat_ngram_size": 0,
"num_attention_heads": 16,
"num_beam_groups": 1,
"num_beams": 1,
"num_hidden_layers": 24,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": null,
"patch_size": 14,
"prefix": null,
"problem_type": null,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.21.0.dev0",
"typical_p": 1.0,
"use_bfloat16": false
},
"vision_config_dict": {
"hidden_size": 1024,
"intermediate_size": 4096,
"num_attention_heads": 16,
"num_hidden_layers": 24,
"patch_size": 14
}
}


@@ -1,20 +0,0 @@
{
"crop_size": 224,
"do_center_crop": true,
"do_convert_rgb": true,
"do_normalize": true,
"do_resize": true,
"feature_extractor_type": "CLIPFeatureExtractor",
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"resample": 3,
"size": 224
}


@@ -1,126 +0,0 @@
# from https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import torch
import torch.nn as nn
from transformers import CLIPConfig, CLIPVisionModel, PreTrainedModel
from transformers.utils import logging
logger = logging.get_logger(__name__)
def cosine_distance(image_embeds, text_embeds):
normalized_image_embeds = nn.functional.normalize(image_embeds)
normalized_text_embeds = nn.functional.normalize(text_embeds)
return torch.mm(normalized_image_embeds, normalized_text_embeds.t())
class StableDiffusionSafetyChecker(PreTrainedModel):
config_class = CLIPConfig
main_input_name = "clip_input"
_no_split_modules = ["CLIPEncoderLayer"]
def __init__(self, config: CLIPConfig):
super().__init__(config)
self.vision_model = CLIPVisionModel(config.vision_config)
self.visual_projection = nn.Linear(config.vision_config.hidden_size, config.projection_dim, bias=False)
self.concept_embeds = nn.Parameter(torch.ones(17, config.projection_dim), requires_grad=False)
self.special_care_embeds = nn.Parameter(torch.ones(3, config.projection_dim), requires_grad=False)
self.concept_embeds_weights = nn.Parameter(torch.ones(17), requires_grad=False)
self.special_care_embeds_weights = nn.Parameter(torch.ones(3), requires_grad=False)
@torch.no_grad()
def forward(self, clip_input, images):
pooled_output = self.vision_model(clip_input)[1] # pooled_output
image_embeds = self.visual_projection(pooled_output)
# we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds).cpu().float().numpy()
cos_dist = cosine_distance(image_embeds, self.concept_embeds).cpu().float().numpy()
result = []
batch_size = image_embeds.shape[0]
for i in range(batch_size):
result_img = {"special_scores": {}, "special_care": [], "concept_scores": {}, "bad_concepts": []}
# increase this value to create a stronger `nfsw` filter
# at the cost of increasing the possibility of filtering benign images
adjustment = 0.0
for concept_idx in range(len(special_cos_dist[0])):
concept_cos = special_cos_dist[i][concept_idx]
concept_threshold = self.special_care_embeds_weights[concept_idx].item()
result_img["special_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3)
if result_img["special_scores"][concept_idx] > 0:
result_img["special_care"].append({concept_idx, result_img["special_scores"][concept_idx]})
adjustment = 0.01
for concept_idx in range(len(cos_dist[0])):
concept_cos = cos_dist[i][concept_idx]
concept_threshold = self.concept_embeds_weights[concept_idx].item()
result_img["concept_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3)
if result_img["concept_scores"][concept_idx] > 0:
result_img["bad_concepts"].append(concept_idx)
result.append(result_img)
has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result]
for idx, has_nsfw_concept in enumerate(has_nsfw_concepts):
if has_nsfw_concept:
if torch.is_tensor(images) or torch.is_tensor(images[0]):
images[idx] = torch.zeros_like(images[idx]) # black image
else:
images[idx] = np.zeros(images[idx].shape) # black image
if any(has_nsfw_concepts):
logger.warning(
"Potential NSFW content was detected in one or more images. A black image will be returned instead."
" Try again with a different prompt and/or seed."
)
return images, has_nsfw_concepts
@torch.no_grad()
def forward_onnx(self, clip_input: torch.Tensor, images: torch.Tensor):
pooled_output = self.vision_model(clip_input)[1] # pooled_output
image_embeds = self.visual_projection(pooled_output)
special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds)
cos_dist = cosine_distance(image_embeds, self.concept_embeds)
# increase this value to create a stronger `nsfw` filter
# at the cost of increasing the possibility of filtering benign images
adjustment = 0.0
special_scores = special_cos_dist - self.special_care_embeds_weights + adjustment
# special_scores = special_scores.round(decimals=3)
special_care = torch.any(special_scores > 0, dim=1)
special_adjustment = special_care * 0.01
special_adjustment = special_adjustment.unsqueeze(1).expand(-1, cos_dist.shape[1])
concept_scores = (cos_dist - self.concept_embeds_weights) + special_adjustment
# concept_scores = concept_scores.round(decimals=3)
has_nsfw_concepts = torch.any(concept_scores > 0, dim=1)
images[has_nsfw_concepts] = 0.0 # black image
return images, has_nsfw_concepts


@@ -1,288 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import numpy as np
import torch
from ldm_patched.modules import model_management
from ldm_patched.modules.model_patcher import ModelPatcher
from segment_anything.modeling import Sam
from typing import Optional, Tuple
from segment_anything.utils.transforms import ResizeLongestSide
class SamPredictor:
def __init__(
self,
model: Sam,
load_device=model_management.text_encoder_device(),
offload_device=model_management.text_encoder_offload_device()
) -> None:
"""
Uses SAM to calculate the image embedding for an image, and then
allow repeated, efficient mask prediction given prompts.
Arguments:
model (Sam): The model to use for mask prediction.
"""
super().__init__()
self.load_device = load_device
self.offload_device = offload_device
# can't use model.half() here as slow_conv2d_cpu is not implemented for half
model.to(self.offload_device)
self.patcher = ModelPatcher(model, load_device=self.load_device, offload_device=self.offload_device)
self.transform = ResizeLongestSide(model.image_encoder.img_size)
self.reset_image()
def set_image(
self,
image: np.ndarray,
image_format: str = "RGB",
) -> None:
"""
Calculates the image embeddings for the provided image, allowing
masks to be predicted with the 'predict' method.
Arguments:
image (np.ndarray): The image for calculating masks. Expects an
image in HWC uint8 format, with pixel values in [0, 255].
image_format (str): The color format of the image, in ['RGB', 'BGR'].
"""
assert image_format in [
"RGB",
"BGR",
], f"image_format must be in ['RGB', 'BGR'], is {image_format}."
if image_format != self.patcher.model.image_format:
image = image[..., ::-1]
# Transform the image to the form expected by the model
input_image = self.transform.apply_image(image)
input_image_torch = torch.as_tensor(input_image, device=self.load_device)
input_image_torch = input_image_torch.permute(2, 0, 1).contiguous()[None, :, :, :]
self.set_torch_image(input_image_torch, image.shape[:2])
@torch.no_grad()
def set_torch_image(
self,
transformed_image: torch.Tensor,
original_image_size: Tuple[int, ...],
) -> None:
"""
Calculates the image embeddings for the provided image, allowing
masks to be predicted with the 'predict' method. Expects the input
image to be already transformed to the format expected by the model.
Arguments:
transformed_image (torch.Tensor): The input image, with shape
1x3xHxW, which has been transformed with ResizeLongestSide.
original_image_size (tuple(int, int)): The size of the image
before transformation, in (H, W) format.
"""
assert (
len(transformed_image.shape) == 4
and transformed_image.shape[1] == 3
and max(*transformed_image.shape[2:]) == self.patcher.model.image_encoder.img_size
), f"set_torch_image input must be BCHW with long side {self.patcher.model.image_encoder.img_size}."
self.reset_image()
self.original_size = original_image_size
self.input_size = tuple(transformed_image.shape[-2:])
model_management.load_model_gpu(self.patcher)
input_image = self.patcher.model.preprocess(transformed_image.to(self.load_device))
self.features = self.patcher.model.image_encoder(input_image)
self.is_image_set = True
def predict(
self,
point_coords: Optional[np.ndarray] = None,
point_labels: Optional[np.ndarray] = None,
box: Optional[np.ndarray] = None,
mask_input: Optional[np.ndarray] = None,
multimask_output: bool = True,
return_logits: bool = False,
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
Predict masks for the given input prompts, using the currently set image.
Arguments:
point_coords (np.ndarray or None): A Nx2 array of point prompts to the
model. Each point is in (X,Y) in pixels.
point_labels (np.ndarray or None): A length N array of labels for the
point prompts. 1 indicates a foreground point and 0 indicates a
background point.
box (np.ndarray or None): A length 4 array given a box prompt to the
model, in XYXY format.
mask_input (np.ndarray): A low resolution mask input to the model, typically
coming from a previous prediction iteration. Has form 1xHxW, where
for SAM, H=W=256.
multimask_output (bool): If true, the model will return three masks.
For ambiguous input prompts (such as a single click), this will often
produce better masks than a single prediction. If only a single
mask is needed, the model's predicted quality score can be used
to select the best mask. For non-ambiguous prompts, such as multiple
input prompts, multimask_output=False can give better results.
return_logits (bool): If true, returns un-thresholded masks logits
instead of a binary mask.
Returns:
(np.ndarray): The output masks in CxHxW format, where C is the
number of masks, and (H, W) is the original image size.
(np.ndarray): An array of length C containing the model's
predictions for the quality of each mask.
(np.ndarray): An array of shape CxHxW, where C is the number
of masks and H=W=256. These low resolution logits can be passed to
a subsequent iteration as mask input.
"""
if not self.is_image_set:
raise RuntimeError("An image must be set with .set_image(...) before mask prediction.")
# Transform input prompts
coords_torch, labels_torch, box_torch, mask_input_torch = None, None, None, None
if point_coords is not None:
assert (
point_labels is not None
), "point_labels must be supplied if point_coords is supplied."
point_coords = self.transform.apply_coords(point_coords, self.original_size)
coords_torch = torch.as_tensor(point_coords, dtype=torch.float, device=self.load_device)
labels_torch = torch.as_tensor(point_labels, dtype=torch.int, device=self.load_device)
coords_torch, labels_torch = coords_torch[None, :, :], labels_torch[None, :]
if box is not None:
box = self.transform.apply_boxes(box, self.original_size)
box_torch = torch.as_tensor(box, dtype=torch.float, device=self.load_device)
box_torch = box_torch[None, :]
if mask_input is not None:
mask_input_torch = torch.as_tensor(mask_input, dtype=torch.float, device=self.load_device)
mask_input_torch = mask_input_torch[None, :, :, :]
masks, iou_predictions, low_res_masks = self.predict_torch(
coords_torch,
labels_torch,
box_torch,
mask_input_torch,
multimask_output,
return_logits=return_logits,
)
masks = masks[0].detach().cpu().numpy()
iou_predictions = iou_predictions[0].detach().cpu().numpy()
low_res_masks = low_res_masks[0].detach().cpu().numpy()
return masks, iou_predictions, low_res_masks
@torch.no_grad()
def predict_torch(
self,
point_coords: Optional[torch.Tensor],
point_labels: Optional[torch.Tensor],
boxes: Optional[torch.Tensor] = None,
mask_input: Optional[torch.Tensor] = None,
multimask_output: bool = True,
return_logits: bool = False,
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
Predict masks for the given input prompts, using the currently set image.
Input prompts are batched torch tensors and are expected to already be
transformed to the input frame using ResizeLongestSide.
Arguments:
point_coords (torch.Tensor or None): A BxNx2 array of point prompts to the
model. Each point is in (X,Y) in pixels.
point_labels (torch.Tensor or None): A BxN array of labels for the
point prompts. 1 indicates a foreground point and 0 indicates a
background point.
box (np.ndarray or None): A Bx4 array given a box prompt to the
model, in XYXY format.
mask_input (np.ndarray): A low resolution mask input to the model, typically
coming from a previous prediction iteration. Has form Bx1xHxW, where
for SAM, H=W=256. Masks returned by a previous iteration of the
predict method do not need further transformation.
multimask_output (bool): If true, the model will return three masks.
For ambiguous input prompts (such as a single click), this will often
produce better masks than a single prediction. If only a single
mask is needed, the model's predicted quality score can be used
to select the best mask. For non-ambiguous prompts, such as multiple
input prompts, multimask_output=False can give better results.
return_logits (bool): If true, returns un-thresholded masks logits
instead of a binary mask.
Returns:
(torch.Tensor): The output masks in BxCxHxW format, where C is the
number of masks, and (H, W) is the original image size.
(torch.Tensor): An array of shape BxC containing the model's
predictions for the quality of each mask.
(torch.Tensor): An array of shape BxCxHxW, where C is the number
of masks and H=W=256. These low res logits can be passed to
a subsequent iteration as mask input.
"""
if not self.is_image_set:
raise RuntimeError("An image must be set with .set_image(...) before mask prediction.")
if point_coords is not None:
points = (point_coords.to(self.load_device), point_labels.to(self.load_device))
else:
points = None
# load
if boxes is not None:
boxes = boxes.to(self.load_device)
if mask_input is not None:
mask_input = mask_input.to(self.load_device)
model_management.load_model_gpu(self.patcher)
# Embed prompts
sparse_embeddings, dense_embeddings = self.patcher.model.prompt_encoder(
points=points,
boxes=boxes,
masks=mask_input,
)
# Predict masks
low_res_masks, iou_predictions = self.patcher.model.mask_decoder(
image_embeddings=self.features,
image_pe=self.patcher.model.prompt_encoder.get_dense_pe(),
sparse_prompt_embeddings=sparse_embeddings,
dense_prompt_embeddings=dense_embeddings,
multimask_output=multimask_output,
)
# Upscale the masks to the original image resolution
masks = self.patcher.model.postprocess_masks(low_res_masks, self.input_size, self.original_size)
if not return_logits:
masks = masks > self.patcher.model.mask_threshold
return masks, iou_predictions, low_res_masks
def get_image_embedding(self) -> torch.Tensor:
"""
Returns the image embeddings for the currently set image, with
shape 1xCxHxW, where C is the embedding dimension and (H,W) are
the embedding spatial dimension of SAM (typically C=256, H=W=64).
"""
if not self.is_image_set:
raise RuntimeError(
"An image must be set with .set_image(...) to generate an embedding."
)
assert self.features is not None, "Features must exist if an image has been set."
return self.features
@property
def device(self) -> torch.device:
return self.patcher.model.device
def reset_image(self) -> None:
"""Resets the currently set image."""
self.is_image_set = False
self.features = None
self.orig_h = None
self.orig_w = None
self.input_h = None
self.input_w = None


@ -1,85 +1,69 @@
# https://github.com/city96/SD-Latent-Interposer/blob/main/interposer.py # https://github.com/city96/SD-Latent-Interposer/blob/main/interposer.py
import os import os
import safetensors.torch as sf
import torch import torch
import safetensors.torch as sf
import torch.nn as nn import torch.nn as nn
import ldm_patched.modules.model_management import ldm_patched.modules.model_management
from ldm_patched.modules.model_patcher import ModelPatcher from ldm_patched.modules.model_patcher import ModelPatcher
from modules.config import path_vae_approx from modules.config import path_vae_approx
class ResBlock(nn.Module): class Block(nn.Module):
"""Block with residuals""" def __init__(self, size):
def __init__(self, ch):
super().__init__() super().__init__()
self.join = nn.ReLU() self.join = nn.ReLU()
self.norm = nn.BatchNorm2d(ch)
self.long = nn.Sequential( self.long = nn.Sequential(
nn.Conv2d(ch, ch, kernel_size=3, stride=1, padding=1), nn.Conv2d(size, size, kernel_size=3, stride=1, padding=1),
nn.SiLU(), nn.LeakyReLU(0.1),
nn.Conv2d(ch, ch, kernel_size=3, stride=1, padding=1), nn.Conv2d(size, size, kernel_size=3, stride=1, padding=1),
nn.SiLU(), nn.LeakyReLU(0.1),
nn.Conv2d(ch, ch, kernel_size=3, stride=1, padding=1), nn.Conv2d(size, size, kernel_size=3, stride=1, padding=1),
nn.Dropout(0.1)
) )
def forward(self, x): def forward(self, x):
x = self.norm(x) y = self.long(x)
return self.join(self.long(x) + x) z = self.join(y + x)
return z
class ExtractBlock(nn.Module): class Interposer(nn.Module):
"""Increase no. of channels by [out/in]""" def __init__(self):
def __init__(self, ch_in, ch_out):
super().__init__() super().__init__()
self.join = nn.ReLU() self.chan = 4
self.short = nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=1, padding=1) self.hid = 128
self.long = nn.Sequential(
nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=1, padding=1), self.head_join = nn.ReLU()
nn.SiLU(), self.head_short = nn.Conv2d(self.chan, self.hid, kernel_size=3, stride=1, padding=1)
nn.Conv2d(ch_out, ch_out, kernel_size=3, stride=1, padding=1), self.head_long = nn.Sequential(
nn.SiLU(), nn.Conv2d(self.chan, self.hid, kernel_size=3, stride=1, padding=1),
nn.Conv2d(ch_out, ch_out, kernel_size=3, stride=1, padding=1), nn.LeakyReLU(0.1),
nn.Dropout(0.1) nn.Conv2d(self.hid, self.hid, kernel_size=3, stride=1, padding=1),
nn.LeakyReLU(0.1),
nn.Conv2d(self.hid, self.hid, kernel_size=3, stride=1, padding=1),
) )
def forward(self, x):
return self.join(self.long(x) + self.short(x))
class InterposerModel(nn.Module):
"""Main neural network"""
def __init__(self, ch_in=4, ch_out=4, ch_mid=64, scale=1.0, blocks=12):
super().__init__()
self.ch_in = ch_in
self.ch_out = ch_out
self.ch_mid = ch_mid
self.blocks = blocks
self.scale = scale
self.head = ExtractBlock(self.ch_in, self.ch_mid)
self.core = nn.Sequential( self.core = nn.Sequential(
nn.Upsample(scale_factor=self.scale, mode="nearest"), Block(self.hid),
*[ResBlock(self.ch_mid) for _ in range(blocks)], Block(self.hid),
nn.BatchNorm2d(self.ch_mid), Block(self.hid),
nn.SiLU(), )
self.tail = nn.Sequential(
nn.ReLU(),
nn.Conv2d(self.hid, self.chan, kernel_size=3, stride=1, padding=1)
) )
self.tail = nn.Conv2d(self.ch_mid, self.ch_out, kernel_size=3, stride=1, padding=1)
def forward(self, x): def forward(self, x):
y = self.head(x) y = self.head_join(
self.head_long(x) +
self.head_short(x)
)
z = self.core(y) z = self.core(y)
return self.tail(z) return self.tail(z)
vae_approx_model = None vae_approx_model = None
vae_approx_filename = os.path.join(path_vae_approx, 'xl-to-v1_interposer-v4.0.safetensors') vae_approx_filename = os.path.join(path_vae_approx, 'xl-to-v1_interposer-v3.1.safetensors')
def parse(x): def parse(x):
@ -88,7 +72,7 @@ def parse(x):
x_origin = x.clone() x_origin = x.clone()
if vae_approx_model is None: if vae_approx_model is None:
model = InterposerModel() model = Interposer()
model.eval() model.eval()
sd = sf.load_file(vae_approx_filename) sd = sf.load_file(vae_approx_filename)
model.load_state_dict(sd) model.load_state_dict(sd)
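This file converts SDXL latents into SD1.5 latent space through the interposer network loaded above. A small usage sketch follows; the module path is hypothetical, and the assumption that parse accepts and returns a BxCxHxW latent tensor is implied but not fully shown in this hunk.
import torch
import modules.vae_interposer as interposer   # hypothetical import path for the file above
sdxl_latent = torch.randn(1, 4, 128, 128)     # SDXL latent of a 1024x1024 image
sd15_latent = interposer.parse(sdxl_latent)   # xl-to-v1 conversion; keeps the 4-channel layout
print(sd15_latent.shape)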

View File

@ -8,11 +8,11 @@
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"!pip install pygit2==1.15.1\n", "!pip install pygit2==1.12.2\n",
"%cd /content\n", "%cd /content\n",
"!git clone https://github.com/lllyasviel/Fooocus.git\n", "!git clone https://github.com/lllyasviel/Fooocus.git\n",
"%cd /content/Fooocus\n", "%cd /content/Fooocus\n",
"!python entry_with_update.py --share --always-high-vram\n" "!python entry_with_update.py --share\n"
] ]
} }
], ],

View File

@ -1 +1 @@
version = '2.5.5' version = '2.2.0-rc1'

View File

@ -80,15 +80,6 @@ function refresh_style_localization() {
processNode(document.querySelector('.style_selections')); processNode(document.querySelector('.style_selections'));
} }
function refresh_aspect_ratios_label(value) {
label = document.querySelector('#aspect_ratios_accordion div span');
translation = getTranslation("Aspect Ratios");
if (typeof translation == "undefined") {
translation = "Aspect Ratios";
}
label.textContent = translation + " " + htmlDecode(value);
}
function localizeWholePage() { function localizeWholePage() {
processNode(gradioApp()); processNode(gradioApp());

View File

@ -122,43 +122,6 @@ document.addEventListener("DOMContentLoaded", function() {
initStylePreviewOverlay(); initStylePreviewOverlay();
}); });
var onAppend = function(elem, f) {
var observer = new MutationObserver(function(mutations) {
mutations.forEach(function(m) {
if (m.addedNodes.length) {
f(m.addedNodes);
}
});
});
observer.observe(elem, {childList: true});
}
function addObserverIfDesiredNodeAvailable(querySelector, callback) {
var elem = document.querySelector(querySelector);
if (!elem) {
window.setTimeout(() => addObserverIfDesiredNodeAvailable(querySelector, callback), 1000);
return;
}
onAppend(elem, callback);
}
/**
* Show reset button on toast "Connection errored out."
*/
addObserverIfDesiredNodeAvailable(".toast-wrap", function(added) {
added.forEach(function(element) {
if (element.innerText.includes("Connection errored out.")) {
window.setTimeout(function() {
document.getElementById("reset_button").classList.remove("hidden");
document.getElementById("generate_button").classList.add("hidden");
document.getElementById("skip_button").classList.add("hidden");
document.getElementById("stop_button").classList.add("hidden");
});
}
});
});
/** /**
* Add a ctrl+enter as a shortcut to start a generation * Add a ctrl+enter as a shortcut to start a generation
*/ */
@ -187,12 +150,9 @@ function initStylePreviewOverlay() {
let overlayVisible = false; let overlayVisible = false;
const samplesPath = document.querySelector("meta[name='samples-path']").getAttribute("content") const samplesPath = document.querySelector("meta[name='samples-path']").getAttribute("content")
const overlay = document.createElement('div'); const overlay = document.createElement('div');
const tooltip = document.createElement('div');
tooltip.className = 'preview-tooltip';
overlay.appendChild(tooltip);
overlay.id = 'stylePreviewOverlay'; overlay.id = 'stylePreviewOverlay';
document.body.appendChild(overlay); document.body.appendChild(overlay);
document.addEventListener('mouseover', function (e) { document.addEventListener('mouseover', function(e) {
const label = e.target.closest('.style_selections label'); const label = e.target.closest('.style_selections label');
if (!label) return; if (!label) return;
label.removeEventListener("mouseout", onMouseLeave); label.removeEventListener("mouseout", onMouseLeave);
@ -205,9 +165,6 @@ function initStylePreviewOverlay() {
"fooocus_v2", "fooocus_v2",
name.toLowerCase().replaceAll(" ", "_") name.toLowerCase().replaceAll(" ", "_")
).replaceAll("\\", "\\\\")}")`; ).replaceAll("\\", "\\\\")}")`;
tooltip.textContent = name;
function onMouseLeave() { function onMouseLeave() {
overlayVisible = false; overlayVisible = false;
overlay.style.opacity = "0"; overlay.style.opacity = "0";
@ -215,8 +172,8 @@ function initStylePreviewOverlay() {
label.removeEventListener("mouseout", onMouseLeave); label.removeEventListener("mouseout", onMouseLeave);
} }
}); });
document.addEventListener('mousemove', function (e) { document.addEventListener('mousemove', function(e) {
if (!overlayVisible) return; if(!overlayVisible) return;
overlay.style.left = `${e.clientX}px`; overlay.style.left = `${e.clientX}px`;
overlay.style.top = `${e.clientY}px`; overlay.style.top = `${e.clientY}px`;
overlay.className = e.clientY > window.innerHeight / 2 ? "lower-half" : "upper-half"; overlay.className = e.clientY > window.innerHeight / 2 ? "lower-half" : "upper-half";
@ -256,8 +213,3 @@ function set_theme(theme) {
window.location.replace(gradioURL + '?__theme=' + theme); window.location.replace(gradioURL + '?__theme=' + theme);
} }
} }
function htmlDecode(input) {
var doc = new DOMParser().parseFromString(input, "text/html");
return doc.documentElement.textContent;
}

View File

@ -642,5 +642,4 @@ onUiLoaded(async() => {
} }
applyZoomAndPan("#inpaint_canvas"); applyZoomAndPan("#inpaint_canvas");
applyZoomAndPan("#inpaint_mask_canvas");
}); });

View File

@ -4,22 +4,12 @@
"Generate": "Generate", "Generate": "Generate",
"Skip": "Skip", "Skip": "Skip",
"Stop": "Stop", "Stop": "Stop",
"Reconnect": "Reconnect",
"Input Image": "Input Image", "Input Image": "Input Image",
"Advanced": "Advanced", "Advanced": "Advanced",
"Upscale or Variation": "Upscale or Variation", "Upscale or Variation": "Upscale or Variation",
"Image Prompt": "Image Prompt", "Image Prompt": "Image Prompt",
"Inpaint or Outpaint": "Inpaint or Outpaint", "Inpaint or Outpaint (beta)": "Inpaint or Outpaint (beta)",
"Outpaint Direction": "Outpaint Direction", "Drag above image to here": "Drag above image to here",
"Enable Advanced Masking Features": "Enable Advanced Masking Features",
"Method": "Method",
"Describe": "Describe",
"Content Type": "Content Type",
"Photograph": "Photograph",
"Art/Anime": "Art/Anime",
"Apply Styles": "Apply Styles",
"Describe this Image into Prompt": "Describe this Image into Prompt",
"Image Size and Recommended Size": "Image Size and Recommended Size",
"Upscale or Variation:": "Upscale or Variation:", "Upscale or Variation:": "Upscale or Variation:",
"Disabled": "Disabled", "Disabled": "Disabled",
"Vary (Subtle)": "Vary (Subtle)", "Vary (Subtle)": "Vary (Subtle)",
@ -27,7 +17,7 @@
"Upscale (1.5x)": "Upscale (1.5x)", "Upscale (1.5x)": "Upscale (1.5x)",
"Upscale (2x)": "Upscale (2x)", "Upscale (2x)": "Upscale (2x)",
"Upscale (Fast 2x)": "Upscale (Fast 2x)", "Upscale (Fast 2x)": "Upscale (Fast 2x)",
"\ud83d\udcd4 Documentation": "\uD83D\uDCD4 Documentation", "\ud83d\udcd4 Document": "\uD83D\uDCD4 Document",
"Image": "Image", "Image": "Image",
"Stop At": "Stop At", "Stop At": "Stop At",
"Weight": "Weight", "Weight": "Weight",
@ -46,17 +36,11 @@
"Top": "Top", "Top": "Top",
"Bottom": "Bottom", "Bottom": "Bottom",
"* \"Inpaint or Outpaint\" is powered by the sampler \"DPMPP Fooocus Seamless 2M SDE Karras Inpaint Sampler\" (beta)": "* \"Inpaint or Outpaint\" is powered by the sampler \"DPMPP Fooocus Seamless 2M SDE Karras Inpaint Sampler\" (beta)", "* \"Inpaint or Outpaint\" is powered by the sampler \"DPMPP Fooocus Seamless 2M SDE Karras Inpaint Sampler\" (beta)": "* \"Inpaint or Outpaint\" is powered by the sampler \"DPMPP Fooocus Seamless 2M SDE Karras Inpaint Sampler\" (beta)",
"Advanced options": "Advanced options", "Setting": "Setting",
"Generate mask from image": "Generate mask from image",
"Settings": "Settings",
"Style": "Style", "Style": "Style",
"Styles": "Styles",
"Preset": "Preset",
"Performance": "Performance", "Performance": "Performance",
"Speed": "Speed", "Speed": "Speed",
"Quality": "Quality", "Quality": "Quality",
"Extreme Speed": "Extreme Speed",
"Lightning": "Lightning",
"Aspect Ratios": "Aspect Ratios", "Aspect Ratios": "Aspect Ratios",
"width \u00d7 height": "width \u00d7 height", "width \u00d7 height": "width \u00d7 height",
"Image Number": "Image Number", "Image Number": "Image Number",
@ -66,16 +50,9 @@
"Seed": "Seed", "Seed": "Seed",
"Disable seed increment": "Disable seed increment", "Disable seed increment": "Disable seed increment",
"Disable automatic seed increment when image number is > 1.": "Disable automatic seed increment when image number is > 1.", "Disable automatic seed increment when image number is > 1.": "Disable automatic seed increment when image number is > 1.",
"Read wildcards in order": "Read wildcards in order",
"Black Out NSFW": "Black Out NSFW",
"Use black image if NSFW is detected.": "Use black image if NSFW is detected.",
"Save only final enhanced image": "Save only final enhanced image",
"Save Metadata to Images": "Save Metadata to Images",
"Adds parameters to generated images allowing manual regeneration.": "Adds parameters to generated images allowing manual regeneration.",
"\ud83d\udcda History Log": "\uD83D\uDCDA History Log", "\ud83d\udcda History Log": "\uD83D\uDCDA History Log",
"Image Style": "Image Style", "Image Style": "Image Style",
"Fooocus V2": "Fooocus V2", "Fooocus V2": "Fooocus V2",
"Random Style": "Random Style",
"Default (Slightly Cinematic)": "Default (Slightly Cinematic)", "Default (Slightly Cinematic)": "Default (Slightly Cinematic)",
"Fooocus Masterpiece": "Fooocus Masterpiece", "Fooocus Masterpiece": "Fooocus Masterpiece",
"Fooocus Photograph": "Fooocus Photograph", "Fooocus Photograph": "Fooocus Photograph",
@ -287,7 +264,7 @@
"Volumetric Lighting": "Volumetric Lighting", "Volumetric Lighting": "Volumetric Lighting",
"Watercolor 2": "Watercolor 2", "Watercolor 2": "Watercolor 2",
"Whimsical And Playful": "Whimsical And Playful", "Whimsical And Playful": "Whimsical And Playful",
"Models": "Models", "Model": "Model",
"Base Model (SDXL only)": "Base Model (SDXL only)", "Base Model (SDXL only)": "Base Model (SDXL only)",
"sd_xl_base_1.0_0.9vae.safetensors": "sd_xl_base_1.0_0.9vae.safetensors", "sd_xl_base_1.0_0.9vae.safetensors": "sd_xl_base_1.0_0.9vae.safetensors",
"bluePencilXL_v009.safetensors": "bluePencilXL_v009.safetensors", "bluePencilXL_v009.safetensors": "bluePencilXL_v009.safetensors",
@ -328,8 +305,6 @@
"vae": "vae", "vae": "vae",
"CFG Mimicking from TSNR": "CFG Mimicking from TSNR", "CFG Mimicking from TSNR": "CFG Mimicking from TSNR",
"Enabling Fooocus's implementation of CFG mimicking for TSNR (effective when real CFG > mimicked CFG).": "Enabling Fooocus's implementation of CFG mimicking for TSNR (effective when real CFG > mimicked CFG).", "Enabling Fooocus's implementation of CFG mimicking for TSNR (effective when real CFG > mimicked CFG).": "Enabling Fooocus's implementation of CFG mimicking for TSNR (effective when real CFG > mimicked CFG).",
"CLIP Skip": "CLIP Skip",
"Bypass CLIP layers to avoid overfitting (use 1 to not skip any layers, 2 is recommended).": "Bypass CLIP layers to avoid overfitting (use 1 to not skip any layers, 2 is recommended).",
"Sampler": "Sampler", "Sampler": "Sampler",
"dpmpp_2m_sde_gpu": "dpmpp_2m_sde_gpu", "dpmpp_2m_sde_gpu": "dpmpp_2m_sde_gpu",
"Only effective in non-inpaint mode.": "Only effective in non-inpaint mode.", "Only effective in non-inpaint mode.": "Only effective in non-inpaint mode.",
@ -360,8 +335,6 @@
"sgm_uniform": "sgm_uniform", "sgm_uniform": "sgm_uniform",
"simple": "simple", "simple": "simple",
"ddim_uniform": "ddim_uniform", "ddim_uniform": "ddim_uniform",
"VAE": "VAE",
"Default (model)": "Default (model)",
"Forced Overwrite of Sampling Step": "Forced Overwrite of Sampling Step", "Forced Overwrite of Sampling Step": "Forced Overwrite of Sampling Step",
"Set as -1 to disable. For developer debugging.": "Set as -1 to disable. For developer debugging.", "Set as -1 to disable. For developer debugging.": "Set as -1 to disable. For developer debugging.",
"Forced Overwrite of Refiner Switch Step": "Forced Overwrite of Refiner Switch Step", "Forced Overwrite of Refiner Switch Step": "Forced Overwrite of Refiner Switch Step",
@ -375,14 +348,10 @@
"Disable preview during generation.": "Disable preview during generation.", "Disable preview during generation.": "Disable preview during generation.",
"Disable Intermediate Results": "Disable Intermediate Results", "Disable Intermediate Results": "Disable Intermediate Results",
"Disable intermediate results during generation, only show final gallery.": "Disable intermediate results during generation, only show final gallery.", "Disable intermediate results during generation, only show final gallery.": "Disable intermediate results during generation, only show final gallery.",
"Debug Inpaint Preprocessing": "Debug Inpaint Preprocessing",
"Debug GroundingDINO": "Debug GroundingDINO",
"Used for SAM object detection and box generation": "Used for SAM object detection and box generation",
"GroundingDINO Box Erode or Dilate": "GroundingDINO Box Erode or Dilate",
"Inpaint Engine": "Inpaint Engine", "Inpaint Engine": "Inpaint Engine",
"v1": "v1", "v1": "v1",
"Version of Fooocus inpaint model": "Version of Fooocus inpaint model",
"v2.5": "v2.5", "v2.5": "v2.5",
"v2.6": "v2.6",
"Control Debug": "Control Debug", "Control Debug": "Control Debug",
"Debug Preprocessors": "Debug Preprocessors", "Debug Preprocessors": "Debug Preprocessors",
"Mixing Image Prompt and Vary/Upscale": "Mixing Image Prompt and Vary/Upscale", "Mixing Image Prompt and Vary/Upscale": "Mixing Image Prompt and Vary/Upscale",
@ -398,6 +367,7 @@
"B2": "B2", "B2": "B2",
"S1": "S1", "S1": "S1",
"S2": "S2", "S2": "S2",
"Extreme Speed": "Extreme Speed",
"\uD83D\uDD0E Type here to search styles ...": "\uD83D\uDD0E Type here to search styles ...", "\uD83D\uDD0E Type here to search styles ...": "\uD83D\uDD0E Type here to search styles ...",
"Type prompt here.": "Type prompt here.", "Type prompt here.": "Type prompt here.",
"Outpaint Expansion Direction:": "Outpaint Expansion Direction:", "Outpaint Expansion Direction:": "Outpaint Expansion Direction:",
@ -405,81 +375,11 @@
"Fooocus Enhance": "Fooocus Enhance", "Fooocus Enhance": "Fooocus Enhance",
"Fooocus Cinematic": "Fooocus Cinematic", "Fooocus Cinematic": "Fooocus Cinematic",
"Fooocus Sharp": "Fooocus Sharp", "Fooocus Sharp": "Fooocus Sharp",
"For images created by Fooocus": "For images created by Fooocus", "Drag any image generated by Fooocus here": "Drag any image generated by Fooocus here",
"Metadata": "Metadata", "Metadata": "Metadata",
"Apply Metadata": "Apply Metadata", "Apply Metadata": "Apply Metadata",
"Metadata Scheme": "Metadata Scheme", "Metadata Scheme": "Metadata Scheme",
"Image Prompt parameters are not included. Use png and a1111 for compatibility with Civitai.": "Image Prompt parameters are not included. Use png and a1111 for compatibility with Civitai.", "Image Prompt parameters are not included. Use a1111 for compatibility with Civitai.": "Image Prompt parameters are not included. Use a1111 for compatibility with Civitai.",
"fooocus (json)": "fooocus (json)", "fooocus (json)": "fooocus (json)",
"a1111 (plain text)": "a1111 (plain text)", "a1111 (plain text)": "a1111 (plain text)"
"Unsupported image type in input": "Unsupported image type in input",
"Enhance": "Enhance",
"Detection prompt": "Detection prompt",
"Detection Prompt Quick List": "Detection Prompt Quick List",
"Maximum number of detections": "Maximum number of detections",
"Use with Enhance, skips image generation": "Use with Enhance, skips image generation",
"Order of Processing": "Order of Processing",
"Use before to enhance small details and after to enhance large areas.": "Use before to enhance small details and after to enhance large areas.",
"Before First Enhancement": "Before First Enhancement",
"After Last Enhancement": "After Last Enhancement",
"Prompt Type": "Prompt Type",
"Choose which prompt to use for Upscale or Variation.": "Choose which prompt to use for Upscale or Variation.",
"Original Prompts": "Original Prompts",
"Last Filled Enhancement Prompts": "Last Filled Enhancement Prompts",
"Enable": "Enable",
"Describe what you want to detect.": "Describe what you want to detect.",
"Enhancement positive prompt": "Enhancement positive prompt",
"Uses original prompt instead if empty.": "Uses original prompt instead if empty.",
"Enhancement negative prompt": "Enhancement negative prompt",
"Uses original negative prompt instead if empty.": "Uses original negative prompt instead if empty.",
"Detection": "Detection",
"u2net": "u2net",
"u2netp": "u2netp",
"u2net_human_seg": "u2net_human_seg",
"u2net_cloth_seg": "u2net_cloth_seg",
"silueta": "silueta",
"isnet-general-use": "isnet-general-use",
"isnet-anime": "isnet-anime",
"sam": "sam",
"Mask generation model": "Mask generation model",
"Cloth category": "Cloth category",
"Use singular whenever possible": "Use singular whenever possible",
"full": "full",
"upper": "upper",
"lower": "lower",
"SAM Options": "SAM Options",
"SAM model": "SAM model",
"vit_b": "vit_b",
"vit_l": "vit_l",
"vit_h": "vit_h",
"Box Threshold": "Box Threshold",
"Text Threshold": "Text Threshold",
"Set to 0 to detect all": "Set to 0 to detect all",
"Inpaint": "Inpaint",
"Inpaint or Outpaint (default)": "Inpaint or Outpaint (default)",
"Improve Detail (face, hand, eyes, etc.)": "Improve Detail (face, hand, eyes, etc.)",
"Modify Content (add objects, change background, etc.)": "Modify Content (add objects, change background, etc.)",
"Disable initial latent in inpaint": "Disable initial latent in inpaint",
"Version of Fooocus inpaint model. If set, use performance Quality or Speed (no performance LoRAs) for best results.": "Version of Fooocus inpaint model. If set, use performance Quality or Speed (no performance LoRAs) for best results.",
"Inpaint Denoising Strength": "Inpaint Denoising Strength",
"Same as the denoising strength in A1111 inpaint. Only used in inpaint, not used in outpaint. (Outpaint always use 1.0)": "Same as the denoising strength in A1111 inpaint. Only used in inpaint, not used in outpaint. (Outpaint always use 1.0)",
"Inpaint Respective Field": "Inpaint Respective Field",
"The area to inpaint. Value 0 is same as \"Only Masked\" in A1111. Value 1 is same as \"Whole Image\" in A1111. Only used in inpaint, not used in outpaint. (Outpaint always use 1.0)": "The area to inpaint. Value 0 is same as \"Only Masked\" in A1111. Value 1 is same as \"Whole Image\" in A1111. Only used in inpaint, not used in outpaint. (Outpaint always use 1.0)",
"Mask Erode or Dilate": "Mask Erode or Dilate",
"Positive value will make white area in the mask larger, negative value will make white area smaller. (default is 0, always processed before any mask invert)": "Positive value will make white area in the mask larger, negative value will make white area smaller. (default is 0, always processed before any mask invert)",
"Invert Mask When Generating": "Invert Mask When Generating",
"Debug Enhance Masks": "Debug Enhance Masks",
"Show enhance masks in preview and final results": "Show enhance masks in preview and final results",
"Use GroundingDINO boxes instead of more detailed SAM masks": "Use GroundingDINO boxes instead of more detailed SAM masks",
"highly detailed face": "highly detailed face",
"detailed girl face": "detailed girl face",
"detailed man face": "detailed man face",
"detailed hand": "detailed hand",
"beautiful eyes": "beautiful eyes",
"face": "face",
"eye": "eye",
"mouth": "mouth",
"hair": "hair",
"hand": "hand",
"body": "body"
} }

View File

@ -1,6 +1,6 @@
import os import os
import ssl
import sys import sys
import ssl
print('[System ARGV] ' + str(sys.argv)) print('[System ARGV] ' + str(sys.argv))
@ -15,13 +15,15 @@ if "GRADIO_SERVER_PORT" not in os.environ:
ssl._create_default_https_context = ssl._create_unverified_context ssl._create_default_https_context = ssl._create_unverified_context
import platform import platform
import fooocus_version import fooocus_version
from build_launcher import build_launcher from build_launcher import build_launcher
from modules.launch_util import is_installed, run, python, run_pip, requirements_met, delete_folder_content from modules.launch_util import is_installed, run, python, run_pip, requirements_met
from modules.model_loader import load_file_from_url from modules.model_loader import load_file_from_url
REINSTALL_ALL = False REINSTALL_ALL = False
TRY_INSTALL_XFORMERS = False TRY_INSTALL_XFORMERS = False
@ -40,7 +42,7 @@ def prepare_environment():
if TRY_INSTALL_XFORMERS: if TRY_INSTALL_XFORMERS:
if REINSTALL_ALL or not is_installed("xformers"): if REINSTALL_ALL or not is_installed("xformers"):
xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.23') xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.20')
if platform.system() == "Windows": if platform.system() == "Windows":
if platform.python_version().startswith("3.10"): if platform.python_version().startswith("3.10"):
run_pip(f"install -U -I --no-deps {xformers_package}", "xformers", live=True) run_pip(f"install -U -I --no-deps {xformers_package}", "xformers", live=True)
@ -62,11 +64,10 @@ def prepare_environment():
vae_approx_filenames = [ vae_approx_filenames = [
('xlvaeapp.pth', 'https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth'), ('xlvaeapp.pth', 'https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth'),
('vaeapp_sd15.pth', 'https://huggingface.co/lllyasviel/misc/resolve/main/vaeapp_sd15.pt'), ('vaeapp_sd15.pth', 'https://huggingface.co/lllyasviel/misc/resolve/main/vaeapp_sd15.pt'),
('xl-to-v1_interposer-v4.0.safetensors', ('xl-to-v1_interposer-v3.1.safetensors',
'https://huggingface.co/mashb1t/misc/resolve/main/xl-to-v1_interposer-v4.0.safetensors') 'https://huggingface.co/lllyasviel/misc/resolve/main/xl-to-v1_interposer-v3.1.safetensors')
] ]
def ini_args(): def ini_args():
from args_manager import args from args_manager import args
return args return args
@ -76,33 +77,15 @@ prepare_environment()
build_launcher() build_launcher()
args = ini_args() args = ini_args()
if args.gpu_device_id is not None: if args.gpu_device_id is not None:
os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu_device_id) os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu_device_id)
print("Set device to:", args.gpu_device_id) print("Set device to:", args.gpu_device_id)
if args.hf_mirror is not None:
os.environ['HF_MIRROR'] = str(args.hf_mirror)
print("Set hf_mirror to:", args.hf_mirror)
from modules import config from modules import config
from modules.hash_cache import init_cache
os.environ["U2NET_HOME"] = config.path_inpaint
os.environ['GRADIO_TEMP_DIR'] = config.temp_path
if config.temp_path_cleanup_on_launch:
print(f'[Cleanup] Attempting to delete content of temp dir {config.temp_path}')
result = delete_folder_content(config.temp_path, '[Cleanup] ')
if result:
print("[Cleanup] Cleanup successful")
else:
print(f"[Cleanup] Failed to delete content of temp dir.")
def download_models(default_model, previous_default_models, checkpoint_downloads, embeddings_downloads, lora_downloads, vae_downloads):
from modules.util import get_file_from_folder_list
def download_models():
for file_name, url in vae_approx_filenames: for file_name, url in vae_approx_filenames:
load_file_from_url(url=url, model_dir=config.path_vae_approx, file_name=file_name) load_file_from_url(url=url, model_dir=config.path_vae_approx, file_name=file_name)
@ -114,39 +97,31 @@ def download_models(default_model, previous_default_models, checkpoint_downloads
if args.disable_preset_download: if args.disable_preset_download:
print('Skipped model download.') print('Skipped model download.')
return default_model, checkpoint_downloads return
if not args.always_download_new_model: if not args.always_download_new_model:
if not os.path.isfile(get_file_from_folder_list(default_model, config.paths_checkpoints)): if not os.path.exists(os.path.join(config.paths_checkpoints[0], config.default_base_model_name)):
for alternative_model_name in previous_default_models: for alternative_model_name in config.previous_default_models:
if os.path.isfile(get_file_from_folder_list(alternative_model_name, config.paths_checkpoints)): if os.path.exists(os.path.join(config.paths_checkpoints[0], alternative_model_name)):
print(f'You do not have [{default_model}] but you have [{alternative_model_name}].') print(f'You do not have [{config.default_base_model_name}] but you have [{alternative_model_name}].')
print(f'Fooocus will use [{alternative_model_name}] to avoid downloading new models, ' print(f'Fooocus will use [{alternative_model_name}] to avoid downloading new models, '
f'but you are not using the latest models.') f'but you are not using latest models.')
print('Use --always-download-new-model to avoid fallback and always get new models.') print('Use --always-download-new-model to avoid fallback and always get new models.')
checkpoint_downloads = {} config.checkpoint_downloads = {}
default_model = alternative_model_name config.default_base_model_name = alternative_model_name
break break
for file_name, url in checkpoint_downloads.items(): for file_name, url in config.checkpoint_downloads.items():
model_dir = os.path.dirname(get_file_from_folder_list(file_name, config.paths_checkpoints)) load_file_from_url(url=url, model_dir=config.paths_checkpoints[0], file_name=file_name)
load_file_from_url(url=url, model_dir=model_dir, file_name=file_name) for file_name, url in config.embeddings_downloads.items():
for file_name, url in embeddings_downloads.items():
load_file_from_url(url=url, model_dir=config.path_embeddings, file_name=file_name) load_file_from_url(url=url, model_dir=config.path_embeddings, file_name=file_name)
for file_name, url in lora_downloads.items(): for file_name, url in config.lora_downloads.items():
model_dir = os.path.dirname(get_file_from_folder_list(file_name, config.paths_loras)) load_file_from_url(url=url, model_dir=config.paths_loras[0], file_name=file_name)
load_file_from_url(url=url, model_dir=model_dir, file_name=file_name)
for file_name, url in vae_downloads.items():
load_file_from_url(url=url, model_dir=config.path_vae, file_name=file_name)
return default_model, checkpoint_downloads return
config.default_base_model_name, config.checkpoint_downloads = download_models( download_models()
config.default_base_model_name, config.previous_default_models, config.checkpoint_downloads,
config.embeddings_downloads, config.lora_downloads, config.vae_downloads)
config.update_files()
init_cache(config.model_filenames, config.paths_checkpoints, config.lora_filenames, config.paths_loras)
from webui import * from webui import *

View File

@ -1,55 +0,0 @@
# https://github.com/comfyanonymous/ComfyUI/blob/master/nodes.py
#from: https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/howto.html
import numpy as np
import torch
def loglinear_interp(t_steps, num_steps):
"""
Performs log-linear interpolation of a given array of decreasing numbers.
"""
xs = np.linspace(0, 1, len(t_steps))
ys = np.log(t_steps[::-1])
new_xs = np.linspace(0, 1, num_steps)
new_ys = np.interp(new_xs, xs, ys)
interped_ys = np.exp(new_ys)[::-1].copy()
return interped_ys
NOISE_LEVELS = {"SD1": [14.6146412293, 6.4745760956, 3.8636745985, 2.6946151520, 1.8841921177, 1.3943805092, 0.9642583904, 0.6523686016, 0.3977456272, 0.1515232662, 0.0291671582],
"SDXL":[14.6146412293, 6.3184485287, 3.7681790315, 2.1811480769, 1.3405244945, 0.8620721141, 0.5550693289, 0.3798540708, 0.2332364134, 0.1114188177, 0.0291671582],
"SVD": [700.00, 54.5, 15.886, 7.977, 4.248, 1.789, 0.981, 0.403, 0.173, 0.034, 0.002]}
class AlignYourStepsScheduler:
@classmethod
def INPUT_TYPES(s):
return {"required":
{"model_type": (["SD1", "SDXL", "SVD"], ),
"steps": ("INT", {"default": 10, "min": 10, "max": 10000}),
"denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
}
}
RETURN_TYPES = ("SIGMAS",)
CATEGORY = "sampling/custom_sampling/schedulers"
FUNCTION = "get_sigmas"
def get_sigmas(self, model_type, steps, denoise):
total_steps = steps
if denoise < 1.0:
if denoise <= 0.0:
return (torch.FloatTensor([]),)
total_steps = round(steps * denoise)
sigmas = NOISE_LEVELS[model_type][:]
if (steps + 1) != len(sigmas):
sigmas = loglinear_interp(sigmas, steps + 1)
sigmas = sigmas[-(total_steps + 1):]
sigmas[-1] = 0
return (torch.FloatTensor(sigmas), )
NODE_CLASS_MAPPINGS = {
"AlignYourStepsScheduler": AlignYourStepsScheduler,
}
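The scheduler above resamples the eleven tabulated noise levels to an arbitrary step count with loglinear_interp. A worked sketch of that interpolation, standalone but using only the definitions in this file:
import numpy as np
sdxl_sigmas = NOISE_LEVELS["SDXL"]              # 11 reference values, highest to lowest
resampled = loglinear_interp(sdxl_sigmas, 21)   # 20 sampling steps need 21 sigma boundaries
# Endpoints are preserved; interior values are spaced linearly in log-space.
assert np.isclose(resampled[0], sdxl_sigmas[0]) and np.isclose(resampled[-1], sdxl_sigmas[-1])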

View File

@ -107,7 +107,8 @@ class SDTurboScheduler:
def get_sigmas(self, model, steps, denoise): def get_sigmas(self, model, steps, denoise):
start_step = 10 - int(10 * denoise) start_step = 10 - int(10 * denoise)
timesteps = torch.flip(torch.arange(1, 11) * 100 - 1, (0,))[start_step:start_step + steps] timesteps = torch.flip(torch.arange(1, 11) * 100 - 1, (0,))[start_step:start_step + steps]
sigmas = model.model_sampling.sigma(timesteps) ldm_patched.modules.model_management.load_models_gpu([model])
sigmas = model.model.model_sampling.sigma(timesteps)
sigmas = torch.cat([sigmas, sigmas.new_zeros([1])]) sigmas = torch.cat([sigmas, sigmas.new_zeros([1])])
return (sigmas, ) return (sigmas, )
@ -229,25 +230,6 @@ class SamplerDPMPP_SDE:
sampler = ldm_patched.modules.samplers.ksampler(sampler_name, {"eta": eta, "s_noise": s_noise, "r": r}) sampler = ldm_patched.modules.samplers.ksampler(sampler_name, {"eta": eta, "s_noise": s_noise, "r": r})
return (sampler, ) return (sampler, )
class SamplerTCD:
@classmethod
def INPUT_TYPES(s):
return {
"required": {
"eta": ("FLOAT", {"default": 0.3, "min": 0.0, "max": 1.0, "step": 0.01}),
}
}
RETURN_TYPES = ("SAMPLER",)
CATEGORY = "sampling/custom_sampling/samplers"
FUNCTION = "get_sampler"
def get_sampler(self, eta=0.3):
sampler = ldm_patched.modules.samplers.ksampler("tcd", {"eta": eta})
return (sampler, )
class SamplerCustom: class SamplerCustom:
@classmethod @classmethod
def INPUT_TYPES(s): def INPUT_TYPES(s):
@ -310,7 +292,6 @@ NODE_CLASS_MAPPINGS = {
"KSamplerSelect": KSamplerSelect, "KSamplerSelect": KSamplerSelect,
"SamplerDPMPP_2M_SDE": SamplerDPMPP_2M_SDE, "SamplerDPMPP_2M_SDE": SamplerDPMPP_2M_SDE,
"SamplerDPMPP_SDE": SamplerDPMPP_SDE, "SamplerDPMPP_SDE": SamplerDPMPP_SDE,
"SamplerTCD": SamplerTCD,
"SplitSigmas": SplitSigmas, "SplitSigmas": SplitSigmas,
"FlipSigmas": FlipSigmas, "FlipSigmas": FlipSigmas,
} }
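For orientation, the turbo timestep slice computed in get_sigmas above works out as follows for a 4-step, full-denoise run (a standalone sketch, not part of the file):
import torch
steps, denoise = 4, 1.0
start_step = 10 - int(10 * denoise)   # 0
timesteps = torch.flip(torch.arange(1, 11) * 100 - 1, (0,))[start_step:start_step + steps]
print(timesteps.tolist())             # [999, 899, 799, 699]
# The changed line then looks these timesteps up in the (now GPU-loaded) model's model_sampling.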

View File

@ -70,7 +70,7 @@ class ModelSamplingDiscrete:
@classmethod @classmethod
def INPUT_TYPES(s): def INPUT_TYPES(s):
return {"required": { "model": ("MODEL",), return {"required": { "model": ("MODEL",),
"sampling": (["eps", "v_prediction", "lcm", "tcd"]), "sampling": (["eps", "v_prediction", "lcm"],),
"zsnr": ("BOOLEAN", {"default": False}), "zsnr": ("BOOLEAN", {"default": False}),
}} }}
@ -90,9 +90,6 @@ class ModelSamplingDiscrete:
elif sampling == "lcm": elif sampling == "lcm":
sampling_type = LCM sampling_type = LCM
sampling_base = ModelSamplingDiscreteDistilled sampling_base = ModelSamplingDiscreteDistilled
elif sampling == "tcd":
sampling_type = ldm_patched.modules.model_sampling.EPS
sampling_base = ModelSamplingDiscreteDistilled
class ModelSamplingAdvanced(sampling_base, sampling_type): class ModelSamplingAdvanced(sampling_base, sampling_type):
pass pass
@ -108,7 +105,7 @@ class ModelSamplingContinuousEDM:
@classmethod @classmethod
def INPUT_TYPES(s): def INPUT_TYPES(s):
return {"required": { "model": ("MODEL",), return {"required": { "model": ("MODEL",),
"sampling": (["v_prediction", "edm_playground_v2.5", "eps"],), "sampling": (["v_prediction", "eps"],),
"sigma_max": ("FLOAT", {"default": 120.0, "min": 0.0, "max": 1000.0, "step":0.001, "round": False}), "sigma_max": ("FLOAT", {"default": 120.0, "min": 0.0, "max": 1000.0, "step":0.001, "round": False}),
"sigma_min": ("FLOAT", {"default": 0.002, "min": 0.0, "max": 1000.0, "step":0.001, "round": False}), "sigma_min": ("FLOAT", {"default": 0.002, "min": 0.0, "max": 1000.0, "step":0.001, "round": False}),
}} }}
@ -121,25 +118,17 @@ class ModelSamplingContinuousEDM:
def patch(self, model, sampling, sigma_max, sigma_min): def patch(self, model, sampling, sigma_max, sigma_min):
m = model.clone() m = model.clone()
latent_format = None
sigma_data = 1.0
if sampling == "eps": if sampling == "eps":
sampling_type = ldm_patched.modules.model_sampling.EPS sampling_type = ldm_patched.modules.model_sampling.EPS
elif sampling == "v_prediction": elif sampling == "v_prediction":
sampling_type = ldm_patched.modules.model_sampling.V_PREDICTION sampling_type = ldm_patched.modules.model_sampling.V_PREDICTION
elif sampling == "edm_playground_v2.5":
sampling_type = ldm_patched.modules.model_sampling.EDM
sigma_data = 0.5
latent_format = ldm_patched.modules.latent_formats.SDXL_Playground_2_5()
class ModelSamplingAdvanced(ldm_patched.modules.model_sampling.ModelSamplingContinuousEDM, sampling_type): class ModelSamplingAdvanced(ldm_patched.modules.model_sampling.ModelSamplingContinuousEDM, sampling_type):
pass pass
model_sampling = ModelSamplingAdvanced(model.model.model_config) model_sampling = ModelSamplingAdvanced(model.model.model_config)
model_sampling.set_parameters(sigma_min, sigma_max, sigma_data) model_sampling.set_sigma_range(sigma_min, sigma_max)
m.add_object_patch("model_sampling", model_sampling) m.add_object_patch("model_sampling", model_sampling)
if latent_format is not None:
m.add_object_patch("latent_format", latent_format)
return (m, ) return (m, )
class RescaleCFG: class RescaleCFG:

View File

@ -752,6 +752,7 @@ def sample_lcm(model, x, sigmas, extra_args=None, callback=None, disable=None, n
return x return x
@torch.no_grad() @torch.no_grad()
def sample_heunpp2(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.): def sample_heunpp2(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
# From MIT licensed: https://github.com/Carzit/sd-webui-samplers-scheduler/ # From MIT licensed: https://github.com/Carzit/sd-webui-samplers-scheduler/
@ -807,102 +808,3 @@ def sample_heunpp2(model, x, sigmas, extra_args=None, callback=None, disable=Non
d_prime = w1 * d + w2 * d_2 + w3 * d_3 d_prime = w1 * d + w2 * d_2 + w3 * d_3
x = x + d_prime * dt x = x + d_prime * dt
return x return x
@torch.no_grad()
def sample_tcd(model, x, sigmas, extra_args=None, callback=None, disable=None, noise_sampler=None, eta=0.3):
extra_args = {} if extra_args is None else extra_args
noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
s_in = x.new_ones([x.shape[0]])
model_sampling = model.inner_model.inner_model.model_sampling
timesteps_s = torch.floor((1 - eta) * model_sampling.timestep(sigmas)).to(dtype=torch.long).detach().cpu()
timesteps_s[-1] = 0
alpha_prod_s = model_sampling.alphas_cumprod[timesteps_s]
beta_prod_s = 1 - alpha_prod_s
for i in trange(len(sigmas) - 1, disable=disable):
denoised = model(x, sigmas[i] * s_in, **extra_args) # predicted_original_sample
eps = (x - denoised) / sigmas[i]
denoised = alpha_prod_s[i + 1].sqrt() * denoised + beta_prod_s[i + 1].sqrt() * eps
if callback is not None:
callback({"x": x, "i": i, "sigma": sigmas[i], "sigma_hat": sigmas[i], "denoised": denoised})
x = denoised
if eta > 0 and sigmas[i + 1] > 0:
noise = noise_sampler(sigmas[i], sigmas[i + 1])
x = x / alpha_prod_s[i+1].sqrt() + noise * (sigmas[i+1]**2 + 1 - 1/alpha_prod_s[i+1]).sqrt()
else:
x *= torch.sqrt(1.0 + sigmas[i + 1] ** 2)
return x
@torch.no_grad()
def sample_restart(model, x, sigmas, extra_args=None, callback=None, disable=None, s_noise=1., restart_list=None):
"""Implements restart sampling in Restart Sampling for Improving Generative Processes (2023)
Restart_list format: {min_sigma: [ restart_steps, restart_times, max_sigma]}
If restart_list is None: will choose restart_list automatically, otherwise will use the given restart_list
"""
extra_args = {} if extra_args is None else extra_args
s_in = x.new_ones([x.shape[0]])
step_id = 0
def heun_step(x, old_sigma, new_sigma, second_order=True):
nonlocal step_id
denoised = model(x, old_sigma * s_in, **extra_args)
d = to_d(x, old_sigma, denoised)
if callback is not None:
callback({'x': x, 'i': step_id, 'sigma': new_sigma, 'sigma_hat': old_sigma, 'denoised': denoised})
dt = new_sigma - old_sigma
if new_sigma == 0 or not second_order:
# Euler method
x = x + d * dt
else:
# Heun's method
x_2 = x + d * dt
denoised_2 = model(x_2, new_sigma * s_in, **extra_args)
d_2 = to_d(x_2, new_sigma, denoised_2)
d_prime = (d + d_2) / 2
x = x + d_prime * dt
step_id += 1
return x
steps = sigmas.shape[0] - 1
if restart_list is None:
if steps >= 20:
restart_steps = 9
restart_times = 1
if steps >= 36:
restart_steps = steps // 4
restart_times = 2
sigmas = get_sigmas_karras(steps - restart_steps * restart_times, sigmas[-2].item(), sigmas[0].item(), device=sigmas.device)
restart_list = {0.1: [restart_steps + 1, restart_times, 2]}
else:
restart_list = {}
restart_list = {int(torch.argmin(abs(sigmas - key), dim=0)): value for key, value in restart_list.items()}
step_list = []
for i in range(len(sigmas) - 1):
step_list.append((sigmas[i], sigmas[i + 1]))
if i + 1 in restart_list:
restart_steps, restart_times, restart_max = restart_list[i + 1]
min_idx = i + 1
max_idx = int(torch.argmin(abs(sigmas - restart_max), dim=0))
if max_idx < min_idx:
sigma_restart = get_sigmas_karras(restart_steps, sigmas[min_idx].item(), sigmas[max_idx].item(), device=sigmas.device)[:-1]
while restart_times > 0:
restart_times -= 1
step_list.extend(zip(sigma_restart[:-1], sigma_restart[1:]))
last_sigma = None
for old_sigma, new_sigma in tqdm(step_list, disable=disable):
if last_sigma is None:
last_sigma = old_sigma
elif last_sigma < old_sigma:
x = x + torch.randn_like(x) * s_noise * (old_sigma ** 2 - last_sigma ** 2) ** 0.5
x = heun_step(x, old_sigma, new_sigma)
last_sigma = new_sigma
return x
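Both samplers added above follow the k-diffusion calling convention. A hedged sketch of driving the restart sampler directly; denoiser stands for a k-diffusion style wrapper denoiser(x, sigma, **extra_args) that is not defined in this file, and get_sigmas_karras is the same helper the function itself calls.
import torch
sigmas = get_sigmas_karras(30, 0.03, 14.6, device="cpu")  # decreasing schedule ending in 0
x = torch.randn(1, 4, 128, 128) * sigmas[0]               # start from noise at the largest sigma
restart_list = {0.1: [5, 2, 2.0]}   # near sigma 0.1: two restarts of 5 steps, re-noising up to sigma 2.0
samples = sample_restart(denoiser, x, sigmas, restart_list=restart_list)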

View File

@ -8,7 +8,7 @@ class CLIPEmbeddingNoiseAugmentation(ImageConcatWithNoiseAugmentation):
if clip_stats_path is None: if clip_stats_path is None:
clip_mean, clip_std = torch.zeros(timestep_dim), torch.ones(timestep_dim) clip_mean, clip_std = torch.zeros(timestep_dim), torch.ones(timestep_dim)
else: else:
clip_mean, clip_std = torch.load(clip_stats_path, map_location="cpu", weights_only=True) clip_mean, clip_std = torch.load(clip_stats_path, map_location="cpu")
self.register_buffer("data_mean", clip_mean[None, :], persistent=False) self.register_buffer("data_mean", clip_mean[None, :], persistent=False)
self.register_buffer("data_std", clip_std[None, :], persistent=False) self.register_buffer("data_std", clip_std[None, :], persistent=False)
self.time_embed = Timestep(timestep_dim) self.time_embed = Timestep(timestep_dim)

View File

@ -37,7 +37,6 @@ parser.add_argument("--listen", type=str, default="127.0.0.1", metavar="IP", nar
parser.add_argument("--port", type=int, default=8188) parser.add_argument("--port", type=int, default=8188)
parser.add_argument("--disable-header-check", type=str, default=None, metavar="ORIGIN", nargs="?", const="*") parser.add_argument("--disable-header-check", type=str, default=None, metavar="ORIGIN", nargs="?", const="*")
parser.add_argument("--web-upload-size", type=float, default=100) parser.add_argument("--web-upload-size", type=float, default=100)
parser.add_argument("--hf-mirror", type=str, default=None)
parser.add_argument("--external-working-path", type=str, default=None, metavar="PATH", nargs='+', action='append') parser.add_argument("--external-working-path", type=str, default=None, metavar="PATH", nargs='+', action='append')
parser.add_argument("--output-path", type=str, default=None) parser.add_argument("--output-path", type=str, default=None)

View File

@ -1,4 +1,3 @@
import torch
class LatentFormat: class LatentFormat:
scale_factor = 1.0 scale_factor = 1.0
@ -35,70 +34,6 @@ class SDXL(LatentFormat):
] ]
self.taesd_decoder_name = "taesdxl_decoder" self.taesd_decoder_name = "taesdxl_decoder"
class SDXL_Playground_2_5(LatentFormat):
def __init__(self):
self.scale_factor = 0.5
self.latents_mean = torch.tensor([-1.6574, 1.886, -1.383, 2.5155]).view(1, 4, 1, 1)
self.latents_std = torch.tensor([8.4927, 5.9022, 6.5498, 5.2299]).view(1, 4, 1, 1)
self.latent_rgb_factors = [
# R G B
[ 0.3920, 0.4054, 0.4549],
[-0.2634, -0.0196, 0.0653],
[ 0.0568, 0.1687, -0.0755],
[-0.3112, -0.2359, -0.2076]
]
self.taesd_decoder_name = "taesdxl_decoder"
def process_in(self, latent):
latents_mean = self.latents_mean.to(latent.device, latent.dtype)
latents_std = self.latents_std.to(latent.device, latent.dtype)
return (latent - latents_mean) * self.scale_factor / latents_std
def process_out(self, latent):
latents_mean = self.latents_mean.to(latent.device, latent.dtype)
latents_std = self.latents_std.to(latent.device, latent.dtype)
return latent * latents_std / self.scale_factor + latents_mean
class SD_X4(LatentFormat): class SD_X4(LatentFormat):
def __init__(self): def __init__(self):
self.scale_factor = 0.08333 self.scale_factor = 0.08333
self.latent_rgb_factors = [
[-0.2340, -0.3863, -0.3257],
[ 0.0994, 0.0885, -0.0908],
[-0.2833, -0.2349, -0.3741],
[ 0.2523, -0.0055, -0.1651]
]
class SC_Prior(LatentFormat):
def __init__(self):
self.scale_factor = 1.0
self.latent_rgb_factors = [
[-0.0326, -0.0204, -0.0127],
[-0.1592, -0.0427, 0.0216],
[ 0.0873, 0.0638, -0.0020],
[-0.0602, 0.0442, 0.1304],
[ 0.0800, -0.0313, -0.1796],
[-0.0810, -0.0638, -0.1581],
[ 0.1791, 0.1180, 0.0967],
[ 0.0740, 0.1416, 0.0432],
[-0.1745, -0.1888, -0.1373],
[ 0.2412, 0.1577, 0.0928],
[ 0.1908, 0.0998, 0.0682],
[ 0.0209, 0.0365, -0.0092],
[ 0.0448, -0.0650, -0.1728],
[-0.1658, -0.1045, -0.1308],
[ 0.0542, 0.1545, 0.1325],
[-0.0352, -0.1672, -0.2541]
]
class SC_B(LatentFormat):
def __init__(self):
self.scale_factor = 1.0 / 0.43
self.latent_rgb_factors = [
[ 0.1121, 0.2006, 0.1023],
[-0.2093, -0.0222, -0.0195],
[-0.3087, -0.1535, 0.0366],
[ 0.0290, -0.1574, -0.4078]
]
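The new SDXL_Playground_2_5 format above normalizes latents with per-channel mean and std; process_in and process_out are exact inverses, which a standalone check (assuming the class is importable from this module) makes explicit:
import torch
fmt = SDXL_Playground_2_5()
latent = torch.randn(1, 4, 64, 64)
roundtrip = fmt.process_out(fmt.process_in(latent))
assert torch.allclose(roundtrip, latent, atol=1e-5)   # (x - mean) * s / std, then * std / s + mean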

View File

@ -1,7 +1,7 @@
import torch import torch
import numpy as np
from ldm_patched.ldm.modules.diffusionmodules.util import make_beta_schedule from ldm_patched.ldm.modules.diffusionmodules.util import make_beta_schedule
import math import math
import numpy as np
class EPS: class EPS:
def calculate_input(self, sigma, noise): def calculate_input(self, sigma, noise):
@ -12,28 +12,12 @@ class EPS:
sigma = sigma.view(sigma.shape[:1] + (1,) * (model_output.ndim - 1)) sigma = sigma.view(sigma.shape[:1] + (1,) * (model_output.ndim - 1))
return model_input - model_output * sigma return model_input - model_output * sigma
def noise_scaling(self, sigma, noise, latent_image, max_denoise=False):
if max_denoise:
noise = noise * torch.sqrt(1.0 + sigma ** 2.0)
else:
noise = noise * sigma
noise += latent_image
return noise
def inverse_noise_scaling(self, sigma, latent):
return latent
class V_PREDICTION(EPS): class V_PREDICTION(EPS):
def calculate_denoised(self, sigma, model_output, model_input): def calculate_denoised(self, sigma, model_output, model_input):
sigma = sigma.view(sigma.shape[:1] + (1,) * (model_output.ndim - 1)) sigma = sigma.view(sigma.shape[:1] + (1,) * (model_output.ndim - 1))
return model_input * self.sigma_data ** 2 / (sigma ** 2 + self.sigma_data ** 2) - model_output * sigma * self.sigma_data / (sigma ** 2 + self.sigma_data ** 2) ** 0.5 return model_input * self.sigma_data ** 2 / (sigma ** 2 + self.sigma_data ** 2) - model_output * sigma * self.sigma_data / (sigma ** 2 + self.sigma_data ** 2) ** 0.5
class EDM(V_PREDICTION):
def calculate_denoised(self, sigma, model_output, model_input):
sigma = sigma.view(sigma.shape[:1] + (1,) * (model_output.ndim - 1))
return model_input * self.sigma_data ** 2 / (sigma ** 2 + self.sigma_data ** 2) + model_output * sigma * self.sigma_data / (sigma ** 2 + self.sigma_data ** 2) ** 0.5
class ModelSamplingDiscrete(torch.nn.Module): class ModelSamplingDiscrete(torch.nn.Module):
def __init__(self, model_config=None): def __init__(self, model_config=None):
@ -58,7 +42,8 @@ class ModelSamplingDiscrete(torch.nn.Module):
else: else:
betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s)
alphas = 1. - betas alphas = 1. - betas
alphas_cumprod = torch.cumprod(alphas, dim=0) alphas_cumprod = torch.tensor(np.cumprod(alphas, axis=0), dtype=torch.float32)
# alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
timesteps, = betas.shape timesteps, = betas.shape
self.num_timesteps = int(timesteps) self.num_timesteps = int(timesteps)
@ -70,16 +55,11 @@ class ModelSamplingDiscrete(torch.nn.Module):
# self.register_buffer('alphas_cumprod_prev', torch.tensor(alphas_cumprod_prev, dtype=torch.float32)) # self.register_buffer('alphas_cumprod_prev', torch.tensor(alphas_cumprod_prev, dtype=torch.float32))
sigmas = ((1 - alphas_cumprod) / alphas_cumprod) ** 0.5 sigmas = ((1 - alphas_cumprod) / alphas_cumprod) ** 0.5
alphas_cumprod = torch.tensor(np.cumprod(alphas, axis=0), dtype=torch.float32)
self.set_sigmas(sigmas) self.set_sigmas(sigmas)
self.set_alphas_cumprod(alphas_cumprod.float())
def set_sigmas(self, sigmas): def set_sigmas(self, sigmas):
self.register_buffer('sigmas', sigmas.float()) self.register_buffer('sigmas', sigmas)
self.register_buffer('log_sigmas', sigmas.log().float()) self.register_buffer('log_sigmas', sigmas.log())
def set_alphas_cumprod(self, alphas_cumprod):
self.register_buffer("alphas_cumprod", alphas_cumprod.float())
@property @property
def sigma_min(self): def sigma_min(self):
@ -114,6 +94,8 @@ class ModelSamplingDiscrete(torch.nn.Module):
class ModelSamplingContinuousEDM(torch.nn.Module): class ModelSamplingContinuousEDM(torch.nn.Module):
def __init__(self, model_config=None): def __init__(self, model_config=None):
super().__init__() super().__init__()
self.sigma_data = 1.0
if model_config is not None: if model_config is not None:
sampling_settings = model_config.sampling_settings sampling_settings = model_config.sampling_settings
else: else:
@ -121,11 +103,9 @@ class ModelSamplingContinuousEDM(torch.nn.Module):
sigma_min = sampling_settings.get("sigma_min", 0.002) sigma_min = sampling_settings.get("sigma_min", 0.002)
sigma_max = sampling_settings.get("sigma_max", 120.0) sigma_max = sampling_settings.get("sigma_max", 120.0)
sigma_data = sampling_settings.get("sigma_data", 1.0) self.set_sigma_range(sigma_min, sigma_max)
self.set_parameters(sigma_min, sigma_max, sigma_data)
def set_parameters(self, sigma_min, sigma_max, sigma_data): def set_sigma_range(self, sigma_min, sigma_max):
self.sigma_data = sigma_data
sigmas = torch.linspace(math.log(sigma_min), math.log(sigma_max), 1000).exp() sigmas = torch.linspace(math.log(sigma_min), math.log(sigma_max), 1000).exp()
self.register_buffer('sigmas', sigmas) #for compatibility with some schedulers self.register_buffer('sigmas', sigmas) #for compatibility with some schedulers
@ -154,56 +134,3 @@ class ModelSamplingContinuousEDM(torch.nn.Module):
log_sigma_min = math.log(self.sigma_min) log_sigma_min = math.log(self.sigma_min)
return math.exp((math.log(self.sigma_max) - log_sigma_min) * percent + log_sigma_min) return math.exp((math.log(self.sigma_max) - log_sigma_min) * percent + log_sigma_min)
class StableCascadeSampling(ModelSamplingDiscrete):
def __init__(self, model_config=None):
super().__init__()
if model_config is not None:
sampling_settings = model_config.sampling_settings
else:
sampling_settings = {}
self.set_parameters(sampling_settings.get("shift", 1.0))
def set_parameters(self, shift=1.0, cosine_s=8e-3):
self.shift = shift
self.cosine_s = torch.tensor(cosine_s)
self._init_alpha_cumprod = torch.cos(self.cosine_s / (1 + self.cosine_s) * torch.pi * 0.5) ** 2
#This part is just for compatibility with some schedulers in the codebase
self.num_timesteps = 10000
sigmas = torch.empty((self.num_timesteps), dtype=torch.float32)
for x in range(self.num_timesteps):
t = (x + 1) / self.num_timesteps
sigmas[x] = self.sigma(t)
self.set_sigmas(sigmas)
def sigma(self, timestep):
alpha_cumprod = (torch.cos((timestep + self.cosine_s) / (1 + self.cosine_s) * torch.pi * 0.5) ** 2 / self._init_alpha_cumprod)
if self.shift != 1.0:
var = alpha_cumprod
logSNR = (var/(1-var)).log()
logSNR += 2 * torch.log(1.0 / torch.tensor(self.shift))
alpha_cumprod = logSNR.sigmoid()
alpha_cumprod = alpha_cumprod.clamp(0.0001, 0.9999)
return ((1 - alpha_cumprod) / alpha_cumprod) ** 0.5
def timestep(self, sigma):
var = 1 / ((sigma * sigma) + 1)
var = var.clamp(0, 1.0)
s, min_var = self.cosine_s.to(var.device), self._init_alpha_cumprod.to(var.device)
t = (((var * min_var) ** 0.5).acos() / (torch.pi * 0.5)) * (1 + s) - s
return t
def percent_to_sigma(self, percent):
if percent <= 0.0:
return 999999999.9
if percent >= 1.0:
return 0.0
percent = 1.0 - percent
return self.sigma(torch.tensor(percent))
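The EDM refactor above replaces set_sigma_range with set_parameters so that sigma_data can vary per model; the registered sigma table itself remains a log-spaced sweep. A short standalone sketch of that table, using the defaults from the code above:
import math
import torch
sigma_min, sigma_max, sigma_data = 0.002, 120.0, 1.0
sigmas = torch.linspace(math.log(sigma_min), math.log(sigma_max), 1000).exp()
print(float(sigmas[0]), float(sigmas[-1]))   # sigma_min and sigma_max, log-spaced in between
# The new "edm_playground_v2.5" option sets sigma_data = 0.5 and swaps in the Playground latent format.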

View File

@ -523,7 +523,7 @@ class UNIPCBH2(Sampler):
KSAMPLER_NAMES = ["euler", "euler_ancestral", "heun", "heunpp2","dpm_2", "dpm_2_ancestral", KSAMPLER_NAMES = ["euler", "euler_ancestral", "heun", "heunpp2","dpm_2", "dpm_2_ancestral",
"lms", "dpm_fast", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_sde", "dpmpp_sde_gpu", "lms", "dpm_fast", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_sde", "dpmpp_sde_gpu",
"dpmpp_2m", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddpm", "lcm", "tcd", "edm_playground_v2.5", "restart"] "dpmpp_2m", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddpm", "lcm"]
class KSAMPLER(Sampler): class KSAMPLER(Sampler):
def __init__(self, sampler_function, extra_options={}, inpaint_options={}): def __init__(self, sampler_function, extra_options={}, inpaint_options={}):

View File

@ -427,13 +427,12 @@ def load_checkpoint(config_path=None, ckpt_path=None, output_vae=True, output_cl
return (ldm_patched.modules.model_patcher.ModelPatcher(model, load_device=model_management.get_torch_device(), offload_device=offload_device), clip, vae) return (ldm_patched.modules.model_patcher.ModelPatcher(model, load_device=model_management.get_torch_device(), offload_device=offload_device), clip, vae)
def load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, output_clipvision=False, embedding_directory=None, output_model=True, vae_filename_param=None): def load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, output_clipvision=False, embedding_directory=None, output_model=True):
sd = ldm_patched.modules.utils.load_torch_file(ckpt_path) sd = ldm_patched.modules.utils.load_torch_file(ckpt_path)
sd_keys = sd.keys() sd_keys = sd.keys()
clip = None clip = None
clipvision = None clipvision = None
vae = None vae = None
vae_filename = None
model = None model = None
model_patcher = None model_patcher = None
clip_target = None clip_target = None
@ -463,12 +462,8 @@ def load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, o
model.load_model_weights(sd, "model.diffusion_model.") model.load_model_weights(sd, "model.diffusion_model.")
if output_vae: if output_vae:
if vae_filename_param is None:
vae_sd = ldm_patched.modules.utils.state_dict_prefix_replace(sd, {"first_stage_model.": ""}, filter_keys=True) vae_sd = ldm_patched.modules.utils.state_dict_prefix_replace(sd, {"first_stage_model.": ""}, filter_keys=True)
vae_sd = model_config.process_vae_state_dict(vae_sd) vae_sd = model_config.process_vae_state_dict(vae_sd)
else:
vae_sd = ldm_patched.modules.utils.load_torch_file(vae_filename_param)
vae_filename = vae_filename_param
vae = VAE(sd=vae_sd) vae = VAE(sd=vae_sd)
if output_clip: if output_clip:
@ -490,7 +485,7 @@ def load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, o
print("loaded straight to GPU") print("loaded straight to GPU")
model_management.load_model_gpu(model_patcher) model_management.load_model_gpu(model_patcher)
return model_patcher, clip, vae, vae_filename, clipvision return (model_patcher, clip, vae, clipvision)
def load_unet_state_dict(sd): #load unet in diffusers format def load_unet_state_dict(sd): #load unet in diffusers format

View File

@ -326,7 +326,7 @@ def load_embed(embedding_name, embedding_directory, embedding_size, embed_key=No
except: except:
embed_out = safe_load_embed_zip(embed_path) embed_out = safe_load_embed_zip(embed_path)
else: else:
embed = torch.load(embed_path, map_location="cpu", weights_only=True) embed = torch.load(embed_path, map_location="cpu")
except Exception as e: except Exception as e:
print(traceback.format_exc()) print(traceback.format_exc())
print() print()

View File

@ -377,15 +377,15 @@ class VQAutoEncoder(nn.Module):
) )
if model_path is not None: if model_path is not None:
chkpt = torch.load(model_path, map_location="cpu", weights_only=True) chkpt = torch.load(model_path, map_location="cpu")
if "params_ema" in chkpt: if "params_ema" in chkpt:
self.load_state_dict( self.load_state_dict(
torch.load(model_path, map_location="cpu", weights_only=True)["params_ema"] torch.load(model_path, map_location="cpu")["params_ema"]
) )
logger.info(f"vqgan is loaded from: {model_path} [params_ema]") logger.info(f"vqgan is loaded from: {model_path} [params_ema]")
elif "params" in chkpt: elif "params" in chkpt:
self.load_state_dict( self.load_state_dict(
torch.load(model_path, map_location="cpu", weights_only=True)["params"] torch.load(model_path, map_location="cpu")["params"]
) )
logger.info(f"vqgan is loaded from: {model_path} [params]") logger.info(f"vqgan is loaded from: {model_path} [params]")
else: else:

View File

@ -273,8 +273,8 @@ class GFPGANBilinear(nn.Module):
if decoder_load_path: if decoder_load_path:
self.stylegan_decoder.load_state_dict( self.stylegan_decoder.load_state_dict(
torch.load( torch.load(
decoder_load_path, map_location=lambda storage, loc: storage, decoder_load_path, map_location=lambda storage, loc: storage
weights_only=True)["params_ema"] )["params_ema"]
) )
# fix decoder without updating params # fix decoder without updating params
if fix_decoder: if fix_decoder:

View File

@ -373,8 +373,8 @@ class GFPGANv1(nn.Module):
if decoder_load_path: if decoder_load_path:
self.stylegan_decoder.load_state_dict( self.stylegan_decoder.load_state_dict(
torch.load( torch.load(
decoder_load_path, map_location=lambda storage, loc: storage, decoder_load_path, map_location=lambda storage, loc: storage
weights_only=True)["params_ema"] )["params_ema"]
) )
# fix decoder without updating params # fix decoder without updating params
if fix_decoder: if fix_decoder:

View File

@ -284,8 +284,8 @@ class GFPGANv1Clean(nn.Module):
if decoder_load_path: if decoder_load_path:
self.stylegan_decoder.load_state_dict( self.stylegan_decoder.load_state_dict(
torch.load( torch.load(
decoder_load_path, map_location=lambda storage, loc: storage, decoder_load_path, map_location=lambda storage, loc: storage
weights_only=True)["params_ema"] )["params_ema"]
) )
# fix decoder without updating params # fix decoder without updating params
if fix_decoder: if fix_decoder:

View File

File diff suppressed because it is too large

View File

@ -2,16 +2,13 @@ import os
import json import json
import math import math
import numbers import numbers
import args_manager import args_manager
import tempfile
import modules.flags import modules.flags
import modules.sdxl_styles import modules.sdxl_styles
from modules.model_loader import load_file_from_url from modules.model_loader import load_file_from_url
from modules.extra_utils import makedirs_with_log, get_files_from_folder, try_eval_env_var from modules.util import get_files_from_folder, makedirs_with_log
from modules.flags import OutputFormat, Performance, MetadataScheme from modules.flags import Performance, MetadataScheme
def get_config_path(key, default_value): def get_config_path(key, default_value):
env = os.getenv(key) env = os.getenv(key)
@ -21,7 +18,6 @@ def get_config_path(key, default_value):
else: else:
return os.path.abspath(default_value) return os.path.abspath(default_value)
wildcards_max_bfs_depth = 64
config_path = get_config_path('config_path', "./config.txt") config_path = get_config_path('config_path', "./config.txt")
config_example_path = get_config_path('config_example_path', "config_modification_tutorial.txt") config_example_path = get_config_path('config_example_path', "config_modification_tutorial.txt")
config_dict = {} config_dict = {}
@ -98,38 +94,21 @@ def try_load_deprecated_user_path_config():
try_load_deprecated_user_path_config() try_load_deprecated_user_path_config()
def get_presets(): preset = args_manager.args.preset
preset_folder = 'presets'
presets = ['initial']
if not os.path.exists(preset_folder):
print('No presets found.')
return presets
return presets + [f[:f.index(".json")] for f in os.listdir(preset_folder) if f.endswith('.json')] if isinstance(preset, str):
def update_presets():
global available_presets
available_presets = get_presets()
def try_get_preset_content(preset):
if isinstance(preset, str):
preset_path = os.path.abspath(f'./presets/{preset}.json') preset_path = os.path.abspath(f'./presets/{preset}.json')
try: try:
if os.path.exists(preset_path): if os.path.exists(preset_path):
with open(preset_path, "r", encoding="utf-8") as json_file: with open(preset_path, "r", encoding="utf-8") as json_file:
json_content = json.load(json_file) config_dict.update(json.load(json_file))
print(f'Loaded preset: {preset_path}') print(f'Loaded preset: {preset_path}')
return json_content
else: else:
raise FileNotFoundError raise FileNotFoundError
except Exception as e: except Exception as e:
print(f'Load preset [{preset_path}] failed') print(f'Load preset [{preset_path}] failed')
print(e) print(e)
return {}
available_presets = get_presets()
preset = args_manager.args.preset
config_dict.update(try_get_preset_content(preset))
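
On the side of the diff that defines get_presets and try_get_preset_content, the flow is roughly: every *.json file under ./presets becomes a selectable preset, 'initial' is always offered, and the JSON of the chosen preset is merged into config_dict. A small sketch (the preset name is illustrative, not a file guaranteed to exist in either branch):

available_presets = get_presets()                      # e.g. ['initial', 'anime', ...] depending on the ./presets folder
config_dict.update(try_get_preset_content('anime'))    # loads ./presets/anime.json, or returns {} with a warning if it is missing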
def get_path_output() -> str: def get_path_output() -> str:
""" """
@ -138,7 +117,7 @@ def get_path_output() -> str:
global config_dict global config_dict
path_output = get_dir_or_set_default('path_outputs', '../outputs/', make_directory=True) path_output = get_dir_or_set_default('path_outputs', '../outputs/', make_directory=True)
if args_manager.args.output_path: if args_manager.args.output_path:
print(f'Overriding config value path_outputs with {args_manager.args.output_path}') print(f'[CONFIG] Overriding config value path_outputs with {args_manager.args.output_path}')
config_dict['path_outputs'] = path_output = args_manager.args.output_path config_dict['path_outputs'] = path_output = args_manager.args.output_path
return path_output return path_output
@ -192,19 +171,14 @@ paths_checkpoints = get_dir_or_set_default('path_checkpoints', ['../models/check
paths_loras = get_dir_or_set_default('path_loras', ['../models/loras/'], True) paths_loras = get_dir_or_set_default('path_loras', ['../models/loras/'], True)
path_embeddings = get_dir_or_set_default('path_embeddings', '../models/embeddings/') path_embeddings = get_dir_or_set_default('path_embeddings', '../models/embeddings/')
path_vae_approx = get_dir_or_set_default('path_vae_approx', '../models/vae_approx/') path_vae_approx = get_dir_or_set_default('path_vae_approx', '../models/vae_approx/')
path_vae = get_dir_or_set_default('path_vae', '../models/vae/')
path_upscale_models = get_dir_or_set_default('path_upscale_models', '../models/upscale_models/') path_upscale_models = get_dir_or_set_default('path_upscale_models', '../models/upscale_models/')
path_inpaint = get_dir_or_set_default('path_inpaint', '../models/inpaint/') path_inpaint = get_dir_or_set_default('path_inpaint', '../models/inpaint/')
path_controlnet = get_dir_or_set_default('path_controlnet', '../models/controlnet/') path_controlnet = get_dir_or_set_default('path_controlnet', '../models/controlnet/')
path_clip_vision = get_dir_or_set_default('path_clip_vision', '../models/clip_vision/') path_clip_vision = get_dir_or_set_default('path_clip_vision', '../models/clip_vision/')
path_fooocus_expansion = get_dir_or_set_default('path_fooocus_expansion', '../models/prompt_expansion/fooocus_expansion') path_fooocus_expansion = get_dir_or_set_default('path_fooocus_expansion', '../models/prompt_expansion/fooocus_expansion')
path_wildcards = get_dir_or_set_default('path_wildcards', '../wildcards/')
path_safety_checker = get_dir_or_set_default('path_safety_checker', '../models/safety_checker/')
path_sam = get_dir_or_set_default('path_sam', '../models/sam/')
path_outputs = get_path_output() path_outputs = get_path_output()
def get_config_item_or_set_default(key, default_value, validator, disable_empty_as_none=False):
def get_config_item_or_set_default(key, default_value, validator, disable_empty_as_none=False, expected_type=None):
global config_dict, visited_keys global config_dict, visited_keys
if key not in visited_keys: if key not in visited_keys:
@ -212,7 +186,6 @@ def get_config_item_or_set_default(key, default_value, validator, disable_empty_
v = os.getenv(key) v = os.getenv(key)
if v is not None: if v is not None:
v = try_eval_env_var(v, expected_type)
print(f"Environment: {key} = {v}") print(f"Environment: {key} = {v}")
config_dict[key] = v config_dict[key] = v
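
With the expected_type / try_eval_env_var addition in the hunk above, an environment override is parsed into the expected Python type before it is written into config_dict, rather than being stored as a raw string. A hedged sketch, using a key defined later in this file (the value is illustrative):

import os
os.environ['default_max_image_number'] = '64'                                   # environment variables are always strings
v = try_eval_env_var(os.getenv('default_max_image_number'), expected_type=int)  # -> 64 as an int; the raw string is kept on a type mismatch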
@ -233,145 +206,86 @@ def get_config_item_or_set_default(key, default_value, validator, disable_empty_
return default_value return default_value
def init_temp_path(path: str | None, default_path: str) -> str: default_base_model_name = get_config_item_or_set_default(
if args_manager.args.temp_path:
path = args_manager.args.temp_path
if path != '' and path != default_path:
try:
if not os.path.isabs(path):
path = os.path.abspath(path)
os.makedirs(path, exist_ok=True)
print(f'Using temp path {path}')
return path
except Exception as e:
print(f'Could not create temp path {path}. Reason: {e}')
print(f'Using default temp path {default_path} instead.')
os.makedirs(default_path, exist_ok=True)
return default_path
default_temp_path = os.path.join(tempfile.gettempdir(), 'fooocus')
temp_path = init_temp_path(get_config_item_or_set_default(
key='temp_path',
default_value=default_temp_path,
validator=lambda x: isinstance(x, str),
expected_type=str
), default_temp_path)
temp_path_cleanup_on_launch = get_config_item_or_set_default(
key='temp_path_cleanup_on_launch',
default_value=True,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
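
A short sketch of the temp-path resolution added above: the command-line value (args_manager.args.temp_path) wins over the config value, which wins over the OS default, and whichever path is chosen is created up front. The path below is a placeholder.

default_temp_path = os.path.join(tempfile.gettempdir(), 'fooocus')   # e.g. /tmp/fooocus on Linux
temp_path = init_temp_path('./my_temp', default_temp_path)           # created as an absolute path and returned; falls back to the default on failure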
default_base_model_name = default_model = get_config_item_or_set_default(
key='default_model', key='default_model',
default_value='model.safetensors', default_value='model.safetensors',
validator=lambda x: isinstance(x, str), validator=lambda x: isinstance(x, str)
expected_type=str
) )
previous_default_models = get_config_item_or_set_default( previous_default_models = get_config_item_or_set_default(
key='previous_default_models', key='previous_default_models',
default_value=[], default_value=[],
validator=lambda x: isinstance(x, list) and all(isinstance(k, str) for k in x), validator=lambda x: isinstance(x, list) and all(isinstance(k, str) for k in x)
expected_type=list
) )
default_refiner_model_name = default_refiner = get_config_item_or_set_default( default_refiner_model_name = get_config_item_or_set_default(
key='default_refiner', key='default_refiner',
default_value='None', default_value='None',
validator=lambda x: isinstance(x, str), validator=lambda x: isinstance(x, str)
expected_type=str
) )
default_refiner_switch = get_config_item_or_set_default( default_refiner_switch = get_config_item_or_set_default(
key='default_refiner_switch', key='default_refiner_switch',
default_value=0.8, default_value=0.8,
validator=lambda x: isinstance(x, numbers.Number) and 0 <= x <= 1, validator=lambda x: isinstance(x, numbers.Number) and 0 <= x <= 1
expected_type=numbers.Number
) )
default_loras_min_weight = get_config_item_or_set_default( default_loras_min_weight = get_config_item_or_set_default(
key='default_loras_min_weight', key='default_loras_min_weight',
default_value=-2, default_value=-2,
validator=lambda x: isinstance(x, numbers.Number) and -10 <= x <= 10, validator=lambda x: isinstance(x, numbers.Number) and -10 <= x <= 10
expected_type=numbers.Number
) )
default_loras_max_weight = get_config_item_or_set_default( default_loras_max_weight = get_config_item_or_set_default(
key='default_loras_max_weight', key='default_loras_max_weight',
default_value=2, default_value=2,
validator=lambda x: isinstance(x, numbers.Number) and -10 <= x <= 10, validator=lambda x: isinstance(x, numbers.Number) and -10 <= x <= 10
expected_type=numbers.Number
) )
default_loras = get_config_item_or_set_default( default_loras = get_config_item_or_set_default(
key='default_loras', key='default_loras',
default_value=[ default_value=[
[ [
True,
"None", "None",
1.0 1.0
], ],
[ [
True,
"None", "None",
1.0 1.0
], ],
[ [
True,
"None", "None",
1.0 1.0
], ],
[ [
True,
"None", "None",
1.0 1.0
], ],
[ [
True,
"None", "None",
1.0 1.0
] ]
], ],
validator=lambda x: isinstance(x, list) and all( validator=lambda x: isinstance(x, list) and all(len(y) == 2 and isinstance(y[0], str) and isinstance(y[1], numbers.Number) for y in x)
len(y) == 3 and isinstance(y[0], bool) and isinstance(y[1], str) and isinstance(y[2], numbers.Number)
or len(y) == 2 and isinstance(y[0], str) and isinstance(y[1], numbers.Number)
for y in x),
expected_type=list
) )
default_loras = [(y[0], y[1], y[2]) if len(y) == 3 else (True, y[0], y[1]) for y in default_loras]
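
As a worked example of the normalization line above (the entries are illustrative, not defaults from either branch): both the two-element [name, weight] form and the three-element [enabled, name, weight] form pass the validator, and both are coerced to (enabled, name, weight) tuples, with legacy entries treated as enabled.

default_loras = [
    ['my_lora.safetensors', 0.5],     # legacy two-element entry (placeholder file name)
    [False, 'None', 1.0],             # three-element entry with an explicit enable flag
]
default_loras = [(y[0], y[1], y[2]) if len(y) == 3 else (True, y[0], y[1]) for y in default_loras]
# -> [(True, 'my_lora.safetensors', 0.5), (False, 'None', 1.0)]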
default_max_lora_number = get_config_item_or_set_default( default_max_lora_number = get_config_item_or_set_default(
key='default_max_lora_number', key='default_max_lora_number',
default_value=len(default_loras) if isinstance(default_loras, list) and len(default_loras) > 0 else 5, default_value=len(default_loras),
validator=lambda x: isinstance(x, int) and x >= 1, validator=lambda x: isinstance(x, int) and x >= 1
expected_type=int
) )
default_cfg_scale = get_config_item_or_set_default( default_cfg_scale = get_config_item_or_set_default(
key='default_cfg_scale', key='default_cfg_scale',
default_value=7.0, default_value=7.0,
validator=lambda x: isinstance(x, numbers.Number), validator=lambda x: isinstance(x, numbers.Number)
expected_type=numbers.Number
) )
default_sample_sharpness = get_config_item_or_set_default( default_sample_sharpness = get_config_item_or_set_default(
key='default_sample_sharpness', key='default_sample_sharpness',
default_value=2.0, default_value=2.0,
validator=lambda x: isinstance(x, numbers.Number), validator=lambda x: isinstance(x, numbers.Number)
expected_type=numbers.Number
) )
default_sampler = get_config_item_or_set_default( default_sampler = get_config_item_or_set_default(
key='default_sampler', key='default_sampler',
default_value='dpmpp_2m_sde_gpu', default_value='dpmpp_2m_sde_gpu',
validator=lambda x: x in modules.flags.sampler_list, validator=lambda x: x in modules.flags.sampler_list
expected_type=str
) )
default_scheduler = get_config_item_or_set_default( default_scheduler = get_config_item_or_set_default(
key='default_scheduler', key='default_scheduler',
default_value='karras', default_value='karras',
validator=lambda x: x in modules.flags.scheduler_list, validator=lambda x: x in modules.flags.scheduler_list
expected_type=str
)
default_vae = get_config_item_or_set_default(
key='default_vae',
default_value=modules.flags.default_vae,
validator=lambda x: isinstance(x, str),
expected_type=str
) )
default_styles = get_config_item_or_set_default( default_styles = get_config_item_or_set_default(
key='default_styles', key='default_styles',
@ -380,379 +294,146 @@ default_styles = get_config_item_or_set_default(
"Fooocus Enhance", "Fooocus Enhance",
"Fooocus Sharp" "Fooocus Sharp"
], ],
validator=lambda x: isinstance(x, list) and all(y in modules.sdxl_styles.legal_style_names for y in x), validator=lambda x: isinstance(x, list) and all(y in modules.sdxl_styles.legal_style_names for y in x)
expected_type=list
) )
default_prompt_negative = get_config_item_or_set_default( default_prompt_negative = get_config_item_or_set_default(
key='default_prompt_negative', key='default_prompt_negative',
default_value='', default_value='',
validator=lambda x: isinstance(x, str), validator=lambda x: isinstance(x, str),
disable_empty_as_none=True, disable_empty_as_none=True
expected_type=str
) )
default_prompt = get_config_item_or_set_default( default_prompt = get_config_item_or_set_default(
key='default_prompt', key='default_prompt',
default_value='', default_value='',
validator=lambda x: isinstance(x, str), validator=lambda x: isinstance(x, str),
disable_empty_as_none=True, disable_empty_as_none=True
expected_type=str
) )
default_performance = get_config_item_or_set_default( default_performance = get_config_item_or_set_default(
key='default_performance', key='default_performance',
default_value=Performance.SPEED.value, default_value=Performance.SPEED.value,
validator=lambda x: x in Performance.values(), validator=lambda x: x in Performance.list()
expected_type=str
)
default_image_prompt_checkbox = get_config_item_or_set_default(
key='default_image_prompt_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_enhance_checkbox = get_config_item_or_set_default(
key='default_enhance_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
) )
default_advanced_checkbox = get_config_item_or_set_default( default_advanced_checkbox = get_config_item_or_set_default(
key='default_advanced_checkbox', key='default_advanced_checkbox',
default_value=False, default_value=False,
validator=lambda x: isinstance(x, bool), validator=lambda x: isinstance(x, bool)
expected_type=bool
)
default_developer_debug_mode_checkbox = get_config_item_or_set_default(
key='default_developer_debug_mode_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_image_prompt_advanced_checkbox = get_config_item_or_set_default(
key='default_image_prompt_advanced_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
) )
default_max_image_number = get_config_item_or_set_default( default_max_image_number = get_config_item_or_set_default(
key='default_max_image_number', key='default_max_image_number',
default_value=32, default_value=32,
validator=lambda x: isinstance(x, int) and x >= 1, validator=lambda x: isinstance(x, int) and x >= 1
expected_type=int
) )
default_output_format = get_config_item_or_set_default( default_output_format = get_config_item_or_set_default(
key='default_output_format', key='default_output_format',
default_value='png', default_value='png',
validator=lambda x: x in OutputFormat.list(), validator=lambda x: x in modules.flags.output_formats
expected_type=str
) )
default_image_number = get_config_item_or_set_default( default_image_number = get_config_item_or_set_default(
key='default_image_number', key='default_image_number',
default_value=2, default_value=2,
validator=lambda x: isinstance(x, int) and 1 <= x <= default_max_image_number, validator=lambda x: isinstance(x, int) and 1 <= x <= default_max_image_number
expected_type=int
) )
checkpoint_downloads = get_config_item_or_set_default( checkpoint_downloads = get_config_item_or_set_default(
key='checkpoint_downloads', key='checkpoint_downloads',
default_value={}, default_value={},
validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items()), validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items())
expected_type=dict
) )
lora_downloads = get_config_item_or_set_default( lora_downloads = get_config_item_or_set_default(
key='lora_downloads', key='lora_downloads',
default_value={}, default_value={},
validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items()), validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items())
expected_type=dict
) )
embeddings_downloads = get_config_item_or_set_default( embeddings_downloads = get_config_item_or_set_default(
key='embeddings_downloads', key='embeddings_downloads',
default_value={}, default_value={},
validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items()), validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items())
expected_type=dict
)
vae_downloads = get_config_item_or_set_default(
key='vae_downloads',
default_value={},
validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items()),
expected_type=dict
) )
available_aspect_ratios = get_config_item_or_set_default( available_aspect_ratios = get_config_item_or_set_default(
key='available_aspect_ratios', key='available_aspect_ratios',
default_value=modules.flags.sdxl_aspect_ratios, default_value=[
validator=lambda x: isinstance(x, list) and all('*' in v for v in x) and len(x) > 1, '704*1408', '704*1344', '768*1344', '768*1280', '832*1216', '832*1152',
expected_type=list '896*1152', '896*1088', '960*1088', '960*1024', '1024*1024', '1024*960',
'1088*960', '1088*896', '1152*896', '1152*832', '1216*832', '1280*768',
'1344*768', '1344*704', '1408*704', '1472*704', '1536*640', '1600*640',
'1664*576', '1728*576'
],
validator=lambda x: isinstance(x, list) and all('*' in v for v in x) and len(x) > 1
) )
default_aspect_ratio = get_config_item_or_set_default( default_aspect_ratio = get_config_item_or_set_default(
key='default_aspect_ratio', key='default_aspect_ratio',
default_value='1152*896' if '1152*896' in available_aspect_ratios else available_aspect_ratios[0], default_value='1152*896' if '1152*896' in available_aspect_ratios else available_aspect_ratios[0],
validator=lambda x: x in available_aspect_ratios, validator=lambda x: x in available_aspect_ratios
expected_type=str
) )
default_inpaint_engine_version = get_config_item_or_set_default( default_inpaint_engine_version = get_config_item_or_set_default(
key='default_inpaint_engine_version', key='default_inpaint_engine_version',
default_value='v2.6', default_value='v2.6',
validator=lambda x: x in modules.flags.inpaint_engine_versions, validator=lambda x: x in modules.flags.inpaint_engine_versions
expected_type=str
)
default_selected_image_input_tab_id = get_config_item_or_set_default(
key='default_selected_image_input_tab_id',
default_value=modules.flags.default_input_image_tab,
validator=lambda x: x in modules.flags.input_image_tab_ids,
expected_type=str
)
default_uov_method = get_config_item_or_set_default(
key='default_uov_method',
default_value=modules.flags.disabled,
validator=lambda x: x in modules.flags.uov_list,
expected_type=str
)
default_controlnet_image_count = get_config_item_or_set_default(
key='default_controlnet_image_count',
default_value=4,
validator=lambda x: isinstance(x, int) and x > 0,
expected_type=int
)
default_ip_images = {}
default_ip_stop_ats = {}
default_ip_weights = {}
default_ip_types = {}
for image_count in range(default_controlnet_image_count):
image_count += 1
default_ip_images[image_count] = get_config_item_or_set_default(
key=f'default_ip_image_{image_count}',
default_value='None',
validator=lambda x: x == 'None' or isinstance(x, str) and os.path.exists(x),
expected_type=str
)
if default_ip_images[image_count] == 'None':
default_ip_images[image_count] = None
default_ip_types[image_count] = get_config_item_or_set_default(
key=f'default_ip_type_{image_count}',
default_value=modules.flags.default_ip,
validator=lambda x: x in modules.flags.ip_list,
expected_type=str
)
default_end, default_weight = modules.flags.default_parameters[default_ip_types[image_count]]
default_ip_stop_ats[image_count] = get_config_item_or_set_default(
key=f'default_ip_stop_at_{image_count}',
default_value=default_end,
validator=lambda x: isinstance(x, float) and 0 <= x <= 1,
expected_type=float
)
default_ip_weights[image_count] = get_config_item_or_set_default(
key=f'default_ip_weight_{image_count}',
default_value=default_weight,
validator=lambda x: isinstance(x, float) and 0 <= x <= 2,
expected_type=float
)
default_inpaint_advanced_masking_checkbox = get_config_item_or_set_default(
key='default_inpaint_advanced_masking_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_inpaint_method = get_config_item_or_set_default(
key='default_inpaint_method',
default_value=modules.flags.inpaint_option_default,
validator=lambda x: x in modules.flags.inpaint_options,
expected_type=str
) )
default_cfg_tsnr = get_config_item_or_set_default( default_cfg_tsnr = get_config_item_or_set_default(
key='default_cfg_tsnr', key='default_cfg_tsnr',
default_value=7.0, default_value=7.0,
validator=lambda x: isinstance(x, numbers.Number), validator=lambda x: isinstance(x, numbers.Number)
expected_type=numbers.Number
)
default_clip_skip = get_config_item_or_set_default(
key='default_clip_skip',
default_value=2,
validator=lambda x: isinstance(x, int) and 1 <= x <= modules.flags.clip_skip_max,
expected_type=int
) )
default_overwrite_step = get_config_item_or_set_default( default_overwrite_step = get_config_item_or_set_default(
key='default_overwrite_step', key='default_overwrite_step',
default_value=-1, default_value=-1,
validator=lambda x: isinstance(x, int), validator=lambda x: isinstance(x, int)
expected_type=int
) )
default_overwrite_switch = get_config_item_or_set_default( default_overwrite_switch = get_config_item_or_set_default(
key='default_overwrite_switch', key='default_overwrite_switch',
default_value=-1, default_value=-1,
validator=lambda x: isinstance(x, int), validator=lambda x: isinstance(x, int)
expected_type=int
)
default_overwrite_upscale = get_config_item_or_set_default(
key='default_overwrite_upscale',
default_value=-1,
validator=lambda x: isinstance(x, numbers.Number)
) )
example_inpaint_prompts = get_config_item_or_set_default( example_inpaint_prompts = get_config_item_or_set_default(
key='example_inpaint_prompts', key='example_inpaint_prompts',
default_value=[ default_value=[
'highly detailed face', 'detailed girl face', 'detailed man face', 'detailed hand', 'beautiful eyes' 'highly detailed face', 'detailed girl face', 'detailed man face', 'detailed hand', 'beautiful eyes'
], ],
validator=lambda x: isinstance(x, list) and all(isinstance(v, str) for v in x), validator=lambda x: isinstance(x, list) and all(isinstance(v, str) for v in x)
expected_type=list
)
example_enhance_detection_prompts = get_config_item_or_set_default(
key='example_enhance_detection_prompts',
default_value=[
'face', 'eye', 'mouth', 'hair', 'hand', 'body'
],
validator=lambda x: isinstance(x, list) and all(isinstance(v, str) for v in x),
expected_type=list
)
default_enhance_tabs = get_config_item_or_set_default(
key='default_enhance_tabs',
default_value=3,
validator=lambda x: isinstance(x, int) and 1 <= x <= 5,
expected_type=int
)
default_enhance_uov_method = get_config_item_or_set_default(
key='default_enhance_uov_method',
default_value=modules.flags.disabled,
validator=lambda x: x in modules.flags.uov_list,
expected_type=int
)
default_enhance_uov_processing_order = get_config_item_or_set_default(
key='default_enhance_uov_processing_order',
default_value=modules.flags.enhancement_uov_before,
validator=lambda x: x in modules.flags.enhancement_uov_processing_order,
expected_type=int
)
default_enhance_uov_prompt_type = get_config_item_or_set_default(
key='default_enhance_uov_prompt_type',
default_value=modules.flags.enhancement_uov_prompt_type_original,
validator=lambda x: x in modules.flags.enhancement_uov_prompt_types,
expected_type=int
)
default_sam_max_detections = get_config_item_or_set_default(
key='default_sam_max_detections',
default_value=0,
validator=lambda x: isinstance(x, int) and 0 <= x <= 10,
expected_type=int
)
default_black_out_nsfw = get_config_item_or_set_default(
key='default_black_out_nsfw',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_save_only_final_enhanced_image = get_config_item_or_set_default(
key='default_save_only_final_enhanced_image',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
) )
default_save_metadata_to_images = get_config_item_or_set_default( default_save_metadata_to_images = get_config_item_or_set_default(
key='default_save_metadata_to_images', key='default_save_metadata_to_images',
default_value=False, default_value=False,
validator=lambda x: isinstance(x, bool), validator=lambda x: isinstance(x, bool)
expected_type=bool
) )
default_metadata_scheme = get_config_item_or_set_default( default_metadata_scheme = get_config_item_or_set_default(
key='default_metadata_scheme', key='default_metadata_scheme',
default_value=MetadataScheme.FOOOCUS.value, default_value=MetadataScheme.FOOOCUS.value,
validator=lambda x: x in [y[1] for y in modules.flags.metadata_scheme if y[1] == x], validator=lambda x: x in [y[1] for y in modules.flags.metadata_scheme if y[1] == x]
expected_type=str
) )
metadata_created_by = get_config_item_or_set_default( metadata_created_by = get_config_item_or_set_default(
key='metadata_created_by', key='metadata_created_by',
default_value='', default_value='',
validator=lambda x: isinstance(x, str), validator=lambda x: isinstance(x, str)
expected_type=str
) )
example_inpaint_prompts = [[x] for x in example_inpaint_prompts] example_inpaint_prompts = [[x] for x in example_inpaint_prompts]
example_enhance_detection_prompts = [[x] for x in example_enhance_detection_prompts]
default_invert_mask_checkbox = get_config_item_or_set_default( config_dict["default_loras"] = default_loras = default_loras[:default_max_lora_number] + [['None', 1.0] for _ in range(default_max_lora_number - len(default_loras))]
key='default_invert_mask_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_inpaint_mask_model = get_config_item_or_set_default( possible_preset_keys = [
key='default_inpaint_mask_model', "default_model",
default_value='isnet-general-use', "default_refiner",
validator=lambda x: x in modules.flags.inpaint_mask_models, "default_refiner_switch",
expected_type=str "default_loras_min_weight",
) "default_loras_max_weight",
"default_loras",
"default_max_lora_number",
"default_cfg_scale",
"default_sample_sharpness",
"default_sampler",
"default_scheduler",
"default_performance",
"default_prompt",
"default_prompt_negative",
"default_styles",
"default_aspect_ratio",
"default_save_metadata_to_images",
"checkpoint_downloads",
"embeddings_downloads",
"lora_downloads",
]
default_enhance_inpaint_mask_model = get_config_item_or_set_default(
key='default_enhance_inpaint_mask_model',
default_value='sam',
validator=lambda x: x in modules.flags.inpaint_mask_models,
expected_type=str
)
default_inpaint_mask_cloth_category = get_config_item_or_set_default(
key='default_inpaint_mask_cloth_category',
default_value='full',
validator=lambda x: x in modules.flags.inpaint_mask_cloth_category,
expected_type=str
)
default_inpaint_mask_sam_model = get_config_item_or_set_default(
key='default_inpaint_mask_sam_model',
default_value='vit_b',
validator=lambda x: x in modules.flags.inpaint_mask_sam_model,
expected_type=str
)
default_describe_apply_prompts_checkbox = get_config_item_or_set_default(
key='default_describe_apply_prompts_checkbox',
default_value=True,
validator=lambda x: isinstance(x, bool),
expected_type=bool
)
default_describe_content_type = get_config_item_or_set_default(
key='default_describe_content_type',
default_value=[modules.flags.describe_type_photo],
validator=lambda x: all(k in modules.flags.describe_types for k in x),
expected_type=list
)
config_dict["default_loras"] = default_loras = default_loras[:default_max_lora_number] + [[True, 'None', 1.0] for _ in range(default_max_lora_number - len(default_loras))]
# mapping config to meta parameter
possible_preset_keys = {
"default_model": "base_model",
"default_refiner": "refiner_model",
"default_refiner_switch": "refiner_switch",
"previous_default_models": "previous_default_models",
"default_loras_min_weight": "default_loras_min_weight",
"default_loras_max_weight": "default_loras_max_weight",
"default_loras": "<processed>",
"default_cfg_scale": "guidance_scale",
"default_sample_sharpness": "sharpness",
"default_cfg_tsnr": "adaptive_cfg",
"default_clip_skip": "clip_skip",
"default_sampler": "sampler",
"default_scheduler": "scheduler",
"default_overwrite_step": "steps",
"default_overwrite_switch": "overwrite_switch",
"default_performance": "performance",
"default_image_number": "image_number",
"default_prompt": "prompt",
"default_prompt_negative": "negative_prompt",
"default_styles": "styles",
"default_aspect_ratio": "resolution",
"default_save_metadata_to_images": "default_save_metadata_to_images",
"checkpoint_downloads": "checkpoint_downloads",
"embeddings_downloads": "embeddings_downloads",
"lora_downloads": "lora_downloads",
"vae_downloads": "vae_downloads",
"default_vae": "vae",
# "default_inpaint_method": "inpaint_method", # disabled so inpaint mode doesn't refresh after every preset change
"default_inpaint_engine_version": "inpaint_engine_version",
}
REWRITE_PRESET = False REWRITE_PRESET = False
@ -772,7 +453,7 @@ def add_ratio(x):
default_aspect_ratio = add_ratio(default_aspect_ratio) default_aspect_ratio = add_ratio(default_aspect_ratio)
available_aspect_ratios_labels = [add_ratio(x) for x in available_aspect_ratios] available_aspect_ratios = [add_ratio(x) for x in available_aspect_ratios]
# Only write config in the first launch. # Only write config in the first launch.
@ -793,30 +474,20 @@ with open(config_example_path, "w", encoding="utf-8") as json_file:
model_filenames = [] model_filenames = []
lora_filenames = [] lora_filenames = []
vae_filenames = []
wildcard_filenames = []
def get_model_filenames(folder_paths, extensions=None, name_filter=None): def get_model_filenames(folder_paths, name_filter=None):
if extensions is None:
extensions = ['.pth', '.ckpt', '.bin', '.safetensors', '.fooocus.patch'] extensions = ['.pth', '.ckpt', '.bin', '.safetensors', '.fooocus.patch']
files = [] files = []
if not isinstance(folder_paths, list):
folder_paths = [folder_paths]
for folder in folder_paths: for folder in folder_paths:
files += get_files_from_folder(folder, extensions, name_filter) files += get_files_from_folder(folder, extensions, name_filter)
return files return files
def update_files(): def update_all_model_names():
global model_filenames, lora_filenames, vae_filenames, wildcard_filenames, available_presets global model_filenames, lora_filenames
model_filenames = get_model_filenames(paths_checkpoints) model_filenames = get_model_filenames(paths_checkpoints)
lora_filenames = get_model_filenames(paths_loras) lora_filenames = get_model_filenames(paths_loras)
vae_filenames = get_model_filenames(path_vae)
wildcard_filenames = get_files_from_folder(path_wildcards, ['.txt'])
available_presets = get_presets()
return return
@ -862,27 +533,9 @@ def downloading_sdxl_lcm_lora():
load_file_from_url( load_file_from_url(
url='https://huggingface.co/lllyasviel/misc/resolve/main/sdxl_lcm_lora.safetensors', url='https://huggingface.co/lllyasviel/misc/resolve/main/sdxl_lcm_lora.safetensors',
model_dir=paths_loras[0], model_dir=paths_loras[0],
file_name=modules.flags.PerformanceLoRA.EXTREME_SPEED.value file_name='sdxl_lcm_lora.safetensors'
) )
return modules.flags.PerformanceLoRA.EXTREME_SPEED.value return 'sdxl_lcm_lora.safetensors'
def downloading_sdxl_lightning_lora():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/sdxl_lightning_4step_lora.safetensors',
model_dir=paths_loras[0],
file_name=modules.flags.PerformanceLoRA.LIGHTNING.value
)
return modules.flags.PerformanceLoRA.LIGHTNING.value
def downloading_sdxl_hyper_sd_lora():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/sdxl_hyper_sd_4step_lora.safetensors',
model_dir=paths_loras[0],
file_name=modules.flags.PerformanceLoRA.HYPER_SD.value
)
return modules.flags.PerformanceLoRA.HYPER_SD.value
def downloading_controlnet_canny(): def downloading_controlnet_canny():
@ -949,49 +602,5 @@ def downloading_upscale_model():
) )
return os.path.join(path_upscale_models, 'fooocus_upscaler_s409985e5.bin') return os.path.join(path_upscale_models, 'fooocus_upscaler_s409985e5.bin')
def downloading_safety_checker_model():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/stable-diffusion-safety-checker.bin',
model_dir=path_safety_checker,
file_name='stable-diffusion-safety-checker.bin'
)
return os.path.join(path_safety_checker, 'stable-diffusion-safety-checker.bin')
update_all_model_names()
def download_sam_model(sam_model: str) -> str:
match sam_model:
case 'vit_b':
return downloading_sam_vit_b()
case 'vit_l':
return downloading_sam_vit_l()
case 'vit_h':
return downloading_sam_vit_h()
case _:
raise ValueError(f"sam model {sam_model} does not exist.")
def downloading_sam_vit_b():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/sam_vit_b_01ec64.pth',
model_dir=path_sam,
file_name='sam_vit_b_01ec64.pth'
)
return os.path.join(path_sam, 'sam_vit_b_01ec64.pth')
def downloading_sam_vit_l():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/sam_vit_l_0b3195.pth',
model_dir=path_sam,
file_name='sam_vit_l_0b3195.pth'
)
return os.path.join(path_sam, 'sam_vit_l_0b3195.pth')
def downloading_sam_vit_h():
load_file_from_url(
url='https://huggingface.co/mashb1t/misc/resolve/main/sam_vit_h_4b8939.pth',
model_dir=path_sam,
file_name='sam_vit_h_4b8939.pth'
)
return os.path.join(path_sam, 'sam_vit_h_4b8939.pth')
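
A one-line usage sketch for the SAM helpers above (the download only happens if the file is not already present; any other model name raises ValueError):

sam_path = download_sam_model('vit_b')   # -> os.path.join(path_sam, 'sam_vit_b_01ec64.pth'), fetched on first use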

View File

@ -21,7 +21,8 @@ from modules.lora import match_lora
from modules.util import get_file_from_folder_list from modules.util import get_file_from_folder_list
from ldm_patched.modules.lora import model_lora_keys_unet, model_lora_keys_clip from ldm_patched.modules.lora import model_lora_keys_unet, model_lora_keys_clip
from modules.config import path_embeddings from modules.config import path_embeddings
from ldm_patched.contrib.external_model_advanced import ModelSamplingDiscrete, ModelSamplingContinuousEDM from ldm_patched.contrib.external_model_advanced import ModelSamplingDiscrete
opEmptyLatentImage = EmptyLatentImage() opEmptyLatentImage = EmptyLatentImage()
opVAEDecode = VAEDecode() opVAEDecode = VAEDecode()
@ -31,17 +32,15 @@ opVAEEncodeTiled = VAEEncodeTiled()
opControlNetApplyAdvanced = ControlNetApplyAdvanced() opControlNetApplyAdvanced = ControlNetApplyAdvanced()
opFreeU = FreeU_V2() opFreeU = FreeU_V2()
opModelSamplingDiscrete = ModelSamplingDiscrete() opModelSamplingDiscrete = ModelSamplingDiscrete()
opModelSamplingContinuousEDM = ModelSamplingContinuousEDM()
class StableDiffusionModel: class StableDiffusionModel:
def __init__(self, unet=None, vae=None, clip=None, clip_vision=None, filename=None, vae_filename=None): def __init__(self, unet=None, vae=None, clip=None, clip_vision=None, filename=None):
self.unet = unet self.unet = unet
self.vae = vae self.vae = vae
self.clip = clip self.clip = clip
self.clip_vision = clip_vision self.clip_vision = clip_vision
self.filename = filename self.filename = filename
self.vae_filename = vae_filename
self.unet_with_lora = unet self.unet_with_lora = unet
self.clip_with_lora = clip self.clip_with_lora = clip
self.visited_loras = '' self.visited_loras = ''
@ -74,14 +73,14 @@ class StableDiffusionModel:
loras_to_load = [] loras_to_load = []
for filename, weight in loras: for name, weight in loras:
if filename == 'None': if name == 'None':
continue continue
if os.path.exists(filename): if os.path.exists(name):
lora_filename = filename lora_filename = name
else: else:
lora_filename = get_file_from_folder_list(filename, modules.config.paths_loras) lora_filename = get_file_from_folder_list(name, modules.config.paths_loras)
if not os.path.exists(lora_filename): if not os.path.exists(lora_filename):
print(f'Lora file not found: {lora_filename}') print(f'Lora file not found: {lora_filename}')
@ -143,10 +142,9 @@ def apply_controlnet(positive, negative, control_net, image, strength, start_per
@torch.no_grad() @torch.no_grad()
@torch.inference_mode() @torch.inference_mode()
def load_model(ckpt_filename, vae_filename=None): def load_model(ckpt_filename):
unet, clip, vae, vae_filename, clip_vision = load_checkpoint_guess_config(ckpt_filename, embedding_directory=path_embeddings, unet, clip, vae, clip_vision = load_checkpoint_guess_config(ckpt_filename, embedding_directory=path_embeddings)
vae_filename_param=vae_filename) return StableDiffusionModel(unet=unet, clip=clip, vae=vae, clip_vision=clip_vision, filename=ckpt_filename)
return StableDiffusionModel(unet=unet, clip=clip, vae=vae, clip_vision=clip_vision, filename=ckpt_filename, vae_filename=vae_filename)
@torch.no_grad() @torch.no_grad()
@ -231,7 +229,7 @@ def get_previewer(model):
if vae_approx_filename in VAE_approx_models: if vae_approx_filename in VAE_approx_models:
VAE_approx_model = VAE_approx_models[vae_approx_filename] VAE_approx_model = VAE_approx_models[vae_approx_filename]
else: else:
sd = torch.load(vae_approx_filename, map_location='cpu', weights_only=True) sd = torch.load(vae_approx_filename, map_location='cpu')
VAE_approx_model = VAEApprox() VAE_approx_model = VAEApprox()
VAE_approx_model.load_state_dict(sd) VAE_approx_model.load_state_dict(sd)
del sd del sd

View File

@ -3,7 +3,6 @@ import os
import torch import torch
import modules.patch import modules.patch
import modules.config import modules.config
import modules.flags
import ldm_patched.modules.model_management import ldm_patched.modules.model_management
import ldm_patched.modules.latent_formats import ldm_patched.modules.latent_formats
import modules.inpaint_worker import modules.inpaint_worker
@ -12,7 +11,7 @@ from extras.expansion import FooocusExpansion
from ldm_patched.modules.model_base import SDXL, SDXLRefiner from ldm_patched.modules.model_base import SDXL, SDXLRefiner
from modules.sample_hijack import clip_separate from modules.sample_hijack import clip_separate
from modules.util import get_file_from_folder_list, get_enabled_loras from modules.util import get_file_from_folder_list
model_base = core.StableDiffusionModel() model_base = core.StableDiffusionModel()
@ -59,21 +58,17 @@ def assert_model_integrity():
@torch.no_grad() @torch.no_grad()
@torch.inference_mode() @torch.inference_mode()
def refresh_base_model(name, vae_name=None): def refresh_base_model(name):
global model_base global model_base
filename = get_file_from_folder_list(name, modules.config.paths_checkpoints) filename = get_file_from_folder_list(name, modules.config.paths_checkpoints)
vae_filename = None if model_base.filename == filename:
if vae_name is not None and vae_name != modules.flags.default_vae:
vae_filename = get_file_from_folder_list(vae_name, modules.config.path_vae)
if model_base.filename == filename and model_base.vae_filename == vae_filename:
return return
model_base = core.load_model(filename, vae_filename) model_base = core.StableDiffusionModel()
model_base = core.load_model(filename)
print(f'Base model loaded: {model_base.filename}') print(f'Base model loaded: {model_base.filename}')
print(f'VAE loaded: {model_base.vae_filename}')
return return
@ -201,17 +196,6 @@ def clip_encode(texts, pool_top_k=1):
return [[torch.cat(cond_list, dim=1), {"pooled_output": pooled_acc}]] return [[torch.cat(cond_list, dim=1), {"pooled_output": pooled_acc}]]
@torch.no_grad()
@torch.inference_mode()
def set_clip_skip(clip_skip: int):
global final_clip
if final_clip is None:
return
final_clip.clip_layer(-abs(clip_skip))
return
@torch.no_grad() @torch.no_grad()
@torch.inference_mode() @torch.inference_mode()
def clear_all_caches(): def clear_all_caches():
@ -232,7 +216,7 @@ def prepare_text_encoder(async_call=True):
@torch.no_grad() @torch.no_grad()
@torch.inference_mode() @torch.inference_mode()
def refresh_everything(refiner_model_name, base_model_name, loras, def refresh_everything(refiner_model_name, base_model_name, loras,
base_model_additional_loras=None, use_synthetic_refiner=False, vae_name=None): base_model_additional_loras=None, use_synthetic_refiner=False):
global final_unet, final_clip, final_vae, final_refiner_unet, final_refiner_vae, final_expansion global final_unet, final_clip, final_vae, final_refiner_unet, final_refiner_vae, final_expansion
final_unet = None final_unet = None
@ -243,11 +227,11 @@ def refresh_everything(refiner_model_name, base_model_name, loras,
if use_synthetic_refiner and refiner_model_name == 'None': if use_synthetic_refiner and refiner_model_name == 'None':
print('Synthetic Refiner Activated') print('Synthetic Refiner Activated')
refresh_base_model(base_model_name, vae_name) refresh_base_model(base_model_name)
synthesize_refiner_model() synthesize_refiner_model()
else: else:
refresh_refiner_model(refiner_model_name) refresh_refiner_model(refiner_model_name)
refresh_base_model(base_model_name, vae_name) refresh_base_model(base_model_name)
refresh_loras(loras, base_model_additional_loras=base_model_additional_loras) refresh_loras(loras, base_model_additional_loras=base_model_additional_loras)
assert_model_integrity() assert_model_integrity()
@ -270,8 +254,7 @@ def refresh_everything(refiner_model_name, base_model_name, loras,
refresh_everything( refresh_everything(
refiner_model_name=modules.config.default_refiner_model_name, refiner_model_name=modules.config.default_refiner_model_name,
base_model_name=modules.config.default_base_model_name, base_model_name=modules.config.default_base_model_name,
loras=get_enabled_loras(modules.config.default_loras), loras=modules.config.default_loras
vae_name=modules.config.default_vae,
) )

View File

@ -1,41 +0,0 @@
import os
from ast import literal_eval
def makedirs_with_log(path):
try:
os.makedirs(path, exist_ok=True)
except OSError as error:
print(f'Directory {path} could not be created, reason: {error}')
def get_files_from_folder(folder_path, extensions=None, name_filter=None):
if not os.path.isdir(folder_path):
raise ValueError("Folder path is not a valid directory.")
filenames = []
for root, _, files in os.walk(folder_path, topdown=False):
relative_path = os.path.relpath(root, folder_path)
if relative_path == ".":
relative_path = ""
for filename in sorted(files, key=lambda s: s.casefold()):
_, file_extension = os.path.splitext(filename)
if (extensions is None or file_extension.lower() in extensions) and (name_filter is None or name_filter in _):
path = os.path.join(relative_path, filename)
filenames.append(path)
return filenames
def try_eval_env_var(value: str, expected_type=None):
try:
value_eval = value
if expected_type is bool:
value_eval = value.title()
value_eval = literal_eval(value_eval)
if expected_type is not None and not isinstance(value_eval, expected_type):
return value
return value_eval
except:
return value
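
A few illustrative evaluations of the helper above (inputs chosen here for illustration): boolean strings are title-cased before literal_eval so 'true' and 'FALSE' parse, and any value that does not match the expected type comes back unchanged as the original string.

try_eval_env_var('false', bool)         # -> False
try_eval_env_var('7.5', float)          # -> 7.5
try_eval_env_var('7.5', int)            # -> '7.5' (type mismatch, raw string returned)
try_eval_env_var('["a", "b"]', list)    # -> ['a', 'b']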

View File

@ -8,15 +8,9 @@ upscale_15 = 'Upscale (1.5x)'
upscale_2 = 'Upscale (2x)' upscale_2 = 'Upscale (2x)'
upscale_fast = 'Upscale (Fast 2x)' upscale_fast = 'Upscale (Fast 2x)'
uov_list = [disabled, subtle_variation, strong_variation, upscale_15, upscale_2, upscale_fast] uov_list = [
disabled, subtle_variation, strong_variation, upscale_15, upscale_2, upscale_fast
enhancement_uov_before = "Before First Enhancement" ]
enhancement_uov_after = "After Last Enhancement"
enhancement_uov_processing_order = [enhancement_uov_before, enhancement_uov_after]
enhancement_uov_prompt_type_original = 'Original Prompts'
enhancement_uov_prompt_type_last_filled = 'Last Filled Enhancement Prompts'
enhancement_uov_prompt_types = [enhancement_uov_prompt_type_original, enhancement_uov_prompt_type_last_filled]
CIVITAI_NO_KARRAS = ["euler", "euler_ancestral", "heun", "dpm_fast", "dpm_adaptive", "ddim", "uni_pc"] CIVITAI_NO_KARRAS = ["euler", "euler_ancestral", "heun", "dpm_fast", "dpm_adaptive", "ddim", "uni_pc"]
@ -40,9 +34,7 @@ KSAMPLER = {
"dpmpp_3m_sde": "", "dpmpp_3m_sde": "",
"dpmpp_3m_sde_gpu": "", "dpmpp_3m_sde_gpu": "",
"ddpm": "", "ddpm": "",
"lcm": "LCM", "lcm": "LCM"
"tcd": "TCD",
"restart": "Restart"
} }
SAMPLER_EXTRA = { SAMPLER_EXTRA = {
@ -55,21 +47,14 @@ SAMPLERS = KSAMPLER | SAMPLER_EXTRA
KSAMPLER_NAMES = list(KSAMPLER.keys()) KSAMPLER_NAMES = list(KSAMPLER.keys())
SCHEDULER_NAMES = ["normal", "karras", "exponential", "sgm_uniform", "simple", "ddim_uniform", "lcm", "turbo", "align_your_steps", "tcd", "edm_playground_v2.5"] SCHEDULER_NAMES = ["normal", "karras", "exponential", "sgm_uniform", "simple", "ddim_uniform", "lcm", "turbo"]
SAMPLER_NAMES = KSAMPLER_NAMES + list(SAMPLER_EXTRA.keys()) SAMPLER_NAMES = KSAMPLER_NAMES + list(SAMPLER_EXTRA.keys())
sampler_list = SAMPLER_NAMES sampler_list = SAMPLER_NAMES
scheduler_list = SCHEDULER_NAMES scheduler_list = SCHEDULER_NAMES
clip_skip_max = 12
default_vae = 'Default (model)'
refiner_swap_method = 'joint' refiner_swap_method = 'joint'
default_input_image_tab = 'uov_tab'
input_image_tab_ids = ['uov_tab', 'ip_tab', 'inpaint_tab', 'describe_tab', 'enhance_tab', 'metadata_tab']
cn_ip = "ImagePrompt" cn_ip = "ImagePrompt"
cn_ip_face = "FaceSwap" cn_ip_face = "FaceSwap"
cn_canny = "PyraCanny" cn_canny = "PyraCanny"
@ -82,11 +67,7 @@ default_parameters = {
cn_ip: (0.5, 0.6), cn_ip_face: (0.9, 0.75), cn_canny: (0.5, 1.0), cn_cpds: (0.5, 1.0) cn_ip: (0.5, 0.6), cn_ip_face: (0.9, 0.75), cn_canny: (0.5, 1.0), cn_cpds: (0.5, 1.0)
} # stop, weight } # stop, weight
output_formats = ['png', 'jpeg', 'webp'] output_formats = ['png', 'jpg', 'webp']
inpaint_mask_models = ['u2net', 'u2netp', 'u2net_human_seg', 'u2net_cloth_seg', 'silueta', 'isnet-general-use', 'isnet-anime', 'sam']
inpaint_mask_cloth_category = ['full', 'upper', 'lower']
inpaint_mask_sam_model = ['vit_b', 'vit_l', 'vit_h']
inpaint_engine_versions = ['None', 'v1', 'v2.5', 'v2.6'] inpaint_engine_versions = ['None', 'v1', 'v2.5', 'v2.6']
inpaint_option_default = 'Inpaint or Outpaint (default)' inpaint_option_default = 'Inpaint or Outpaint (default)'
@ -94,17 +75,8 @@ inpaint_option_detail = 'Improve Detail (face, hand, eyes, etc.)'
inpaint_option_modify = 'Modify Content (add objects, change background, etc.)' inpaint_option_modify = 'Modify Content (add objects, change background, etc.)'
inpaint_options = [inpaint_option_default, inpaint_option_detail, inpaint_option_modify] inpaint_options = [inpaint_option_default, inpaint_option_detail, inpaint_option_modify]
describe_type_photo = 'Photograph' desc_type_photo = 'Photograph'
describe_type_anime = 'Art/Anime' desc_type_anime = 'Art/Anime'
describe_types = [describe_type_photo, describe_type_anime]
sdxl_aspect_ratios = [
'704*1408', '704*1344', '768*1344', '768*1280', '832*1216', '832*1152',
'896*1152', '896*1088', '960*1088', '960*1024', '1024*1024', '1024*960',
'1088*960', '1088*896', '1152*896', '1152*832', '1216*832', '1280*768',
'1344*768', '1344*704', '1408*704', '1472*704', '1536*640', '1600*640',
'1664*576', '1728*576'
]
class MetadataScheme(Enum): class MetadataScheme(Enum):
@ -117,75 +89,37 @@ metadata_scheme = [
(f'{MetadataScheme.A1111.value} (plain text)', MetadataScheme.A1111.value), (f'{MetadataScheme.A1111.value} (plain text)', MetadataScheme.A1111.value),
] ]
lora_count = 5
class OutputFormat(Enum): controlnet_image_count = 4
PNG = 'png'
JPEG = 'jpeg'
WEBP = 'webp'
@classmethod
def list(cls) -> list:
return list(map(lambda c: c.value, cls))
class PerformanceLoRA(Enum):
QUALITY = None
SPEED = None
EXTREME_SPEED = 'sdxl_lcm_lora.safetensors'
LIGHTNING = 'sdxl_lightning_4step_lora.safetensors'
HYPER_SD = 'sdxl_hyper_sd_4step_lora.safetensors'
class Steps(IntEnum): class Steps(IntEnum):
QUALITY = 60 QUALITY = 60
SPEED = 30 SPEED = 30
EXTREME_SPEED = 8 EXTREME_SPEED = 8
LIGHTNING = 4
HYPER_SD = 4
@classmethod
def keys(cls) -> list:
return list(map(lambda c: c, Steps.__members__))
class StepsUOV(IntEnum): class StepsUOV(IntEnum):
QUALITY = 36 QUALITY = 36
SPEED = 18 SPEED = 18
EXTREME_SPEED = 8 EXTREME_SPEED = 8
LIGHTNING = 4
HYPER_SD = 4
class Performance(Enum): class Performance(Enum):
QUALITY = 'Quality' QUALITY = 'Quality'
SPEED = 'Speed' SPEED = 'Speed'
EXTREME_SPEED = 'Extreme Speed' EXTREME_SPEED = 'Extreme Speed'
LIGHTNING = 'Lightning'
HYPER_SD = 'Hyper-SD'
@classmethod @classmethod
def list(cls) -> list: def list(cls) -> list:
return list(map(lambda c: (c.name, c.value), cls))
@classmethod
def values(cls) -> list:
return list(map(lambda c: c.value, cls)) return list(map(lambda c: c.value, cls))
@classmethod
def by_steps(cls, steps: int | str):
return cls[Steps(int(steps)).name]
@classmethod
def has_restricted_features(cls, x) -> bool:
if isinstance(x, Performance):
x = x.value
return x in [cls.EXTREME_SPEED.value, cls.LIGHTNING.value, cls.HYPER_SD.value]
def steps(self) -> int | None: def steps(self) -> int | None:
return Steps[self.name].value if self.name in Steps.__members__ else None return Steps[self.name].value if Steps[self.name] else None
def steps_uov(self) -> int | None: def steps_uov(self) -> int | None:
return StepsUOV[self.name].value if self.name in StepsUOV.__members__ else None return StepsUOV[self.name].value if Steps[self.name] else None
def lora_filename(self) -> str | None:
return PerformanceLoRA[self.name].value if self.name in PerformanceLoRA.__members__ else None performance_selections = Performance.list()
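
Assuming the extended Performance, Steps and PerformanceLoRA enums on the side of the diff that defines Lightning and Hyper-SD, the helper methods behave roughly as follows:

from modules.flags import Performance

Performance.SPEED.steps()                         # -> 30
Performance.LIGHTNING.lora_filename()             # -> 'sdxl_lightning_4step_lora.safetensors'
Performance.by_steps(8)                           # -> Performance.EXTREME_SPEED
Performance.has_restricted_features('Hyper-SD')   # -> True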

View File

@ -17,7 +17,7 @@ from gradio_client.documentation import document, set_documentation_group
from gradio_client.serializing import ImgSerializable from gradio_client.serializing import ImgSerializable
from PIL import Image as _Image # using _ to minimize namespace pollution from PIL import Image as _Image # using _ to minimize namespace pollution
from gradio import processing_utils, utils, Error from gradio import processing_utils, utils
from gradio.components.base import IOComponent, _Keywords, Block from gradio.components.base import IOComponent, _Keywords, Block
from gradio.deprecation import warn_style_method_deprecation from gradio.deprecation import warn_style_method_deprecation
from gradio.events import ( from gradio.events import (
@ -275,10 +275,7 @@ class Image(
x, mask = x["image"], x["mask"] x, mask = x["image"], x["mask"]
assert isinstance(x, str) assert isinstance(x, str)
try:
im = processing_utils.decode_base64_to_image(x) im = processing_utils.decode_base64_to_image(x)
except PIL.UnidentifiedImageError:
raise Error("Unsupported image type in input")
with warnings.catch_warnings(): with warnings.catch_warnings():
warnings.simplefilter("ignore") warnings.simplefilter("ignore")
im = im.convert(self.image_mode) im = im.convert(self.image_mode)

View File

@ -1,83 +0,0 @@
import json
import os
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import cpu_count
import args_manager
from modules.util import sha256, HASH_SHA256_LENGTH, get_file_from_folder_list
hash_cache_filename = 'hash_cache.txt'
hash_cache = {}
def sha256_from_cache(filepath):
global hash_cache
if filepath not in hash_cache:
print(f"[Cache] Calculating sha256 for {filepath}")
hash_value = sha256(filepath)
print(f"[Cache] sha256 for {filepath}: {hash_value}")
hash_cache[filepath] = hash_value
save_cache_to_file(filepath, hash_value)
return hash_cache[filepath]
def load_cache_from_file():
global hash_cache
try:
if os.path.exists(hash_cache_filename):
with open(hash_cache_filename, 'rt', encoding='utf-8') as fp:
for line in fp:
entry = json.loads(line)
for filepath, hash_value in entry.items():
if not os.path.exists(filepath) or not isinstance(hash_value, str) and len(hash_value) != HASH_SHA256_LENGTH:
print(f'[Cache] Skipping invalid cache entry: {filepath}')
continue
hash_cache[filepath] = hash_value
except Exception as e:
print(f'[Cache] Loading failed: {e}')
def save_cache_to_file(filename=None, hash_value=None):
global hash_cache
if filename is not None and hash_value is not None:
items = [(filename, hash_value)]
mode = 'at'
else:
items = sorted(hash_cache.items())
mode = 'wt'
try:
with open(hash_cache_filename, mode, encoding='utf-8') as fp:
for filepath, hash_value in items:
json.dump({filepath: hash_value}, fp)
fp.write('\n')
except Exception as e:
print(f'[Cache] Saving failed: {e}')
def init_cache(model_filenames, paths_checkpoints, lora_filenames, paths_loras):
load_cache_from_file()
if args_manager.args.rebuild_hash_cache:
max_workers = args_manager.args.rebuild_hash_cache if args_manager.args.rebuild_hash_cache > 0 else cpu_count()
rebuild_cache(lora_filenames, model_filenames, paths_checkpoints, paths_loras, max_workers)
# write cache to file again for sorting and cleanup of invalid cache entries
save_cache_to_file()
def rebuild_cache(lora_filenames, model_filenames, paths_checkpoints, paths_loras, max_workers=cpu_count()):
def thread(filename, paths):
filepath = get_file_from_folder_list(filename, paths)
sha256_from_cache(filepath)
print('[Cache] Rebuilding hash cache')
with ThreadPoolExecutor(max_workers=max_workers) as executor:
for model_filename in model_filenames:
executor.submit(thread, model_filename, paths_checkpoints)
for lora_filename in lora_filenames:
executor.submit(thread, lora_filename, paths_loras)
print('[Cache] Done')
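
A hedged sketch of how the cache module above is intended to be used (the checkpoint path is a placeholder): hashes are computed at most once per file, kept in the in-memory hash_cache dict, and appended to hash_cache.txt as JSON lines.

from modules.hash_cache import load_cache_from_file, sha256_from_cache

load_cache_from_file()                                                        # seed hash_cache from hash_cache.txt if it exists
digest = sha256_from_cache('../models/checkpoints/model.safetensors')         # computed and persisted on the first call
digest_again = sha256_from_cache('../models/checkpoints/model.safetensors')   # second call is a plain dict lookup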

View File

@ -1,3 +1,142 @@
css = '''
.loader-container {
display: flex; /* Use flex to align items horizontally */
align-items: center; /* Center items vertically within the container */
white-space: nowrap; /* Prevent line breaks within the container */
}
.loader {
border: 8px solid #f3f3f3; /* Light grey */
border-top: 8px solid #3498db; /* Blue */
border-radius: 50%;
width: 30px;
height: 30px;
animation: spin 2s linear infinite;
}
@keyframes spin {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
/* Style the progress bar */
progress {
appearance: none; /* Remove default styling */
height: 20px; /* Set the height of the progress bar */
border-radius: 5px; /* Round the corners of the progress bar */
background-color: #f3f3f3; /* Light grey background */
width: 100%;
}
/* Style the progress bar container */
.progress-container {
margin-left: 20px;
margin-right: 20px;
flex-grow: 1; /* Allow the progress container to take up remaining space */
}
/* Set the color of the progress bar fill */
progress::-webkit-progress-value {
background-color: #3498db; /* Blue color for the fill */
}
progress::-moz-progress-bar {
background-color: #3498db; /* Blue color for the fill in Firefox */
}
/* Style the text on the progress bar */
progress::after {
content: attr(value '%'); /* Display the progress value followed by '%' */
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
color: white; /* Set text color */
font-size: 14px; /* Set font size */
}
/* Style other texts */
.loader-container > span {
margin-left: 5px; /* Add spacing between the progress bar and the text */
}
.progress-bar > .generating {
display: none !important;
}
.progress-bar{
height: 30px !important;
}
.type_row{
height: 80px !important;
}
.type_row_half{
height: 32px !important;
}
.scroll-hide{
resize: none !important;
}
.refresh_button{
border: none !important;
background: none !important;
font-size: none !important;
box-shadow: none !important;
}
.advanced_check_row{
width: 250px !important;
}
.min_check{
min-width: min(1px, 100%) !important;
}
.resizable_area {
resize: vertical;
overflow: auto !important;
}
.aspect_ratios label {
width: 140px !important;
}
.aspect_ratios label span {
white-space: nowrap !important;
}
.aspect_ratios label input {
margin-left: -5px !important;
}
.lora_enable {
flex-grow: 1 !important;
}
.lora_enable label {
height: 100%;
}
.lora_enable label input {
margin: auto;
}
.lora_enable label span {
display: none;
}
.lora_model {
flex-grow: 5 !important;
}
.lora_weight {
flex-grow: 5 !important;
}
'''
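The css string above is plain CSS stored inside a Python module. A hedged sketch of how such a string is handed to Gradio, using the standard gr.Blocks(css=...) parameter; the import path is assumed from this diff:

import gradio as gr
from modules.html import css  # module path assumed

with gr.Blocks(css=css) as demo:       # Gradio injects the CSS string into the page
    status = gr.HTML(visible=False)    # later filled with the progress_html markup below
demo.launch()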
progress_html = ''' progress_html = '''
<div class="loader-container"> <div class="loader-container">
<div class="loader"></div> <div class="loader"></div>

View File

@ -196,7 +196,7 @@ class InpaintWorker:
if inpaint_head_model is None: if inpaint_head_model is None:
inpaint_head_model = InpaintHead() inpaint_head_model = InpaintHead()
sd = torch.load(inpaint_head_model_path, map_location='cpu', weights_only=True) sd = torch.load(inpaint_head_model_path, map_location='cpu')
inpaint_head_model.load_state_dict(sd) inpaint_head_model.load_state_dict(sd)
feed = torch.cat([ feed = torch.cat([
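The only substantive change in this hunk is the torch.load call: the left (main) side passes weights_only=True so the inpaint-head checkpoint is deserialized without running arbitrary pickled code. A hedged sketch of that safer pattern, with an illustrative path:

import torch

# weights_only=True (available in recent PyTorch) restricts unpickling to plain
# tensors and containers, which is the safer choice for downloaded weights.
sd = torch.load('models/inpaint/fooocus_inpaint_head.pth',
                map_location='cpu', weights_only=True)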

View File

@ -1,7 +1,6 @@
import os import os
import importlib import importlib
import importlib.util import importlib.util
import shutil
import subprocess import subprocess
import sys import sys
import re import re
@ -10,6 +9,9 @@ import importlib.metadata
import packaging.version import packaging.version
from packaging.requirements import Requirement from packaging.requirements import Requirement
logging.getLogger("torch.distributed.nn").setLevel(logging.ERROR) # sshh... logging.getLogger("torch.distributed.nn").setLevel(logging.ERROR) # sshh...
logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage()) logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())
@ -99,19 +101,3 @@ def requirements_met(requirements_file):
return True return True
def delete_folder_content(folder, prefix=None):
result = True
for filename in os.listdir(folder):
file_path = os.path.join(folder, filename)
try:
if os.path.isfile(file_path) or os.path.islink(file_path):
os.unlink(file_path)
elif os.path.isdir(file_path):
shutil.rmtree(file_path)
except Exception as e:
print(f'{prefix}Failed to delete {file_path}. Reason: {e}')
result = False
return result
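A short usage sketch for the delete_folder_content helper added on the left; the folder path is illustrative:

# Empties the folder but keeps the folder itself; returns False if any entry
# could not be removed (each failure is printed with the given prefix).
ok = delete_folder_content('./outputs/temp', prefix='[Cleanup] ')
if not ok:
    print('[Cleanup] some entries could not be deleted')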

View File

@ -1,4 +1,5 @@
import json import json
import os
import re import re
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
from pathlib import Path from pathlib import Path
@ -11,45 +12,41 @@ import modules.config
import modules.sdxl_styles import modules.sdxl_styles
from modules.flags import MetadataScheme, Performance, Steps from modules.flags import MetadataScheme, Performance, Steps
from modules.flags import SAMPLERS, CIVITAI_NO_KARRAS from modules.flags import SAMPLERS, CIVITAI_NO_KARRAS
from modules.hash_cache import sha256_from_cache from modules.util import quote, unquote, extract_styles_from_prompt, is_json, get_file_from_folder_list, calculate_sha256
from modules.util import quote, unquote, extract_styles_from_prompt, is_json, get_file_from_folder_list
re_param_code = r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)' re_param_code = r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)'
re_param = re.compile(re_param_code) re_param = re.compile(re_param_code)
re_imagesize = re.compile(r"^(\d+)x(\d+)$") re_imagesize = re.compile(r"^(\d+)x(\d+)$")
hash_cache = {}
def load_parameter_button_click(raw_metadata: dict | str, is_generating: bool, inpaint_mode: str):
def load_parameter_button_click(raw_metadata: dict | str, is_generating: bool):
loaded_parameter_dict = raw_metadata loaded_parameter_dict = raw_metadata
if isinstance(raw_metadata, str): if isinstance(raw_metadata, str):
loaded_parameter_dict = json.loads(raw_metadata) loaded_parameter_dict = json.loads(raw_metadata)
assert isinstance(loaded_parameter_dict, dict) assert isinstance(loaded_parameter_dict, dict)
results = [len(loaded_parameter_dict) > 0] results = [len(loaded_parameter_dict) > 0, 1]
get_image_number('image_number', 'Image Number', loaded_parameter_dict, results)
get_str('prompt', 'Prompt', loaded_parameter_dict, results) get_str('prompt', 'Prompt', loaded_parameter_dict, results)
get_str('negative_prompt', 'Negative Prompt', loaded_parameter_dict, results) get_str('negative_prompt', 'Negative Prompt', loaded_parameter_dict, results)
get_list('styles', 'Styles', loaded_parameter_dict, results) get_list('styles', 'Styles', loaded_parameter_dict, results)
performance = get_str('performance', 'Performance', loaded_parameter_dict, results) get_str('performance', 'Performance', loaded_parameter_dict, results)
get_steps('steps', 'Steps', loaded_parameter_dict, results) get_steps('steps', 'Steps', loaded_parameter_dict, results)
get_number('overwrite_switch', 'Overwrite Switch', loaded_parameter_dict, results) get_float('overwrite_switch', 'Overwrite Switch', loaded_parameter_dict, results)
get_resolution('resolution', 'Resolution', loaded_parameter_dict, results) get_resolution('resolution', 'Resolution', loaded_parameter_dict, results)
get_number('guidance_scale', 'Guidance Scale', loaded_parameter_dict, results) get_float('guidance_scale', 'Guidance Scale', loaded_parameter_dict, results)
get_number('sharpness', 'Sharpness', loaded_parameter_dict, results) get_float('sharpness', 'Sharpness', loaded_parameter_dict, results)
get_adm_guidance('adm_guidance', 'ADM Guidance', loaded_parameter_dict, results) get_adm_guidance('adm_guidance', 'ADM Guidance', loaded_parameter_dict, results)
get_str('refiner_swap_method', 'Refiner Swap Method', loaded_parameter_dict, results) get_str('refiner_swap_method', 'Refiner Swap Method', loaded_parameter_dict, results)
get_number('adaptive_cfg', 'CFG Mimicking from TSNR', loaded_parameter_dict, results) get_float('adaptive_cfg', 'CFG Mimicking from TSNR', loaded_parameter_dict, results)
get_number('clip_skip', 'CLIP Skip', loaded_parameter_dict, results, cast_type=int)
get_str('base_model', 'Base Model', loaded_parameter_dict, results) get_str('base_model', 'Base Model', loaded_parameter_dict, results)
get_str('refiner_model', 'Refiner Model', loaded_parameter_dict, results) get_str('refiner_model', 'Refiner Model', loaded_parameter_dict, results)
get_number('refiner_switch', 'Refiner Switch', loaded_parameter_dict, results) get_float('refiner_switch', 'Refiner Switch', loaded_parameter_dict, results)
get_str('sampler', 'Sampler', loaded_parameter_dict, results) get_str('sampler', 'Sampler', loaded_parameter_dict, results)
get_str('scheduler', 'Scheduler', loaded_parameter_dict, results) get_str('scheduler', 'Scheduler', loaded_parameter_dict, results)
get_str('vae', 'VAE', loaded_parameter_dict, results)
get_seed('seed', 'Seed', loaded_parameter_dict, results) get_seed('seed', 'Seed', loaded_parameter_dict, results)
get_inpaint_engine_version('inpaint_engine_version', 'Inpaint Engine Version', loaded_parameter_dict, results, inpaint_mode)
get_inpaint_method('inpaint_method', 'Inpaint Mode', loaded_parameter_dict, results)
if is_generating: if is_generating:
results.append(gr.update()) results.append(gr.update())
@ -60,27 +57,19 @@ def load_parameter_button_click(raw_metadata: dict | str, is_generating: bool, i
get_freeu('freeu', 'FreeU', loaded_parameter_dict, results) get_freeu('freeu', 'FreeU', loaded_parameter_dict, results)
# prevent performance LoRAs to be added twice, by performance and by lora
performance_filename = None
if performance is not None and performance in Performance.values():
performance = Performance(performance)
performance_filename = performance.lora_filename()
for i in range(modules.config.default_max_lora_number): for i in range(modules.config.default_max_lora_number):
get_lora(f'lora_combined_{i + 1}', f'LoRA {i + 1}', loaded_parameter_dict, results, performance_filename) get_lora(f'lora_combined_{i + 1}', f'LoRA {i + 1}', loaded_parameter_dict, results)
return results return results
def get_str(key: str, fallback: str | None, source_dict: dict, results: list, default=None) -> str | None: def get_str(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try: try:
h = source_dict.get(key, source_dict.get(fallback, default)) h = source_dict.get(key, source_dict.get(fallback, default))
assert isinstance(h, str) assert isinstance(h, str)
results.append(h) results.append(h)
return h
except: except:
results.append(gr.update()) results.append(gr.update())
return None
def get_list(key: str, fallback: str | None, source_dict: dict, results: list, default=None): def get_list(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
@ -93,36 +82,23 @@ def get_list(key: str, fallback: str | None, source_dict: dict, results: list, d
results.append(gr.update()) results.append(gr.update())
def get_number(key: str, fallback: str | None, source_dict: dict, results: list, default=None, cast_type=float): def get_float(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try: try:
h = source_dict.get(key, source_dict.get(fallback, default)) h = source_dict.get(key, source_dict.get(fallback, default))
assert h is not None assert h is not None
h = cast_type(h) h = float(h)
results.append(h) results.append(h)
except: except:
results.append(gr.update()) results.append(gr.update())
def get_image_number(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try:
h = source_dict.get(key, source_dict.get(fallback, default))
assert h is not None
h = int(h)
h = min(h, modules.config.default_max_image_number)
results.append(h)
except:
results.append(1)
def get_steps(key: str, fallback: str | None, source_dict: dict, results: list, default=None): def get_steps(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try: try:
h = source_dict.get(key, source_dict.get(fallback, default)) h = source_dict.get(key, source_dict.get(fallback, default))
assert h is not None assert h is not None
h = int(h) h = int(h)
# if not in steps or in steps and performance is not the same # if not in steps or in steps and performance is not the same
performance_name = source_dict.get('performance', '').replace(' ', '_').replace('-', '_').casefold() if h not in iter(Steps) or Steps(h).name.casefold() != source_dict.get('performance', '').replace(' ', '_').casefold():
performance_candidates = [key for key in Steps.keys() if key.casefold() == performance_name and Steps[key] == h]
if len(performance_candidates) == 0:
results.append(h) results.append(h)
return return
results.append(-1) results.append(-1)
@ -135,14 +111,14 @@ def get_resolution(key: str, fallback: str | None, source_dict: dict, results: l
h = source_dict.get(key, source_dict.get(fallback, default)) h = source_dict.get(key, source_dict.get(fallback, default))
width, height = eval(h) width, height = eval(h)
formatted = modules.config.add_ratio(f'{width}*{height}') formatted = modules.config.add_ratio(f'{width}*{height}')
if formatted in modules.config.available_aspect_ratios_labels: if formatted in modules.config.available_aspect_ratios:
results.append(formatted) results.append(formatted)
results.append(-1) results.append(-1)
results.append(-1) results.append(-1)
else: else:
results.append(gr.update()) results.append(gr.update())
results.append(int(width)) results.append(width)
results.append(int(height)) results.append(height)
except: except:
results.append(gr.update()) results.append(gr.update())
results.append(gr.update()) results.append(gr.update())
@ -161,36 +137,6 @@ def get_seed(key: str, fallback: str | None, source_dict: dict, results: list, d
results.append(gr.update()) results.append(gr.update())
def get_inpaint_engine_version(key: str, fallback: str | None, source_dict: dict, results: list, inpaint_mode: str, default=None) -> str | None:
try:
h = source_dict.get(key, source_dict.get(fallback, default))
assert isinstance(h, str) and h in modules.flags.inpaint_engine_versions
if inpaint_mode != modules.flags.inpaint_option_detail:
results.append(h)
else:
results.append(gr.update())
results.append(h)
return h
except:
results.append(gr.update())
results.append('empty')
return None
def get_inpaint_method(key: str, fallback: str | None, source_dict: dict, results: list, default=None) -> str | None:
try:
h = source_dict.get(key, source_dict.get(fallback, default))
assert isinstance(h, str) and h in modules.flags.inpaint_options
results.append(h)
for i in range(modules.config.default_enhance_tabs):
results.append(h)
return h
except:
results.append(gr.update())
for i in range(modules.config.default_enhance_tabs):
results.append(gr.update())
def get_adm_guidance(key: str, fallback: str | None, source_dict: dict, results: list, default=None): def get_adm_guidance(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
try: try:
h = source_dict.get(key, source_dict.get(fallback, default)) h = source_dict.get(key, source_dict.get(fallback, default))
@ -221,31 +167,27 @@ def get_freeu(key: str, fallback: str | None, source_dict: dict, results: list,
results.append(gr.update()) results.append(gr.update())
def get_lora(key: str, fallback: str | None, source_dict: dict, results: list, performance_filename: str | None): def get_lora(key: str, fallback: str | None, source_dict: dict, results: list):
try: try:
split_data = source_dict.get(key, source_dict.get(fallback)).split(' : ') n, w = source_dict.get(key, source_dict.get(fallback)).split(' : ')
enabled = True w = float(w)
name = split_data[0] results.append(True)
weight = split_data[1] results.append(n)
results.append(w)
if len(split_data) == 3:
enabled = split_data[0] == 'True'
name = split_data[1]
weight = split_data[2]
if name == performance_filename:
raise Exception
weight = float(weight)
results.append(enabled)
results.append(name)
results.append(weight)
except: except:
results.append(True) results.append(True)
results.append('None') results.append('None')
results.append(1) results.append(1)
def get_sha256(filepath):
global hash_cache
if filepath not in hash_cache:
hash_cache[filepath] = calculate_sha256(filepath)
return hash_cache[filepath]
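Both variants shown here memoize checkpoint hashes by file path; main additionally persists them through modules/hash_cache.py. A hedged sketch of the effect, with an illustrative path:

path = 'models/checkpoints/juggernautXL_v8Rundiffusion.safetensors'  # illustrative
h1 = get_sha256(path)   # hashes the file (slow, reads the whole checkpoint)
h2 = get_sha256(path)   # served from the in-memory hash_cache dict
assert h1 == h2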
def parse_meta_from_preset(preset_content): def parse_meta_from_preset(preset_content):
assert isinstance(preset_content, dict) assert isinstance(preset_content, dict)
preset_prepared = {} preset_prepared = {}
@ -256,7 +198,7 @@ def parse_meta_from_preset(preset_content):
loras = getattr(modules.config, settings_key) loras = getattr(modules.config, settings_key)
if settings_key in items: if settings_key in items:
loras = items[settings_key] loras = items[settings_key]
for index, lora in enumerate(loras[:modules.config.default_max_lora_number]): for index, lora in enumerate(loras[:5]):
preset_prepared[f'lora_combined_{index + 1}'] = ' : '.join(map(str, lora)) preset_prepared[f'lora_combined_{index + 1}'] = ' : '.join(map(str, lora))
elif settings_key == "default_aspect_ratio": elif settings_key == "default_aspect_ratio":
if settings_key in items and items[settings_key] is not None: if settings_key in items and items[settings_key] is not None:
@ -268,7 +210,8 @@ def parse_meta_from_preset(preset_content):
height = height[:height.index(" ")] height = height[:height.index(" ")]
preset_prepared[meta_key] = (width, height) preset_prepared[meta_key] = (width, height)
else: else:
preset_prepared[meta_key] = items[settings_key] if settings_key in items and items[settings_key] is not None else getattr(modules.config, settings_key) preset_prepared[meta_key] = items[settings_key] if settings_key in items and items[
settings_key] is not None else getattr(modules.config, settings_key)
if settings_key == "default_styles" or settings_key == "default_aspect_ratio": if settings_key == "default_styles" or settings_key == "default_aspect_ratio":
preset_prepared[meta_key] = str(preset_prepared[meta_key]) preset_prepared[meta_key] = str(preset_prepared[meta_key])
@ -282,28 +225,27 @@ class MetadataParser(ABC):
self.full_prompt: str = '' self.full_prompt: str = ''
self.raw_negative_prompt: str = '' self.raw_negative_prompt: str = ''
self.full_negative_prompt: str = '' self.full_negative_prompt: str = ''
self.steps: int = Steps.SPEED.value self.steps: int = 30
self.base_model_name: str = '' self.base_model_name: str = ''
self.base_model_hash: str = '' self.base_model_hash: str = ''
self.refiner_model_name: str = '' self.refiner_model_name: str = ''
self.refiner_model_hash: str = '' self.refiner_model_hash: str = ''
self.loras: list = [] self.loras: list = []
self.vae_name: str = ''
@abstractmethod @abstractmethod
def get_scheme(self) -> MetadataScheme: def get_scheme(self) -> MetadataScheme:
raise NotImplementedError raise NotImplementedError
@abstractmethod @abstractmethod
def to_json(self, metadata: dict | str) -> dict: def parse_json(self, metadata: dict | str) -> dict:
raise NotImplementedError raise NotImplementedError
@abstractmethod @abstractmethod
def to_string(self, metadata: dict) -> str: def parse_string(self, metadata: dict) -> str:
raise NotImplementedError raise NotImplementedError
def set_data(self, raw_prompt, full_prompt, raw_negative_prompt, full_negative_prompt, steps, base_model_name, def set_data(self, raw_prompt, full_prompt, raw_negative_prompt, full_negative_prompt, steps, base_model_name,
refiner_model_name, loras, vae_name): refiner_model_name, loras):
self.raw_prompt = raw_prompt self.raw_prompt = raw_prompt
self.full_prompt = full_prompt self.full_prompt = full_prompt
self.raw_negative_prompt = raw_negative_prompt self.raw_negative_prompt = raw_negative_prompt
@ -312,20 +254,19 @@ class MetadataParser(ABC):
self.base_model_name = Path(base_model_name).stem self.base_model_name = Path(base_model_name).stem
base_model_path = get_file_from_folder_list(base_model_name, modules.config.paths_checkpoints) base_model_path = get_file_from_folder_list(base_model_name, modules.config.paths_checkpoints)
self.base_model_hash = sha256_from_cache(base_model_path) self.base_model_hash = get_sha256(base_model_path)
if refiner_model_name not in ['', 'None']: if refiner_model_name not in ['', 'None']:
self.refiner_model_name = Path(refiner_model_name).stem self.refiner_model_name = Path(refiner_model_name).stem
refiner_model_path = get_file_from_folder_list(refiner_model_name, modules.config.paths_checkpoints) refiner_model_path = get_file_from_folder_list(refiner_model_name, modules.config.paths_checkpoints)
self.refiner_model_hash = sha256_from_cache(refiner_model_path) self.refiner_model_hash = get_sha256(refiner_model_path)
self.loras = [] self.loras = []
for (lora_name, lora_weight) in loras: for (lora_name, lora_weight) in loras:
if lora_name != 'None': if lora_name != 'None':
lora_path = get_file_from_folder_list(lora_name, modules.config.paths_loras) lora_path = get_file_from_folder_list(lora_name, modules.config.paths_loras)
lora_hash = sha256_from_cache(lora_path) lora_hash = get_sha256(lora_path)
self.loras.append((Path(lora_name).stem, lora_weight, lora_hash)) self.loras.append((Path(lora_name).stem, lora_weight, lora_hash))
self.vae_name = Path(vae_name).stem
class A1111MetadataParser(MetadataParser): class A1111MetadataParser(MetadataParser):
@ -341,7 +282,6 @@ class A1111MetadataParser(MetadataParser):
'steps': 'Steps', 'steps': 'Steps',
'sampler': 'Sampler', 'sampler': 'Sampler',
'scheduler': 'Scheduler', 'scheduler': 'Scheduler',
'vae': 'VAE',
'guidance_scale': 'CFG scale', 'guidance_scale': 'CFG scale',
'seed': 'Seed', 'seed': 'Seed',
'resolution': 'Size', 'resolution': 'Size',
@ -349,7 +289,6 @@ class A1111MetadataParser(MetadataParser):
'adm_guidance': 'ADM Guidance', 'adm_guidance': 'ADM Guidance',
'refiner_swap_method': 'Refiner Swap Method', 'refiner_swap_method': 'Refiner Swap Method',
'adaptive_cfg': 'Adaptive CFG', 'adaptive_cfg': 'Adaptive CFG',
'clip_skip': 'Clip skip',
'overwrite_switch': 'Overwrite Switch', 'overwrite_switch': 'Overwrite Switch',
'freeu': 'FreeU', 'freeu': 'FreeU',
'base_model': 'Model', 'base_model': 'Model',
@ -362,7 +301,7 @@ class A1111MetadataParser(MetadataParser):
'version': 'Version' 'version': 'Version'
} }
def to_json(self, metadata: str) -> dict: def parse_json(self, metadata: str) -> dict:
metadata_prompt = '' metadata_prompt = ''
metadata_negative_prompt = '' metadata_negative_prompt = ''
@ -416,9 +355,9 @@ class A1111MetadataParser(MetadataParser):
data['styles'] = str(found_styles) data['styles'] = str(found_styles)
# try to load performance based on steps, fallback for direct A1111 imports # try to load performance based on steps, fallback for direct A1111 imports
if 'steps' in data and 'performance' in data is None: if 'steps' in data and 'performance' not in data:
try: try:
data['performance'] = Performance.by_steps(data['steps']).value data['performance'] = Performance[Steps(int(data['steps'])).name].value
except ValueError | KeyError: except ValueError | KeyError:
pass pass
@ -430,25 +369,20 @@ class A1111MetadataParser(MetadataParser):
data['sampler'] = k data['sampler'] = k
break break
for key in ['base_model', 'refiner_model', 'vae']: for key in ['base_model', 'refiner_model']:
if key in data: if key in data:
if key == 'vae': for filename in modules.config.model_filenames:
self.add_extension_to_filename(data, modules.config.vae_filenames, 'vae') path = Path(filename)
else: if data[key] == path.stem:
self.add_extension_to_filename(data, modules.config.model_filenames, key) data[key] = filename
break
lora_data = '' if 'lora_hashes' in data:
if 'lora_weights' in data and data['lora_weights'] != '': lora_filenames = modules.config.lora_filenames.copy()
lora_data = data['lora_weights'] lora_filenames.remove(modules.config.downloading_sdxl_lcm_lora())
elif 'lora_hashes' in data and data['lora_hashes'] != '' and data['lora_hashes'].split(', ')[0].count(':') == 2: for li, lora in enumerate(data['lora_hashes'].split(', ')):
lora_data = data['lora_hashes'] lora_name, lora_hash, lora_weight = lora.split(': ')
for filename in lora_filenames:
if lora_data != '':
for li, lora in enumerate(lora_data.split(', ')):
lora_split = lora.split(': ')
lora_name = lora_split[0]
lora_weight = lora_split[2] if len(lora_split) == 3 else lora_split[1]
for filename in modules.config.lora_filenames:
path = Path(filename) path = Path(filename)
if lora_name == path.stem: if lora_name == path.stem:
data[f'lora_combined_{li + 1}'] = f'{filename} : {lora_weight}' data[f'lora_combined_{li + 1}'] = f'{filename} : {lora_weight}'
@ -456,14 +390,13 @@ class A1111MetadataParser(MetadataParser):
return data return data
def to_string(self, metadata: dict) -> str: def parse_string(self, metadata: dict) -> str:
data = {k: v for _, k, v in metadata} data = {k: v for _, k, v in metadata}
width, height = eval(data['resolution']) width, height = eval(data['resolution'])
sampler = data['sampler'] sampler = data['sampler']
scheduler = data['scheduler'] scheduler = data['scheduler']
if sampler in SAMPLERS and SAMPLERS[sampler] != '': if sampler in SAMPLERS and SAMPLERS[sampler] != '':
sampler = SAMPLERS[sampler] sampler = SAMPLERS[sampler]
if sampler not in CIVITAI_NO_KARRAS and scheduler == 'karras': if sampler not in CIVITAI_NO_KARRAS and scheduler == 'karras':
@ -482,7 +415,6 @@ class A1111MetadataParser(MetadataParser):
self.fooocus_to_a1111['performance']: data['performance'], self.fooocus_to_a1111['performance']: data['performance'],
self.fooocus_to_a1111['scheduler']: scheduler, self.fooocus_to_a1111['scheduler']: scheduler,
self.fooocus_to_a1111['vae']: Path(data['vae']).stem,
# workaround for multiline prompts # workaround for multiline prompts
self.fooocus_to_a1111['raw_prompt']: self.raw_prompt, self.fooocus_to_a1111['raw_prompt']: self.raw_prompt,
self.fooocus_to_a1111['raw_negative_prompt']: self.raw_negative_prompt, self.fooocus_to_a1111['raw_negative_prompt']: self.raw_negative_prompt,
@ -494,23 +426,20 @@ class A1111MetadataParser(MetadataParser):
self.fooocus_to_a1111['refiner_model_hash']: self.refiner_model_hash self.fooocus_to_a1111['refiner_model_hash']: self.refiner_model_hash
} }
for key in ['adaptive_cfg', 'clip_skip', 'overwrite_switch', 'refiner_swap_method', 'freeu']: for key in ['adaptive_cfg', 'overwrite_switch', 'refiner_swap_method', 'freeu']:
if key in data: if key in data:
generation_params[self.fooocus_to_a1111[key]] = data[key] generation_params[self.fooocus_to_a1111[key]] = data[key]
if len(self.loras) > 0:
lora_hashes = [] lora_hashes = []
lora_weights = []
for index, (lora_name, lora_weight, lora_hash) in enumerate(self.loras): for index, (lora_name, lora_weight, lora_hash) in enumerate(self.loras):
# workaround for Fooocus not knowing LoRA name in LoRA metadata # workaround for Fooocus not knowing LoRA name in LoRA metadata
lora_hashes.append(f'{lora_name}: {lora_hash}') lora_hashes.append(f'{lora_name}: {lora_hash}: {lora_weight}')
lora_weights.append(f'{lora_name}: {lora_weight}')
lora_hashes_string = ', '.join(lora_hashes) lora_hashes_string = ', '.join(lora_hashes)
lora_weights_string = ', '.join(lora_weights)
generation_params[self.fooocus_to_a1111['lora_hashes']] = lora_hashes_string
generation_params[self.fooocus_to_a1111['lora_weights']] = lora_weights_string
generation_params[self.fooocus_to_a1111['version']] = data['version'] generation_params |= {
self.fooocus_to_a1111['lora_hashes']: lora_hashes_string,
self.fooocus_to_a1111['version']: data['version']
}
if modules.config.metadata_created_by != '': if modules.config.metadata_created_by != '':
generation_params[self.fooocus_to_a1111['created_by']] = modules.config.metadata_created_by generation_params[self.fooocus_to_a1111['created_by']] = modules.config.metadata_created_by
@ -523,35 +452,29 @@ class A1111MetadataParser(MetadataParser):
negative_prompt_text = f"\nNegative prompt: {negative_prompt_resolved}" if negative_prompt_resolved else "" negative_prompt_text = f"\nNegative prompt: {negative_prompt_resolved}" if negative_prompt_resolved else ""
return f"{positive_prompt_resolved}{negative_prompt_text}\n{generation_params_text}".strip() return f"{positive_prompt_resolved}{negative_prompt_text}\n{generation_params_text}".strip()
@staticmethod
def add_extension_to_filename(data, filenames, key):
for filename in filenames:
path = Path(filename)
if data[key] == path.stem:
data[key] = filename
break
class FooocusMetadataParser(MetadataParser): class FooocusMetadataParser(MetadataParser):
def get_scheme(self) -> MetadataScheme: def get_scheme(self) -> MetadataScheme:
return MetadataScheme.FOOOCUS return MetadataScheme.FOOOCUS
def to_json(self, metadata: dict) -> dict: def parse_json(self, metadata: dict) -> dict:
model_filenames = modules.config.model_filenames.copy()
lora_filenames = modules.config.lora_filenames.copy()
lora_filenames.remove(modules.config.downloading_sdxl_lcm_lora())
for key, value in metadata.items(): for key, value in metadata.items():
if value in ['', 'None']: if value in ['', 'None']:
continue continue
if key in ['base_model', 'refiner_model']: if key in ['base_model', 'refiner_model']:
metadata[key] = self.replace_value_with_filename(key, value, modules.config.model_filenames) metadata[key] = self.replace_value_with_filename(key, value, model_filenames)
elif key.startswith('lora_combined_'): elif key.startswith('lora_combined_'):
metadata[key] = self.replace_value_with_filename(key, value, modules.config.lora_filenames) metadata[key] = self.replace_value_with_filename(key, value, lora_filenames)
elif key == 'vae':
metadata[key] = self.replace_value_with_filename(key, value, modules.config.vae_filenames)
else: else:
continue continue
return metadata return metadata
def to_string(self, metadata: list) -> str: def parse_string(self, metadata: list) -> str:
for li, (label, key, value) in enumerate(metadata): for li, (label, key, value) in enumerate(metadata):
# remove model folder paths from metadata # remove model folder paths from metadata
if key.startswith('lora_combined_'): if key.startswith('lora_combined_'):
@ -572,7 +495,6 @@ class FooocusMetadataParser(MetadataParser):
res['refiner_model'] = self.refiner_model_name res['refiner_model'] = self.refiner_model_name
res['refiner_model_hash'] = self.refiner_model_hash res['refiner_model_hash'] = self.refiner_model_hash
res['vae'] = self.vae_name
res['loras'] = self.loras res['loras'] = self.loras
if modules.config.metadata_created_by != '': if modules.config.metadata_created_by != '':
@ -591,8 +513,6 @@ class FooocusMetadataParser(MetadataParser):
elif value == path.stem: elif value == path.stem:
return filename return filename
return None
def get_metadata_parser(metadata_scheme: MetadataScheme) -> MetadataParser: def get_metadata_parser(metadata_scheme: MetadataScheme) -> MetadataParser:
match metadata_scheme: match metadata_scheme:
@ -604,8 +524,9 @@ def get_metadata_parser(metadata_scheme: MetadataScheme) -> MetadataParser:
raise NotImplementedError raise NotImplementedError
def read_info_from_image(file) -> tuple[str | None, MetadataScheme | None]: def read_info_from_image(filepath) -> tuple[str | None, MetadataScheme | None]:
items = (file.info or {}).copy() with Image.open(filepath) as image:
items = (image.info or {}).copy()
parameters = items.pop('parameters', None) parameters = items.pop('parameters', None)
metadata_scheme = items.pop('fooocus_scheme', None) metadata_scheme = items.pop('fooocus_scheme', None)
@ -614,7 +535,7 @@ def read_info_from_image(file) -> tuple[str | None, MetadataScheme | None]:
if parameters is not None and is_json(parameters): if parameters is not None and is_json(parameters):
parameters = json.loads(parameters) parameters = json.loads(parameters)
elif exif is not None: elif exif is not None:
exif = file.getexif() exif = image.getexif()
# 0x9286 = UserComment # 0x9286 = UserComment
parameters = exif.get(0x9286, None) parameters = exif.get(0x9286, None)
# 0x927C = MakerNote # 0x927C = MakerNote
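For reference, a hedged sketch of reading the same metadata back out of a logged JPEG by hand; the EXIF tag IDs are the ones named in the comments above, the file path is illustrative:

from PIL import Image

with Image.open('outputs/2024-03-01/example.jpeg') as im:
    exif = im.getexif()
    parameters = exif.get(0x9286)      # UserComment: the serialized parameters
    scheme = exif.get(0x927C)          # MakerNote: metadata scheme identifier
print(scheme, parameters)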

View File

@ -14,8 +14,6 @@ def load_file_from_url(
Returns the path to the downloaded file. Returns the path to the downloaded file.
""" """
domain = os.environ.get("HF_MIRROR", "https://huggingface.co").rstrip('/')
url = str.replace(url, "https://huggingface.co", domain, 1)
os.makedirs(model_dir, exist_ok=True) os.makedirs(model_dir, exist_ok=True)
if not file_name: if not file_name:
parts = urlparse(url) parts = urlparse(url)
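The two removed lines show that main rewrites Hugging Face URLs through an HF_MIRROR environment variable before downloading. A hedged usage sketch with an illustrative mirror:

import os

os.environ['HF_MIRROR'] = 'https://hf-mirror.com'   # illustrative mirror
# A URL such as https://huggingface.co/lllyasviel/misc/resolve/main/foo.bin
# would then be fetched from https://hf-mirror.com/lllyasviel/misc/resolve/main/foo.bin.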

View File

@ -51,8 +51,6 @@ def patched_register_schedule(self, given_betas=None, beta_schedule="linear", ti
self.linear_end = linear_end self.linear_end = linear_end
sigmas = torch.tensor(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5, dtype=torch.float32) sigmas = torch.tensor(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5, dtype=torch.float32)
self.set_sigmas(sigmas) self.set_sigmas(sigmas)
alphas_cumprod = torch.tensor(alphas_cumprod, dtype=torch.float32)
self.set_alphas_cumprod(alphas_cumprod)
return return

View File

@ -6,9 +6,8 @@ import urllib.parse
from PIL import Image from PIL import Image
from PIL.PngImagePlugin import PngInfo from PIL.PngImagePlugin import PngInfo
from modules.flags import OutputFormat
from modules.meta_parser import MetadataParser, get_exif
from modules.util import generate_temp_filename from modules.util import generate_temp_filename
from modules.meta_parser import MetadataParser, get_exif
log_cache = {} log_cache = {}
@ -21,16 +20,16 @@ def get_current_html_path(output_format=None):
return html_name return html_name
def log(img, metadata, metadata_parser: MetadataParser | None = None, output_format=None, task=None, persist_image=True) -> str: def log(img, metadata, metadata_parser: MetadataParser | None = None, output_format=None) -> str:
path_outputs = modules.config.temp_path if args_manager.args.disable_image_log or not persist_image else modules.config.path_outputs path_outputs = args_manager.args.temp_path if args_manager.args.disable_image_log else modules.config.path_outputs
output_format = output_format if output_format else modules.config.default_output_format output_format = output_format if output_format else modules.config.default_output_format
date_string, local_temp_filename, only_name = generate_temp_filename(folder=path_outputs, extension=output_format) date_string, local_temp_filename, only_name = generate_temp_filename(folder=path_outputs, extension=output_format)
os.makedirs(os.path.dirname(local_temp_filename), exist_ok=True) os.makedirs(os.path.dirname(local_temp_filename), exist_ok=True)
parsed_parameters = metadata_parser.to_string(metadata.copy()) if metadata_parser is not None else '' parsed_parameters = metadata_parser.parse_string(metadata) if metadata_parser is not None else ''
image = Image.fromarray(img) image = Image.fromarray(img)
if output_format == OutputFormat.PNG.value: if output_format == 'png':
if parsed_parameters != '': if parsed_parameters != '':
pnginfo = PngInfo() pnginfo = PngInfo()
pnginfo.add_text('parameters', parsed_parameters) pnginfo.add_text('parameters', parsed_parameters)
@ -38,9 +37,9 @@ def log(img, metadata, metadata_parser: MetadataParser | None = None, output_for
else: else:
pnginfo = None pnginfo = None
image.save(local_temp_filename, pnginfo=pnginfo) image.save(local_temp_filename, pnginfo=pnginfo)
elif output_format == OutputFormat.JPEG.value: elif output_format == 'jpg':
image.save(local_temp_filename, quality=95, optimize=True, progressive=True, exif=get_exif(parsed_parameters, metadata_parser.get_scheme().value) if metadata_parser else Image.Exif()) image.save(local_temp_filename, quality=95, optimize=True, progressive=True, exif=get_exif(parsed_parameters, metadata_parser.get_scheme().value) if metadata_parser else Image.Exif())
elif output_format == OutputFormat.WEBP.value: elif output_format == 'webp':
image.save(local_temp_filename, quality=95, lossless=False, exif=get_exif(parsed_parameters, metadata_parser.get_scheme().value) if metadata_parser else Image.Exif()) image.save(local_temp_filename, quality=95, lossless=False, exif=get_exif(parsed_parameters, metadata_parser.get_scheme().value) if metadata_parser else Image.Exif())
else: else:
image.save(local_temp_filename) image.save(local_temp_filename)
@ -91,7 +90,7 @@ def log(img, metadata, metadata_parser: MetadataParser | None = None, output_for
</script>""" </script>"""
) )
begin_part = f"<!DOCTYPE html><html><head><title>Fooocus Log {date_string}</title>{css_styles}</head><body>{js}<p>Fooocus Log {date_string} (private)</p>\n<p>Metadata is embedded if enabled in the config or developer debug mode. You can find the information for each image in line Metadata Scheme.</p><!--fooocus-log-split-->\n\n" begin_part = f"<!DOCTYPE html><html><head><title>Fooocus Log {date_string}</title>{css_styles}</head><body>{js}<p>Fooocus Log {date_string} (private)</p>\n<p>All images are clean, without any hidden data/meta, and safe to share with others.</p><!--fooocus-log-split-->\n\n"
end_part = f'\n<!--fooocus-log-split--></body></html>' end_part = f'\n<!--fooocus-log-split--></body></html>'
middle_part = log_cache.get(html_name, "") middle_part = log_cache.get(html_name, "")
@ -111,15 +110,9 @@ def log(img, metadata, metadata_parser: MetadataParser | None = None, output_for
for label, key, value in metadata: for label, key, value in metadata:
value_txt = str(value).replace('\n', ' </br> ') value_txt = str(value).replace('\n', ' </br> ')
item += f"<tr><td class='label'>{label}</td><td class='value'>{value_txt}</td></tr>\n" item += f"<tr><td class='label'>{label}</td><td class='value'>{value_txt}</td></tr>\n"
if task is not None and 'positive' in task and 'negative' in task:
full_prompt_details = f"""<details><summary>Positive</summary>{', '.join(task['positive'])}</details>
<details><summary>Negative</summary>{', '.join(task['negative'])}</details>"""
item += f"<tr><td class='label'>Full raw prompt</td><td class='value'>{full_prompt_details}</td></tr>\n"
item += "</table>" item += "</table>"
js_txt = urllib.parse.quote(json.dumps({k: v for _, k, v, in metadata}, indent=0), safe='') js_txt = urllib.parse.quote(json.dumps({k: v for _, k, v in metadata}, indent=0), safe='')
item += f"</br><button onclick=\"to_clipboard('{js_txt}')\">Copy to Clipboard</button>" item += f"</br><button onclick=\"to_clipboard('{js_txt}')\">Copy to Clipboard</button>"
item += "</td>" item += "</td>"

View File

@ -3,7 +3,6 @@ import ldm_patched.modules.samplers
import ldm_patched.modules.model_management import ldm_patched.modules.model_management
from collections import namedtuple from collections import namedtuple
from ldm_patched.contrib.external_align_your_steps import AlignYourStepsScheduler
from ldm_patched.contrib.external_custom_sampler import SDTurboScheduler from ldm_patched.contrib.external_custom_sampler import SDTurboScheduler
from ldm_patched.k_diffusion import sampling as k_diffusion_sampling from ldm_patched.k_diffusion import sampling as k_diffusion_sampling
from ldm_patched.modules.samplers import normal_scheduler, simple_scheduler, ddim_scheduler from ldm_patched.modules.samplers import normal_scheduler, simple_scheduler, ddim_scheduler
@ -175,10 +174,7 @@ def calculate_sigmas_scheduler_hacked(model, scheduler_name, steps):
elif scheduler_name == "sgm_uniform": elif scheduler_name == "sgm_uniform":
sigmas = normal_scheduler(model, steps, sgm=True) sigmas = normal_scheduler(model, steps, sgm=True)
elif scheduler_name == "turbo": elif scheduler_name == "turbo":
sigmas = SDTurboScheduler().get_sigmas(model=model, steps=steps, denoise=1.0)[0] sigmas = SDTurboScheduler().get_sigmas(namedtuple('Patcher', ['model'])(model=model), steps=steps, denoise=1.0)[0]
elif scheduler_name == "align_your_steps":
model_type = 'SDXL' if isinstance(model.latent_format, ldm_patched.modules.latent_formats.SDXL) else 'SD1'
sigmas = AlignYourStepsScheduler().get_sigmas(model_type=model_type, steps=steps, denoise=1.0)[0]
else: else:
raise TypeError("error invalid scheduler") raise TypeError("error invalid scheduler")
return sigmas return sigmas
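Main adds an "align_your_steps" branch that chooses the AYS sigma table based on whether the model's latent format is SDXL. A hedged sketch of calling this helper; the model object is an assumption (a loaded model with .latent_format set, as used elsewhere in Fooocus):

# `unet` is assumed to be a loaded model exposing .latent_format.
sigmas = calculate_sigmas_scheduler_hacked(unet, 'align_your_steps', 30)
# Typically yields steps + 1 values ending in 0.0, ready for the k-diffusion samplers.
print(sigmas)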

View File

@ -3,11 +3,13 @@ import re
import json import json
import math import math
from modules.extra_utils import get_files_from_folder from modules.util import get_files_from_folder
from random import Random
# cannot use modules.config - validators causing circular imports # cannot use modules.config - validators causing circular imports
styles_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../sdxl_styles/')) styles_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../sdxl_styles/'))
wildcards_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../wildcards/'))
wildcards_max_bfs_depth = 64
def normalize_key(k): def normalize_key(k):
@ -23,6 +25,7 @@ def normalize_key(k):
styles = {} styles = {}
styles_files = get_files_from_folder(styles_path, ['.json']) styles_files = get_files_from_folder(styles_path, ['.json'])
for x in ['sdxl_styles_fooocus.json', for x in ['sdxl_styles_fooocus.json',
@ -48,22 +51,39 @@ for styles_file in styles_files:
print(f'Failed to load style file {styles_file}') print(f'Failed to load style file {styles_file}')
style_keys = list(styles.keys()) style_keys = list(styles.keys())
fooocus_expansion = 'Fooocus V2' fooocus_expansion = "Fooocus V2"
random_style_name = 'Random Style' legal_style_names = [fooocus_expansion] + style_keys
legal_style_names = [fooocus_expansion, random_style_name] + style_keys
def get_random_style(rng: Random) -> str:
return rng.choice(list(styles.items()))[0]
def apply_style(style, positive): def apply_style(style, positive):
p, n = styles[style] p, n = styles[style]
return p.replace('{prompt}', positive).splitlines(), n.splitlines(), '{prompt}' in p return p.replace('{prompt}', positive).splitlines(), n.splitlines()
def get_words(arrays, total_mult, index): def apply_wildcards(wildcard_text, rng, directory=wildcards_path):
if len(arrays) == 1: for _ in range(wildcards_max_bfs_depth):
placeholders = re.findall(r'__([\w-]+)__', wildcard_text)
if len(placeholders) == 0:
return wildcard_text
print(f'[Wildcards] processing: {wildcard_text}')
for placeholder in placeholders:
try:
words = open(os.path.join(directory, f'{placeholder}.txt'), encoding='utf-8').read().splitlines()
words = [x for x in words if x != '']
assert len(words) > 0
wildcard_text = wildcard_text.replace(f'__{placeholder}__', rng.choice(words), 1)
except:
print(f'[Wildcards] Warning: {placeholder}.txt missing or empty. '
f'Using "{placeholder}" as a normal word.')
wildcard_text = wildcard_text.replace(f'__{placeholder}__', placeholder)
print(f'[Wildcards] {wildcard_text}')
print(f'[Wildcards] BFS stack overflow. Current text: {wildcard_text}')
return wildcard_text
def get_words(arrays, totalMult, index):
if(len(arrays) == 1):
return [arrays[0].split(',')[index]] return [arrays[0].split(',')[index]]
else: else:
words = arrays[0].split(',') words = arrays[0].split(',')
@ -71,11 +91,12 @@ def get_words(arrays, total_mult, index):
index -= index % len(words) index -= index % len(words)
index /= len(words) index /= len(words)
index = math.floor(index) index = math.floor(index)
return [word] + get_words(arrays[1:], math.floor(total_mult / len(words)), index) return [word] + get_words(arrays[1:], math.floor(totalMult/len(words)), index)
def apply_arrays(text, index): def apply_arrays(text, index):
arrays = re.findall(r'\[\[(.*?)\]\]', text) arrays = re.findall(r'\[\[([\s,\w-]+)\]\]', text)
if len(arrays) == 0: if len(arrays) == 0:
return text return text

View File

@ -39,7 +39,7 @@ def javascript_html():
head += f'<script type="text/javascript" src="{edit_attention_js_path}"></script>\n' head += f'<script type="text/javascript" src="{edit_attention_js_path}"></script>\n'
head += f'<script type="text/javascript" src="{viewer_js_path}"></script>\n' head += f'<script type="text/javascript" src="{viewer_js_path}"></script>\n'
head += f'<script type="text/javascript" src="{image_viewer_js_path}"></script>\n' head += f'<script type="text/javascript" src="{image_viewer_js_path}"></script>\n'
head += f'<meta name="samples-path" content="{samples_path}">\n' head += f'<meta name="samples-path" content="{samples_path}"></meta>\n'
if args_manager.args.theme: if args_manager.args.theme:
head += f'<script type="text/javascript">set_theme(\"{args_manager.args.theme}\");</script>\n' head += f'<script type="text/javascript">set_theme(\"{args_manager.args.theme}\");</script>\n'

View File

@ -1,11 +1,13 @@
from collections import OrderedDict import os
import modules.core as core
import torch import torch
from ldm_patched.contrib.external_upscale_model import ImageUpscaleWithModel import modules.core as core
from ldm_patched.pfn.architecture.RRDB import RRDBNet as ESRGAN
from modules.config import downloading_upscale_model
from ldm_patched.pfn.architecture.RRDB import RRDBNet as ESRGAN
from ldm_patched.contrib.external_upscale_model import ImageUpscaleWithModel
from collections import OrderedDict
from modules.config import path_upscale_models
model_filename = os.path.join(path_upscale_models, 'fooocus_upscaler_s409985e5.bin')
opImageUpscaleWithModel = ImageUpscaleWithModel() opImageUpscaleWithModel = ImageUpscaleWithModel()
model = None model = None
@ -16,8 +18,7 @@ def perform_upscale(img):
print(f'Upscaling image with shape {str(img.shape)} ...') print(f'Upscaling image with shape {str(img.shape)} ...')
if model is None: if model is None:
model_filename = downloading_upscale_model() sd = torch.load(model_filename)
sd = torch.load(model_filename, weights_only=True)
sdo = OrderedDict() sdo = OrderedDict()
for k, v in sd.items(): for k, v in sd.items():
sdo[k.replace('residual_block_', 'RDB')] = v sdo[k.replace('residual_block_', 'RDB')] = v
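The change here is that main resolves (and downloads) the upscaler checkpoint lazily via downloading_upscale_model() instead of a hard-coded path. A hedged usage sketch; the input is assumed to be an HxWx3 uint8 array as used elsewhere in Fooocus:

import numpy as np

img = np.zeros((512, 512, 3), dtype=np.uint8)   # placeholder image
upscaled = perform_upscale(img)                  # first call downloads/loads the ESRGAN weights
print(upscaled.shape)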

View File

@ -1,4 +1,4 @@
from pathlib import Path import typing
import numpy as np import numpy as np
import datetime import datetime
@ -6,28 +6,16 @@ import random
import math import math
import os import os
import cv2 import cv2
import re
from typing import List, Tuple, AnyStr, NamedTuple
import json import json
import hashlib
from PIL import Image from PIL import Image
from hashlib import sha256
import modules.config
import modules.sdxl_styles import modules.sdxl_styles
from modules.flags import Performance
LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS) LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)
# Regexp compiled once. Matches entries with the following pattern:
# <lora:some_lora:1>
# <lora:aNotherLora:-1.6>
LORAS_PROMPT_PATTERN = re.compile(r"(<lora:([^:]+):([+-]?(?:\d+(?:\.\d*)?|\.\d+))>)", re.X)
HASH_SHA256_LENGTH = 10 HASH_SHA256_LENGTH = 10
def erode_or_dilate(x, k): def erode_or_dilate(x, k):
k = int(k) k = int(k)
if k > 0: if k > 0:
@ -175,42 +163,35 @@ def generate_temp_filename(folder='./outputs/', extension='png'):
return date_string, os.path.abspath(result), filename return date_string, os.path.abspath(result), filename
def sha256(filename, use_addnet_hash=False, length=HASH_SHA256_LENGTH): def get_files_from_folder(folder_path, exensions=None, name_filter=None):
if use_addnet_hash: if not os.path.isdir(folder_path):
with open(filename, "rb") as file: raise ValueError("Folder path is not a valid directory.")
sha256_value = addnet_hash_safetensors(file)
else:
sha256_value = calculate_sha256(filename)
return sha256_value[:length] if length is not None else sha256_value filenames = []
for root, dirs, files in os.walk(folder_path, topdown=False):
relative_path = os.path.relpath(root, folder_path)
if relative_path == ".":
relative_path = ""
for filename in sorted(files, key=lambda s: s.casefold()):
_, file_extension = os.path.splitext(filename)
if (exensions is None or file_extension.lower() in exensions) and (name_filter is None or name_filter in _):
path = os.path.join(relative_path, filename)
filenames.append(path)
return filenames
def addnet_hash_safetensors(b): def calculate_sha256(filename, length=HASH_SHA256_LENGTH) -> str:
"""kohya-ss hash for safetensors from https://github.com/kohya-ss/sd-scripts/blob/main/library/train_util.py""" hash_sha256 = sha256()
hash_sha256 = hashlib.sha256()
blksize = 1024 * 1024
b.seek(0)
header = b.read(8)
n = int.from_bytes(header, "little")
offset = n + 8
b.seek(offset)
for chunk in iter(lambda: b.read(blksize), b""):
hash_sha256.update(chunk)
return hash_sha256.hexdigest()
def calculate_sha256(filename) -> str:
hash_sha256 = hashlib.sha256()
blksize = 1024 * 1024 blksize = 1024 * 1024
with open(filename, "rb") as f: with open(filename, "rb") as f:
for chunk in iter(lambda: f.read(blksize), b""): for chunk in iter(lambda: f.read(blksize), b""):
hash_sha256.update(chunk) hash_sha256.update(chunk)
return hash_sha256.hexdigest() res = hash_sha256.hexdigest()
return res[:length] if length else res
def quote(text): def quote(text):
@ -346,7 +327,7 @@ def extract_styles_from_prompt(prompt, negative_prompt):
return list(reversed(extracted)), real_prompt, negative_prompt return list(reversed(extracted)), real_prompt, negative_prompt
class PromptStyle(NamedTuple): class PromptStyle(typing.NamedTuple):
name: str name: str
prompt: str prompt: str
negative_prompt: str negative_prompt: str
@ -361,18 +342,7 @@ def is_json(data: str) -> bool:
return True return True
def get_filname_by_stem(lora_name, filenames: List[str]) -> str | None:
for filename in filenames:
path = Path(filename)
if lora_name == path.stem:
return filename
return None
def get_file_from_folder_list(name, folders): def get_file_from_folder_list(name, folders):
if not isinstance(folders, list):
folders = [folders]
for folder in folders: for folder in folders:
filename = os.path.abspath(os.path.realpath(os.path.join(folder, name))) filename = os.path.abspath(os.path.realpath(os.path.join(folder, name)))
if os.path.isfile(filename): if os.path.isfile(filename):
@ -381,135 +351,12 @@ def get_file_from_folder_list(name, folders):
return os.path.abspath(os.path.realpath(os.path.join(folders[0], name))) return os.path.abspath(os.path.realpath(os.path.join(folders[0], name)))
def get_enabled_loras(loras: list, remove_none=True) -> list: def ordinal_suffix(number: int) -> str:
return [(lora[1], lora[2]) for lora in loras if lora[0] and (lora[1] != 'None' if remove_none else True)] return 'th' if 10 <= number % 100 <= 20 else {1: 'st', 2: 'nd', 3: 'rd'}.get(number % 10, 'th')
def parse_lora_references_from_prompt(prompt: str, loras: List[Tuple[AnyStr, float]], loras_limit: int = 5, def makedirs_with_log(path):
skip_file_check=False, prompt_cleanup=True, deduplicate_loras=True,
lora_filenames=None) -> tuple[List[Tuple[AnyStr, float]], str]:
# prevent unintended side effects when returning without detection
loras = loras.copy()
if lora_filenames is None:
lora_filenames = []
found_loras = []
prompt_without_loras = ''
cleaned_prompt = ''
for token in prompt.split(','):
matches = LORAS_PROMPT_PATTERN.findall(token)
if len(matches) == 0:
prompt_without_loras += token + ', '
continue
for match in matches:
lora_name = match[1] + '.safetensors'
if not skip_file_check:
lora_name = get_filname_by_stem(match[1], lora_filenames)
if lora_name is not None:
found_loras.append((lora_name, float(match[2])))
token = token.replace(match[0], '')
prompt_without_loras += token + ', '
if prompt_without_loras != '':
cleaned_prompt = prompt_without_loras[:-2]
if prompt_cleanup:
cleaned_prompt = cleanup_prompt(prompt_without_loras)
new_loras = []
lora_names = [lora[0] for lora in loras]
for found_lora in found_loras:
if deduplicate_loras and (found_lora[0] in lora_names or found_lora in new_loras):
continue
new_loras.append(found_lora)
if len(new_loras) == 0:
return loras, cleaned_prompt
updated_loras = []
for lora in loras + new_loras:
if lora[0] != "None":
updated_loras.append(lora)
return updated_loras[:loras_limit], cleaned_prompt
def remove_performance_lora(filenames: list, performance: Performance | None):
loras_without_performance = filenames.copy()
if performance is None:
return loras_without_performance
performance_lora = performance.lora_filename()
for filename in filenames:
path = Path(filename)
if performance_lora == path.name:
loras_without_performance.remove(filename)
return loras_without_performance
def cleanup_prompt(prompt):
prompt = re.sub(' +', ' ', prompt)
prompt = re.sub(',+', ',', prompt)
cleaned_prompt = ''
for token in prompt.split(','):
token = token.strip()
if token == '':
continue
cleaned_prompt += token + ', '
return cleaned_prompt[:-2]
def apply_wildcards(wildcard_text, rng, i, read_wildcards_in_order) -> str:
for _ in range(modules.config.wildcards_max_bfs_depth):
placeholders = re.findall(r'__([\w-]+)__', wildcard_text)
if len(placeholders) == 0:
return wildcard_text
print(f'[Wildcards] processing: {wildcard_text}')
for placeholder in placeholders:
try: try:
matches = [x for x in modules.config.wildcard_filenames if os.path.splitext(os.path.basename(x))[0] == placeholder] os.makedirs(path, exist_ok=True)
words = open(os.path.join(modules.config.path_wildcards, matches[0]), encoding='utf-8').read().splitlines() except OSError as error:
words = [x for x in words if x != ''] print(f'Directory {path} could not be created, reason: {error}')
assert len(words) > 0
if read_wildcards_in_order:
wildcard_text = wildcard_text.replace(f'__{placeholder}__', words[i % len(words)], 1)
else:
wildcard_text = wildcard_text.replace(f'__{placeholder}__', rng.choice(words), 1)
except:
print(f'[Wildcards] Warning: {placeholder}.txt missing or empty. '
f'Using "{placeholder}" as a normal word.')
wildcard_text = wildcard_text.replace(f'__{placeholder}__', placeholder)
print(f'[Wildcards] {wildcard_text}')
print(f'[Wildcards] BFS stack overflow. Current text: {wildcard_text}')
return wildcard_text
def get_image_size_info(image: np.ndarray, aspect_ratios: list) -> str:
try:
image = Image.fromarray(np.uint8(image))
width, height = image.size
ratio = round(width / height, 2)
gcd = math.gcd(width, height)
lcm_ratio = f'{width // gcd}:{height // gcd}'
size_info = f'Image Size: {width} x {height}, Ratio: {ratio}, {lcm_ratio}'
closest_ratio = min(aspect_ratios, key=lambda x: abs(ratio - float(x.split('*')[0]) / float(x.split('*')[1])))
recommended_width, recommended_height = map(int, closest_ratio.split('*'))
recommended_ratio = round(recommended_width / recommended_height, 2)
recommended_gcd = math.gcd(recommended_width, recommended_height)
recommended_lcm_ratio = f'{recommended_width // recommended_gcd}:{recommended_height // recommended_gcd}'
size_info = f'{width} x {height}, {ratio}, {lcm_ratio}'
size_info += f'\n{recommended_width} x {recommended_height}, {recommended_ratio}, {recommended_lcm_ratio}'
return size_info
except Exception as e:
return f'Error reading image: {e}'
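A hedged sketch of the inline LoRA syntax that main's parse_lora_references_from_prompt strips from prompts; the LoRA name is illustrative and skip_file_check avoids touching the disk:

prompt = 'cinematic photo, <lora:add-detail-xl:0.8>, golden hour'
loras, cleaned = parse_lora_references_from_prompt(
    prompt, loras=[('None', 1.0)], loras_limit=5, skip_file_check=True)
# loras   -> [('add-detail-xl.safetensors', 0.8)]
# cleaned -> 'cinematic photo, golden hour'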

Binary file not shown.

BIN notification-example.ogg (new file, binary not shown)

presets/.gitignore (vendored, 8 lines changed)
View File

@ -1,8 +0,0 @@
*.json
!anime.json
!default.json
!lcm.json
!playground_v2.5.json
!pony_v6.json
!realistic.json
!sai.json

View File

@ -1,60 +1,46 @@
{ {
"default_model": "animaPencilXL_v500.safetensors", "default_model": "animaPencilXL_v100.safetensors",
"default_refiner": "None", "default_refiner": "None",
"default_refiner_switch": 0.5, "default_refiner_switch": 0.5,
"default_loras": [ "default_loras": [
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
] ]
], ],
"default_cfg_scale": 6.0, "default_cfg_scale": 7.0,
"default_sample_sharpness": 2.0, "default_sample_sharpness": 2.0,
"default_sampler": "dpmpp_2m_sde_gpu", "default_sampler": "dpmpp_2m_sde_gpu",
"default_scheduler": "karras", "default_scheduler": "karras",
"default_performance": "Speed", "default_performance": "Speed",
"default_prompt": "", "default_prompt": "1girl, ",
"default_prompt_negative": "", "default_prompt_negative": "",
"default_styles": [ "default_styles": [
"Fooocus V2", "Fooocus V2",
"Fooocus Semi Realistic", "Fooocus Negative",
"Fooocus Masterpiece" "Fooocus Masterpiece"
], ],
"default_aspect_ratio": "896*1152", "default_aspect_ratio": "896*1152",
"default_overwrite_step": -1,
"checkpoint_downloads": { "checkpoint_downloads": {
"animaPencilXL_v500.safetensors": "https://huggingface.co/mashb1t/fav_models/resolve/main/fav/animaPencilXL_v500.safetensors" "animaPencilXL_v100.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/animaPencilXL_v100.safetensors"
}, },
"embeddings_downloads": {}, "embeddings_downloads": {},
"lora_downloads": {}, "lora_downloads": {},
"previous_default_models": [ "previous_default_models": []
"animaPencilXL_v400.safetensors",
"animaPencilXL_v310.safetensors",
"animaPencilXL_v300.safetensors",
"animaPencilXL_v260.safetensors",
"animaPencilXL_v210.safetensors",
"animaPencilXL_v200.safetensors",
"animaPencilXL_v100.safetensors"
]
} }
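The preset diffs in this range also show the LoRA entry format change: main stores [enabled, name, weight] triples while 2.2.0-rc1 stores [name, weight] pairs. A hedged sketch of reading either form; the preset path follows the repo layout:

import json

with open('presets/anime.json', encoding='utf-8') as f:
    preset = json.load(f)

for entry in preset['default_loras']:
    if len(entry) == 3:                      # main: [enabled, name, weight]
        enabled, name, weight = entry
    else:                                    # 2.2.0-rc1: [name, weight]
        enabled, (name, weight) = True, entry
    print(enabled, name, weight)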

View File

@ -4,27 +4,22 @@
"default_refiner_switch": 0.5, "default_refiner_switch": 0.5,
"default_loras": [ "default_loras": [
[ [
true,
"sd_xl_offset_example-lora_1.0.safetensors", "sd_xl_offset_example-lora_1.0.safetensors",
0.1 0.1
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
] ]
@ -42,7 +37,6 @@
"Fooocus Sharp" "Fooocus Sharp"
], ],
"default_aspect_ratio": "1152*896", "default_aspect_ratio": "1152*896",
"default_overwrite_step": -1,
"checkpoint_downloads": { "checkpoint_downloads": {
"juggernautXL_v8Rundiffusion.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors" "juggernautXL_v8Rundiffusion.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors"
}, },

View File

@ -4,27 +4,22 @@
"default_refiner_switch": 0.5, "default_refiner_switch": 0.5,
"default_loras": [ "default_loras": [
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
] ]
@ -42,7 +37,6 @@
"Fooocus Sharp" "Fooocus Sharp"
], ],
"default_aspect_ratio": "1152*896", "default_aspect_ratio": "1152*896",
"default_overwrite_step": -1,
"checkpoint_downloads": { "checkpoint_downloads": {
"juggernautXL_v8Rundiffusion.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors" "juggernautXL_v8Rundiffusion.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors"
}, },

View File

@ -1,57 +0,0 @@
{
"default_model": "juggernautXL_v8Rundiffusion.safetensors",
"default_refiner": "None",
"default_refiner_switch": 0.5,
"default_loras": [
[
true,
"None",
1.0
],
[
true,
"None",
1.0
],
[
true,
"None",
1.0
],
[
true,
"None",
1.0
],
[
true,
"None",
1.0
]
],
"default_cfg_scale": 4.0,
"default_sample_sharpness": 2.0,
"default_sampler": "dpmpp_2m_sde_gpu",
"default_scheduler": "karras",
"default_performance": "Lightning",
"default_prompt": "",
"default_prompt_negative": "",
"default_styles": [
"Fooocus V2",
"Fooocus Enhance",
"Fooocus Sharp"
],
"default_aspect_ratio": "1152*896",
"checkpoint_downloads": {
"juggernautXL_v8Rundiffusion.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors"
},
"embeddings_downloads": {},
"lora_downloads": {},
"previous_default_models": [
"juggernautXL_version8Rundiffusion.safetensors",
"juggernautXL_version7Rundiffusion.safetensors",
"juggernautXL_v7Rundiffusion.safetensors",
"juggernautXL_version6Rundiffusion.safetensors",
"juggernautXL_v6Rundiffusion.safetensors"
]
}

View File

@ -1,51 +0,0 @@
{
"default_model": "playground-v2.5-1024px-aesthetic.fp16.safetensors",
"default_refiner": "None",
"default_refiner_switch": 0.5,
"default_loras": [
[
true,
"None",
1.0
],
[
true,
"None",
1.0
],
[
true,
"None",
1.0
],
[
true,
"None",
1.0
],
[
true,
"None",
1.0
]
],
"default_cfg_scale": 2.0,
"default_sample_sharpness": 2.0,
"default_sampler": "dpmpp_2m",
"default_scheduler": "edm_playground_v2.5",
"default_performance": "Speed",
"default_prompt": "",
"default_prompt_negative": "",
"default_styles": [
"Fooocus V2"
],
"default_aspect_ratio": "1024*1024",
"default_overwrite_step": -1,
"default_inpaint_engine_version": "None",
"checkpoint_downloads": {
"playground-v2.5-1024px-aesthetic.fp16.safetensors": "https://huggingface.co/mashb1t/fav_models/resolve/main/fav/playground-v2.5-1024px-aesthetic.fp16.safetensors"
},
"embeddings_downloads": {},
"lora_downloads": {},
"previous_default_models": []
}

View File

@ -1,54 +0,0 @@
{
"default_model": "ponyDiffusionV6XL.safetensors",
"default_refiner": "None",
"default_refiner_switch": 0.5,
"default_vae": "ponyDiffusionV6XL_vae.safetensors",
"default_loras": [
[
true,
"None",
1.0
],
[
true,
"None",
1.0
],
[
true,
"None",
1.0
],
[
true,
"None",
1.0
],
[
true,
"None",
1.0
]
],
"default_cfg_scale": 7.0,
"default_sample_sharpness": 2.0,
"default_sampler": "dpmpp_2m_sde_gpu",
"default_scheduler": "karras",
"default_performance": "Speed",
"default_prompt": "",
"default_prompt_negative": "",
"default_styles": [
"Fooocus Pony"
],
"default_aspect_ratio": "896*1152",
"default_overwrite_step": -1,
"default_inpaint_engine_version": "None",
"checkpoint_downloads": {
"ponyDiffusionV6XL.safetensors": "https://huggingface.co/mashb1t/fav_models/resolve/main/fav/ponyDiffusionV6XL.safetensors"
},
"embeddings_downloads": {},
"lora_downloads": {},
"vae_downloads": {
"ponyDiffusionV6XL_vae.safetensors": "https://huggingface.co/mashb1t/fav_models/resolve/main/fav/ponyDiffusionV6XL_vae.safetensors"
}
}

View File

@ -1,30 +1,25 @@
{ {
"default_model": "realisticStockPhoto_v20.safetensors", "default_model": "realisticStockPhoto_v20.safetensors",
"default_refiner": "None", "default_refiner": "",
"default_refiner_switch": 0.5, "default_refiner_switch": 0.5,
"default_loras": [ "default_loras": [
[ [
true, "SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors",
"SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors",
0.25 0.25
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
] ]
@ -42,13 +37,12 @@
"Fooocus Negative" "Fooocus Negative"
], ],
"default_aspect_ratio": "896*1152", "default_aspect_ratio": "896*1152",
"default_overwrite_step": -1,
"checkpoint_downloads": { "checkpoint_downloads": {
"realisticStockPhoto_v20.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/realisticStockPhoto_v20.safetensors" "realisticStockPhoto_v20.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/realisticStockPhoto_v20.safetensors"
}, },
"embeddings_downloads": {}, "embeddings_downloads": {},
"lora_downloads": { "lora_downloads": {
"SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors": "https://huggingface.co/mashb1t/fav_models/resolve/main/fav/SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors" "SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors"
}, },
"previous_default_models": ["realisticStockPhoto_v10.safetensors"] "previous_default_models": ["realisticStockPhoto_v10.safetensors"]
} }

View File

@ -4,27 +4,22 @@
"default_refiner_switch": 0.75, "default_refiner_switch": 0.75,
"default_loras": [ "default_loras": [
[ [
true,
"sd_xl_offset_example-lora_1.0.safetensors", "sd_xl_offset_example-lora_1.0.safetensors",
0.5 0.5
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
], ],
[ [
true,
"None", "None",
1.0 1.0
] ]
@ -41,7 +36,6 @@
"Fooocus Cinematic" "Fooocus Cinematic"
], ],
"default_aspect_ratio": "1152*896", "default_aspect_ratio": "1152*896",
"default_overwrite_step": -1,
"checkpoint_downloads": { "checkpoint_downloads": {
"sd_xl_base_1.0_0.9vae.safetensors": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0_0.9vae.safetensors", "sd_xl_base_1.0_0.9vae.safetensors": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0_0.9vae.safetensors",
"sd_xl_refiner_1.0_0.9vae.safetensors": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0_0.9vae.safetensors" "sd_xl_refiner_1.0_0.9vae.safetensors": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0_0.9vae.safetensors"

154
readme.md
View File

@ -1,30 +1,40 @@
<div align=center> <div align=center>
<img src="https://github.com/lllyasviel/Fooocus/assets/19834515/483fb86d-c9a2-4c20-997c-46dafc124f25"> <img src="https://github.com/lllyasviel/Fooocus/assets/19834515/483fb86d-c9a2-4c20-997c-46dafc124f25">
**Non-cherry-picked** random batch by just typing two words "forest elf",
without any parameter tweaking, without any strange prompt tags.
See also **non-cherry-picked** generalization and diversity tests [here](https://github.com/lllyasviel/Fooocus/discussions/2067) and [here](https://github.com/lllyasviel/Fooocus/discussions/808) and [here](https://github.com/lllyasviel/Fooocus/discussions/679) and [here](https://github.com/lllyasviel/Fooocus/discussions/679#realistic).
In the entire open source community, only Fooocus can achieve this level of **non-cherry-picked** quality.
</div> </div>
# Fooocus # Fooocus
[>>> Click Here to Install Fooocus <<<](#download) Fooocus is an image generating software (based on [Gradio](https://www.gradio.app/)).
Fooocus is an image generating software (based on [Gradio](https://www.gradio.app/) <a href='https://github.com/gradio-app/gradio'><img src='https://img.shields.io/github/stars/gradio-app/gradio'></a>). Fooocus is a rethinking of Stable Diffusion and Midjourney's designs:
Fooocus presents a rethinking of image generator designs. The software is offline, open source, and free, while at the same time, similar to many online image generators like Midjourney, the manual tweaking is not needed, and users only need to focus on the prompts and images. Fooocus has also simplified the installation: between pressing "download" and generating the first image, the number of needed mouse clicks is strictly limited to less than 3. Minimal GPU memory requirement is 4GB (Nvidia). * Learned from Stable Diffusion, the software is offline, open source, and free.
* Learned from Midjourney, the manual tweaking is not needed, and users only need to focus on the prompts and images.
Fooocus has included and automated [lots of inner optimizations and quality improvements](#tech_list). Users can forget all those difficult technical parameters, and just enjoy the interaction between human and computer to "explore new mediums of thought and expanding the imaginative powers of the human species" `[1]`.
Fooocus has simplified the installation. Between pressing "download" and generating the first image, the number of needed mouse clicks is strictly limited to less than 3. Minimal GPU memory requirement is 4GB (Nvidia).
`[1]` David Holz, 2019.
**Recently many fake websites exist on Google when you search “fooocus”. Do not trust those; here is the only official source of Fooocus.** **Recently many fake websites exist on Google when you search “fooocus”. Do not trust those; here is the only official source of Fooocus.**
# Project Status: Limited Long-Term Support (LTS) with Bug Fixes Only ## [Installing Fooocus](#download)
The Fooocus project, built entirely on the **Stable Diffusion XL** architecture, is now in a state of limited long-term support (LTS) with bug fixes only. As the existing functionalities are considered nearly free of programmatic issues (thanks to [mashb1t](https://github.com/mashb1t)'s huge efforts), future updates will focus exclusively on addressing any bugs that may arise. # Moving from Midjourney to Fooocus
**There are no current plans to migrate to or incorporate newer model architectures.** However, this may change over time as the open-source community develops. For example, if the community converges on a single dominant method for image generation (which may well happen within half a year to a year given the current status), Fooocus may also migrate to that method. Using Fooocus is as easy as (probably easier than) Midjourney, but this does not mean we lack functionality. Below are the details.
For those interested in utilizing newer models such as **Flux**, we recommend exploring alternative platforms such as [WebUI Forge](https://github.com/lllyasviel/stable-diffusion-webui-forge) (also from us), [ComfyUI/SwarmUI](https://github.com/comfyanonymous/ComfyUI). Additionally, several [excellent forks of Fooocus](https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#forks) are available for experimentation.
Again, many fake websites have recently appeared on Google when you search “fooocus”. Do **NOT** get Fooocus from those websites; this page is the only official source of Fooocus. We have never had any website such as “fooocus.com”, “fooocus.net”, “fooocus.co”, “fooocus.ai”, “fooocus.org”, “fooocus.pro”, “fooocus.one”. Those websites are ALL FAKE. **They have ABSOLUTELY no relationship to us. Fooocus is a 100% non-commercial offline open-source software.**
# Features
Below is a quick list using Midjourney's examples:
| Midjourney | Fooocus | | Midjourney | Fooocus |
| - | - | | - | - |
@ -45,7 +55,7 @@ Below is a quick list using Midjourney's examples:
| InsightFace | Input Image -> Image Prompt -> Advanced -> FaceSwap | | InsightFace | Input Image -> Image Prompt -> Advanced -> FaceSwap |
| Describe | Input Image -> Describe | | Describe | Input Image -> Describe |
Below is a quick list using LeonardoAI's examples: We also have a few things borrowed from the best parts of LeonardoAI:
| LeonardoAI | Fooocus | | LeonardoAI | Fooocus |
| - | - | | - | - |
@ -53,7 +63,7 @@ Below is a quick list using LeonardoAI's examples:
| Advanced Sampler Parameters (like Contrast/Sharpness/etc) | Advanced -> Advanced -> Sampling Sharpness / etc | | Advanced Sampler Parameters (like Contrast/Sharpness/etc) | Advanced -> Advanced -> Sampling Sharpness / etc |
| User-friendly ControlNets | Input Image -> Image Prompt -> Advanced | | User-friendly ControlNets | Input Image -> Image Prompt -> Advanced |
Also, [click here to browse the advanced features.](https://github.com/lllyasviel/Fooocus/discussions/117) Fooocus also developed many "fooocus-only" features for advanced users to get perfect results. [Click here to browse the advanced features.](https://github.com/lllyasviel/Fooocus/discussions/117)
# Download # Download
@ -61,7 +71,7 @@ Also, [click here to browse the advanced features.](https://github.com/lllyasvie
You can directly download Fooocus with: You can directly download Fooocus with:
**[>>> Click here to download <<<](https://github.com/lllyasviel/Fooocus/releases/download/v2.5.0/Fooocus_win64_2-5-0.7z)** **[>>> Click here to download <<<](https://github.com/lllyasviel/Fooocus/releases/download/release/Fooocus_win64_2-1-831.7z)**
After you download the file, please uncompress it and then run the "run.bat". After you download the file, please uncompress it and then run the "run.bat".
@ -74,10 +84,6 @@ The first time you launch the software, it will automatically download models:
After Fooocus 2.1.60, you will also have `run_anime.bat` and `run_realistic.bat`. They are different model presets (and require different models, but they will be automatically downloaded). [Check here for more details](https://github.com/lllyasviel/Fooocus/discussions/679). After Fooocus 2.1.60, you will also have `run_anime.bat` and `run_realistic.bat`. They are different model presets (and require different models, but they will be automatically downloaded). [Check here for more details](https://github.com/lllyasviel/Fooocus/discussions/679).
After Fooocus 2.3.0 you can also switch presets directly in the browser. Keep in mind that you need to add these arguments if you want to change the default behavior:
* Use `--disable-preset-selection` to disable preset selection in the browser.
* Use `--always-download-new-model` to download missing models on preset switch. By default, Fooocus falls back to the `previous_default_models` defined in the corresponding preset; see also the terminal output.
![image](https://github.com/lllyasviel/Fooocus/assets/19834515/d386f817-4bd7-490c-ad89-c1e228c23447) ![image](https://github.com/lllyasviel/Fooocus/assets/19834515/d386f817-4bd7-490c-ad89-c1e228c23447)
If you already have these files, you can copy them to the above locations to speed up installation. If you already have these files, you can copy them to the above locations to speed up installation.
@ -109,21 +115,17 @@ See also the common problems and troubleshoots [here](troubleshoot.md).
### Colab ### Colab
(Last tested - 2024 Aug 12 by [mashb1t](https://github.com/mashb1t)) (Last tested - 2023 Dec 12)
| Colab | Info | Colab | Info
| --- | --- | | --- | --- |
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/lllyasviel/Fooocus/blob/main/fooocus_colab.ipynb) | Fooocus Official [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/lllyasviel/Fooocus/blob/main/fooocus_colab.ipynb) | Fooocus Official
In Colab, you can modify the last line to `!python entry_with_update.py --share --always-high-vram` or `!python entry_with_update.py --share --always-high-vram --preset anime` or `!python entry_with_update.py --share --always-high-vram --preset realistic` for Fooocus Default/Anime/Realistic Edition. In Colab, you can modify the last line to `!python entry_with_update.py --share` or `!python entry_with_update.py --preset anime --share` or `!python entry_with_update.py --preset realistic --share` for Fooocus Default/Anime/Realistic Edition.
You can also change the preset in the UI. Please be aware that this may lead to timeouts after 60 seconds. If this is the case, please wait until the download has finished, switch the preset back to the initial one and then to the one you selected, or reload the page.
Note that this Colab will disable refiner by default because Colab free's resources are relatively limited (and some "big" features like image prompt may cause free-tier Colab to disconnect). We make sure that basic text-to-image is always working on free-tier Colab. Note that this Colab will disable refiner by default because Colab free's resources are relatively limited (and some "big" features like image prompt may cause free-tier Colab to disconnect). We make sure that basic text-to-image is always working on free-tier Colab.
Using `--always-high-vram` shifts resource allocation from RAM to VRAM and achieves the overall best balance between performance, flexibility and stability on the default T4 instance. Please find more information [here](https://github.com/lllyasviel/Fooocus/pull/1710#issuecomment-1989185346). Thanks to [camenduru](https://github.com/camenduru)!
Thanks to [camenduru](https://github.com/camenduru) for the template!
### Linux (Using Anaconda) ### Linux (Using Anaconda)
@ -215,7 +217,7 @@ Then run the `run.bat`.
AMD is not intensively tested, however. The AMD support is in beta. AMD is not intensively tested, however. The AMD support is in beta.
For AMD, use `.\python_embeded\python.exe Fooocus\entry_with_update.py --directml --preset anime` or `.\python_embeded\python.exe Fooocus\entry_with_update.py --directml --preset realistic` for Fooocus Anime/Realistic Edition. For AMD, use `.\python_embeded\python.exe entry_with_update.py --directml --preset anime` or `.\python_embeded\python.exe entry_with_update.py --directml --preset realistic` for Fooocus Anime/Realistic Edition.
### Mac ### Mac
@ -276,10 +278,10 @@ See the common problems [here](troubleshoot.md).
Given different goals, the default models and configs of Fooocus are different: Given different goals, the default models and configs of Fooocus are different:
| Task | Windows | Linux args | Main Model | Refiner | Config | | Task | Windows | Linux args | Main Model | Refiner | Config |
|-----------| --- | --- |-----------------------------| --- |--------------------------------------------------------------------------------| | --- | --- | --- | --- | --- |--------------------------------------------------------------------------------|
| General | run.bat | | juggernautXL_v8Rundiffusion | not used | [here](https://github.com/lllyasviel/Fooocus/blob/main/presets/default.json) | | General | run.bat | | juggernautXL_v8Rundiffusion | not used | [here](https://github.com/lllyasviel/Fooocus/blob/main/presets/default.json) |
| Realistic | run_realistic.bat | --preset realistic | realisticStockPhoto_v20 | not used | [here](https://github.com/lllyasviel/Fooocus/blob/main/presets/realistic.json) | | Realistic | run_realistic.bat | --preset realistic | realisticStockPhoto_v20 | not used | [here](https://github.com/lllyasviel/Fooocus/blob/main/presets/realistic.json) |
| Anime | run_anime.bat | --preset anime | animaPencilXL_v500 | not used | [here](https://github.com/lllyasviel/Fooocus/blob/main/presets/anime.json) | | Anime | run_anime.bat | --preset anime | animaPencilXL_v100 | not used | [here](https://github.com/lllyasviel/Fooocus/blob/main/presets/anime.json) |
Note that the download is **automatic** - you do not need to do anything if the internet connection is okay. However, you can also download them manually (or move them from somewhere else) if you have your own preparation. Note that the download is **automatic** - you do not need to do anything if the internet connection is okay. However, you can also download them manually (or move them from somewhere else) if you have your own preparation.
@ -293,10 +295,9 @@ In both ways the access is unauthenticated by default. You can add basic authent
## List of "Hidden" Tricks ## List of "Hidden" Tricks
<a name="tech_list"></a> <a name="tech_list"></a>
<details> The below things are already inside the software, and **users do not need to do anything about these**.
<summary>Click to see a list of tricks. Those are based on SDXL and are not very up-to-date with the latest models.</summary>
1. GPT2-based [prompt expansion as a dynamic style "Fooocus V2".](https://github.com/lllyasviel/Fooocus/discussions/117#raw) (similar to Midjourney's hidden pre-processing and "raw" mode, or LeonardoAI's Prompt Magic). 1. GPT2-based [prompt expansion as a dynamic style "Fooocus V2".](https://github.com/lllyasviel/Fooocus/discussions/117#raw) (similar to Midjourney's hidden pre-processing and "raw" mode, or LeonardoAI's Prompt Magic).
2. Native refiner swap inside one single k-sampler. The advantage is that the refiner model can now reuse the base model's momentum (or ODE's history parameters) collected from k-sampling to achieve more coherent sampling. In Automatic1111's high-res fix and ComfyUI's node system, the base model and refiner use two independent k-samplers, which means the momentum is largely wasted, and the sampling continuity is broken. Fooocus uses its own advanced k-diffusion sampling that ensures seamless, native, and continuous swap in a refiner setup. (Update Aug 13: Actually, I discussed this with Automatic1111 several days ago, and it seems that the “native refiner swap inside one single k-sampler” is [merged]( https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12371) into the dev branch of webui. Great!) 2. Native refiner swap inside one single k-sampler. The advantage is that the refiner model can now reuse the base model's momentum (or ODE's history parameters) collected from k-sampling to achieve more coherent sampling. In Automatic1111's high-res fix and ComfyUI's node system, the base model and refiner use two independent k-samplers, which means the momentum is largely wasted, and the sampling continuity is broken. Fooocus uses its own advanced k-diffusion sampling that ensures seamless, native, and continuous swap in a refiner setup. (Update Aug 13: Actually, I discussed this with Automatic1111 several days ago, and it seems that the “native refiner swap inside one single k-sampler” is [merged]( https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12371) into the dev branch of webui. Great!)
3. Negative ADM guidance. Because the highest resolution level of XL Base does not have cross attentions, the positive and negative signals for XL's highest resolution level cannot receive enough contrasts during the CFG sampling, causing the results to look a bit plastic or overly smooth in certain cases. Fortunately, since the XL's highest resolution level is still conditioned on image aspect ratios (ADM), we can modify the adm on the positive/negative side to compensate for the lack of CFG contrast in the highest resolution level. (Update Aug 16, the IOS App [Draw Things](https://apps.apple.com/us/app/draw-things-ai-generation/id6444050820) will support Negative ADM Guidance. Great!) 3. Negative ADM guidance. Because the highest resolution level of XL Base does not have cross attentions, the positive and negative signals for XL's highest resolution level cannot receive enough contrasts during the CFG sampling, causing the results to look a bit plastic or overly smooth in certain cases. Fortunately, since the XL's highest resolution level is still conditioned on image aspect ratios (ADM), we can modify the adm on the positive/negative side to compensate for the lack of CFG contrast in the highest resolution level. (Update Aug 16, the IOS App [Draw Things](https://apps.apple.com/us/app/draw-things-ai-generation/id6444050820) will support Negative ADM Guidance. Great!)
4. We implemented a carefully tuned variation of Section 5.1 of ["Improving Sample Quality of Diffusion Models Using Self-Attention Guidance"](https://arxiv.org/pdf/2210.00939.pdf). The weight is set to very low, but this is Fooocus's final guarantee to make sure that the XL will never yield an overly smooth or plastic appearance (examples [here](https://github.com/lllyasviel/Fooocus/discussions/117#sharpness)). This can almost eliminate all cases for which XL still occasionally produces overly smooth results, even with negative ADM guidance. (Update 2023 Aug 18, the Gaussian kernel of SAG is changed to an anisotropic kernel for better structure preservation and fewer artifacts.) 4. We implemented a carefully tuned variation of Section 5.1 of ["Improving Sample Quality of Diffusion Models Using Self-Attention Guidance"](https://arxiv.org/pdf/2210.00939.pdf). The weight is set to very low, but this is Fooocus's final guarantee to make sure that the XL will never yield an overly smooth or plastic appearance (examples [here](https://github.com/lllyasviel/Fooocus/discussions/117#sharpness)). This can almost eliminate all cases for which XL still occasionally produces overly smooth results, even with negative ADM guidance. (Update 2023 Aug 18, the Gaussian kernel of SAG is changed to an anisotropic kernel for better structure preservation and fewer artifacts.)
@ -310,7 +311,6 @@ In both ways the access is unauthenticated by default. You can add basic authent
12. Using automatic1111's method to normalize prompt emphasizing. This significantly improves results when users directly copy prompts from civitai. 12. Using automatic1111's method to normalize prompt emphasizing. This significantly improves results when users directly copy prompts from civitai.
13. The joint swap system of the refiner now also supports img2img and upscale in a seamless way. 13. The joint swap system of the refiner now also supports img2img and upscale in a seamless way.
14. CFG Scale and TSNR correction (tuned for SDXL) when CFG is bigger than 10. 14. CFG Scale and TSNR correction (tuned for SDXL) when CFG is bigger than 10.
</details>
## Customization ## Customization
@ -360,93 +360,43 @@ A safer way is just to try "run_anime.bat" or "run_realistic.bat" - they should
entry_with_update.py [-h] [--listen [IP]] [--port PORT] entry_with_update.py [-h] [--listen [IP]] [--port PORT]
[--disable-header-check [ORIGIN]] [--disable-header-check [ORIGIN]]
[--web-upload-size WEB_UPLOAD_SIZE] [--web-upload-size WEB_UPLOAD_SIZE]
[--hf-mirror HF_MIRROR]
[--external-working-path PATH [PATH ...]] [--external-working-path PATH [PATH ...]]
[--output-path OUTPUT_PATH] [--output-path OUTPUT_PATH] [--temp-path TEMP_PATH]
[--temp-path TEMP_PATH] [--cache-path CACHE_PATH] [--cache-path CACHE_PATH] [--in-browser]
[--in-browser] [--disable-in-browser] [--disable-in-browser] [--gpu-device-id DEVICE_ID]
[--gpu-device-id DEVICE_ID]
[--async-cuda-allocation | --disable-async-cuda-allocation] [--async-cuda-allocation | --disable-async-cuda-allocation]
[--disable-attention-upcast] [--disable-attention-upcast] [--all-in-fp32 | --all-in-fp16]
[--all-in-fp32 | --all-in-fp16]
[--unet-in-bf16 | --unet-in-fp16 | --unet-in-fp8-e4m3fn | --unet-in-fp8-e5m2] [--unet-in-bf16 | --unet-in-fp16 | --unet-in-fp8-e4m3fn | --unet-in-fp8-e5m2]
[--vae-in-fp16 | --vae-in-fp32 | --vae-in-bf16] [--vae-in-fp16 | --vae-in-fp32 | --vae-in-bf16]
[--vae-in-cpu]
[--clip-in-fp8-e4m3fn | --clip-in-fp8-e5m2 | --clip-in-fp16 | --clip-in-fp32] [--clip-in-fp8-e4m3fn | --clip-in-fp8-e5m2 | --clip-in-fp16 | --clip-in-fp32]
[--directml [DIRECTML_DEVICE]] [--directml [DIRECTML_DEVICE]] [--disable-ipex-hijack]
[--disable-ipex-hijack]
[--preview-option [none,auto,fast,taesd]] [--preview-option [none,auto,fast,taesd]]
[--attention-split | --attention-quad | --attention-pytorch] [--attention-split | --attention-quad | --attention-pytorch]
[--disable-xformers] [--disable-xformers]
[--always-gpu | --always-high-vram | --always-normal-vram | --always-low-vram | --always-no-vram | --always-cpu [CPU_NUM_THREADS]] [--always-gpu | --always-high-vram | --always-normal-vram |
[--always-offload-from-vram] --always-low-vram | --always-no-vram | --always-cpu [CPU_NUM_THREADS]]
[--pytorch-deterministic] [--disable-server-log] [--always-offload-from-vram] [--disable-server-log]
[--debug-mode] [--is-windows-embedded-python] [--debug-mode] [--is-windows-embedded-python]
[--disable-server-info] [--multi-user] [--share] [--disable-server-info] [--share] [--preset PRESET]
[--preset PRESET] [--disable-preset-selection] [--language LANGUAGE] [--disable-offload-from-vram]
[--language LANGUAGE] [--theme THEME] [--disable-image-log]
[--disable-offload-from-vram] [--theme THEME]
[--disable-image-log] [--disable-analytics]
[--disable-metadata] [--disable-preset-download]
[--disable-enhance-output-sorting]
[--enable-auto-describe-image]
[--always-download-new-model]
[--rebuild-hash-cache [CPU_NUM_THREADS]]
``` ```
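For example, a typical launch combining several of these flags might look like the following (the flag values here are purely illustrative, not defaults):

```
python entry_with_update.py --listen 127.0.0.1 --port 7865 --preset realistic --always-high-vram
```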
## Inline Prompt Features
### Wildcards
Example prompt: `__color__ flower`
Processed for both the positive and the negative prompt.
Selects a random entry from the corresponding wildcard file, in this case `wildcards/color.txt`.
The wildcard will be replaced with a random color (randomness based on the seed).
You can also disable randomness and process a wildcard file from top to bottom by enabling the checkbox `Read wildcards in order` in Developer Debug Mode.
Wildcards can be nested and combined, and multiple wildcards can be used in the same prompt (see `wildcards/color_flower.txt` for an example).
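For intuition, the substitution can be thought of as the rough sketch below (illustrative Python only, not Fooocus's actual implementation; the function name and the simplified handling of `Read wildcards in order` are assumptions for the example):

```python
import random
import re
from pathlib import Path

def apply_wildcards(prompt: str, wildcards_dir: str = "wildcards",
                    seed: int = 0, read_in_order: bool = False) -> str:
    """Replace each __name__ token with an entry from wildcards/name.txt."""
    rng = random.Random(seed)  # randomness is tied to the seed

    def pick(match: re.Match) -> str:
        path = Path(wildcards_dir) / f"{match.group(1)}.txt"
        options = [line.strip() for line in path.read_text(encoding="utf-8").splitlines() if line.strip()]
        return options[0] if read_in_order else rng.choice(options)

    # a few passes so that wildcards nested inside wildcard files are expanded too
    for _ in range(10):
        expanded = re.sub(r"__([\w-]+)__", pick, prompt)
        if expanded == prompt:
            break
        prompt = expanded
    return prompt

# apply_wildcards("__color__ flower", seed=42) could yield e.g. "teal flower"
```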
### Array Processing
Example prompt: `[[red, green, blue]] flower`
Processed only for the positive prompt.
Processes the array from left to right, generating a separate image for each element in the array. In this case 3 images would be generated, one for each color.
Increase the image number to 3 to generate all 3 variants.
Arrays cannot be nested, but multiple arrays can be used in the same prompt.
Inline LoRAs are also supported as array elements!
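Conceptually, the array is expanded into one prompt per image before generation, roughly as in the sketch below (illustrative only; the helper name is made up, and the real code also handles inline LoRAs inside array elements):

```python
import re

def expand_prompt_array(prompt: str, image_number: int) -> list[str]:
    """Expand the first [[a, b, c]] array into one prompt per generated image."""
    match = re.search(r"\[\[(.*?)\]\]", prompt)
    if match is None:
        return [prompt] * image_number
    elements = [element.strip() for element in match.group(1).split(",")]
    prompts = []
    for i in range(image_number):
        # each image gets the next element; extra images reuse the last one
        element = elements[min(i, len(elements) - 1)]
        prompts.append(prompt[:match.start()] + element + prompt[match.end():])
    return prompts

# expand_prompt_array("[[red, green, blue]] flower", 3)
# -> ["red flower", "green flower", "blue flower"]
```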
### Inline LoRAs
Example prompt: `flower <lora:sunflowers:1.2>`
Processed only for the positive prompt.
Applies a LoRA to the prompt. The LoRA file must be located in the `models/loras` directory.
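The tokens are stripped from the prompt text and collected as (filename, weight) pairs before generation (in the repository this is handled by `util.parse_lora_references_from_prompt`). A simplified, illustrative sketch of that parsing step, not the actual implementation:

```python
import re

LORA_PATTERN = re.compile(r"<lora:([^:>]+):(-?\d+(?:\.\d+)?)>")

def parse_inline_loras(prompt: str, limit: int = 5) -> tuple[list[tuple[str, float]], str]:
    """Extract (filename, weight) pairs from <lora:name:weight> tokens and clean the prompt."""
    loras: list[tuple[str, float]] = []
    for name, weight in LORA_PATTERN.findall(prompt):
        if len(loras) >= limit:
            break
        loras.append((f"{name}.safetensors", float(weight)))
    cleaned = LORA_PATTERN.sub("", prompt)
    cleaned = re.sub(r"\s{2,}", " ", cleaned).strip(" ,")
    return loras, cleaned

# parse_inline_loras("flower <lora:sunflowers:1.2>")
# -> ([("sunflowers.safetensors", 1.2)], "flower")
```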
## Advanced Features ## Advanced Features
[Click here to browse the advanced features.](https://github.com/lllyasviel/Fooocus/discussions/117) [Click here to browse the advanced features.](https://github.com/lllyasviel/Fooocus/discussions/117)
## Forks Fooocus also has many community forks, just like SD-WebUI's [vladmandic/automatic](https://github.com/vladmandic/automatic) and [anapnoe/stable-diffusion-webui-ux](https://github.com/anapnoe/stable-diffusion-webui-ux), for enthusiastic users who want to try!
Below are some forks of Fooocus:
| Fooocus' forks | | Fooocus' forks |
| - | | - |
| [fenneishi/Fooocus-Control](https://github.com/fenneishi/Fooocus-Control) </br>[runew0lf/RuinedFooocus](https://github.com/runew0lf/RuinedFooocus) </br> [MoonRide303/Fooocus-MRE](https://github.com/MoonRide303/Fooocus-MRE) </br> [mashb1t/Fooocus](https://github.com/mashb1t/Fooocus) </br> and so on ... | | [fenneishi/Fooocus-Control](https://github.com/fenneishi/Fooocus-Control) </br>[runew0lf/RuinedFooocus](https://github.com/runew0lf/RuinedFooocus) </br> [MoonRide303/Fooocus-MRE](https://github.com/MoonRide303/Fooocus-MRE) </br> [metercai/SimpleSDXL](https://github.com/metercai/SimpleSDXL) </br> and so on ... |
See also [About Forking and Promotion of Forks](https://github.com/lllyasviel/Fooocus/discussions/699).
## Thanks ## Thanks
Many thanks to [twri](https://github.com/twri) and [3Diva](https://github.com/3Diva) and [Marc K3nt3L](https://github.com/K3nt3L) for creating additional SDXL styles available in Fooocus. Special thanks to [twri](https://github.com/twri) and [3Diva](https://github.com/3Diva) and [Marc K3nt3L](https://github.com/K3nt3L) for creating additional SDXL styles available in Fooocus. Thanks [daswer123](https://github.com/daswer123) for contributing the Canvas Zoom!
The project starts from a mixture of [Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) and [ComfyUI](https://github.com/comfyanonymous/ComfyUI) codebases.
Also, thanks [daswer123](https://github.com/daswer123) for contributing the Canvas Zoom!
## Update Log ## Update Log
@ -454,6 +404,8 @@ The log is [here](update_log.md).
## Localization/Translation/I18N ## Localization/Translation/I18N
**We need your help!** Please help translate Fooocus into international languages.
You can put json files in the `language` folder to translate the user interface. You can put json files in the `language` folder to translate the user interface.
For example, below is the content of `Fooocus/language/example.json`: For example, below is the content of `Fooocus/language/example.json`:

View File

@ -1,2 +1,5 @@
torch==2.1.0 torch==2.0.1
torchvision==0.16.0 torchvision==0.15.2
torchaudio==2.0.2
torchtext==0.15.2
torchdata==0.6.1

View File

@ -1,24 +1,18 @@
torchsde==0.2.6 torchsde==0.2.5
einops==0.8.0 einops==0.4.1
transformers==4.42.4 transformers==4.30.2
safetensors==0.4.3 safetensors==0.3.1
accelerate==0.32.1 accelerate==0.21.0
pyyaml==6.0.1 pyyaml==6.0
pillow==10.4.0 Pillow==9.2.0
scipy==1.14.0 scipy==1.9.3
tqdm==4.66.4 tqdm==4.64.1
psutil==6.0.0 psutil==5.9.5
pytorch_lightning==2.3.3 pytorch_lightning==1.9.4
omegaconf==2.3.0 omegaconf==2.2.3
gradio==3.41.2 gradio==3.41.2
pygit2==1.15.1 pygit2==1.12.2
opencv-contrib-python-headless==4.10.0.84 opencv-contrib-python==4.8.0.74
httpx==0.27.0 httpx==0.24.1
onnxruntime==1.18.1 onnxruntime==1.16.3
timm==1.0.7 timm==0.9.2
numpy==1.26.4
tokenizers==0.19.1
packaging==24.1
rembg==2.0.57
groundingdino-py==0.4.0
segment_anything==1.0

Binary file not shown.


Binary file not shown.


Binary file not shown.


View File

@ -3,10 +3,6 @@
"name": "Fooocus Enhance", "name": "Fooocus Enhance",
"negative_prompt": "(worst quality, low quality, normal quality, lowres, low details, oversaturated, undersaturated, overexposed, underexposed, grayscale, bw, bad photo, bad photography, bad art:1.4), (watermark, signature, text font, username, error, logo, words, letters, digits, autograph, trademark, name:1.2), (blur, blurry, grainy), morbid, ugly, asymmetrical, mutated malformed, mutilated, poorly lit, bad shadow, draft, cropped, out of frame, cut off, censored, jpeg artifacts, out of focus, glitch, duplicate, (airbrushed, cartoon, anime, semi-realistic, cgi, render, blender, digital art, manga, amateur:1.3), (3D ,3D Game, 3D Game Scene, 3D Character:1.1), (bad hands, bad anatomy, bad body, bad face, bad teeth, bad arms, bad legs, deformities:1.3)" "negative_prompt": "(worst quality, low quality, normal quality, lowres, low details, oversaturated, undersaturated, overexposed, underexposed, grayscale, bw, bad photo, bad photography, bad art:1.4), (watermark, signature, text font, username, error, logo, words, letters, digits, autograph, trademark, name:1.2), (blur, blurry, grainy), morbid, ugly, asymmetrical, mutated malformed, mutilated, poorly lit, bad shadow, draft, cropped, out of frame, cut off, censored, jpeg artifacts, out of focus, glitch, duplicate, (airbrushed, cartoon, anime, semi-realistic, cgi, render, blender, digital art, manga, amateur:1.3), (3D ,3D Game, 3D Game Scene, 3D Character:1.1), (bad hands, bad anatomy, bad body, bad face, bad teeth, bad arms, bad legs, deformities:1.3)"
}, },
{
"name": "Fooocus Semi Realistic",
"negative_prompt": "(worst quality, low quality, normal quality, lowres, low details, oversaturated, undersaturated, overexposed, underexposed, bad photo, bad photography, bad art:1.4), (watermark, signature, text font, username, error, logo, words, letters, digits, autograph, trademark, name:1.2), (blur, blurry, grainy), morbid, ugly, asymmetrical, mutated malformed, mutilated, poorly lit, bad shadow, draft, cropped, out of frame, cut off, censored, jpeg artifacts, out of focus, glitch, duplicate, (bad hands, bad anatomy, bad body, bad face, bad teeth, bad arms, bad legs, deformities:1.3)"
},
{ {
"name": "Fooocus Sharp", "name": "Fooocus Sharp",
"prompt": "cinematic still {prompt} . emotional, harmonious, vignette, 4k epic detailed, shot on kodak, 35mm photo, sharp focus, high budget, cinemascope, moody, epic, gorgeous, film grain, grainy", "prompt": "cinematic still {prompt} . emotional, harmonious, vignette, 4k epic detailed, shot on kodak, 35mm photo, sharp focus, high budget, cinemascope, moody, epic, gorgeous, film grain, grainy",
@ -14,7 +10,7 @@
}, },
{ {
"name": "Fooocus Masterpiece", "name": "Fooocus Masterpiece",
"prompt": "(masterpiece), (best quality), (ultra-detailed), {prompt}, illustration, disheveled hair, detailed eyes, perfect composition, moist skin, intricate details, earrings", "prompt": "(masterpiece), (best quality), (ultra-detailed), {prompt}, illustration, disheveled hair, detailed eyes, perfect composition, moist skin, intricate details, earrings, by wlop",
"negative_prompt": "longbody, lowres, bad anatomy, bad hands, missing fingers, pubic hair,extra digit, fewer digits, cropped, worst quality, low quality" "negative_prompt": "longbody, lowres, bad anatomy, bad hands, missing fingers, pubic hair,extra digit, fewer digits, cropped, worst quality, low quality"
}, },
{ {
@ -30,10 +26,5 @@
"name": "Fooocus Cinematic", "name": "Fooocus Cinematic",
"prompt": "cinematic still {prompt} . emotional, harmonious, vignette, highly detailed, high budget, bokeh, cinemascope, moody, epic, gorgeous, film grain, grainy", "prompt": "cinematic still {prompt} . emotional, harmonious, vignette, highly detailed, high budget, bokeh, cinemascope, moody, epic, gorgeous, film grain, grainy",
"negative_prompt": "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured" "negative_prompt": "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured"
},
{
"name": "Fooocus Pony",
"prompt": "score_9, score_8_up, score_7_up, {prompt}",
"negative_prompt": "score_6, score_5, score_4"
} }
] ]

View File

@ -1,4 +0,0 @@
import sys
import pathlib
sys.path.append(pathlib.Path(f'{__file__}/../modules').parent.resolve())

View File

@ -1,74 +0,0 @@
import numbers
import os
import unittest
import modules.flags
from modules import extra_utils
class TestUtils(unittest.TestCase):
def test_try_eval_env_var(self):
test_cases = [
{
"input": ("foo", str),
"output": "foo"
},
{
"input": ("1", int),
"output": 1
},
{
"input": ("1.0", float),
"output": 1.0
},
{
"input": ("1", numbers.Number),
"output": 1
},
{
"input": ("1.0", numbers.Number),
"output": 1.0
},
{
"input": ("true", bool),
"output": True
},
{
"input": ("True", bool),
"output": True
},
{
"input": ("false", bool),
"output": False
},
{
"input": ("False", bool),
"output": False
},
{
"input": ("True", str),
"output": "True"
},
{
"input": ("False", str),
"output": "False"
},
{
"input": ("['a', 'b', 'c']", list),
"output": ['a', 'b', 'c']
},
{
"input": ("{'a':1}", dict),
"output": {'a': 1}
},
{
"input": ("('foo', 1)", tuple),
"output": ('foo', 1)
}
]
for test in test_cases:
value, expected_type = test["input"]
expected = test["output"]
actual = extra_utils.try_eval_env_var(value, expected_type)
self.assertEqual(expected, actual)

View File

@ -1,137 +0,0 @@
import os
import unittest
import modules.flags
from modules import util
class TestUtils(unittest.TestCase):
def test_can_parse_tokens_with_lora(self):
test_cases = [
{
"input": ("some prompt, very cool, <lora:hey-lora:0.4>, cool <lora:you-lora:0.2>", [], 5, True),
"output": (
[('hey-lora.safetensors', 0.4), ('you-lora.safetensors', 0.2)], 'some prompt, very cool, cool'),
},
# Test can not exceed limit
{
"input": ("some prompt, very cool, <lora:hey-lora:0.4>, cool <lora:you-lora:0.2>", [], 1, True),
"output": (
[('hey-lora.safetensors', 0.4)],
'some prompt, very cool, cool'
),
},
# test Loras from UI take precedence over prompt
{
"input": (
"some prompt, very cool, <lora:l1:0.4>, <lora:l2:-0.2>, <lora:l3:0.3>, <lora:l4:0.5>, <lora:l6:0.24>, <lora:l7:0.1>",
[("hey-lora.safetensors", 0.4)],
5,
True
),
"output": (
[
('hey-lora.safetensors', 0.4),
('l1.safetensors', 0.4),
('l2.safetensors', -0.2),
('l3.safetensors', 0.3),
('l4.safetensors', 0.5)
],
'some prompt, very cool'
)
},
# test correct matching even if there is no space separating loras in the same token
{
"input": ("some prompt, very cool, <lora:hey-lora:0.4><lora:you-lora:0.2>", [], 3, True),
"output": (
[
('hey-lora.safetensors', 0.4),
('you-lora.safetensors', 0.2)
],
'some prompt, very cool'
),
},
# test deduplication, also selected loras are never overridden with loras in prompt
{
"input": (
"some prompt, very cool, <lora:hey-lora:0.4><lora:hey-lora:0.4><lora:you-lora:0.2>",
[('you-lora.safetensors', 0.3)],
3,
True
),
"output": (
[
('you-lora.safetensors', 0.3),
('hey-lora.safetensors', 0.4)
],
'some prompt, very cool'
),
},
{
"input": ("<lora:foo:1..2>, <lora:bar:.>, <test:1.0>, <lora:baz:+> and <lora:quux:>", [], 6, True),
"output": (
[],
'<lora:foo:1..2>, <lora:bar:.>, <test:1.0>, <lora:baz:+> and <lora:quux:>'
)
}
]
for test in test_cases:
prompt, loras, loras_limit, skip_file_check = test["input"]
expected = test["output"]
actual = util.parse_lora_references_from_prompt(prompt, loras, loras_limit=loras_limit,
skip_file_check=skip_file_check)
self.assertEqual(expected, actual)
def test_can_parse_tokens_and_strip_performance_lora(self):
lora_filenames = [
'hey-lora.safetensors',
modules.flags.PerformanceLoRA.EXTREME_SPEED.value,
modules.flags.PerformanceLoRA.LIGHTNING.value,
os.path.join('subfolder', modules.flags.PerformanceLoRA.HYPER_SD.value)
]
test_cases = [
{
"input": ("some prompt, <lora:hey-lora:0.4>", [], 5, True, modules.flags.Performance.QUALITY),
"output": (
[('hey-lora.safetensors', 0.4)],
'some prompt'
),
},
{
"input": ("some prompt, <lora:hey-lora:0.4>", [], 5, True, modules.flags.Performance.SPEED),
"output": (
[('hey-lora.safetensors', 0.4)],
'some prompt'
),
},
{
"input": ("some prompt, <lora:sdxl_lcm_lora:1>, <lora:hey-lora:0.4>", [], 5, True, modules.flags.Performance.EXTREME_SPEED),
"output": (
[('hey-lora.safetensors', 0.4)],
'some prompt'
),
},
{
"input": ("some prompt, <lora:sdxl_lightning_4step_lora:1>, <lora:hey-lora:0.4>", [], 5, True, modules.flags.Performance.LIGHTNING),
"output": (
[('hey-lora.safetensors', 0.4)],
'some prompt'
),
},
{
"input": ("some prompt, <lora:sdxl_hyper_sd_4step_lora:1>, <lora:hey-lora:0.4>", [], 5, True, modules.flags.Performance.HYPER_SD),
"output": (
[('hey-lora.safetensors', 0.4)],
'some prompt'
),
}
]
for test in test_cases:
prompt, loras, loras_limit, skip_file_check, performance = test["input"]
lora_filenames = modules.util.remove_performance_lora(lora_filenames, performance)
expected = test["output"]
actual = util.parse_lora_references_from_prompt(prompt, loras, loras_limit=loras_limit, lora_filenames=lora_filenames)
self.assertEqual(expected, actual)

View File

@ -1,136 +1,4 @@
# [2.5.5](https://github.com/lllyasviel/Fooocus/releases/tag/v2.5.5) # 2.1.865
* Fix colab inpaint issue by moving an import statement
# [2.5.4](https://github.com/lllyasviel/Fooocus/releases/tag/v2.5.4)
* Fix validation for default_ip_image_* and default_inpaint_mask_sam_model
* Fix enhance mask debugging in combination with image sorting
* Fix loading of checkpoints and LoRAs when using multiple directories in config and then switching presets
# [2.5.3](https://github.com/lllyasviel/Fooocus/releases/tag/v2.5.3)
* Only load weights from non-safetensors files, preventing harmful code injection
* Add checkbox for applying/resetting styles when describing images, also allowing multiple describe content types
# [2.5.2](https://github.com/lllyasviel/Fooocus/releases/tag/v2.5.2)
* Fix not adding positive prompt when styles didn't have a {prompt} placeholder in the positive prompt
* Extend config settings for input image, see list in [PR](https://github.com/lllyasviel/Fooocus/pull/3382)
# [2.5.1](https://github.com/lllyasviel/Fooocus/releases/tag/v2.5.1)
* Update download URL in readme
* Increase speed of metadata loading
* Fix reading of metadata from jpeg, jpg and webp (exif)
* Fix debug preprocessor
* Update attributes and add inline prompt features section to readme
* Add checkbox, config and handling for saving only the final enhanced image. Use config `default_save_only_final_enhanced_image`, default False.
* Add sorting of final images when enhance is enabled. Use argument `--disable-enhance-output-sorting` to disable.
# [2.5.0](https://github.com/lllyasviel/Fooocus/releases/tag/v2.5.0)
This version includes various package updates. If the auto-update doesn't work you can do one of the following:
1. Open a terminal in the Fooocus folder (location of config.txt) and run `git pull`
2. Update packages
- Windows (installation through zip file): open a terminal in the Fooocus folder (location of config.txt) and run `..\python_embeded\python.exe -m pip install -r .\requirements_versions.txt`, or download Fooocus again (zip file attached to this release)
- other: manually update the packages using `python.exe -m pip install -r requirements_versions.txt` or use the docker image
---
* Update python dependencies, add segment_anything
* Add enhance feature, which offers easy image refinement steps (similar to adetailer, but based on dynamic image detection instead of specific mask detection models). See [documentation](https://github.com/lllyasviel/Fooocus/discussions/3281).
* Rewrite async worker code, make code much more reusable to allow iterations and improve reusability
* Improve GroundingDINO and SAM image masking
* Fix inference tensor version counter tracking issue for GroundingDINO after using Enhance (see [discussion](https://github.com/lllyasviel/Fooocus/discussions/3213))
* Move checkboxes Enable Mask Upload and Invert Mask When Generating from Developer Debug Mode to Inpaint Or Outpaint
* Add persistent model cache for metadata. Use `--rebuild-hash-cache X` (X = int, number of CPU cores, default all) to manually rebuild the cache for all non-cached hashes
* Rename `--enable-describe-uov-image` to `--enable-auto-describe-image`, now also works for enhance image upload
* Rename checkbox `Enable Mask Upload` to `Enable Advanced Masking Features` to better hint to mask auto-generation feature
* Get upscale model filepath by calling downloading_upscale_model() to ensure the model exists
* Rename tab titles and translations from singular to plural
* Rename document to documentation
* Update default models to latest versions
* animaPencilXL_v400 => animaPencilXL_v500
* DreamShaperXL_Turbo_dpmppSdeKarras => DreamShaperXL_Turbo_v2_1
* SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4 => SDXL_FILM_PHOTOGRAPHY_STYLE_V1
* Add preset for pony_v6 (using ponyDiffusionV6XL)
* Add style `Fooocus Pony`
* Add restart sampler ([paper](https://arxiv.org/abs/2306.14878))
* Add config option for default_inpaint_engine_version, sets inpaint engine for pony_v6 and playground_v2.5 to None for improved results (incompatible with inpaint engine)
* Add image editor functionality to mask upload (same as for inpaint, now correctly resizes and allows more detailed mask creation)
# [2.4.3](https://github.com/lllyasviel/Fooocus/releases/tag/v2.4.3)
* Fix alphas_cumprod setter for TCD sampler
* Add parser for env var strings to expected config value types to allow override of all non-path config keys
# [2.4.2](https://github.com/lllyasviel/Fooocus/releases/tag/v2.4.2)
* Fix some small bugs (tcd scheduler when gamma is 0, chown in Dockerfile, update cmd args in readme, translation for aspect ratios, vae default after file reload)
* Fix performance LoRA replacement when data is loaded from history log and inline prompt
* Add support and preset for playground v2.5 (only works with performance Quality or Speed, use with scheduler edm_playground_v2)
* Make textboxes (incl. positive prompt) resizable
* Hide intermediate images when Gradio's performance would bottleneck the generation process (Extreme Speed, Lightning, Hyper-SD)
# [2.4.1](https://github.com/lllyasviel/Fooocus/releases/tag/v2.4.1)
* Fix some small bugs (e.g. adjust clip skip default value from 1 to 2, add type check to aspect ratios js update function)
* Add automated docker build on push to main, tagged with `edge`. See [available docker images](https://github.com/lllyasviel/Fooocus/pkgs/container/fooocus).
# [2.4.0](https://github.com/lllyasviel/Fooocus/releases/tag/v2.4.0)
* Change settings tab elements to be more compact
* Add clip skip slider
* Add select for custom VAE
* Add new style "Random Style"
* Update default anime model to animaPencilXL_v310
* Add button to reconnect the UI after Fooocus crashed without having to configure everything again (no page reload required)
* Add performance "hyper-sd" (based on [Hyper-SDXL 4 step LoRA](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-SDXL-4steps-lora.safetensors))
* Add [AlignYourSteps](https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/) scheduler by Nvidia
* Add [TCD](https://github.com/jabir-zheng/TCD) sampler and scheduler (based on sgm_uniform)
* Add NSFW image censoring (disables intermediate image preview while generating). Set config value `default_black_out_nsfw` to True to always enable.
* Add argument `--enable-describe-uov-image` to automatically describe uploaded images for upscaling
* Add inline lora prompt references with subfolder support, example prompt: `colorful bird <lora:toucan:1.2>`
* Add size and aspect ratio recommendation on image describe
* Add inpaint brush color picker, helpful when image and mask brush have the same color
* Add automated Docker image build using GitHub Actions on each release.
* Add full raw prompts to history logs
* Change code ownership from @lllyasviel to @mashb1t for automated issue / MR notification
# [2.3.1](https://github.com/lllyasviel/Fooocus/releases/tag/2.3.1)
* Remove positive prompt from anime prefix to not reset prompt after switching presets
* Fix image number being reset to 1 when switching preset, now doesn't reset anymore
* Fix outpainting dimension calculation when extending left/right
* Fix LoRA compatibility for LoRAs in a1111 metadata scheme
# [2.3.0](https://github.com/lllyasviel/Fooocus/releases/tag/2.3.0)
* Add performance "lightning" (based on [SDXL-Lightning 4 step LoRA](https://huggingface.co/ByteDance/SDXL-Lightning/blob/main/sdxl_lightning_4step_lora.safetensors))
* Add preset selection to UI, disable with argument `--disable-preset-selection`. Use `--always-download-new-model` to download missing models on preset switch.
* Improve face swap consistency by switching later in the process to (synthetic) refiner
* Add temp path cleanup on startup
* Add support for wildcard subdirectories
* Add scrollable 2 column layout for styles for better structure
* Improve Colab resource needs for T4 instances (default), positively tested with all image prompt features
* Improve anime preset, now uses style `Fooocus Semi Realistic` instead of `Fooocus Negative` (less wet look images)
# [2.2.1](https://github.com/lllyasviel/Fooocus/releases/tag/2.2.1)
* Fix some small bugs (e.g. image grid, upscale fast 2x, LoRA weight width in Firefox)
* Allow prompt weights in array syntax
* Add steps override and metadata scheme to history log
# [2.2.0](https://github.com/lllyasviel/Fooocus/releases/tag/2.2.0)
* Isolate every image generation to truly allow multi-user usage
* Add array support, changes the main prompt when increasing the image number. Syntax: `[[red, green, blue]] flower`
* Add optional metadata to images, allowing you to regenerate and modify them later with the same parameters
* Now supports native PNG, JPG and WEBP image generation
* Add Docker support
# [2.1.865](https://github.com/lllyasviel/Fooocus/releases/tag/2.1.865)
* Various bugfixes * Various bugfixes
* Add authentication to --listen * Add authentication to --listen

724
webui.py

File diff suppressed because it is too large

View File

@ -1,8 +0,0 @@
*.txt
!animal.txt
!artist.txt
!color.txt
!color_flower.txt
!extended-color.txt
!flower.txt
!nationality.txt

View File

@ -18,7 +18,7 @@ Chihuahua
Chimpanzee Chimpanzee
Chinchilla Chinchilla
Chipmunk Chipmunk
Komodo Dragon Comodo Dragon
Cow Cow
Coyote Coyote
Crocodile Crocodile