ENFUGUE Web UI v0.2.4
New Platform
ENFUGUE has partnered with RunDiffusion to bring you a cloud-hosted solution for using ENFUGUE with just a web browser, no hardware requirements at all. Sign up for 30 minutes of free use, with further use costing as little as $0.50 an hour.
Enjoy a fine-tuned ENFUGUE experience, with numerous models pre-installed and even the lowest tier of machine more than capable of using SDXL and upscaling up to 4k.
New Features
1. IP Adapter Overhaul
Using multiple images for prompting is an excellent way to create a personalized effect without needing to train your own model.
The IP adapter integration has been overhauled with the following:
- Any number of IP Adapter Images can now be stacked on a node, in combination with any number of control images and a reference image. This provides an incredible way of creating a "mini-LoRA," extracting the features from numerous reference images and using them to modify your prompt.
- In addition, a total of five IP adapters are now available, selectable by using checkboxes in the interface.
- Stable Diffusion 1.5
- Stable Diffusion 1.5 with Fine-Grained Features (IP Adapter Plus)
- Stable Diffusion 1.5 with Fine-Grained Facial Features (IP Adapter Plus Face)
- Stable Diffusion XL
- Stable Diffusion XL with Fine-Grained Features (IP Adapter Plus XL)
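Conceptually, stacking multiple IP Adapter images means pooling their image embeddings into a single conditioning signal that then modifies the prompt. A minimal sketch of one common pooling approach (element-wise averaging, using plain lists in place of tensors — this is an illustration, not Enfugue's actual implementation):

```python
# Illustrative only: pool several reference-image embeddings into one
# conditioning vector, the idea behind stacking IP Adapter images.

def pool_embeddings(embeddings):
    """Average a list of equal-length embedding vectors element-wise."""
    n = len(embeddings)
    return [sum(values) / n for values in zip(*embeddings)]

# Two toy 2-dimensional "image embeddings":
print(pool_embeddings([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```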
2. QR Monster, ControlNet Conditioning Start/End
Use a strong scale and lengthy conditioning time for QR codes that are scannable.
Use a weaker scale and stop conditioning short to achieve a more subtle or "hidden-eye" effect.
- A new ControlNet, QR Code Monster, has been added to Enfugue. Simply select "QR" from the ControlNet dropdown to use it. There is no pre-processor for this ControlNet.
- In addition, sliders have been added in the UI for when to start and when to stop ControlNet conditioning. This is a per-control-image setting that tells Enfugue when to start following ControlNet's influence and when to stop, in proportion to the length of the denoising stage. For example, a conditioning start of "0.2" would tell Enfugue to start using ControlNet about 20% of the way through creating the image, which allows Enfugue to generate its own randomness prior to using ControlNet for more subtle effects. The same can be done for the end of the conditioning period as well.
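The fractional start/end values above map onto discrete denoising steps. A minimal sketch of that mapping (not Enfugue's actual code — the function name is illustrative):

```python
# Sketch: which denoising steps ControlNet conditioning covers, given
# fractional start/end bounds like the UI sliders described above.

def controlnet_active_steps(num_steps, start=0.0, end=1.0):
    """Return the denoising step indices during which ControlNet
    conditioning would be applied."""
    first = round(num_steps * start)
    last = round(num_steps * end)
    return list(range(first, last))

# With 50 steps, start=0.2 and end=0.8, ControlNet guides steps 10-39,
# leaving the first 20% of denoising free of ControlNet influence.
print(controlnet_active_steps(50, 0.2, 0.8))
```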
3. Model Merger
The backend model merger has been made available in the frontend to use as desired. Select Merge Models under the Models menu to get started.
There are two modes of operation:
- Add Difference - this takes three checkpoints as input, and the output will be the first model plus the difference between the latter two models - i.e. the resulting model follows the formula `(a + (b - c))` for all weights common between them.
- Weighted Sum - this takes two checkpoints as input, and the output will be a weighted blend between the models based upon an `alpha` parameter from 0 to 1, where 0 would produce entirely the first checkpoint, 1 would produce entirely the second checkpoint, and 0.5 would produce the exact mean between the two.
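The two merge formulas can be sketched as follows, using plain floats in place of tensors (an illustration of the arithmetic, not Enfugue's implementation):

```python
# Sketch of the two merge modes, applied per weight key.

def add_difference(a, b, c):
    """a + (b - c), for all weights common to the three checkpoints."""
    return {k: a[k] + (b[k] - c[k]) for k in a.keys() & b.keys() & c.keys()}

def weighted_sum(a, b, alpha):
    """Blend of two checkpoints: alpha=0 gives a, alpha=1 gives b,
    alpha=0.5 gives the exact mean."""
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a.keys() & b.keys()}

a, b, c = {"w": 1.0}, {"w": 3.0}, {"w": 2.0}
print(add_difference(a, b, c))  # {'w': 2.0}  i.e. 1 + (3 - 2)
print(weighted_sum(a, b, 0.5))  # {'w': 2.0}  i.e. the mean of 1 and 3
```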
4. More Flexible Model Loading
Finally, model loading has been made significantly more flexible, to better facilitate sharing of resources between Enfugue and other stable diffusion applications. To this end, Enfugue will now search in configured directories to an arbitrarily nested level of directories to find versions of models before attempting to download them itself. The known filenames for each scenario have been expanded as well, see the wiki for more details.
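The recursive search described above can be sketched like this (a hypothetical illustration — the function name and signature are not Enfugue's API):

```python
# Sketch: search configured directories, arbitrarily nested, for any of
# the known filenames for a model before falling back to downloading.
from pathlib import Path

def find_model(directories, filenames):
    """Return the first file matching a known filename under any
    configured directory (searched recursively), or None."""
    for directory in directories:
        for name in filenames:
            for match in Path(directory).rglob(name):
                return match
    return None
```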
Full Changelog: 0.2.3...0.2.4
How-To Guide
Installing and Running: Portable Distributions
Select a portable distribution if you'd like to avoid having to install other programs, or want to have an isolated executable file that doesn't interfere with other environments on your system.
Summary
Platform | Graphics API | File(s) | CUDA Version | Torch Version |
---|---|---|---|---|
MacOS | MPS | enfugue-server-0.2.4-macos-ventura-x86_64.tar.gz | N/A | 2.2.0.dev20230928 |
Windows | CUDA | enfugue-server-0.2.4-win-cuda-x86_64.zip.001, enfugue-server-0.2.4-win-cuda-x86_64.zip.002 | 12.1.1 | 2.2.0.dev20230928 |
Windows | CUDA+TensorRT | enfugue-server-0.2.4-win-tensorrt-x86_64.zip.001, enfugue-server-0.2.4-win-tensorrt-x86_64.zip.002 | 11.7.1 | 1.13.1 |
Linux | CUDA | enfugue-server-0.2.4-manylinux-cuda-x86_64.tar.gz.0, enfugue-server-0.2.4-manylinux-cuda-x86_64.tar.gz.1, enfugue-server-0.2.4-manylinux-cuda-x86_64.tar.gz.2 | 12.1.1 | 2.2.0.dev20230928 |
Linux | CUDA+TensorRT | enfugue-server-0.2.4-manylinux-tensorrt-x86_64.tar.gz.0, enfugue-server-0.2.4-manylinux-tensorrt-x86_64.tar.gz.1, enfugue-server-0.2.4-manylinux-tensorrt-x86_64.tar.gz.2 | 11.7.1 | 1.13.1 |
Linux
First, decide which version you want - with or without TensorRT support. TensorRT requires a powerful, modern Nvidia GPU.
Then, download the appropriate `manylinux` files here, concatenate them, and extract them. A console command to do that is:
cat enfugue-server-0.2.4* | tar -xvz
You are now ready to run the server with:
./enfugue-server/enfugue.sh
Press Ctrl+C
to exit.
Windows
Download the `win64` files here, and extract them using a program which supports multi-part archives, such as 7-Zip.
If you are using 7-Zip, do not extract the two files independently. If both parts are in the same directory when you unzip the first, 7-Zip will automatically unzip the second; the second file cannot be extracted on its own.
Locate the file `enfugue-server.exe`, and double-click it to run it. To exit, locate the icon in the bottom-right corner of your screen (the system tray), right-click it, then select Quit.
MacOS
Download the `macos` file here, then double-click it to extract the package. When you run the application using the command below, your Mac will warn you about running downloaded packages, and you will have to perform an administrator override to allow it to run; you will be prompted to do this. To avoid this, you can run an included command like so:
./enfugue-server/unquarantine.sh
This command finds all the files in the installation and removes the `com.apple.quarantine` xattr from each of them. This does not require administrator privileges. After doing this (or if you would rather grant the override), run the server with:
./enfugue-server/enfugue.sh
Note: while the MacOS packages are compiled on x86 machines, they are tested and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine code translation system.
Upgrading
To upgrade any distribution, download and extract the appropriate upgrade package from this release, then copy all files in the upgrade package into your Enfugue installation directory, overwriting any existing files.
Installing and Running: Conda
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command-line, download one of the environment files depending on your platform and graphics API.
- First, choose `windows-`, `linux-` or `macos-` based on your platform.
- Then, choose your graphics API:
  - If you are on MacOS, you only have access to MPS.
  - If you have a powerful next-generation Nvidia GPU (3000 series and better with at least 12 GB of VRAM), use `tensorrt` for all of the capabilities of `cuda` with the added ability to compile TensorRT engines. If you do not plan on using TensorRT, select `cuda` for the most optimized build for this API.
  - If you have any other Nvidia GPU or other CUDA-compatible device, select `cuda`.
  - Additional graphics APIs (`rocm` and `directml`) are being added and will be available soon.
Finally, using the file you downloaded, create your Conda environment:
conda env create -f <downloaded_file.yml>
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
conda activate enfugue
enfugue run
Optional: DWPose Support
To install DW Pose support (a better, faster pose and face detection model), after installing Enfugue, execute the following:
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
Installing and Running: Self-Managed Environment
If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install Enfugue via `pip`. This is the only method available for AMD GPUs at present.
pip install enfugue
If you are on Linux and want TensorRT support, execute:
pip install enfugue[tensorrt]
If you are on Windows and want TensorRT support, follow the steps detailed here.