ENFUGUE Web UI v0.2.4 #83
Announced by painebenjamin in Announcements
Just to clarify, TensorRT requires 12 GB of VRAM? So my 8 GB 3070 won't work with TensorRT? Also, do you guys have a Discord?
Sorry I was unclear! It does not have a strict memory requirement and can be quite variable between about 6 and 12 GB, and during the engine compile it will grow to use as much VRAM as it needs to speed things up. I only picked 12 GB to try to avoid disappointment from new users if their builds fail to load properly or some other issue arises. I have precious little data about TensorRT users' hardware and experience, so I would be very interested in hearing from you or anyone else who tries to get it running.
As for a Discord, I do not have one! So far direct communication from me has been enough for support, and my small number of vocal users are scattered about many communities so there hasn't been an interest in one. If more people voice a desire for one, though, I will certainly do so!
I can be found lurking various SD discords, but I'm going to be in the [RunDiffusion Discord](https://discord.gg/y7gCC3UTzK) to give support and tips for the foreseeable future. Please stop by and say hi, I'll be happy to help with any questions you may have!
New Features
1. IP Adapter Overhaul
Using multiple images for prompting is an exceptional way to create a personalized affectation without needing to train your own model.
The IP adapter integration has been overhauled.
2. QR Monster, ControlNet Conditioning Start/End
Use a strong scale and lengthy conditioning time for QR codes that are scannable.
Use a weaker scale and stop conditioning short to achieve a more subtle or "hidden-eye" effect.
3. Model Merger
The two merge modes and their options.
The backend model merger has been made available in the frontend to use as desired. Select Merge Models under the Models menu to get started.
There are two modes of operation:
- One mode computes `(a + (b - c))` for all weights common between the models.
- The other blends two checkpoints with an `alpha` parameter from 0 to 1, where 0 would produce entirely the first checkpoint, 1 would produce entirely the second checkpoint, and 0.5 would produce the exact mean between the two.

4. More Flexible Model Loading
Finally, model loading has been made significantly more flexible, to better facilitate sharing of resources between Enfugue and other Stable Diffusion applications. To this end, Enfugue will now search configured directories, to an arbitrary nesting depth, for existing versions of models before attempting to download them itself. The known filenames for each scenario have been expanded as well; see the wiki for more details.
Full Changelog: 0.2.3...0.2.4
How-To Guide
Installing and Running: Portable Distributions
Select a portable distribution if you'd like to avoid having to install other programs, or want to have an isolated executable file that doesn't interfere with other environments on your system.
Summary
enfugue-server-0.2.4-win-cuda-x86_64.zip.002
enfugue-server-0.2.4-win-tensorrt-x86_64.zip.002
enfugue-server-0.2.4-manylinux-cuda-x86_64.tar.gz.1
enfugue-server-0.2.4-manylinux-cuda-x86_64.tar.gz.2
enfugue-server-0.2.4-manylinux-tensorrt-x86_64.tar.gz.1
enfugue-server-0.2.4-manylinux-tensorrt-x86_64.tar.gz.2
Linux
First, decide which version you want - with or without TensorRT support. TensorRT requires a powerful, modern Nvidia GPU.
Then, download the appropriate `manylinux` files here, concatenate them, and extract them. You are then ready to run the server.
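The release's exact console commands are not preserved above, so here is a sketch of the usual approach, assuming the CUDA build and that the archive extracts to an `enfugue-server` directory containing a launch script (both names are assumptions; check your extracted files):

```shell
# Concatenate the split archive parts and extract in one step
cat enfugue-server-0.2.4-manylinux-cuda-x86_64.tar.gz.* | tar -xvz

# Launch the server (script name is an assumption)
./enfugue-server/enfugue.sh
```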
Press `Ctrl+C` to exit.
Windows
Download the `win64` files here, and extract them using a program which allows extracting from multiple archives, such as 7-Zip.
If you are using 7-Zip, you should not extract both files independently. If they are in the same directory when you unzip the first, 7-Zip will automatically unzip the second. The second file cannot be extracted on its own.
Locate the file `enfugue-server.exe` and double-click it to run it. To exit, locate the icon in the bottom-right-hand corner of your screen (the system tray) and right-click it, then select Quit.
MacOS
Download the `macos` file here, then double-click it to extract the package. When you run the application, your Mac will warn you about running downloaded packages, and you will have to perform an administrator override to allow it to run; you will be prompted to do this. To avoid this, you can run an included command that finds all the files in the installation and removes the `com.apple.quarantine` xattr from each one. This does not require administrator privileges. After doing this (or if you prefer to grant the override), run the server.
Note: while the MacOS packages are compiled on x86 machines, they are tested and designed for the new M1/M2 ARM machines thanks to Rosetta, Apple's machine-code translation system.
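The bundled command is not reproduced above; a minimal sketch of the quarantine removal, assuming the package extracts to an `enfugue-server` directory (an assumption; adjust to your extraction path):

```shell
# Remove the macOS quarantine flag from every file in the extracted package
# (directory name is an assumption)
find ./enfugue-server -type f -exec xattr -d com.apple.quarantine {} \;
```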
Upgrading
To upgrade any distribution, download and extract the appropriate upgrade package on this release. Copy all files in the upgrade package into your Enfugue installation directory, overwriting any existing files.
Installing and Running: Conda
To install with the provided Conda environments, you need to install a version of Conda.
After installing Conda and configuring it so it is available to your shell or command line, download one of the environment files depending on your platform and graphics API:
- Choose `windows-`, `linux-` or `macos-` based on your platform.
- Choose `tensorrt` for all of the capabilities of `cuda` with the added ability to compile TensorRT engines. If you do not plan on using TensorRT, choose `cuda` for the most optimized build for this API.
- Additional graphics APIs beyond `cuda` (`rocm` and `directml`) are being added and will be available soon.

Finally, create your Conda environment using the file you downloaded.
You've now installed Enfugue and all dependencies. To run it, activate the environment and then run the installed binary.
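The commands themselves are not preserved above; a sketch, assuming the downloaded file is `linux-cuda.yml`, the environment it defines is named `enfugue`, and the installed binary is invoked as `enfugue run` (all three are assumptions):

```shell
# Create the environment from the downloaded file (filename is an assumption)
conda env create -f linux-cuda.yml

# Activate it and start the server (environment and binary names are assumptions)
conda activate enfugue
enfugue run
```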
Optional: DWPose Support
To install DWPose support (a better, faster pose and face detection model), after installing Enfugue, execute the following:
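The release's exact command is not preserved here. DWPose is distributed through OpenMMLab's `mmpose` ecosystem, so a typical installation uses their `mim` installer; the package list below is an assumption, so consult the Enfugue wiki for the exact command:

```shell
# Install OpenMMLab's package manager, then the DWPose dependencies
# (package list is an assumption)
pip install -U openmim
mim install mmengine mmcv mmdet mmpose
```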
Installing and Running: Self-Managed Environment
If you would like to manage dependencies yourself, or want to install Enfugue into an environment shared with another Stable Diffusion UI, you can install Enfugue via `pip`. This is the only method available for AMD GPUs at present.
If you are on Linux and want TensorRT support, execute:
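The exact commands are not preserved above; a sketch, assuming the package is published on PyPI as `enfugue` and TensorRT support is exposed as a pip extra named `tensorrt` (both assumptions; verify against the Enfugue documentation):

```shell
# Base installation
pip install enfugue

# Linux with TensorRT support (extra name is an assumption)
pip install enfugue[tensorrt]
```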
If you are on Windows and want TensorRT support, follow the steps detailed here.
Thank you!
This discussion was created from the release ENFUGUE Web UI v0.2.4.