feat: add web ui for core ml stable diffusion (#56)
soulteary committed Dec 22, 2022
1 parent c90b705 commit be26d3c
Showing 3 changed files with 153 additions and 0 deletions.
45 changes: 45 additions & 0 deletions README.md
@@ -220,6 +220,51 @@ Differences may be less or more pronounced for different inputs. Please see the

</details>


## <a name="play-with-simple-web-ui"></a> Play with simple Web UI

<details>
<summary> Click to expand </summary>

<img src="assets/webui.jpg">

Once you have completed the model conversion described above, you can start a simple Web UI with the following command:

```bash
python -m python_coreml_stable_diffusion.web -i <output-mlpackages-directory> --compute-unit ALL
```
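
For example, if your converted `.mlpackage` files were written to `./models` (a hypothetical path, used here only for illustration), the invocation would look like this:

```bash
python -m python_coreml_stable_diffusion.web -i ./models --compute-unit ALL
```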

After the command starts, you should see a log similar to the following:

```bash
WARNING:coremltools:Torch version 1.13.0 has not been tested with coremltools. You may run into unexpected errors. Torch 1.12.1 is the most recent version that has been tested.
INFO:python_coreml_stable_diffusion.pipeline:Initializing PyTorch pipe for reference configuration
...
...
INFO:python_coreml_stable_diffusion.pipeline:Done.
INFO:python_coreml_stable_diffusion.pipeline:Initializing Core ML pipe for image generation
INFO:python_coreml_stable_diffusion.pipeline:Stable Diffusion configured to generate 512x512 images
INFO:python_coreml_stable_diffusion.pipeline:Done.
Running on local URL: http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
```

Open `http://0.0.0.0:7860` in your browser to start your Core ML Stable Diffusion adventure.
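
The last line of the log points to an optional tweak: to have Gradio expose the UI through a temporary public link in addition to the local URL, pass `share=True` to the `launch()` call in `web.py` (shown in the diff below), for example:

```python
# web.py: also create a temporary public Gradio link in addition to the local URL
demo.launch(debug=True, server_name="0.0.0.0", share=True)
```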


The Web UI relies on Gradio, a great interface framework. If Gradio is not installed when you run the command above, the program will try to install it automatically.

If the automatic installation fails, you can install the dependency manually with the following command:

```bash
pip install gradio
```

Once the installation is complete, re-run the command above to start the Web UI.
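
For reference, the automatic installation fallback in `web.py` (see the diff below) boils down to this minimal pattern:

```python
try:
    import gradio as gr  # the Web UI depends on gradio
except ModuleNotFoundError:
    # Install gradio with the interpreter running this script, then ask the user to re-run.
    import subprocess
    import sys

    subprocess.check_call([sys.executable, "-m", "pip", "install", "gradio"])
    print("Installed `gradio`; please re-run the command above.")
```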

</details>

## <a name="faq"></a> FAQ

<details>
Binary file added assets/webui.jpg
108 changes: 108 additions & 0 deletions python_coreml_stable_diffusion/web.py
@@ -0,0 +1,108 @@
try:
    import gradio as gr
    import python_coreml_stable_diffusion.pipeline as pipeline
    from diffusers import StableDiffusionPipeline

    def init(args):
        # Load the reference PyTorch pipeline (tokenizer, scheduler and model configuration).
        pipeline.logger.info("Initializing PyTorch pipe for reference configuration")
        pytorch_pipe = StableDiffusionPipeline.from_pretrained(args.model_version,
                                                               use_auth_token=True)

        user_specified_scheduler = None
        if args.scheduler is not None:
            user_specified_scheduler = pipeline.SCHEDULER_MAP[
                args.scheduler].from_config(pytorch_pipe.scheduler.config)

        # Build the Core ML pipeline from the converted .mlpackage files.
        coreml_pipe = pipeline.get_coreml_pipe(pytorch_pipe=pytorch_pipe,
                                               mlpackages_dir=args.i,
                                               model_version=args.model_version,
                                               compute_unit=args.compute_unit,
                                               scheduler_override=user_specified_scheduler)

        def infer(prompt, steps):
            # Gradio callback: generate one image for the given prompt and step count.
            pipeline.logger.info("Beginning image generation.")
            image = coreml_pipe(
                prompt=prompt,
                height=coreml_pipe.height,
                width=coreml_pipe.width,
                num_inference_steps=steps,
            )
            images = []
            images.append(image["images"][0])
            return images

        # Assemble the Web UI: a prompt box, a steps slider and an output gallery.
        demo = gr.Blocks()

        with demo:
            gr.Markdown(
                "<center><h1>Core ML Stable Diffusion</h1>Run Stable Diffusion on Apple Silicon with Core ML</center>")
            with gr.Group():
                with gr.Box():
                    with gr.Row():
                        with gr.Column():
                            with gr.Row():
                                text = gr.Textbox(
                                    label="Prompt",
                                    lines=11,
                                    placeholder="Enter your prompt",
                                )
                            with gr.Row():
                                btn = gr.Button("Generate image")
                            with gr.Row():
                                steps = gr.Slider(label="Steps", minimum=1,
                                                  maximum=50, value=10, step=1)
                        with gr.Column():
                            gallery = gr.Gallery(
                                label="Generated image", elem_id="gallery"
                            )

            text.submit(infer, inputs=[text, steps], outputs=gallery)
            btn.click(infer, inputs=[text, steps], outputs=gallery)

        demo.launch(debug=True, server_name="0.0.0.0")

    if __name__ == "__main__":
        parser = pipeline.argparse.ArgumentParser()

        parser.add_argument(
            "-i",
            required=True,
            help=("Path to input directory with the .mlpackage files generated by "
                  "python_coreml_stable_diffusion.torch2coreml"))
        parser.add_argument(
            "--model-version",
            default="CompVis/stable-diffusion-v1-4",
            help=("The pre-trained model checkpoint and configuration to restore. "
                  "For available versions: https://huggingface.co/models?search=stable-diffusion"))
        parser.add_argument(
            "--compute-unit",
            choices=pipeline.get_available_compute_units(),
            default="ALL",
            help=("The compute units to be used when executing Core ML models. "
                  f"Options: {pipeline.get_available_compute_units()}"))
        parser.add_argument(
            "--scheduler",
            choices=tuple(pipeline.SCHEDULER_MAP.keys()),
            default=None,
            help=("The scheduler to use for running the reverse diffusion process. "
                  "If not specified, the default scheduler from the diffusers pipeline is used."))

        args = parser.parse_args()
        init(args)

except ModuleNotFoundError:
    # The Web UI depends on gradio; if the import fails, try to install it automatically.
    print('`gradio` is not installed, trying to install it automatically...')
    try:
        import subprocess
        import sys

        subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'gradio'])
        print('Successfully installed the missing package `gradio`.')
        print('Now re-run the command :D')
    except subprocess.CalledProcessError:
        print('Automatic installation failed. Try running `pip install gradio` manually, then re-run the command.')
