
Add Replicate demo and API  #2

Merged
merged 1 commit on Sep 18, 2022
2 changes: 2 additions & 0 deletions README.md
@@ -59,6 +59,8 @@ We used the following dataset for training the model:
## Usage

<a href="https://colab.research.google.com/github/rinnakk/japanese-stable-diffusion/blob/master/scripts/txt2img.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
[![Replicate](https://replicate.com/cjwbw/japanese-stable-diffusion/badge)](https://replicate.com/cjwbw/japanese-stable-diffusion)


First, install our package as follows. This package is a modified version of [🤗's Diffusers library](https://github.com/huggingface/diffusers) adapted to run Japanese Stable Diffusion.

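The badge added above links to the hosted demo at https://replicate.com/cjwbw/japanese-stable-diffusion. Once the predictor in this PR is deployed there, the model can also be called remotely through Replicate's API. A minimal sketch, assuming the `replicate` Python client is installed and `REPLICATE_API_TOKEN` is set; the input names mirror the predictor added in this PR, and depending on the client version a specific version hash may need to be appended to the model name:

```python
import replicate

# Call the hosted model; the inputs mirror replicate/predict.py in this PR.
output = replicate.run(
    "cjwbw/japanese-stable-diffusion",
    input={
        "prompt": "サラリーマン 油絵",
        "num_outputs": 1,
        "num_inference_steps": 50,
        "guidance_scale": 7.5,
    },
)
print(output)  # typically a list of URLs to the generated images
```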
17 changes: 17 additions & 0 deletions replicate/cog.yaml
@@ -0,0 +1,17 @@
build:
  gpu: true
  cuda: "11.6.2"
  python_version: "3.10"
  python_packages:
    - "torch==1.12.1 --extra-index-url=https://download.pytorch.org/whl/cu116"
    - "ftfy==6.1.1"
    - "scipy==1.9.0"
    - "transformers==4.21.1"
    - "pytorch-lightning==1.4.2"
    - "torchmetrics==0.6.0"
    - "sentencepiece==0.1.97"
    - "invisible-watermark==0.1.5"
  run:
    - pip install diffusers@git+https://github.com/huggingface/diffusers#egg=diffusers

predict: "predict.py:Predictor"
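The build section above pins the runtime (CUDA 11.6, Python 3.10, torch 1.12.1 cu116 wheels, diffusers from git) but does not fetch the model weights; the predictor in `replicate/predict.py` below loads them from a local cache with `local_files_only=True`, so they have to be downloaded once with a Hugging Face auth token before the image is built. A hedged sketch of that one-time step, assuming a valid token in `HF_TOKEN` and that the bundled pipeline's `from_pretrained` accepts `use_auth_token`, as the upstream diffusers releases of that period did:

```python
import os

from japanese_stable_diffusion import JapaneseStableDiffusionPipeline

# One-time download of the gated weights into the cache directory
# that the predictor expects to find at build time.
JapaneseStableDiffusionPipeline.from_pretrained(
    "rinna/japanese-stable-diffusion",
    cache_dir="japanese-stable-diffusion-cache",
    use_auth_token=os.environ["HF_TOKEN"],
)
```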
66 changes: 66 additions & 0 deletions replicate/predict.py
@@ -0,0 +1,66 @@
import os
import sys
from typing import List
import torch
from pytorch_lightning import seed_everything
from cog import BasePredictor, Input, Path

sys.path.insert(0, "../src")
from japanese_stable_diffusion import JapaneseStableDiffusionPipeline


class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        print("Loading pipeline...")

        model_id = "rinna/japanese-stable-diffusion"
        cache_dir = "japanese-stable-diffusion-cache"
        # the model should be downloaded with auth token and saved to cache
        self.pipe = JapaneseStableDiffusionPipeline.from_pretrained(
            model_id,
            cache_dir=cache_dir,
            local_files_only=True,
        ).to("cuda")

    @torch.inference_mode()
    @torch.cuda.amp.autocast()
    def predict(
        self,
        prompt: str = Input(description="Input prompt", default="サラリーマン 油絵"),
        num_outputs: int = Input(
            description="Number of images to output", choices=[1, 4], default=1
        ),
        num_inference_steps: int = Input(
            description="Number of denoising steps", ge=1, le=500, default=50
        ),
        guidance_scale: float = Input(
            description="Scale for classifier-free guidance", ge=1, le=20, default=7.5
        ),
        seed: int = Input(
            description="Random seed. Leave blank to randomize the seed", default=None
        ),
    ) -> List[Path]:
        """Run a single prediction on the model"""

        if seed is None:
            seed = int.from_bytes(os.urandom(2), "big")
        print(f"Using seed: {seed}")

        seed_everything(seed)

        output = self.pipe(
            prompt=[prompt] * num_outputs,
            num_inference_steps=num_inference_steps,
            guidance_scale=guidance_scale,
        )
        if any(output["nsfw_content_detected"]):
            raise Exception("NSFW content detected, please try a different prompt")

        output_paths = []
        for i, sample in enumerate(output["sample"]):
            output_path = f"/tmp/out-{i}.png"
            sample.save(output_path)
            output_paths.append(Path(output_path))

        return output_paths
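
As wired up by `predict: "predict.py:Predictor"` in cog.yaml, Cog instantiates this class, calls `setup()` once, and then calls `predict()` per request. A hypothetical local smoke test, assuming it is run from the `replicate/` directory on a GPU machine with the weights already cached as described above:

```python
# smoke_test.py (hypothetical) -- exercise the Predictor outside of Cog.
from predict import Predictor

predictor = Predictor()
predictor.setup()  # loads the pipeline onto the GPU once

paths = predictor.predict(
    prompt="サラリーマン 油絵",
    num_outputs=1,
    num_inference_steps=50,
    guidance_scale=7.5,
    seed=42,
)
print(paths)  # e.g. [Path('/tmp/out-0.png')]
```

Inside the container, the same entry point is what `cog predict` exercises before the image is pushed to Replicate.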