
# Stable Cascade Examples

First download the stable_cascade_stage_c.safetensors and stable_cascade_stage_b.safetensors checkpoints and put them in the ComfyUI/models/checkpoints folder.
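If you prefer to fetch the checkpoints from a script, here is a minimal sketch using huggingface_hub. The repo id and subfolder are assumptions, not something this page specifies, so point them at wherever you actually downloaded the files from.

```python
# Minimal sketch: fetch the two Stable Cascade checkpoints into ComfyUI's
# checkpoints folder. The repo_id and subfolder below are assumptions;
# adjust them to match the actual download location you are using.
from huggingface_hub import hf_hub_download

CHECKPOINTS_DIR = "ComfyUI/models/checkpoints"  # adjust to your install path

for filename in ("stable_cascade_stage_c.safetensors",
                 "stable_cascade_stage_b.safetensors"):
    hf_hub_download(
        repo_id="stabilityai/stable-cascade",   # assumed repo
        subfolder="comfyui_checkpoints",        # assumed layout
        filename=filename,
        local_dir=CHECKPOINTS_DIR,
    )
```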

Stable Cascade is a three-stage process: first, a low resolution latent image is generated with the Stage C diffusion model. This latent is then upscaled using the Stage B diffusion model. Finally, this upscaled latent is upscaled again and converted to pixel space by the Stage A VAE.
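To get a feel for how the resolutions relate across the three stages, here is a rough sketch. The compression factors are assumptions based on the commonly used defaults (roughly 42 for the Stage C latent and 4 for the Stage A/B pixel decode), so treat the numbers as illustrative rather than exact.

```python
# Rough illustration of the resolution relationship across the three stages.
# The compression factors below are assumed defaults, not exact values.
def cascade_latent_sizes(width, height, stage_c_compression=42, stage_a_factor=4):
    stage_c = (width // stage_c_compression, height // stage_c_compression)
    stage_b = (width // stage_a_factor, height // stage_a_factor)
    return {"stage_c_latent": stage_c, "stage_b_latent": stage_b, "pixels": (width, height)}

print(cascade_latent_sizes(1024, 1024))
# {'stage_c_latent': (24, 24), 'stage_b_latent': (256, 256), 'pixels': (1024, 1024)}
```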

Note that you can download any of the images on this page and drag or load them into ComfyUI to get the workflow embedded in the image.

## Text to Image

Here is a basic text to image workflow:

Example
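If you would rather queue this workflow from a script than from the UI, the sketch below posts it to ComfyUI's local HTTP API. It assumes you exported the workflow with "Save (API Format)" and that ComfyUI is running on the default port; the filename is a placeholder.

```python
# Minimal sketch: queue a saved text-to-image workflow through ComfyUI's HTTP API.
# Assumes the workflow was exported in API format and ComfyUI is running locally.
import json
import urllib.request

with open("stable_cascade_text_to_image_api.json") as f:  # placeholder filename
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # default local ComfyUI address
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```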

## Image to Image

Here's an example of how to do basic image to image by encoding the image and passing it to Stage C.

Example
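The main knob in this workflow is the sampler's denoise value: a lower denoise keeps more of the encoded Stage C latent, a higher one replaces more of it. The sketch below is a rough illustration of that intuition, not ComfyUI's exact scheduler math.

```python
# Rough illustration of the usual image-to-image intuition: only roughly the
# last `denoise` fraction of the sampling schedule is run, so a low denoise
# keeps the encoded input mostly intact. Illustrative only.
def effective_steps(total_steps: int, denoise: float) -> int:
    return max(1, round(total_steps * denoise))

for d in (0.25, 0.5, 0.75, 1.0):
    print(f"denoise={d}: ~{effective_steps(20, d)} of 20 steps run")
```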

## Image Variations

Stable Cascade supports creating variations of images using the output of CLIP vision. See the following workflow for an example:

Example
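For context, the conditioning this workflow feeds in is a CLIP vision embedding of the input image. The sketch below shows how such an embedding is computed with the transformers library; the model name is a placeholder and not necessarily the CLIP vision checkpoint your workflow loads.

```python
# Sketch: compute a CLIP vision embedding for an input image, the kind of
# conditioning the variations workflow uses. The model name and filename are
# placeholders -- use whichever CLIP vision checkpoint your workflow loads.
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

model_name = "openai/clip-vit-large-patch14"  # placeholder model
processor = CLIPImageProcessor.from_pretrained(model_name)
model = CLIPVisionModelWithProjection.from_pretrained(model_name)

image = Image.open("input.png").convert("RGB")  # placeholder filename
inputs = processor(images=image, return_tensors="pt")
image_embeds = model(**inputs).image_embeds  # shape: (1, projection_dim)
print(image_embeds.shape)
```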

See this next workflow for how to mix multiple images together:

Example
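One simple way to think about mixing is a weighted blend of the images' CLIP vision embeddings. The ComfyUI workflow does the combination with its own conditioning nodes; the snippet below only illustrates the idea with stand-in tensors.

```python
# Sketch: blend two image embeddings with adjustable strength. The tensors
# here are random stand-ins for real CLIP vision embeddings.
import torch

def blend(embed_a: torch.Tensor, embed_b: torch.Tensor, strength_a: float = 0.5) -> torch.Tensor:
    return strength_a * embed_a + (1.0 - strength_a) * embed_b

a, b = torch.randn(1, 768), torch.randn(1, 768)
mixed = blend(a, b, strength_a=0.6)
print(mixed.shape)
```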

You can find the input image for the above workflows on the unCLIP example page.

## ControlNet

You can download the Stable Cascade ControlNets from here. For these examples I have renamed the files by adding stable_cascade_ in front of the filename, for example: stable_cascade_canny.safetensors, stable_cascade_inpainting.safetensors. A small renaming sketch follows.
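If you want to do the renaming in one go, here is a minimal sketch; the folder path is a placeholder, so point it at wherever you saved the downloaded files.

```python
# Sketch: add the stable_cascade_ prefix to downloaded controlnet files,
# mirroring the renaming described above. The folder path is a placeholder.
from pathlib import Path

controlnet_dir = Path("ComfyUI/models/controlnet")  # adjust to your install
for f in controlnet_dir.glob("*.safetensors"):
    if not f.name.startswith("stable_cascade_"):
        f.rename(f.with_name("stable_cascade_" + f.name))
```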

Here is an example of how to use the Canny ControlNet:

Example
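If you want to preprocess the edge map yourself instead of inside the workflow, a minimal OpenCV sketch is below. Thresholds and filenames are placeholders to tune for your image.

```python
# Sketch: produce a Canny edge map to feed the Canny controlnet.
# Thresholds and filenames below are placeholders.
import cv2

image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, 100, 200)
cv2.imwrite("canny_input.png", edges)
```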

Here is an example of how to use the Inpaint ControlNet; the example input image can be found here. A reminder that you can right-click images in the LoadImage node and edit them with the mask editor.

Example
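If you prefer scripting a mask instead of painting it in the mask editor, here is a minimal PIL sketch. The filenames and coordinates are placeholders, and depending on how you wire the mask you may need to invert the white-means-inpaint convention used here.

```python
# Sketch: build a simple rectangular inpainting mask outside the mask editor.
# White = region to repaint, black = keep (invert if your setup expects the
# opposite). Filenames and coordinates are placeholders.
from PIL import Image, ImageDraw

image = Image.open("inpaint_input.png")
mask = Image.new("L", image.size, 0)                             # start fully "keep"
ImageDraw.Draw(mask).rectangle((128, 128, 384, 384), fill=255)   # region to repaint
mask.save("inpaint_mask.png")
```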