This is a prototype to build a replicate model using existing replicate models as the base.
These concepts might be incorporated into cog/replicate's dreambooth api.
Using `cog push` to build and push a model many times results in a lot of repeated work - downloading, building, and so on - even though each model you build technically differs only in its weights.

If we could take an existing popular stable diffusion model and throw just our weights and any customization of `predict.py` on top of it, the resulting image should be much smaller and builds much faster, because we only need to upload our changes (the weights and `predict.py`).
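As a rough sketch of the idea (not necessarily what `build.sh` does internally), a derived image can be built by layering only the changed files on top of an already-published r8.im image. The base image reference and the `/src` paths below are assumptions:

```bash
# Hypothetical sketch: reuse a published stable diffusion image as the base
# and add only the fine-tuned weights and predict.py as a new layer.
# The base image reference and the /src layout are assumptions.
cat > Dockerfile <<'EOF'
FROM r8.im/stability-ai/stable-diffusion
COPY weights /src/weights
COPY predict.py /src/predict.py
EOF

docker build -t r8.im/username/modelname .
docker push r8.im/username/modelname
```

The attraction is that docker only uploads layers the registry doesn't already have, so the large base layers are never re-pushed.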
- docker must be installed
- install `cog` and make sure it is authenticated (`cog login`) - see the commands below
- the model name must already exist for your username (create the model on replicate first, then use it with `build.sh`)
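For reference, the cog CLI is typically installed and authenticated like this (check cog's README for the current instructions):

```bash
# install the cog CLI (see https://github.com/replicate/cog for up-to-date steps)
sudo curl -o /usr/local/bin/cog -L "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
sudo chmod +x /usr/local/bin/cog

# authenticate so pushes to r8.im work
cog login
```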
- run the dreambooth trainer on replicate.com (not the dreambooth api)
- download the output.zip and put it into a directory called `weights`
- run `./build.sh r8.im/username/modelname` (see the example below)
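Put together, the workflow looks roughly like this; the download path and model name are placeholders:

```bash
# weights come from the dreambooth trainer run on replicate.com
mkdir -p weights
mv ~/Downloads/output.zip weights/   # placeholder path to the trainer's output

# build an image on top of the existing base and push it to your model
./build.sh r8.im/username/modelname
```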
- parse the sha of the pushed model to tell you the replicate version / image
- speed!!! downloading layers doesn't seem useful... is there a way to skip downloading the existing r8.im layers and just create new layer(s) and push, since this isn't for running locally? (see the sketch after this list)
- efficiency?!? is sharing the base image of cog-stable-diffusion helpful? does having the SD2.1 weights in an unused but present layer make things better or worse?
- build these into cog?
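For the "skip downloading layers" idea, one possible approach is `crane append` from go-containerregistry, which pushes a new manifest plus only the new layer without pulling the base layers. Whether r8.im supports this (including cross-repo layer mounts) is untested, and the base image and `/src` layout are assumptions:

```bash
# stage the changed files under the path the base image is assumed to use (/src)
mkdir -p layer/src
cp -r weights layer/src/
cp predict.py layer/src/
tar -C layer -cf changes.tar src

# append that tarball as a new layer on top of the remote base image and push
# the result as a new tag - the existing base layers are never downloaded
crane append \
  --base r8.im/stability-ai/stable-diffusion \
  --new_layer changes.tar \
  --new_tag r8.im/username/modelname
```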