@@ -38,47 +38,36 @@ assumptions. If anything is unclear, please open an issue.
 * [See the `dev` branch for the latest features.](https://github.com/kiri-art/docker-diffusers-api/tree/dev)
 ***Pull Requests must be submitted against the dev branch.***
 
-## Usage:
+## Installation & Setup:
+
+Setup varies depending on your use case.
 
-Firstly, fork and clone this repo.
+1. **To run locally or on a *server*, with runtime downloads**:
 
-Most of the configuration happens via docker build variables. You can
-see all the options in the [Dockerfile](./Dockerfile), and edit them
-there directly, or set via docker command line or e.g. Banana's dashboard
-UI once support for build variables lands (any day now).
+    `docker run --gpus all -p 8000:8000 -e HF_AUTH_TOKEN=$HF_AUTH_TOKEN gadicc/diffusers-api`.
 
-If you're only deploying one container, that's all you need! If you
-intend to deploy multiple containers, each with different variables
-(e.g. a few different models), you can edit the example
-[`scripts/permutations.yaml`](scripts/permutations.yaml) file and
-run [`scripts/permute.sh`](scripts/permute.sh) to create a number
-of sub-repos in the `permutations` directory.
+    See the [guides for various cloud providers](https://forums.kiri.art/t/running-on-other-cloud-providers/89/7).
 
-Lastly, there's an option to set `MODEL_ID=ALL`, and *all* models will
-be downloaded, and switched at request time (great for dev, useless for
-serverless).
+1. **To run *serverless*, include the model at build time**:
 
-**Deploying to banana?** That's it! You're done. Commit your changes and push.
+    1. [docker-diffusers-api-build-download](https://github.com/kiri-art/docker-diffusers-api-build-download)
+       ([banana](https://forums.kiri.art/t/run-diffusers-api-on-banana-dev/103), others)
+    1. [docker-diffusers-api-runpod](https://github.com/kiri-art/docker-diffusers-api-runpod),
+       see the [guide](https://forums.kiri.art/t/run-diffusers-api-on-runpod-io/102)
 
-## Running locally / development:
+1. **Building from source**:
 
-**Building**
+    1. Fork / clone this repo.
+    1. `docker build -t gadicc/diffusers-api .`
+    1. See [CONTRIBUTING.md](./CONTRIBUTING.md) for more helpful hints.
 
-1. `docker build -t diffusers-api --build-arg HF_AUTH_TOKEN=$HF_AUTH_TOKEN .`
-1. See [CONTRIBUTING.md](./CONTRIBUTING.md) for more helpful hints.
-1. Note: your first build can take a really long time, depending on
-   your PC & network speed, and *especially when using the `CHECKPOINT_URL`
-   feature*. Great time to grab a coffee or take a walk.
+*Other configurations are possible; these are just the most common cases.*
 
-**Running**
+Everything is set via docker build-args or environment variables.
 
-1. `docker run -it --gpus all -p 8000:8000 diffusers-api`
-1. Note: the `-it` is optional but makes it a lot quicker/easier to stop the
-   container using `Ctrl-C`.
-1. If you get a `CUDA initialization: CUDA unknown error` after suspend,
-   just stop the container, `rmmod nvidia_uvm`, and restart.
+## Usage:
 
-## Sending requests
+See also [Testing](#testing) below.
 
 The container expects an `HTTP POST` request with the following JSON body:
 
@@ -93,6 +82,7 @@ The container expects an `HTTP POST` request with the following JSON body:
     "seed": 3239022079
   },
   "callInputs": {
+    // You can leave these out to use the default
     "MODEL_ID": "runwayml/stable-diffusion-v1-5",
     "PIPELINE": "StableDiffusionPipeline",
     "SCHEDULER": "LMSDiscreteScheduler",
@@ -101,19 +91,6 @@ The container expects an `HTTP POST` request with the following JSON body:
 }
 ```
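The JSON body above can be assembled and posted from any HTTP client. A minimal Python sketch follows; it is an illustration, not part of this repo — the endpoint URL assumes the `docker run -p 8000:8000` invocation shown under Installation, and `prompt` is a stand-in for whatever model inputs your pipeline takes:

```python
import json

# Assemble the request body documented above. "prompt" is illustrative;
# the seed and callInputs values mirror the example JSON.
payload = {
    "modelInputs": {
        "prompt": "Super dog",  # hypothetical model input
        "seed": 3239022079,
    },
    "callInputs": {
        "MODEL_ID": "runwayml/stable-diffusion-v1-5",
        "PIPELINE": "StableDiffusionPipeline",
        "SCHEDULER": "LMSDiscreteScheduler",
    },
}
body = json.dumps(payload).encode("utf-8")

# To actually send it, with the container running locally:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/", data=body,
#     headers={"Content-Type": "application/json"})
# result = json.load(urllib.request.urlopen(req))
```

Uncomment the `urllib.request` lines (or use any HTTP library) once the container is up.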
 
-If you're using banana's SDK, it looks something like this:
-
-```js
-const out = await banana.run(apiKey, modelKey, { "modelInputs": modelInputs, "callInputs": callInputs });
-```
-
-NB: if you're coming from another banana starter repo, note that we
-explicitly name `modelInputs` above, and send a bigger object (with
-`modelInputs` and `callInputs` keys) for the banana-sdk's
-"modelInputs" argument.
-
-If provided, `init_image` and `mask_image` should be base64 encoded.
-
 **Schedulers**: docker-diffusers-api is simply a wrapper around diffusers;
 literally any scheduler included in diffusers will work out of the box,
 provided it can be loaded with its default config and without requiring
@@ -123,6 +100,8 @@ schedulers are the most common and most well tested:
 `LMSDiscreteScheduler`, `DDIMScheduler`, `PNDMScheduler`,
 `EulerAncestralDiscreteScheduler`, `EulerDiscreteScheduler`.
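Because the scheduler is chosen per request via `callInputs`, switching is a one-key change. A hedged Python sketch (the `with_scheduler` helper is hypothetical; the list mirrors the well-tested schedulers above):

```python
import json

# The well-tested schedulers listed above.
COMMON_SCHEDULERS = [
    "LMSDiscreteScheduler",
    "DDIMScheduler",
    "PNDMScheduler",
    "EulerAncestralDiscreteScheduler",
    "EulerDiscreteScheduler",
]

def with_scheduler(call_inputs: dict, scheduler: str) -> dict:
    """Return a copy of callInputs selecting a different scheduler."""
    if scheduler not in COMMON_SCHEDULERS:
        # Per the note above, any diffusers scheduler loadable with its
        # default config should also work; warn rather than fail.
        print(f"note: {scheduler} is outside the well-tested list")
    return {**call_inputs, "SCHEDULER": scheduler}

call_inputs = {"MODEL_ID": "runwayml/stable-diffusion-v1-5",
               "SCHEDULER": "LMSDiscreteScheduler"}
print(json.dumps(with_scheduler(call_inputs, "EulerDiscreteScheduler")))
```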
 
+**Pipelines**:
+
 <a name="testing"></a>
 ## Examples and testing
 
@@ -155,42 +134,27 @@ Request took 3.0s (init: 2.4s, inference: 2.1s)
 The best example of course is https://kiri.art/ and its
 [source code](https://github.com/kiri-art/stable-diffusion-react-nextjs-mui-pwa).
 
-
-
-## Troubleshooting
-
-* **403 Client Error: Forbidden for url**
-
-  Make sure you've accepted the license on the model card of the HuggingFace model
-  specified in `MODEL_ID`, and that you correctly passed `HF_AUTH_TOKEN` to the
-  container.
+## Help on the [Official Forums](https://forums.kiri.art/c/docker-diffusers-api/16).
 
 ## Adding other Models
 
 You have two options.
 
-1. For a diffusers model, simply set the `MODEL_ID` docker build variable to the name
+1. For a diffusers model, simply set the `MODEL_ID` build-var / call-arg to the name
    of the model hosted on HuggingFace, and it will be downloaded automatically at
    build time.
 
-1. For a non-diffusers model, simply set the `CHECKPOINT_URL` docker build variable
+1. For a non-diffusers model, simply set the `CHECKPOINT_URL` build-var / call-arg
    to the URL of a `.ckpt` file, which will be downloaded and converted to the diffusers
-   format automatically at build time.
-
-## Keeping forks up to date
-
-Per your personal preferences, rebase or merge, e.g.
+   format automatically at build time. `CHECKPOINT_CONFIG_URL` can also be set.
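The build-time route for either option can be sketched as a `docker build` invocation assembled in Python. The helper and image tag below are hypothetical; `MODEL_ID`, `CHECKPOINT_URL`, and `HF_AUTH_TOKEN` are build-args the README documents:

```python
import shlex

def docker_build_cmd(tag: str, build_args: dict) -> str:
    """Compose a `docker build` command with the given --build-arg pairs."""
    parts = ["docker", "build", "-t", tag]
    for key, value in build_args.items():
        parts += ["--build-arg", f"{key}={value}"]
    parts.append(".")
    return " ".join(shlex.quote(p) for p in parts)

# Bake a diffusers model into the image at build time (option 1 above);
# swap MODEL_ID for CHECKPOINT_URL to convert a .ckpt instead (option 2).
print(docker_build_cmd("my-diffusers-api", {
    "MODEL_ID": "runwayml/stable-diffusion-v1-5",
}))
```

Run the printed command from the repo root; add `HF_AUTH_TOKEN` to the build-args for gated models.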
 
-1. `git fetch upstream`
-1. `git merge upstream/main`
-1. `git push`
-
-Or, if you're confident, do it in one step with no confirmations:
+## Troubleshooting
 
-`git fetch upstream && git merge upstream/main --no-edit && git push`
+* **403 Client Error: Forbidden for url**
 
-Check `scripts/permute.sh` and your git remotes; some URLs are hardcoded. I'll
-make this easier in a future release.
+  Make sure you've accepted the license on the model card of the HuggingFace model
+  specified in `MODEL_ID`, and that you correctly passed `HF_AUTH_TOKEN` to the
+  container.
 
 ## Event logs / performance data
 