
Commit 0a4d147

Update README.md
Fixed docker image versions and some details in the guide.
1 parent 76ffecb commit 0a4d147

1 file changed

README.md (5 additions & 5 deletions)
@@ -6,13 +6,13 @@ Ready-to-use Docker Image with this repo's code: https://hub.docker.com/r/3wad/r

 ## How to prepare Network Volume
 - Create a RunPod network volume. 15GB is just enough for the generic Fooocus with the Juggernaut model. You can increase its size any time if you need additional models, LoRAs, etc., but unfortunately it cannot be reduced back.
- - Create a custom Pod Template and use the konieshadow/fooocus-api:latest image. I went with 30GB disk sizes, mount path /workspace, and expose http 8888 and tcp 22.
- - Run the network volume with the custom fooocus-api image you've just created. You don't need a strong GPU pod, the installation is CPU and download-intensive, but be aware that some older-gen pods might not support the latest CUDA versions. Let it download and install everything. After the Juggernaut model is downloaded, use the connect button to load into the Fooocus-API docs running on the pod's 8888 port. Here you should try all the API methods you plan to use because additional models are downloaded once you run inpaint, outpaint, upscale, vary and image inputs (canny, face swap etc.) endpoints for the first time.
+ - Create a custom Pod Template and use the `konieshadow/fooocus-api:v0.3.26` image. I went with a 30GB disk size, mount path /workspace, and exposed HTTP port 8888 and TCP port 22.
+ - Run the network volume with the custom fooocus-api image you've just created. You don't need a strong GPU pod; the installation is CPU- and download-intensive, but be aware that some older-gen pods might not support the required CUDA versions. Let it download and install everything. After the Juggernaut model is downloaded, use the Connect button to open the Fooocus-API docs running on the pod's port 8888. Here you should try all the API methods you plan to use, not only to verify they work, but also because additional models are downloaded the first time you run the inpaint, outpaint, upscale, vary, and image-input (canny, face swap, etc.) endpoints.
 - After that you are ready to connect to the pod's console and use `cp -r /app/* /workspace/` to copy everything into the persistent network volume.
 - Once everything is copied successfully, you can terminate the pod. You have the network volume ready.
 ---
- - Now you need to create your custom docker image that will run on the actual serverless API. Use files in this repo to build your own. Feel free to adjust handler.py based on how you want to make your requests and it's parameters.
+ - Now you can use our premade image `3wad/runpod-fooocus-api:0.2.4` and skip the next step, OR create your own custom Docker image from this repo that will run on the actual serverless API. Feel free to adjust handler.py based on how you want to make your requests and its parameters, or to add additional features.
 - Once you build it, upload it to Docker Hub.
- - Now you create a custom Serverless Pod Template using the Docker Hub image you've just uploaded. Active container disk should be slightly bigger than the size of the worker docker image.
- - Create a new Serverless API Endpoint. Make sure to choose your Docker Hub image and not the konieshadow/fooocus-api:latest from step 2. In Advanced settings choose your created network volume.
+ - Now create a custom Serverless Pod Template using the Docker Hub image you've just uploaded (or our premade one). The active container disk should be slightly bigger than the worker Docker image.
+ - Create a new Serverless API Endpoint. Make sure to choose your (or our) Docker Hub image and not the `konieshadow/fooocus-api` image from step 2. In the Advanced settings, choose the network volume you created.
 - Other settings are your choice, but I personally found that using 4090/L4 GPUs + FlashBoot is the most cost-effective option. In frequent use, the 4090 is able to return an image in ~8s including cold start, making it ~4x cheaper to run this on RunPod than, for example, using the DALL-E 3 API. This can of course vary based on datacenter location and GPU availability.
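For the guide's step about trying the API methods on the running pod, a quick way to trigger the extra model downloads is to call each endpoint once from a terminal. A minimal sketch using curl, assuming the pod's port 8888 is reachable through RunPod's HTTP proxy and that your Fooocus-API version exposes a `/v1/generation/text-to-image` route; both the URL format and the payload are assumptions, so check the docs page served on port 8888 for the exact paths:

```bash
# Hypothetical pod URL; replace <POD_ID> with your own pod's ID (assumed RunPod proxy format).
POD_URL="https://<POD_ID>-8888.proxy.runpod.net"

# One request against the assumed text-to-image route so any missing models get pulled.
curl -s -X POST "$POD_URL/v1/generation/text-to-image" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a lighthouse at sunset, test render"}'
```

Repeating a call like this once for the inpaint, outpaint, upscale, vary, and image-input endpoints you plan to use puts their additional models on the network volume before you move on.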
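The copy into the network volume can be done from the pod's web terminal (or over the exposed TCP port 22). A small sketch of that step, with a rough size check added as an assumption rather than something the guide requires:

```bash
# Inside the running pod: copy the installed app and downloaded models onto the volume.
cp -r /app/* /workspace/

# Optional sanity check that the files actually landed on the network volume.
du -sh /workspace
```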
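If you build your own worker image instead of using the premade one, the build-and-upload step is a standard Docker Hub push. A sketch, assuming you run it from the repo root and substitute your own Docker Hub username and tag:

```bash
# Placeholder username; the tag is only an example.
DOCKER_USER="your-dockerhub-username"

# Build the worker image from this repo and push it so RunPod can pull it.
docker build -t "$DOCKER_USER/runpod-fooocus-api:0.2.4" .
docker login
docker push "$DOCKER_USER/runpod-fooocus-api:0.2.4"
```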
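Once the Serverless API Endpoint exists, requests go through RunPod's serverless API rather than the pod URL. A minimal sketch of a synchronous call with a Bearer API key; the fields inside `input` depend entirely on how handler.py parses the request, so treat the payload below as a placeholder:

```bash
# Both values come from the RunPod console; placeholders here.
ENDPOINT_ID="your-endpoint-id"
RUNPOD_API_KEY="your-api-key"

# Synchronous call: waits for a worker (cold start included) and returns the result.
curl -s -X POST "https://api.runpod.ai/v2/$ENDPOINT_ID/runsync" \
  -H "Authorization: Bearer $RUNPOD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input": {"prompt": "a lighthouse at sunset, test render"}}'
```

The ~8s round trip mentioned above, cold start included, is the kind of latency you can measure with a call like this.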
