ComfyUI

Run your ComfyUI workflow as an API


Example usage

Deploy any ComfyUI workflow as an API endpoint. To understand how ComfyUI works with Truss, please read this README.

1. Export the workflow in the API JSON format and place it inside data/comfy_ui_workflow.json

{
  "3": {
    "inputs": {
      "seed": "{{seed}}",
      "steps": 40,
      "cfg": 7,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1,
      "model": [
        "14",
        0
      ],
      "positive": [
        "10",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "5",
        0
      ]
    },
    "class_type": "KSampler"
  }
  ...
  ...
  ...

2. Define the inputs to the model using handlebars templating: {{variable_name}}. For example, if one of your inputs is a prompt, update the data/comfy_ui_workflow.json file like so:

"6": {
  "inputs": {
    "text": "{{positive_prompt}}",
    "clip": [
      "14",
      1
    ]
  },
  "class_type": "CLIPTextEncode"
}
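Conceptually, each placeholder in the workflow JSON is replaced with the matching value sent in the request body before the workflow runs. The sketch below is a minimal illustration of that substitution, not the actual Truss implementation:

```python
import json

# A fragment of the workflow template containing a handlebars placeholder.
workflow_template = """
{
  "6": {
    "inputs": {
      "text": "{{positive_prompt}}",
      "clip": ["14", 1]
    },
    "class_type": "CLIPTextEncode"
  }
}
"""

def fill_template(template: str, values: dict) -> dict:
    """Replace each {{name}} placeholder with its value, then parse the JSON."""
    for name, value in values.items():
        template = template.replace("{{" + name + "}}", str(value))
    return json.loads(template)

workflow = fill_template(workflow_template, {"positive_prompt": "a steampunk city"})
# workflow["6"]["inputs"]["text"] is now "a steampunk city"
```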

3. Define your models inside data/model.json. Each model needs:

  • url: where the model can be downloaded from

  • path: where the model should be stored inside ComfyUI

Custom nodes can also be defined like so:

{
  "url": "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite",
  "path": "custom_nodes"
}

Custom nodes should be placed at the top of the file and the models, LoRAs, upscalers, etc. should be placed afterwards.

[
  {
      "url": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors",
      "path": "models/checkpoints/sd_xl_base_1.0.safetensors"
  },
  {
      "url": "https://huggingface.co/diffusers/controlnet-canny-sdxl-1.0/resolve/main/diffusion_pytorch_model.fp16.safetensors",
      "path": "models/controlnet/diffusers_xl_canny_full.safetensors"
  }
]
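The Truss handles these downloads at build time, but as a rough mental model (the function name here is illustrative, not part of the Truss API), each entry is either a git repository cloned into custom_nodes or a weight file fetched to its destination path:

```python
import os

def plan_setup(entries: list, comfy_dir: str = "ComfyUI") -> list:
    """Classify each data/model.json entry as a git clone (custom node)
    or a direct file download, and compute its destination path."""
    plan = []
    for entry in entries:
        dest = os.path.join(comfy_dir, entry["path"])
        action = "git clone" if entry["path"] == "custom_nodes" else "download"
        plan.append({"action": action, "url": entry["url"], "dest": dest})
    return plan

entries = [
    {"url": "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite",
     "path": "custom_nodes"},
    {"url": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors",
     "path": "models/checkpoints/sd_xl_base_1.0.safetensors"},
]
for step in plan_setup(entries):
    print(step["action"], "->", step["dest"])
```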

Input
import requests
import os
import base64
import random

# Replace the empty string with your model id below
model_id = ""
baseten_api_key = os.environ["BASETEN_API_KEY"]

values = {
  "positive_prompt": "A highly detailed photo of a modern steampunk city, complete with elaborate gears, pipes, and machinery, 4k",
  "negative_prompt": "blurry, text, low quality",
  "controlnet_image": "https://storage.googleapis.com/logos-bucket-01/baseten_logo.png",
  "seed": random.randint(1, 1000000)
}

# Call model endpoint
res = requests.post(
    f"https://model-{model_id}.api.baseten.co/production/predict",
    headers={"Authorization": f"Api-Key {baseten_api_key}"},
    json={"workflow_values": values},
)

res = res.json()
preamble = "data:image/png;base64,"
output = base64.b64decode(res["result"][1]["data"].replace(preamble, ""))

# Save image to file
with open("comfyui.png", "wb") as img_file:
    img_file.write(output)
os.system("open comfyui.png")
JSON output
{
    "result": [
        {
            "node_id": 16,
            "data": "iVBOR...",
            "format": "png"
        },
        {
            "node_id": 9,
            "data": "iNALP...",
            "format": "png"
        }
    ]
}
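The input example above decodes only a single entry from the result list, but every output node's image is returned. A small helper (illustrative, not part of the API) can decode all of them at once:

```python
import base64

PREAMBLE = "data:image/png;base64,"

def decode_results(result: list) -> dict:
    """Map each node_id in the response to its decoded image bytes."""
    return {
        item["node_id"]: base64.b64decode(item["data"].replace(PREAMBLE, ""))
        for item in result
    }

# Example with dummy base64 data standing in for real image bytes.
sample = [{"node_id": 9, "data": PREAMBLE + "aVZCT1I=", "format": "png"}]
images = decode_results(sample)
# images[9] holds the decoded bytes, here b"iVBOR"
```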