Frameworks

swm includes declarative framework definitions for common AI tools. Each framework is installed with `swm setup install` and managed with `swm setup start`/`swm setup stop`.

| Framework       | Slug         | Category            | Default port |
| --------------- | ------------ | ------------------- | ------------ |
| vLLM            | `vllm`       | LLM inference       | 8000         |
| Open WebUI      | `open-webui` | LLM chat UI         | 8080         |
| Ollama          | `ollama`     | LLM engine          | 11434        |
| ComfyUI         | `comfyui`    | Image/video gen     | 8188         |
| SwarmUI         | `swarmui`    | Generation UI       | 7801         |
| Axolotl         | `axolotl`    | LLM fine-tuning     | n/a          |
| H2O LLM Studio  | `llm-studio` | No-code fine-tuning | 10101        |
```shell
swm setup list                       # see all frameworks
swm setup install vllm runpod:abc123 # install
swm setup start vllm runpod:abc123   # start (background)
swm setup stop vllm runpod:abc123    # stop
```
The framework definitions also handle a few details automatically:

- Tensor parallelism: vLLM auto-detects the GPU count and sets `--tensor-parallel-size` accordingly
- SSH tunnels: if a port isn't externally mapped, swm opens a background SSH tunnel to it
- Health probing: after starting a framework, swm probes its health endpoint with retries
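The tunnel and health-probe behavior can be approximated by hand when debugging. A minimal sketch; the hostname, port, and `/health` path below are illustrative, not swm's actual internals:

```shell
# Forward the unmapped port over a background SSH tunnel (host is hypothetical):
#   ssh -f -N -L 8000:localhost:8000 root@pod.example.com

# Then poll the endpoint until it answers. A generic retry helper:
retry() {
  # usage: retry MAX_TRIES DELAY_SECONDS CMD [ARGS...]
  n=$1; delay=$2; shift 2
  i=1
  while ! "$@"; do
    [ "$i" -ge "$n" ] && return 1   # give up after n attempts
    i=$((i + 1))
    sleep "$delay"
  done
}

# e.g. wait up to 60 seconds for the server to come up:
#   retry 30 2 curl -sf http://localhost:8000/health
```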

For tools not in the built-in list, use `swm run` to install and manage them manually:

```shell
swm run runpod:abc123 "pip install my-framework"
swm run runpod:abc123 "nohup my-server --port 8000 > /workspace/server.log 2>&1 &"
```
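The `nohup … &` pattern detaches the server from the SSH session so it keeps running after `swm run` returns. A local illustration with a stand-in command (`sleep` plays the role of the server; paths are arbitrary):

```shell
# Launch in the background, detached from the terminal, with all output logged:
nohup sleep 5 > /tmp/demo-server.log 2>&1 &
echo $! > /tmp/demo-server.pid

# Later, confirm the process is alive and inspect its log:
kill -0 "$(cat /tmp/demo-server.pid)" && echo "still running"
tail -n 5 /tmp/demo-server.log
```

The same checks work remotely by wrapping them in `swm run`, e.g. tailing `/workspace/server.log` to see why a server failed to start.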