
# swm pod

Full lifecycle management for GPU pod instances.

| Command | Description |
| --- | --- |
| `pod create` | Provision a new GPU instance |
| `pod list` | List instances across all providers |
| `pod status <id>` | Detailed status of one instance |
| `pod start <id>` | Start a stopped instance |
| `pod stop <id>` | Stop (pause billing, preserve volume) |
| `pod terminate <id>` | Permanently destroy instance |
| `pod down <id>` | Push workspace + terminate |
| `pod prune` | Remove stale config entries |
```sh
swm pod create -p runpod -g "H100 SXM" -n my-project \
  --cuda 12.8 --gpu-count 4 --volume 200 \
  --lifecycle auto-down --idle-timeout 30 -y
```
| Option | Description |
| --- | --- |
| `-p, --provider` | Provider (required) |
| `-g, --gpu` | GPU type |
| `-n, --name` | Pod name (required) |
| `-w, --workspace` | Workspace to restore from storage |
| `-b, --bucket` | Storage bucket (`provider:bucket`) |
| `--no-storage` | Skip storage configuration entirely |
| `--volume` | Persistent volume size in GB |
| `--disk` | Container disk size in GB |
| `--gpu-count` | Number of GPUs |
| `--image` | Docker image (provider default if empty) |
| `--cuda X.Y` | Auto-pick the newest provider image matching this CUDA major.minor (e.g. 12.8). Ignored if `--image` is set. |
| `--cloud-type` | RunPod cloud type: `SECURE`, `COMMUNITY`, `ALL` |
| `--ports` | Ports to expose (default: `22/tcp,8888/http,8188/http`) |
| `--region` | Datacenter/region ID |
| `--lifecycle` | Guard mode: `auto-down`, `auto-stop`, `remind`, `manual` |
| `--idle-timeout N` | Idle timeout in minutes |
| `-x, --exclude` | Glob pattern to exclude from pull (repeatable) |
| `-y` | Skip confirmation prompt |
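The restore-related flags (`-w`, `-b`, `-x`) can be combined to bring an existing workspace up on a new pod. A sketch, where the workspace name, bucket name, and exclude patterns are all placeholders:

```sh
# Restore a previously pushed workspace on the new pod
# (workspace/bucket names are hypothetical), skipping
# large artifacts on the initial pull:
swm pod create -p runpod -g "H100 SXM" -n my-project \
  -w my-workspace -b runpod:my-bucket \
  -x "*.ckpt" -x "wandb/*" -y
```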

If a workspace is configured, `pod create` runs the full bootstrap over SSH:

  1. Install s5cmd and configure storage
  2. Pull the workspace (or initialize empty for a new one)
  3. Start the inotify watcher
  4. Start the auto-sync daemon (default interval 60s)
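When the bootstrap needs to be performed by hand, the steps above appear to map onto the standalone `swm` commands that the failure message below also prints. A sketch, where the pod id `runpod:abc123` is a placeholder and the exact step-to-command mapping is an assumption:

```sh
# Hypothetical manual bootstrap for pod runpod:abc123:
swm setup storage runpod:abc123   # step 1: install s5cmd, configure storage
swm sync pull runpod:abc123       # step 2: pull the workspace
swm sync auto runpod:abc123       # steps 3-4: watcher + auto-sync daemon (assumed)
```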

If any step fails, `swm` persists the pod ↔ workspace mapping unconditionally and prints exactly the commands needed to retry the missing steps:

```
⚠ Bootstrap incomplete. Re-run the remaining steps when ready:
  Storage configuration: swm setup storage runpod:abc123
  Workspace pull:        swm sync pull runpod:abc123
  Auto-sync start:       swm sync auto runpod:abc123
```

The `--cuda` flag complements the **Min CUDA** column in `swm gpus`. `swm` cross-checks the resolved image against the GPU's minimum CUDA and warns if the image is too old for the hardware.
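A typical workflow is to check the **Min CUDA** column first, then pass a matching version when provisioning; the GPU type and version below follow the earlier example and the pod name is a placeholder:

```sh
swm gpus                # inspect GPU types and their Min CUDA requirements
swm pod create -p runpod -g "H100 SXM" -n cuda-check --cuda 12.8 -y
```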

Push workspace to storage and terminate in one command:

```sh
swm pod down my-project
swm pod down my-project --no-sync   # terminate without pushing
```
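A sketch of the day-to-day lifecycle using only the commands in the table above; the pod name is the one from the earlier create example:

```sh
swm pod stop my-project    # pause billing, volume preserved
swm pod start my-project   # resume work later
swm pod down my-project    # finished: push workspace, then terminate
swm pod prune              # drop stale config entries afterwards
```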