# swm pod
Full lifecycle management for GPU pod instances.
## Subcommands

| Command | Description |
|---|---|
| `pod create` | Provision a new GPU instance |
| `pod list` | List instances across all providers |
| `pod status <id>` | Detailed status of one instance |
| `pod start <id>` | Start a stopped instance |
| `pod stop <id>` | Stop (pause billing, preserve volume) |
| `pod terminate <id>` | Permanently destroy instance |
| `pod down <id>` | Push workspace + terminate |
| `pod prune` | Remove stale config entries |
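The stop/start pair lets you pause billing without losing data, since the persistent volume survives a stop. A minimal sketch of that cycle, using `<id>` as a placeholder for a real instance ID from `swm pod list`:

```shell
swm pod stop <id>        # pause billing; persistent volume is preserved
swm pod start <id>       # resume later from the same volume
swm pod terminate <id>   # permanently destroy when truly done
```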
## pod create

```shell
swm pod create -p runpod -g "H100 SXM" -n my-project \
  --cuda 12.8 --gpu-count 4 --volume 200 \
  --lifecycle auto-down --idle-timeout 30 -y
```

| Option | Description |
|---|---|
| `-p, --provider` | Provider (required) |
| `-g, --gpu` | GPU type |
| `-n, --name` | Pod name (required) |
| `-w, --workspace` | Workspace to restore from storage |
| `-b, --bucket` | Storage bucket (`provider:bucket`) |
| `--no-storage` | Skip storage configuration entirely |
| `--volume` | Persistent volume size in GB |
| `--disk` | Container disk size in GB |
| `--gpu-count` | Number of GPUs |
| `--image` | Docker image (provider default if empty) |
| `--cuda X.Y` | Auto-pick the newest provider image matching this CUDA major.minor (e.g. 12.8). Ignored if `--image` is set. |
| `--cloud-type` | RunPod cloud type: SECURE, COMMUNITY, ALL |
| `--ports` | Ports to expose (default: `22/tcp,8888/http,8188/http`) |
| `--region` | Datacenter/region ID |
| `--lifecycle` | Guard mode: auto-down, auto-stop, remind, manual |
| `--idle-timeout N` | Idle timeout in minutes |
| `-x, --exclude` | Glob pattern to exclude from pull (repeatable) |
| `-y` | Skip confirmation prompt |
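A sketch of a create that restores an existing workspace from storage, combining the `-w`/`-b` flags documented above (the workspace name `my-project` and bucket `runpod:my-bucket` are illustrative, not real values):

```shell
# Restore workspace "my-project" from the bucket on creation,
# excluding cache directories from the initial pull
swm pod create -p runpod -g "H100 SXM" -n my-project \
  -w my-project -b runpod:my-bucket \
  -x "**/__pycache__/**" -x "*.ckpt" \
  --volume 200 --lifecycle auto-stop --idle-timeout 30 -y
```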
If a workspace is configured, `pod create` runs the full bootstrap over SSH:

- Install `s5cmd` and configure storage
- Pull the workspace (or initialize empty for a new one)
- Start the inotify watcher
- Start the auto-sync daemon (default interval 60s)
If any step fails, swm persists the pod ↔ workspace mapping unconditionally and prints exactly the commands you need to retry the missing steps:

```
⚠ Bootstrap incomplete. Re-run the remaining steps when ready:
  Storage configuration: swm setup storage runpod:abc123
  Workspace pull:        swm sync pull runpod:abc123
  Auto-sync start:       swm sync auto runpod:abc123
```

The `--cuda` flag complements the Min CUDA column in `swm gpus`. swm cross-checks the resolved image against the GPU's minimum CUDA and warns if the image is too old for the hardware.
## pod down

Push workspace to storage and terminate in one command:

```shell
swm pod down my-project             # push workspace, then terminate
swm pod down my-project --no-sync   # terminate without pushing
```
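A typical end-of-session flow, using only the subcommands documented above (the pod name `my-project` is illustrative):

```shell
swm pod list              # confirm which instances are still running
swm pod down my-project   # push the workspace to storage, then terminate
swm pod prune             # remove stale config entries left behind
```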