Getting Started (CLI)
Install
```sh
# Homebrew (macOS)
brew tap swm-gpu/swm && brew install swm

# or Python (3.11+)
pipx install swm-gpu
```

Configure a provider
Add at least one GPU cloud API key:
```sh
swm config set runpod.api_key YOUR_RUNPOD_KEY
```

Other providers: `vastai.api_key`, `lambda.api_key`, `aws.access_key` + `aws.secret_key`, etc.
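If you keep credentials in environment variables, a small loop can register whichever ones are set. This is an illustrative sketch, not part of swm: the variable naming scheme (`RUNPOD_API_KEY`, `VASTAI_API_KEY`, …) and the provider list are assumptions; only `swm config set <provider>.api_key` comes from this page.

```sh
# Hypothetical helper: register each provider key already exported in the
# environment, e.g. RUNPOD_API_KEY, VASTAI_API_KEY, LAMBDA_API_KEY.
for provider in runpod vastai lambda; do
  # map "runpod" -> "RUNPOD_API_KEY"
  var="$(printf '%s' "$provider" | tr '[:lower:]' '[:upper:]')_API_KEY"
  eval "key=\${$var:-}"               # read the variable whose name is in $var
  [ -n "$key" ] && swm config set "$provider.api_key" "$key"
done
```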
Configure storage (optional)
For workspace sync, configure an S3-compatible storage backend:
```sh
swm config set b2.key_id YOUR_KEY_ID
swm config set b2.app_key YOUR_APP_KEY
swm config set b2.bucket my-swm-bucket
```

Enable shell autocomplete
swm has built-in tab completion for commands, options, and pod IDs. Add one line to your shell profile:
```sh
# bash (~/.bashrc)
eval "$(_SWM_COMPLETE=bash_source swm)"

# zsh (~/.zshrc)
eval "$(_SWM_COMPLETE=zsh_source swm)"

# fish (~/.config/fish/config.fish)
eval (env _SWM_COMPLETE=fish_source swm)
```

After reloading your shell, `swm <TAB>` completes commands and `swm pod stop <TAB>` completes pod IDs.
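The `eval` form runs swm at every shell startup, which can slow your prompt. The `_SWM_COMPLETE` pattern looks like Click-style completion, which supports caching the script to a file and sourcing that instead; the file path here is an arbitrary choice, and zsh is shown as one example.

```sh
# Cache the completion script once (re-run after upgrading swm) ...
_SWM_COMPLETE=zsh_source swm > ~/.swm-complete.zsh

# ... then in ~/.zshrc, replace the eval line with:
#   source ~/.swm-complete.zsh
```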
Search for GPUs
```sh
swm gpus -g h100 --max-price 3.00 --sort price
```

Create your first pod
```sh
swm pod create -p runpod -g "H100 SXM" -n my-first-pod \
  --lifecycle auto-down --idle-timeout 30 -y
```

Install a framework
```sh
swm setup install vllm runpod:YOUR_POD_ID
swm setup start vllm runpod:YOUR_POD_ID
```

Push your workspace and terminate
```sh
swm pod down my-first-pod
```

Resume later on any cloud:
```sh
swm pod create -p lambda -g a100 -n my-first-pod -w my-first-pod -y
```
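The down/up cycle can be wrapped in two small shell functions. This is a sketch built only from the commands shown on this page; the function names and the default provider/GPU fallbacks (`runpod`, `a100`) are arbitrary assumptions.

```sh
# swm_down NAME                  -- push the workspace and terminate the pod
# swm_up   NAME [PROVIDER] [GPU] -- recreate the pod from its synced workspace
swm_down() { swm pod down "$1"; }
swm_up()   { swm pod create -p "${2:-runpod}" -g "${3:-a100}" -n "$1" -w "$1" -y; }
```

For example, `swm_down my-first-pod` at the end of a session, then `swm_up my-first-pod lambda` to resume on Lambda later.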