Workspaces & Storage

Every swm pod has a /workspace directory — the persistent volume. This is where your code, models, checkpoints, and venvs live. Container disk (outside /workspace) is ephemeral and wiped on stop.

swm supports three S3-compatible storage providers:

Backend         Slug   Endpoint
Backblaze B2    b2     Auto-detected
AWS S3          s3     Native
Google GCS      gcs    storage.googleapis.com

Credentials are never written to pods — they’re injected as transient environment variables per command.
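
Conceptually, each remote operation resembles prefixing the credentials to a single command, so nothing lands in a shell profile or on the pod's disk. A minimal sketch of the pattern (placeholder values; the actual injection mechanism is internal to swm):

Terminal window
# Illustrative only: credentials live for one process, never in a file or profile
AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy s5cmd ls s3://my-bucket/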

swm sync push uses a tiered strategy, trying the cheapest change-detection method first:

  1. Tier 1 (instant): inotify watcher tracks changed files. Push uploads only what changed.
  2. Tier 2 (seconds): If watcher isn’t running, find -newer scans for modifications since last push.
  3. Tier 3 (full): First push — parallel upload with s5cmd (512 workers).
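
Tiers 2 and 3 map onto standard tools. A rough sketch, assuming a hypothetical .swm/last-push stamp file and bucket layout (the real bookkeeping is swm-internal):

Terminal window
# Tier 2 sketch: list files modified since the last push (hypothetical stamp path)
find /workspace -type f -newer /workspace/.swm/last-push
# Tier 3 sketch: first push, everything uploaded in parallel
s5cmd --numworkers 512 cp /workspace/ s3://my-bucket/my-ws/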

swm pod create doesn’t just pull your workspace — it leaves the pod with a running auto-sync daemon. The daemon tails the inotify watcher log every 60 seconds and pushes new/changed files (and removes deleted ones) without you running anything. Inspect or change it with:

Terminal window
swm sync auto runpod:abc123 --status # daemon state + recent log
swm sync auto runpod:abc123 -i 30 # change interval to 30s
swm sync auto runpod:abc123 --stop # stop the daemon
swm sync auto runpod:abc123 # restart (default 60s)
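
Conceptually, the daemon is a polling loop over the watcher's change log. A minimal sketch under assumed paths (the real daemon, its log location, and deletion handling are installed and supervised by swm):

Terminal window
# Illustrative loop only; swm manages the real daemon
LOG=/workspace/.swm/watcher.log # hypothetical path
while sleep 60; do
  [ -s "$LOG" ] || continue
  while IFS= read -r path; do # upload each changed file the watcher recorded
    s5cmd cp "$path" "s3://my-bucket/my-ws${path#/workspace}"
  done < "$LOG"
  : > "$LOG" # reset the log once the batch is pushed
done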

Safety check. The daemon refuses to start unless a prior sync pull or sync push has succeeded for this pod (marked by a push stamp on disk). Without that signal, a stray local deletion would propagate to storage and erase your remote copy. --force overrides the check; use it only when the pod is definitively the authoritative copy.
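
The gate amounts to a file-existence check. A sketch, with a hypothetical stamp path:

Terminal window
# Hypothetical stamp path; the real marker is swm-internal
if [ ! -f /workspace/.swm/push-stamp ]; then
  echo "refusing to start: no prior successful sync for this pod (use --force to override)" >&2
  exit 1
fi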

Self-healing. The watcher’s exclude regex is fingerprinted on the pod. When swm is upgraded with new excludes (for example, ignoring a new log file), long-lived watchers detect the drift on the next auto-sync cycle and restart with the latest configuration. No manual swm sync watch --stop && swm sync watch needed.
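
You can picture drift detection as comparing a hash of the current exclude regex against one recorded at watcher startup. A sketch with hypothetical file and variable names:

Terminal window
# Illustrative: restart the watcher when its exclude-regex fingerprint is stale
current=$(printf '%s' "$EXCLUDE_REGEX" | sha256sum | cut -d' ' -f1)
recorded=$(cat /workspace/.swm/watcher.fingerprint 2>/dev/null)
if [ "$current" != "$recorded" ]; then
  swm sync watch --stop && swm sync watch
fi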

By default, all sync paths are non-destructive: sync push, sync pull, and pod down never remove files from your bucket. Deletions are opt-in:

  • swm sync push --delete propagates local deletions in a single push (requires an active watcher so swm has an authoritative deletion log).
  • swm sync auto propagates local deletions on every cycle (gated by the safety check above).
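
A one-off deletion pass using the flag above:

Terminal window
swm sync push runpod:abc123 --delete # also propagate local deletions to the bucket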

For workspaces with 100k+ small files:

Terminal window
swm sync push runpod:abc123 --tar
swm sync pull lambda:def456 --tar

Packs everything into a single compressed tarball with pigz (parallel gzip). One S3 object instead of hundreds of thousands of API calls.
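
The hand-rolled equivalent, assuming a recent s5cmd with the pipe subcommand and a bucket path of your choosing:

Terminal window
# Roughly what --tar does: one compressed stream, one S3 object
tar -C /workspace -cf - . | pigz -p "$(nproc)" | s5cmd pipe s3://my-bucket/my-ws.tar.gz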

Created the pod with --no-storage, or swm pod create’s SSH probe timed out before bootstrap finished? Attach storage in one shot:

Terminal window
swm setup workspace runpod:abc123 # ws name = pod name
swm setup workspace runpod:abc123 -n my-ws # custom name
swm setup workspace runpod:abc123 -b b2:my-bucket # explicit bucket

This installs s5cmd, configures storage, pulls (or initializes) the workspace, persists the mapping in config, and starts auto-sync. If any step fails, swm prints the recovery commands so you can pick up where it left off.

Terminal window
swm pod down my-project # push + terminate
swm pod create -w my-project # pull workspace on new pod

Before pulling, swm queries the bucket to estimate total size, compares against pod disk capacity, and offers interactive directory exclusion if the workspace won’t fit.
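
To approximate the check by hand (assumed bucket and prefix):

Terminal window
s5cmd du --humanize 's3://my-bucket/my-ws/*' # remote workspace size
df -h /workspace # disk capacity on the pod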