# Workspaces & Storage
## The workspace model

Every swm pod has a `/workspace` directory: the persistent volume. This is where your code, models, checkpoints, and venvs live. Container disk (outside `/workspace`) is ephemeral and wiped on stop.
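For example, anything you want to survive a stop should live under `/workspace` (an illustrative layout, not a requirement of swm):

```sh
# anything under /workspace survives a pod stop; everything else is wiped
python -m venv /workspace/.venv   # persistent virtualenv
mkdir -p /workspace/checkpoints   # persistent checkpoints
touch /tmp/scratch.log            # container disk: gone after a stop
```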
## Storage backends

swm supports three S3-compatible storage providers:
| Backend | Slug | Endpoint |
|---|---|---|
| Backblaze B2 | b2 | Auto-detected |
| AWS S3 | s3 | Native |
| Google Cloud Storage | gcs | storage.googleapis.com |
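The slug selects the backend in bucket references. Assuming the same `slug:bucket` form that `swm setup workspace -b` accepts (shown later on this page):

```sh
swm setup workspace runpod:abc123 -b b2:my-bucket    # Backblaze B2
swm setup workspace runpod:abc123 -b s3:my-bucket    # AWS S3
swm setup workspace runpod:abc123 -b gcs:my-bucket   # Google Cloud Storage
```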
Credentials are never written to pods — they’re injected as transient environment variables per command.
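Conceptually, each remote operation runs with credentials scoped to that single invocation instead of a config file on the pod (a sketch; the variable names follow the AWS SDK convention that s5cmd reads):

```sh
# nothing is persisted to ~/.aws or the pod's environment
AWS_ACCESS_KEY_ID="$KEY_ID" AWS_SECRET_ACCESS_KEY="$SECRET" \
  s5cmd sync /workspace/ s3://my-bucket/my-ws/
```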
## Three-tier sync

`swm sync push` uses a tiered strategy for optimal performance:
- Tier 1 (instant): an inotify watcher tracks changed files; push uploads only what changed.
- Tier 2 (seconds): if the watcher isn't running, a `find -newer` scan catches modifications since the last push (see the sketch after this list).
- Tier 3 (full): the first push is a parallel upload with s5cmd (512 workers).
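A minimal sketch of the tier-2 fallback, assuming a hypothetical stamp file that records when the last push finished:

```sh
# list files modified since the last successful push (stamp path is hypothetical)
find /workspace -type f -newer /workspace/.swm/last-push-stamp
```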
## Continuous auto-sync

`swm pod create` doesn't just pull your workspace; it leaves the pod with a running auto-sync daemon. The daemon tails the inotify watcher log every 60 seconds and pushes new/changed files (and removes deleted ones) without you running anything. Inspect or change it with:

```sh
swm sync auto runpod:abc123 --status   # daemon state + recent logs
swm sync auto runpod:abc123 -i 30      # change interval to 30s
swm sync auto runpod:abc123 --stop     # stop the daemon
swm sync auto runpod:abc123            # restart (default 60s)
```

**Safety check.** The daemon refuses to start unless a prior `sync pull` or `sync push` succeeded for this pod (marked by a push stamp on disk). Without that signal, a stray local deletion would propagate to storage and erase your remote copy. `--force` overrides this; use it only when the pod is definitively the authoritative copy.
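Each daemon cycle behaves roughly like this loop (a conceptual sketch, not the actual implementation):

```sh
# illustrative: push whatever the watcher logged, then sleep for the interval
# (the real daemon also propagates deletions, gated by the safety check)
while true; do
  swm sync push runpod:abc123   # tier 1: uploads only watcher-recorded changes
  sleep 60
done
```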
**Self-healing.** The watcher's exclude regex is fingerprinted on the pod. When swm is upgraded with new excludes (for example, ignoring a new log file), long-lived watchers detect the drift on the next auto-sync cycle and restart with the latest configuration. No manual `swm sync watch --stop && swm sync watch` needed.
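The drift check can be pictured like this (a sketch; the fingerprint location and variable name are hypothetical):

```sh
# compare the fingerprint the running watcher started with against the one
# the current swm build would produce
current=$(printf '%s' "$EXCLUDE_REGEX" | sha256sum | cut -d' ' -f1)
running=$(cat /workspace/.swm/watcher.fingerprint 2>/dev/null)
if [ "$current" != "$running" ]; then
  swm sync watch --stop && swm sync watch   # restart with the new excludes
fi
```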
## Deletion semantics

By default, all sync paths are non-destructive: `sync push`, `sync pull`, and `pod down` never remove files from your bucket. Deletions are opt-in:

- `swm sync push --delete` propagates local deletions in a single push (requires an active watcher so swm has an authoritative deletion log).
- `swm sync auto` propagates local deletions on every cycle (gated by the safety check above).
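For example, to mirror local deletions to the bucket in a one-shot push (the watcher must already be running):

```sh
swm sync push runpod:abc123 --delete   # removes bucket copies of locally deleted files
```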
## Tar mode

For workspaces with 100k+ small files:

```sh
swm sync push runpod:abc123 --tar
swm sync pull lambda:def456 --tar
```

This packs everything into a single compressed tarball with pigz (parallel gzip): one S3 object instead of hundreds of thousands of API calls.
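Under the hood this amounts to something like the following (illustrative only; the actual object name and flags may differ):

```sh
# pack the workspace with parallel gzip, then upload a single object
tar -C /workspace -cf - . | pigz -p "$(nproc)" > /tmp/workspace.tar.gz
s5cmd cp /tmp/workspace.tar.gz s3://my-bucket/my-ws/workspace.tar.gz
```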
## Attaching a workspace later

Created the pod with `--no-storage`, or did `swm pod create`'s SSH probe time out before bootstrap finished? Attach storage in one shot:

```sh
swm setup workspace runpod:abc123                  # ws name = pod name
swm setup workspace runpod:abc123 -n my-ws         # custom name
swm setup workspace runpod:abc123 -b b2:my-bucket  # explicit bucket
```

This installs s5cmd, configures storage, pulls (or initializes) the workspace, persists the mapping in config, and starts auto-sync. If any step fails, swm prints the recovery commands so you can pick up where it left off.
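A typical fast-boot flow, assuming you skip the initial pull and attach later:

```sh
swm pod create --no-storage         # boot without pulling the workspace
swm setup workspace runpod:abc123   # attach storage once the pod is reachable
```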
## pod down and resume

```sh
swm pod down my-project        # push + terminate
swm pod create -w my-project   # pull workspace on new pod
```

## Preflight checks

Before pulling, swm queries the bucket to estimate the workspace's total size, compares it against pod disk capacity, and offers interactive directory exclusion if the workspace won't fit.
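The comparison is roughly these two numbers side by side (an illustrative approximation):

```sh
s5cmd du "s3://my-bucket/my-ws/*"   # estimated workspace size in the bucket
df -h /workspace                    # disk capacity on the pod
```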