# SPAPI on a VM (Traefik + FastAPI + Celery + Redis)

A production-ready, Dockerized deployment of the **biamazed** backend (aka **spapi**) running on a single Ubuntu VM with HTTPS, background workers, and scheduled jobs. This guide covers local development and the VM stack: what runs where, and how the frontend talks to it.
## What's in the box
```mermaid
flowchart LR
  U[Browser / Frontend] -->|HTTPS| T(Traefik 443/80)
  T --> API("FastAPI spapi-api:8000")
  API <-->|Broker/Results| R[(Redis)]
  W[Celery Worker] <--> R
  B[Celery Beat Scheduler] --> R
  API --- S[Supabase Postgres + Realtime]
  classDef infra fill:#eef,stroke:#99f;
  classDef svc fill:#efe,stroke:#9c9;
  class T,API,W,B,R infra
  class S svc
```
- **Traefik** terminates TLS (Let's Encrypt). Public ports: `:443` and `:80`.
- **FastAPI** (`spapi-api`) serves HTTP and exposes the API used by the frontend.
- **Redis** is the Celery broker/result backend.
- **Celery worker** runs ingestion/sync jobs.
- **Celery beat** schedules recurring jobs (hourly/daily) based on your prefs.
- **Supabase** hosts your tables (credentials vault, jobs, events, prefs) and the Realtime feeds consumed by the frontend.

Everything is orchestrated with Docker Compose and pinned behind a systemd unit (`spapi.service`) so it auto-starts on boot.
## File layout (server)

```
/opt/biamazed                      # repo root (git clone)
└─ backend/spapi
   ├─ Dockerfile                   # builds spapi-api image
   └─ deploy/
      ├─ docker-compose.yml        # Traefik + Redis + API + worker + beat
      ├─ spapi.service             # systemd unit (installs to /etc/systemd/system/)
      └─ .env                      # stack environment (NOT in git)
```
Control scripts (recommended):

- `bootstrap_spapi.sh` (you SCP this; not committed): installs Docker, sets up the firewall, clones the repo, writes a `.env` template, brings the stack up, and installs `spapi.service`.
- `stackctl.sh` (committed): small helper for `up`, `down`, `restart`, `logs`, and `status`.
## Environment (`deploy/.env`)

Fill these in with your values:
| Key | What it does |
|---|---|
| `API_HOST` | Public hostname Traefik routes to (e.g. `api.example.com`). |
| `ACME_EMAIL` | Email for Let's Encrypt certificate registration. |
| `SUPABASE_URL` / `NEXT_PUBLIC_SUPABASE_URL` | Your Supabase project URL. |
| `SUPABASE_SERVICE_ROLE_KEY` | Service role key (server-side use only). |
| `SPAPI_API_CORS` | Allowed CORS origins; `*` for development. |
| `SPAPI_FERNET_KEY` | Encryption key for stored credentials (generate with `python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"`). |
Internals: Redis URLs, Celery broker/result URLs are wired by compose.
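As a rough sketch of that wiring (variable names here are illustrative; the actual `docker-compose.yml` is authoritative), each service falls back to the shared Redis container, which resolves by its compose service name:

```python
# Sketch of how compose-style env wiring could resolve Celery URLs.
# The names REDIS_URL / CELERY_BROKER_URL / CELERY_RESULT_BACKEND are
# assumptions for illustration, not confirmed variable names from this stack.
def resolve_celery_urls(env: dict) -> dict:
    # "redis" is the service hostname on the compose network
    redis_url = env.get("REDIS_URL", "redis://redis:6379/0")
    return {
        "broker_url": env.get("CELERY_BROKER_URL", redis_url),
        "result_backend": env.get("CELERY_RESULT_BACKEND", redis_url),
    }

# With nothing overridden, broker and result backend share one Redis:
print(resolve_celery_urls({}))
```

The point is simply that you never set these by hand; compose injects them consistently into the API, worker, and beat containers.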
## Endpoints (what the frontend uses)

Base URL (public): `https://<API_HOST>`
| Method | Path | Purpose | Auth/Notes | Body (JSON) |
|---|---|---|---|---|
| GET | `/healthz` | Health check (no auth) | Returns `{"status":"ok"}` when the API is healthy. | — |
| GET | `/creds?organization_id=<uuid>` | Check whether encrypted keys exist for this org | 200 payload with `{stored, updated_at}`, or 404 JSON `{stored:false}` | — |
| POST | `/creds` | Save encrypted LWA keys (required for scheduling) | Requires `SPAPI_FERNET_KEY` configured server-side | `{"organization_id":"<uuid>","creds":{"client_id":"...","client_secret":"...","refresh_token":"..."}}` |
| DELETE | `/creds` | Remove encrypted keys | — | `{"organization_id":"<uuid>"}` |
| POST | `/test` | Test inline LWA keys (no store; quick check) | Used by the UI "Test" button | `{"organization_id":"<uuid>","creds":{...}}` |
| POST | `/enqueue` | Start a job (backfill, refresh, daily) | Valid inline creds or saved creds required | `{"job_type":"vendor_sales_history" \| "vendor_traffic_history" \| "vendor_rt_refresh" \| "vendor_daily_final", "organization_id":"<uuid>", "marketplaces":["DE","FR",...], "creds":{...}?, "options":{...}}` |
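To make the `/enqueue` body concrete, here is a small sketch that builds one (field names come from the table above; the zeroed UUID is a placeholder, not a real organization):

```python
import json

# Build a /enqueue request body as described in the endpoints table.
# The organization_id is a placeholder value for illustration only.
payload = {
    "job_type": "vendor_sales_history",  # one of the four job_type values
    "organization_id": "00000000-0000-0000-0000-000000000000",
    "marketplaces": ["DE", "FR"],
    # "creds" may be omitted entirely when keys were stored via POST /creds
    "options": {},
}

body = json.dumps(payload)
print(body)
```

A client POSTs this JSON to `https://<API_HOST>/enqueue`; with server-side stored keys, the inline `creds` object stays out of the browser entirely.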
Frontend pattern: the web app typically calls its own Next.js API routes, which forward to `SPAPI_API_URL` (this FastAPI). You can also call SPAPI directly from the browser if `SPAPI_API_CORS` allows it.

Job visibility: jobs and job events are written to Supabase (`spapi_jobs`, `spapi_job_events`). The UI subscribes to these via Supabase Realtime.

Scheduling: preferences live in `spapi_sync_prefs` (days of week, windows, etc.). Celery beat reads these and enqueues jobs accordingly (only once `/creds` are stored).
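The beat-side decision can be pictured with a short sketch. The pref fields `days` and `window` below are hypothetical stand-ins for whatever columns `spapi_sync_prefs` actually has, and the real beat additionally checks that stored creds exist before enqueueing:

```python
from datetime import datetime, timezone

def should_enqueue(prefs: dict, now: datetime) -> bool:
    """Return True if `now` is on an allowed day and inside the sync window.

    `prefs` mimics a spapi_sync_prefs row with hypothetical fields:
    {"days": ["mon", ...], "window": {"start_hour": 6, "end_hour": 22}}
    """
    day = now.strftime("%a").lower()[:3]          # e.g. "wed"
    if day not in prefs.get("days", []):
        return False
    window = prefs.get("window", {})
    return window.get("start_hour", 0) <= now.hour < window.get("end_hour", 24)

prefs = {"days": ["mon", "wed", "fri"], "window": {"start_hour": 6, "end_hour": 22}}
now = datetime(2025, 1, 1, 9, 0, tzinfo=timezone.utc)  # a Wednesday morning
print(should_enqueue(prefs, now))
```

Beat evaluates something like this on every tick; a job is only placed on the Redis queue when the check passes and decryptable creds are available.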
> Table names may vary in your project (e.g., `apapi_credentials`). Adjust the backend config accordingly; this README uses the typical `spapi_*` names.
## How the setup works (alignment)

1. **Traefik** reads Docker labels from the `spapi` service to route `Host(API_HOST)` → `spapi-api:8000` over the `websecure` entrypoint (TLS). ACME TLS-ALPN obtains certificates automatically (requires a DNS A record pointing at your VM IP, plus inbound 80/443 allowed in the Hetzner firewall and UFW).
2. **FastAPI** exposes all the endpoints above and talks to Supabase and Redis.
3. **Worker and beat** share the exact same image as `spapi-api` (keeps dependencies consistent), just with different entrypoints:
   - `celery -A spapi.celery_app.app worker`
   - `celery -A spapi.celery_app.app beat`
4. **Credentials**: `/creds` encrypts and stores LWA keys using `SPAPI_FERNET_KEY`. Only server jobs (worker/beat) can decrypt them to run scheduled syncs. You can delete them at any time.
5. **Systemd** wraps compose so the stack starts on boot and can be managed like any service.
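The credentials flow relies on symmetric Fernet encryption. A minimal sketch of the round trip, using the `cryptography` package already implied by the key-generation command above (the exact storage code in spapi may differ):

```python
import json
from cryptography.fernet import Fernet

# In production the key comes from SPAPI_FERNET_KEY in deploy/.env;
# here we generate a throwaway one for illustration.
key = Fernet.generate_key()
f = Fernet(key)

creds = {"client_id": "...", "client_secret": "...", "refresh_token": "..."}

# What POST /creds conceptually does: encrypt before writing to Supabase.
token = f.encrypt(json.dumps(creds).encode())

# What a worker/beat holding the same key does: decrypt to run a sync.
restored = json.loads(f.decrypt(token).decode())
assert restored == creds
```

Anyone without `SPAPI_FERNET_KEY`, including anyone reading the database directly, sees only the opaque `token`.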
## Local development

From `backend/spapi`:

```bash
# Python env (first time)
python -m venv .venv
source .venv/bin/activate
pip install -e .

# Redis (terminal #1, from repo root)
docker run --rm -p 6379:6379 redis:7

# FastAPI (terminal #2)
uvicorn spapi.api:app --host 0.0.0.0 --port 8000

# Celery worker (terminal #3)
celery -A spapi.celery_app.app worker --loglevel=INFO

# Celery beat (terminal #4)
celery -A spapi.celery_app.app beat --loglevel=INFO
```

Health check: `curl -s http://127.0.0.1:8000/healthz`
## VM quickstart (first boot → ready)

1. **Provision the VM (Ubuntu 24.04)**
   - Add user data with the `app` user and your laptop's SSH key.
   - Open the Hetzner firewall inbound: 22, 80, 443 (drop default inbound; allow outbound).
   - Ensure a DNS `A` record for `API_HOST` → VM IP.
2. **SSH in and add a deploy key** (read-only) for private repo access:
   ```bash
   sudo -u app mkdir -p ~/.ssh && chmod 700 ~/.ssh
   sudo -u app vi ~/.ssh/id_ed25519               # paste PRIVATE deploy key
   sudo -u app chmod 600 ~/.ssh/id_ed25519
   sudo -u app ssh-keyscan -t ed25519 github.com >> ~/.ssh/known_hosts
   ```
3. **Run the bootstrap script** (you SCP'd it to the VM):
   ```bash
   sudo bash /path/to/bootstrap_spapi.sh
   # It will:
   # - install Docker & Compose
   # - set up UFW (22/80/443)
   # - clone the repo into /opt/biamazed
   # - create /opt/biamazed/backend/spapi/deploy/.env (edit it!)
   # - docker compose up -d --build
   # - install and enable spapi.service
   ```
4. **Edit `.env`** with real values (`API_HOST`, `ACME_EMAIL`, Supabase keys, `SPAPI_FERNET_KEY`), then:
   ```bash
   cd /opt/biamazed/backend/spapi/deploy
   ./stackctl.sh restart
   ```
5. **Verify**:
   ```bash
   # Containers & health
   docker ps
   docker logs spapi-api --tail=100
   docker exec spapi-worker celery -A spapi.celery_app.app status -q

   # HTTPS health (allow a few seconds for ACME)
   curl -I https://<API_HOST>/healthz
   ```
## `stackctl.sh` cheatsheet

```bash
./stackctl.sh status       # show compose and service state
./stackctl.sh up           # compose up -d --build
./stackctl.sh down         # compose down
./stackctl.sh restart      # restart via systemd (preferred)
./stackctl.sh logs api     # logs: api | worker | beat | traefik | redis
```
## Troubleshooting (fast answers)

- **TLS shows "TRAEFIK DEFAULT CERT"**: ACME hasn't issued a certificate yet. Check that DNS points to the VM IP, inbound 80/443 is open (Hetzner + UFW), `ACME_EMAIL` is set, and `acme.json` exists with mode `0600` (the Traefik volume handles this).
- **`/healthz` over HTTPS returns 404**: verify the router rule label on the `spapi` service (``traefik.http.routers.spapi.rule=Host(`${API_HOST}`)``) and that the container is healthy.
- **Worker "unhealthy"**: the healthcheck uses `celery status` or `pgrep`. If using `pgrep`, make sure the image includes `procps`. Also verify Redis is reachable (the `redis` container reports healthy).
- **Beat restarts / permission denied**: the beat state path must be writable. Our compose uses a volume; if you changed paths, mount a writable dir (e.g. `/var/lib/celery`) or point the schedule state at a writable location.
## Security notes

- The deploy key is read-only and limited to the GitHub repo.
- LWA credentials are encrypted at rest with `SPAPI_FERNET_KEY`. Only server workers/beat can decrypt them for scheduled jobs.
- By default, CORS is permissive (`*`); tighten `SPAPI_API_CORS` for production.
- Exposed ports: 443/80 (Traefik); nothing else is public.
## At a glance: endpoint & job flow

- Frontend tests keys → `POST /test` (no store).
- Frontend stores keys to enable schedules → `POST /creds`.
- User runs quick actions → `POST /enqueue` with inline keys (or uses stored keys).
- Worker ingests → writes jobs & events to Supabase → frontend updates via Realtime.
- Beat runs hourly/daily per `spapi_sync_prefs` (only when keys are stored).
That's it. You now have a clean, resilient stack with HTTPS, background jobs, and a simple operational model.