# Infrastructure
Infrastructure is split into two Terraform stacks:

- The bootstrap stack (`infrastructure/bootstrap/`) creates the one-time foundations: GCS state bucket, Cloud DNS zones, MX records, and NS delegation.
- The compute stack deploys a single Compute Engine VM that runs Docker Compose for the staging/production services (relay, blossom, escrow), with `docker-compose.prod-override.yml` providing TLS (Let's Encrypt via acme-companion).
## Fresh setup in a new GCP account
- Create two GCP projects manually (or via `gcloud`).
- Update the project IDs in `infrastructure/bootstrap/terraform.tfvars` and `infrastructure/var/{staging,production}.tfvars`.
- Run the bootstrap stack: `scripts/bootstrap.sh`
- Deploy each environment: `scripts/deploy.sh staging`, then `scripts/deploy.sh production`
- Update your domain registrar's NS records to the values shown in the bootstrap outputs.
- Seed runtime secrets in Secret Manager (see below).
## Local deployment
### Bootstrap (once / rare)
```sh
scripts/bootstrap.sh
```
Creates: GCS state bucket, Cloud DNS zones (parent + staging child), MX records, NS delegation.
### Environment deploy
```sh
scripts/deploy.sh staging
# or
scripts/deploy.sh production
```
This initialises the GCS backend, runs `terraform apply`, and optionally resets the VM.
The state bucket name is read from `infrastructure/bootstrap/terraform.tfvars`; you can override it with the `TF_STATE_BUCKET` environment variable.
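A minimal sketch of that resolution logic, assuming the bucket is declared in the tfvars file under a `state_bucket` variable (the variable name and file layout are assumptions, not the script's actual contents):

```sh
# Resolve the Terraform state bucket: TF_STATE_BUCKET wins when set,
# otherwise parse it out of the bootstrap tfvars file.
# Assumes a tfvars line like: state_bucket = "my-tf-state"
resolve_state_bucket() {
  local tfvars="$1"
  if [ -n "${TF_STATE_BUCKET:-}" ]; then
    echo "$TF_STATE_BUCKET"
  else
    sed -n 's/^state_bucket[[:space:]]*=[[:space:]]*"\(.*\)"/\1/p' "$tfvars"
  fi
}

# e.g. resolve_state_bucket infrastructure/bootstrap/terraform.tfvars
```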
## CI deployment (Workload Identity Federation)
The GitHub Actions workflow (`.github/workflows/infra_deploy.yaml`) handles
environment deploys automatically. Authentication uses Workload Identity
Federation (WIF); no long-lived service account keys are involved.
### How it works
- GitHub's OIDC provider issues a short-lived JWT for the workflow run.
- GCP exchanges the JWT for temporary credentials via the WIF pool/provider.
- The workflow impersonates the `ci-deploy` service account to run Terraform.
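This exchange is typically wired up with the `google-github-actions/auth` action. A hedged sketch of the relevant workflow fragment (the `vars.*` names are assumptions; the real steps live in `.github/workflows/infra_deploy.yaml`):

```yaml
permissions:
  id-token: write          # lets GitHub mint the OIDC JWT for this run
  contents: read

steps:
  - uses: actions/checkout@v4
  - uses: google-github-actions/auth@v2
    with:
      # assumed variable names; match your repository settings
      workload_identity_provider: ${{ vars.CI_WORKLOAD_IDENTITY_PROVIDER }}
      service_account: ${{ vars.CI_SERVICE_ACCOUNT_EMAIL }}
```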
### First-time setup (chicken-and-egg)
WIF resources are managed in Terraform (`infrastructure/ci.tf`), so the first
deploy must be run locally:

```sh
scripts/deploy.sh staging
scripts/deploy.sh production
```
After each apply, grab the outputs:
```sh
cd infrastructure
terraform output ci_workload_identity_provider
terraform output ci_service_account_email
```
Then set these values as variables in the GitHub repository settings so the workflow can read them.
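One way to set them is via the GitHub CLI. This is a sketch, assuming the workflow reads repository variables named `CI_WORKLOAD_IDENTITY_PROVIDER` and `CI_SERVICE_ACCOUNT_EMAIL` (check `infra_deploy.yaml` for the real names):

```sh
# Hypothetical helper using the GitHub CLI. The variable names below are
# assumptions: use whatever names infra_deploy.yaml actually reads.
publish_ci_vars() {
  gh variable set CI_WORKLOAD_IDENTITY_PROVIDER \
    --body "$(terraform output -raw ci_workload_identity_provider)"
  gh variable set CI_SERVICE_ACCOUNT_EMAIL \
    --body "$(terraform output -raw ci_service_account_email)"
}

# Run from infrastructure/ with an authenticated gh:
# publish_ci_vars
```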
After that, the workflow can self-manage: subsequent `terraform apply` runs
update the WIF resources if needed.
Bootstrap is not run in CI — it's a one-time local operation.
## Runtime secrets (Secret Manager)
Terraform creates the required Secret Manager secret containers for:

- `ESCROW_PRIVATE_KEY`
- `BLOSSOM_DASHBOARD_PASSWORD`
- `OTEL_EXPORTER_OTLP_HEADERS`
Non-sensitive runtime values like `DOMAIN`, `LETSENCRYPT_EMAIL`, `RPC_URL`, and
`ESCROW_CONTRACT_ADDR` are read from `.env.staging` / `.env.prod`.
`OTEL_EXPORTER_OTLP_ENDPOINT` is also non-sensitive and can live in
`.env.staging` / `.env.prod`, while the auth header stays in Secret Manager.
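For reference, the non-sensitive side might look like this in the env file (every value below is a placeholder, not a real domain, endpoint, or address):

```sh
# .env.staging (illustrative placeholders only)
DOMAIN=staging.example.com
LETSENCRYPT_EMAIL=ops@example.com
RPC_URL=https://rpc.example.com
ESCROW_CONTRACT_ADDR=0x0000000000000000000000000000000000000000
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel.example.com:4317
```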
You can seed values either:

- via the Terraform variable `compose_runtime_secret_values` (a sensitive map), or
- manually in GCP Secret Manager.
Manual example:
```sh
PROJECT_ID="<terraform output project_id>"
echo "<escrow-private-key>" | gcloud secrets versions add ESCROW_PRIVATE_KEY --project "$PROJECT_ID" --data-file=-
echo "<blossom-dashboard-password>" | gcloud secrets versions add BLOSSOM_DASHBOARD_PASSWORD --project "$PROJECT_ID" --data-file=-
```
## Seeding secrets after a fresh Terraform deploy
After `scripts/deploy.sh staging` (or `production`) completes, the Secret
Manager containers exist but have no versions yet. You need to seed them
before the VM can start the escrow daemon.
### 1. Generate and store the escrow key
Generate a fresh Nostr private key and store it in one step:
```sh
# From the repo root
PROJECT_ID="hostr-staging-d4c52998" # or hostr-production-d3ba05b4
cd escrow && dart run bin/generate_nsec.dart | \
  gcloud secrets versions add ESCROW_PRIVATE_KEY \
    --project="$PROJECT_ID" --data-file=-
```
Or if you already have a key:
```sh
echo -n "<nsec-hex>" | \
  gcloud secrets versions add ESCROW_PRIVATE_KEY \
    --project="$PROJECT_ID" --data-file=-
```
### 2. Store the blossom dashboard password
```sh
echo -n "<password>" | \
  gcloud secrets versions add BLOSSOM_DASHBOARD_PASSWORD \
    --project="$PROJECT_ID" --data-file=-
```
### 3. Verify the escrow pubkey
Derive the Nostr pubkey from the stored secret to confirm it's correct and
to get the value needed for the app's `bootstrapEscrowPubkeys` config:
```sh
./scripts/escrow-pubkey.sh "$PROJECT_ID"
```
This outputs the pubkey. Update the corresponding config in
`app/lib/config/env/{staging,production}.config.dart`:

```dart
static const _hostrEscrowPubkey = '<pubkey-from-above>';
```
### 4. Deploy the contract and set the address
Deploy the MultiEscrow contract to Rootstock (see the escrow README), then set
`ESCROW_CONTRACT_ADDR` in `.env.staging` or `.env.prod`:

```sh
ESCROW_CONTRACT_ADDR=0x<deployed-address>
```
This value is not a secret — it's committed to the repo in the env file.
## Deployment behavior
The VM startup/deploy script:
- Pulls the repo branch.
- Fetches secrets from Secret Manager into `/opt/hostr/.env.runtime`.
- Writes `ESCROW_CONTRACT_ADDR` to `docker/data/escrow/contract_addr`.
- Runs:

```sh
docker compose \
  --env-file /opt/hostr/.env.runtime \
  --profile <staging|prod> \
  -f docker-compose.yml \
  -f docker-compose.prod-override.yml \
  up -d --build --remove-orphans
```
So TLS is always deployed through `docker-compose.prod-override.yml` in
staging/production.
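The secret-fetch step above can be sketched like this. This is a hypothetical helper, not the actual deploy script; only the `gcloud secrets versions access` call is the standard CLI:

```sh
set -euo pipefail

# Hypothetical sketch of assembling /opt/hostr/.env.runtime from
# Secret Manager; the function name and layout are illustrative.
write_runtime_env() {
  local env_file="$1"; shift
  : > "$env_file"            # start from an empty file
  local name value
  for name in "$@"; do
    value="$(gcloud secrets versions access latest --secret="$name")"
    printf '%s=%s\n' "$name" "$value" >> "$env_file"
  done
  chmod 600 "$env_file"      # owner-readable only: these are secrets
}

# On the VM:
# write_runtime_env /opt/hostr/.env.runtime \
#   ESCROW_PRIVATE_KEY BLOSSOM_DASHBOARD_PASSWORD OTEL_EXPORTER_OTLP_HEADERS
```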