Enterprise Deployment

PhronEdge runs on your infrastructure with your keys, your database, your operational controls. Same binary as SaaS. Three environment variables change the deployment.

This guide covers when to choose self-hosted, how to configure signer and vault backends, deployment options across AWS, GCP, and Azure, and the first-deploy checklist.

SaaS or self-hosted

| You should run | When |
| --- | --- |
| PhronEdge SaaS | You want time-to-value measured in minutes, not weeks. You don't need data sovereignty over governance events. You don't require per-customer KMS isolation. |
| PhronEdge Self-Hosted | You're a regulated entity (bank, insurer, healthcare, defense). Your CISO requires that private keys never leave your HSM. Your data residency rules apply to governance metadata, not just business data. You need per-deployment network isolation. |

Same code. Same SDK. Same CLI. Same Console. Moving between SaaS and self-hosted is an environment variable change, not a migration.

Architecture at a glance

A self-hosted PhronEdge deployment consists of:

| Component | What it does | Where it lives |
| --- | --- | --- |
| Gateway | 7-checkpoint enforcement on every tool call | Your k8s or VM |
| Brain | Policy signing and compliance evaluation | Your k8s or VM |
| Anchor | SHA-256 hash-chained audit log | Your database |
| Signer backend | Holds the ECDSA private key | Your KMS or HSM |
| Vault backend | Stores credentials, policies, events | Your database |

Your data does not flow to PhronEdge. Governance decisions, signing, and audit all happen inside your perimeter.

The three environment variables

Shell
SIGNER_BACKEND=aws_kms
VAULT_BACKEND=postgres
POSTGRES_DSN=postgresql://phronedge:password@db.internal:5432/phronedge

That's the minimum enterprise configuration. Everything else is optional or has safe defaults.

Signer backends

The signer backend holds your tenant's ECDSA P-256 private key. Keys are created automatically on first policy sign. No manual key generation. Key rotation is a Console-only operation that creates a new version, archives the old key, and anchors KEY_ROTATED in the audit chain. Old signatures remain valid indefinitely under archived keys.

| Backend | SIGNER_BACKEND= | Where keys live | When to use |
| --- | --- | --- | --- |
| Dev File | dev_file | Local PEM files in a dev-only directory | Local development and unit tests |
| Secret Manager | gcp_secret_manager | Google Cloud Secret Manager | Lightweight GCP deployments where HSM is not required |
| GCP KMS | gcp_kms | Google Cloud KMS with Cloud HSM | Enterprise on GCP. Keys never leave the HSM |
| AWS KMS | aws_kms | AWS KMS. Key alias alias/phronedge-{tenant}-v{N} | Enterprise on AWS. Keys never leave the HSM |
| Azure Key Vault | azure_kv | Azure Key Vault. Key name phronedge-{tenant}-v{N} | Enterprise on Azure. Keys never leave the HSM |

Key auto-provisioning

On first policy sign, PhronEdge creates the signing key in whatever backend is configured. On AWS KMS, the key is created in your account. On GCP KMS, a new CryptoKeyVersion is minted. On Azure Key Vault, a new signing key is provisioned. No manual setup, no key material ever handled by humans.
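
On AWS, for example, you can confirm auto-provisioning after the first sign by listing the aliases PhronEdge created (the alias pattern comes from the table above):

Shell
# List PhronEdge-created signing keys after the first policy sign
aws kms list-aliases \
  --query "Aliases[?starts_with(AliasName, 'alias/phronedge-')]"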

Independent verification

Every tenant's public keys are served at:

GET /.well-known/phronedge/{tenant_id}/keys.json

No authentication required. A bank's auditor, a regulator, or any third party can fetch the public key and verify any signed artifact independently. Without trusting PhronEdge. Without calling PhronEdge. Without PhronEdge being online.
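
Any ECDSA P-256 verifier works. Here is a minimal sketch in Python using the cryptography package; the keys.json field names (keys, pem) and the DER-encoded ECDSA-SHA256 signature over the raw policy bytes are assumptions, so adapt them to the actual response shape:

Python
import json
import urllib.request

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Assumed response shape: {"keys": [{"pem": "-----BEGIN PUBLIC KEY-----..."}]}
KEYS_URL = "https://governance.internal.yourcompany.com/.well-known/phronedge/acme/keys.json"

with urllib.request.urlopen(KEYS_URL) as resp:
    keys = json.load(resp)
public_key = serialization.load_pem_public_key(keys["keys"][0]["pem"].encode())

policy_bytes = open("sample-policy.yaml", "rb").read()
signature = open("sample-policy.sig", "rb").read()  # assumed DER-encoded ECDSA

try:
    public_key.verify(signature, policy_bytes, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature INVALID")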

What each signer backend needs

AWS KMS:

The PhronEdge service account needs this IAM policy:

JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:CreateKey",
        "kms:GetPublicKey",
        "kms:Sign",
        "kms:CreateAlias",
        "kms:UpdateAlias",
        "kms:DescribeKey",
        "kms:ListAliases"
      ],
      "Resource": "*"
    }
  ]
}

Scope Resource to specific key IDs after first deployment if preferred. PhronEdge calls kms:Sign once per policy sign and once per credential issuance. Private keys never leave KMS. PhronEdge holds only public keys in memory.
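
For illustration, here is a hypothetical boto3 equivalent of that single kms:Sign call. This is not PhronEdge's internal code; it only shows the API surface the policy above grants:

Python
import hashlib

import boto3

kms = boto3.client("kms")

policy_bytes = open("sample-policy.yaml", "rb").read()
digest = hashlib.sha256(policy_bytes).digest()

# Alias pattern from the signer backend table: alias/phronedge-{tenant}-v{N}
resp = kms.sign(
    KeyId="alias/phronedge-acme-v1",
    Message=digest,
    MessageType="DIGEST",
    SigningAlgorithm="ECDSA_SHA_256",
)
signature = resp["Signature"]  # DER-encoded ECDSA P-256 signature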

GCP KMS:

The PhronEdge service account needs roles/cloudkms.signer and roles/cloudkms.publicKeyViewer on the key ring.

Additional environment variables:

Shell
GCP_KMS_LOCATION=your-region        # Key ring location (e.g. europe-west4, us-central1)
GCP_KMS_PROTECTION=HSM              # HSM or SOFTWARE
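
Granting those roles with gcloud looks like this, for example; the key ring name and service account are placeholders:

Shell
gcloud kms keyrings add-iam-policy-binding phronedge \
  --location=europe-west4 \
  --member=serviceAccount:phronedge@PROJECT.iam.gserviceaccount.com \
  --role=roles/cloudkms.signer

gcloud kms keyrings add-iam-policy-binding phronedge \
  --location=europe-west4 \
  --member=serviceAccount:phronedge@PROJECT.iam.gserviceaccount.com \
  --role=roles/cloudkms.publicKeyViewer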

Azure Key Vault:

The PhronEdge identity needs the Sign, Verify, Get, and Create key permissions on the vault.

Additional environment variable:

Shell
AZURE_VAULT_URL=https://your-vault.vault.azure.net
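
With the Azure CLI, for example; the vault name and principal are placeholders, and if the vault uses Azure RBAC, use role assignments instead of access policies:

Shell
az keyvault set-policy \
  --name your-vault \
  --spn <phronedge-app-id> \
  --key-permissions sign verify get create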

GCP Secret Manager:

The PhronEdge service account needs roles/secretmanager.secretAccessor and roles/secretmanager.admin for key creation on first sign.

Vault backends

The vault stores governance state: signed policies, active credentials, credential vault copies (for tamper recovery), revocations, agent states, anchor events, and tool registry.

| Backend | VAULT_BACKEND= | Use case |
| --- | --- | --- |
| Managed | managed | Default for SaaS tenants. Handled by PhronEdge. |
| Postgres | postgres | Enterprise on any cloud. Your database, your retention policy, your backup schedule. |

Postgres backend

On first connection, PhronEdge creates the required tables and indexes. No manual migrations.

Requirements:

  • Postgres 14 or later
  • JSONB support (standard in Postgres 14+)
  • Ability to create tables in the configured database

Connection:

Shell
POSTGRES_DSN=postgresql://phronedge:password@db.internal:5432/phronedge

For AWS RDS, Cloud SQL, or Azure Database for Postgres, use the standard connection string format.

Optional connection pool tuning:

Shell
POSTGRES_POOL_MIN=2     # default: 2
POSTGRES_POOL_MAX=10    # default: 10

Retention: The audit chain is append-only by design. Delete events to save space only if your regulatory regime permits it. For regulated industries, set up your own snapshot and archive process rather than deleting.
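
A sketch of such an archive job; the anchor_events table name is an assumption here, so check the schema PhronEdge creates on first connection for the real names:

Shell
# Nightly snapshot of audit events instead of deletion (table name assumed)
pg_dump "$POSTGRES_DSN" --format=custom --table=anchor_events \
  --file="anchor-events-$(date +%F).dump"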

Vault integrity

When a credential is issued, a SHA-256 hash of the vault copy is computed and anchored as a VAULT_CREDENTIAL_ISSUED event with a vault_hash field. Before restoring a credential from the vault, the hash is recomputed and compared.

If the hashes do not match, a VAULT_INTEGRITY_BROKEN event is anchored (severity: CRITICAL) and the restore is blocked. The tampering event is permanently recorded in the chain and cannot be deleted.

This means: even if an attacker modifies a credential directly in your database, the hash mismatch is detected on the next read, the tampering is recorded, and the credential is restored from an immutable backup.
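
A minimal sketch of the check, assuming the vault copy hashes as canonical JSON; the field names and canonicalization are illustrative, not the actual schema:

Python
import hashlib
import json

def vault_hash(credential_row: dict) -> str:
    # Canonical JSON so the same row always hashes identically (assumption)
    canonical = json.dumps(credential_row, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

stored = {"credential_id": "cred_123", "scope": "payments:read"}
anchored = vault_hash(stored)  # recorded in the VAULT_CREDENTIAL_ISSUED event

# Later, before restoring from the vault:
if vault_hash(stored) != anchored:
    # In PhronEdge this anchors VAULT_INTEGRITY_BROKEN and blocks the restore
    raise RuntimeError("vault hash mismatch: restore blocked")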

Deployment options

Docker Compose (evaluation)

For proof-of-concept on a single host:

YAML
# docker-compose.yml
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: phronedge
      POSTGRES_USER: phronedge
      POSTGRES_PASSWORD: changeme
    ports: ["5432:5432"]
    volumes: ["pg-data:/var/lib/postgresql/data"]

  phronedge:
    image: phronedge/gateway:latest
    environment:
      SIGNER_BACKEND: dev_file
      VAULT_BACKEND: postgres
      POSTGRES_DSN: postgresql://phronedge:changeme@postgres:5432/phronedge
    ports: ["8080:8080"]
    depends_on: [postgres]

volumes:
  pg-data:

Shell
docker-compose up -d
curl http://localhost:8080/api/v1/gateway/status

Evaluation only. Switch SIGNER_BACKEND to your KMS before production.

Helm (Kubernetes)

For EKS, GKE, AKS, or self-managed Kubernetes:

Shell
helm install phronedge ./helm/phronedge \
  --set signer.backend=aws_kms \
  --set vault.backend=postgres \
  --set postgres.dsn="postgresql://phronedge:password@db.internal:5432/phronedge" \
  --set image.repository=your-registry.amazonaws.com/phronedge \
  --set image.tag=2.4.6 \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::ACCOUNT:role/phronedge-signer"

The Helm chart contains deployment, service, ingress, service account, HPA, ConfigMap, and Secret templates. Override values per environment.
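
The same configuration reads better as a per-environment values file. The keys below mirror the --set flags above; confirm against the chart's values.yaml:

YAML
# values-prod.yaml
signer:
  backend: aws_kms
vault:
  backend: postgres
postgres:
  dsn: postgresql://phronedge:password@db.internal:5432/phronedge
image:
  repository: your-registry.amazonaws.com/phronedge
  tag: "2.4.6"
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT:role/phronedge-signer

Apply it with helm upgrade --install phronedge ./helm/phronedge -f values-prod.yaml.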

ECS Fargate (AWS without Kubernetes)

For AWS-native deployments without managing a Kubernetes cluster:

JSON
{
  "family": "phronedge",
  "taskRoleArn": "arn:aws:iam::ACCOUNT:role/phronedge-task",
  "containerDefinitions": [{
    "name": "phronedge",
    "image": "ACCOUNT.dkr.ecr.REGION.amazonaws.com/phronedge:2.4.6",
    "environment": [
      {"name": "SIGNER_BACKEND", "value": "aws_kms"},
      {"name": "VAULT_BACKEND", "value": "postgres"}
    ],
    "secrets": [
      {"name": "POSTGRES_DSN", "valueFrom": "arn:aws:secretsmanager:REGION:ACCOUNT:secret:phronedge/dsn"}
    ],
    "portMappings": [{"containerPort": 8080, "protocol": "tcp"}]
  }]
}

Put the task behind an Application Load Balancer. Point your SDK and CLI at the ALB URL.

Cloud Run (GCP managed)

For GCP deployments using fully managed containers:

YAML
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: phronedge
spec:
  template:
    spec:
      serviceAccountName: phronedge@PROJECT.iam.gserviceaccount.com
      containers:
      - image: gcr.io/PROJECT/phronedge:2.4.6
        env:
        - name: SIGNER_BACKEND
          value: gcp_kms
        - name: VAULT_BACKEND
          value: postgres
        - name: POSTGRES_DSN
          valueFrom:
            secretKeyRef:
              name: phronedge-dsn
              key: latest  # Cloud Run expects a secret version here (e.g. latest)
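
Save the manifest as service.yaml and apply it (region is a placeholder):

Shell
gcloud run services replace service.yaml --region=europe-west4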

SDK and CLI configuration

Once your self-hosted deployment is running, developers set two environment variables to point at your gateway:

Shell
export PHRONEDGE_API_KEY=pe_live_your_key_here
export PHRONEDGE_GATEWAY_URL=https://governance.internal.yourcompany.com/api/v1

Same SDK. Same @pe.govern(). Same CLI. Developer code does not change between SaaS and self-hosted. The gateway URL is the only difference. Existing integrations port with an environment variable change.
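
A sketch of what that looks like in agent code, assuming the SDK installs as phronedge and exposes the @pe.govern() decorator mentioned above; check your SDK docs for exact names:

Python
import phronedge as pe  # import name assumed

# Reads PHRONEDGE_API_KEY and PHRONEDGE_GATEWAY_URL from the environment,
# so the same code runs against SaaS or your self-hosted gateway.
@pe.govern()
def transfer_funds(account: str, amount: float) -> str:
    # The gateway runs its checkpoints before this tool call executes
    return f"transferred {amount:.2f} to {account}"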

First-deploy checklist

Start with evaluation, promote to your KMS and database, then harden for production.

Day 1: Evaluation

  • Pull the PhronEdge container image
  • Run docker-compose up with SIGNER_BACKEND=dev_file
  • Hit /api/v1/gateway/status and confirm operational
  • Sign a sample policy through the CLI: phronedge policy deploy sample-policy.yaml
  • Run a governed tool call against a test agent
  • Verify the chain: phronedge chain verify
  • Inspect the audit chain in the Observer

Goal: confirm the binary works on your infrastructure before wiring KMS or shared database.

Day 2: Wire to your KMS

  • Create the KMS key ring, vault, or service account with required permissions (see signer backend section)
  • Set SIGNER_BACKEND=<aws_kms|gcp_kms|azure_kv> and restart
  • Sign a new policy. Keys are created automatically in your KMS on first sign
  • Confirm the chain shows POLICY_SIGNED with your KMS key ID
  • Fetch the public key from /.well-known/phronedge/{tenant}/keys.json
  • Run the independent verification script against the signed policy
  • Rotate the signing key through Console Settings. Confirm KEY_ROTATED in the chain
  • Sign another policy. Confirm it uses the new key version
  • Verify previously signed policies still validate against the archived public key
  • Confirm no private key material appears in any log

Goal: all cryptographic operations now happen inside your HSM.

Day 3: Wire to your database

  • Provision Postgres (RDS, Cloud SQL, Azure Database, or self-managed)
  • Set VAULT_BACKEND=postgres and POSTGRES_DSN=...
  • Restart PhronEdge. Confirm tables are created automatically
  • Sign a new policy. Confirm credentials land in Postgres
  • Verify vault integrity: issue a credential and confirm the VAULT_CREDENTIAL_ISSUED event includes a vault_hash field
  • Configure automated Postgres backups per your standard retention policy
  • Configure your database's encryption-at-rest (most managed services enable this by default)

Goal: all governance state now lives in your database.

Week 2: Production readiness

  • Run your standard penetration test against the gateway
  • Run load tests at expected traffic (calls/second per agent)
  • Configure monitoring: gateway latency, chain integrity, credential issuance rate
  • Set up alerting on chain breaks, quarantine events, kill switch activations
  • Document your disaster recovery runbook (key rotation during outage, vault restore from snapshot)
  • Train your incident response team on the Console Observer and kill switch procedure
  • Schedule periodic chain verification (daily or hourly; see the CronJob sketch below)

Goal: production-ready with full operational ownership.
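
One way to schedule the chain verification is a Kubernetes CronJob. The CLI image and entrypoint args here are assumptions; wire in however you run the phronedge CLI:

YAML
apiVersion: batch/v1
kind: CronJob
metadata:
  name: phronedge-chain-verify
spec:
  schedule: "0 * * * *"   # hourly
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: verify
            image: your-registry.amazonaws.com/phronedge-cli:2.4.6  # image assumed
            args: ["chain", "verify"]  # same command as `phronedge chain verify`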

What flows outbound

In SaaS mode, the SDK calls the PhronEdge gateway at api.phronedge.com. Traffic crosses the public internet.

In self-hosted mode, the SDK calls the gateway inside your infrastructure. Nothing flows outbound to PhronEdge for normal governance operations.

The gateway may optionally call PhronEdge for:

| Operation | Optional? | What's sent |
| --- | --- | --- |
| Telemetry (aggregate metrics) | Yes, off by default | Anonymized gateway metrics. Disable with TELEMETRY_DISABLED=true |
| Intelligence updates | Yes, off by default | Fetches new regulation mappings. Disable with INTELLIGENCE_UPDATES_DISABLED=true |
| License verification | Yes, once per 24h | Signed license token, no data |

All of these can be disabled for air-gapped deployments.
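
For an air-gapped profile, set both documented flags:

Shell
# Disable the optional outbound calls listed above
TELEMETRY_DISABLED=true
INTELLIGENCE_UPDATES_DISABLED=true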

Offline operation

When PhronEdge SaaS is unreachable, the self-hosted gateway continues governing. Signed credentials are ECDSA-verified locally. Policy decisions are deterministic. The audit chain is written to your Postgres.

When your network to PhronEdge is fully offline:

  • New policies can still be signed (your KMS)
  • Existing credentials still verify (no external calls)
  • The audit chain is still written (your database)
  • The Console at phronedge.com/brain may be unavailable. Use the CLI for operations.

Scaling

A single gateway container handles thousands of governed calls per second on typical hardware. For higher throughput, scale horizontally behind a load balancer.

Checkpoints are stateless. Credential verification is a pure cryptographic operation. Audit chain writes are batched.

Autoscaling:

  • CPU-based HPA: target 60% average (see the manifest sketch after this list)
  • Memory floor: 512 MB per replica
  • Scale-out trigger: latency p99 above 75ms sustained for 2 minutes
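
Those targets map onto a standard autoscaling/v2 manifest. A sketch, with names matching the Helm example above; tune the replica bounds to your traffic:

YAML
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: phronedge
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: phronedge
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # the 60% average target above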

Network and firewall

Inbound to the gateway:

  • Port 8080 (HTTP) or 8443 (HTTPS). Terminate TLS at your ingress.
  • From your agent runtime, from the Console, from any CI/CD runner invoking the CLI.

Outbound from the gateway:

  • To your KMS (AWS, GCP, or Azure). Standard HTTPS.
  • To your Postgres. Port 5432 (or your DB's port).
  • To PhronEdge if telemetry is enabled.
