Enterprise Deployment
PhronEdge runs on your infrastructure with your keys, your database, your operational controls. Same binary as SaaS. Three environment variables change the deployment.
This guide covers when to choose self-hosted, how to configure signer and vault backends, deployment options across AWS, GCP, and Azure, and the first-deploy checklist.
SaaS or self-hosted
| You should run | When |
|---|---|
| PhronEdge SaaS | You want time-to-value measured in minutes, not weeks. You don't need data sovereignty over governance events. You don't require per-customer KMS isolation. |
| PhronEdge Self-Hosted | You're a regulated entity (bank, insurer, healthcare, defense). Your CISO requires private keys never leave your HSM. Your data residency rules apply to governance metadata, not just business data. You need per-deployment network isolation. |
Same code. Same SDK. Same CLI. Same Console. Moving between SaaS and self-hosted is an environment variable change, not a migration.
Architecture at a glance
A self-hosted PhronEdge deployment consists of:
| Component | What it does | Where it lives |
|---|---|---|
| Gateway | 7-checkpoint enforcement on every tool call | Your k8s or VM |
| Brain | Policy signing and compliance evaluation | Your k8s or VM |
| Anchor | SHA-256 hash-chained audit log | Your database |
| Signer backend | Holds the ECDSA private key | Your KMS or HSM |
| Vault backend | Stores credentials, policies, events | Your database |
Your data does not flow to PhronEdge. Governance decisions, signing, and audit all happen inside your perimeter.
The three environment variables
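The three variables are the ones documented in the sections below: `SIGNER_BACKEND`, `VAULT_BACKEND`, and `POSTGRES_DSN`. A sketch of a minimal self-hosted configuration (values are illustrative):

```bash
# Minimal enterprise configuration. Hostnames and credentials are placeholders.
SIGNER_BACKEND=aws_kms        # or gcp_kms, azure_kv, gcp_secret_manager, dev_file
VAULT_BACKEND=postgres        # or managed (the SaaS default)
POSTGRES_DSN=postgresql://phronedge:secret@db.internal:5432/phronedge
```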
That's the minimum enterprise configuration. Everything else is optional or has safe defaults.
Signer backends
The signer backend holds your tenant's ECDSA P-256 private key. Keys are created automatically on first policy sign. No manual key generation. Key rotation is a Console-only operation that creates a new version, archives the old key, and anchors KEY_ROTATED in the audit chain. Old signatures remain valid indefinitely under archived keys.
| Backend | SIGNER_BACKEND= | Where keys live | When to use |
|---|---|---|---|
| Dev File | dev_file | Local PEM files in a dev-only directory | Local development and unit tests |
| Secret Manager | gcp_secret_manager | Google Cloud Secret Manager | Lightweight GCP deployments where HSM is not required |
| GCP KMS | gcp_kms | Google Cloud KMS with Cloud HSM | Enterprise on GCP. Keys never leave the HSM |
| AWS KMS | aws_kms | AWS KMS. Key alias alias/phronedge-{tenant}-v{N} | Enterprise on AWS. Keys never leave the HSM |
| Azure Key Vault | azure_kv | Azure Key Vault. Key name phronedge-{tenant}-v{N} | Enterprise on Azure. Keys never leave the HSM |
Key auto-provisioning
On first policy sign, PhronEdge creates the signing key in whatever backend is configured. On AWS KMS, the key is created in your account. On GCP KMS, a new CryptoKeyVersion is minted. On Azure Key Vault, a new signing key is provisioned. No manual setup, no key material ever handled by humans.
Independent verification
Every tenant's public keys are served from the gateway at `/.well-known/phronedge/{tenant}/keys.json`.
No authentication required. A bank's auditor, a regulator, or any third party can fetch the public key and verify any signed artifact independently. Without trusting PhronEdge. Without calling PhronEdge. Without PhronEdge being online.
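Fetching the keys is a plain HTTPS GET. A sketch, assuming a gateway at a placeholder host and tenant:

```bash
# Anonymous fetch of the tenant's public keys. Host and tenant are placeholders.
curl -s https://gateway.example.com/.well-known/phronedge/acme/keys.json
```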
What each signer backend needs
AWS KMS:
The PhronEdge service account needs this IAM policy:
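A representative policy sketch. `kms:Sign` is stated above; the remaining actions are assumptions inferred from the key auto-provisioning and public-key behavior described in this guide, so confirm the exact action list in your IAM review:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PhronEdgeSigning",
      "Effect": "Allow",
      "Action": [
        "kms:Sign",
        "kms:GetPublicKey",
        "kms:DescribeKey",
        "kms:CreateKey",
        "kms:CreateAlias"
      ],
      "Resource": "*"
    }
  ]
}
```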
If preferred, scope `Resource` down to specific key IDs after first deployment. PhronEdge calls `kms:Sign` once per policy sign and once per credential issuance. Private keys never leave KMS; PhronEdge holds only public keys in memory.
GCP KMS:
The PhronEdge service account needs roles/cloudkms.signer and roles/cloudkms.publicKeyViewer on the key ring.
Additional environment variables:
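The variable names below are illustrative placeholders for the project, location, and key ring that GCP KMS requires; check your release's configuration reference for the exact names:

```bash
# Illustrative placeholder names, not confirmed PhronEdge variable names
GCP_KMS_PROJECT=my-project
GCP_KMS_LOCATION=europe-west1
GCP_KMS_KEY_RING=phronedge
```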
Azure Key Vault:
The PhronEdge identity needs Key: Sign, Key: Verify, Key: Get, Key: Create permissions on the vault.
Additional environment variable:
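Azure Key Vault is addressed by vault URL. The variable name below is an illustrative placeholder:

```bash
# Illustrative placeholder name, not a confirmed PhronEdge variable name
AZURE_KEY_VAULT_URL=https://my-vault.vault.azure.net/
```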
GCP Secret Manager:
The PhronEdge service account needs roles/secretmanager.secretAccessor and roles/secretmanager.admin for key creation on first sign.
Vault backends
The vault stores governance state: signed policies, active credentials, credential vault copies (for tamper recovery), revocations, agent states, anchor events, and tool registry.
| Backend | VAULT_BACKEND= | Use case |
|---|---|---|
| Managed | managed | Default for SaaS tenants. Handled by PhronEdge. |
| Postgres | postgres | Enterprise on any cloud. Your database, your retention policy, your backup schedule. |
Postgres backend
On first connection, PhronEdge creates the required tables and indexes. No manual migrations.
Requirements:
- Postgres 14 or later
- JSONB support (standard in Postgres 14+)
- Ability to create tables in the configured database
Connection:
For AWS RDS, Cloud SQL, or Azure Database for Postgres, use the standard connection string format.
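For example (host, user, and database name are placeholders):

```bash
# Standard Postgres DSN. Enable TLS per your database's configuration.
POSTGRES_DSN=postgresql://phronedge:secret@db.internal:5432/phronedge?sslmode=require
```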
Optional connection pool tuning:
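The variable names below are illustrative placeholders for pool sizing; check your release's configuration reference for the exact names:

```bash
# Illustrative placeholder names, not confirmed PhronEdge variable names
POSTGRES_POOL_MIN=5
POSTGRES_POOL_MAX=20
```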
Retention: The audit chain is append-only by design. Delete events to save space only if your regulatory regime permits it. For regulated industries, set up your own snapshot and archive process rather than deleting.
Vault integrity
When a credential is issued, a SHA-256 hash of the vault copy is computed and anchored as a VAULT_CREDENTIAL_ISSUED event with a vault_hash field. Before restoring a credential from the vault, the hash is recomputed and compared.
If the hashes do not match, a VAULT_INTEGRITY_BROKEN event is anchored (severity: CRITICAL) and the restore is blocked. The tampering event is permanently recorded in the chain and cannot be deleted.
This means: even if an attacker modifies a credential directly in your database, the hash mismatch is detected on the next read, the tampering is recorded, and the credential is restored from an immutable backup.
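The check itself is simple: recompute the hash of the stored copy and compare it to the hash anchored at issuance. A minimal sketch in Python, assuming the vault copy hashes as canonical JSON (the field names and helper functions are illustrative, not the actual PhronEdge implementation):

```python
import hashlib
import json

def vault_hash(credential: dict) -> str:
    """SHA-256 over a canonical JSON encoding of the vault copy."""
    canonical = json.dumps(credential, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_vault_copy(credential: dict, anchored_hash: str) -> bool:
    """True only if the stored copy still matches the hash anchored at issuance."""
    return vault_hash(credential) == anchored_hash

# At issuance: the hash is anchored alongside the VAULT_CREDENTIAL_ISSUED event
cred = {"id": "cred-123", "scope": "payments:read"}
anchored = vault_hash(cred)

# Later: an in-database edit changes the hash, so the restore is blocked
tampered = {"id": "cred-123", "scope": "payments:admin"}
assert verify_vault_copy(cred, anchored)
assert not verify_vault_copy(tampered, anchored)
```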
Deployment options
Docker Compose (evaluation)
For proof-of-concept on a single host:
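A compose file along these lines (image name is a placeholder for the image you pulled):

```yaml
# Evaluation-only sketch. Image name and port mapping are placeholders.
services:
  phronedge:
    image: phronedge/gateway:latest
    environment:
      SIGNER_BACKEND: dev_file
    ports:
      - "8080:8080"
```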
Evaluation only. Switch SIGNER_BACKEND to your KMS before production.
Helm (Kubernetes)
For EKS, GKE, AKS, or self-managed Kubernetes:
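Installation follows the usual Helm flow. The repo URL, chart name, and values file below are placeholders:

```bash
# Chart repo, chart name, and values file are placeholders
helm repo add phronedge https://charts.phronedge.example
helm install phronedge phronedge/gateway \
  --namespace phronedge --create-namespace \
  -f values-production.yaml
```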
The Helm chart contains deployment, service, ingress, service account, HPA, ConfigMap, and Secret templates. Override values per environment.
ECS Fargate (AWS without Kubernetes)
For AWS-native deployments without managing a Kubernetes cluster:
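A container-definition fragment for the Fargate task along these lines (image name is a placeholder; the port matches the inbound rules in the network section below):

```json
{
  "name": "phronedge-gateway",
  "image": "phronedge/gateway:latest",
  "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
  "environment": [
    { "name": "SIGNER_BACKEND", "value": "aws_kms" },
    { "name": "VAULT_BACKEND", "value": "postgres" }
  ]
}
```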
Put the task behind an Application Load Balancer. Point your SDK and CLI at the ALB URL.
Cloud Run (GCP managed)
For GCP deployments using fully managed containers:
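A deploy command along these lines (image path and region are placeholders):

```bash
# Image path and region are placeholders
gcloud run deploy phronedge-gateway \
  --image gcr.io/my-project/phronedge-gateway:latest \
  --region europe-west1 \
  --set-env-vars SIGNER_BACKEND=gcp_kms,VAULT_BACKEND=postgres
```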
SDK and CLI configuration
Once your self-hosted deployment is running, developers set two environment variables to point at your gateway:
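The variable names below are illustrative placeholders for the gateway endpoint and the developer's tenant credential; check the SDK configuration reference for the exact names:

```bash
# Illustrative placeholder names, not confirmed PhronEdge variable names
PHRONEDGE_GATEWAY_URL=https://phronedge.internal.example.com
PHRONEDGE_API_KEY=pe_live_...
```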
Same SDK. Same @pe.govern(). Same CLI. Developer code does not change between SaaS and self-hosted. The gateway URL is the only difference. Existing integrations port with an environment variable change.
First-deploy checklist
Start with evaluation, promote to your KMS and database, then harden for production.
Day 1: Evaluation
- Pull the PhronEdge container image
- Run `docker-compose up` with `SIGNER_BACKEND=dev_file`
- Hit `/api/v1/gateway/status` and confirm `operational`
- Sign a sample policy through the CLI: `phronedge policy deploy sample-policy.yaml`
- Run a governed tool call against a test agent
- Verify the chain: `phronedge chain verify`
- Inspect the audit chain in the Observer
Goal: confirm the binary works on your infrastructure before wiring KMS or shared database.
Day 2: Wire to your KMS
- Create the KMS key ring, vault, or service account with required permissions (see signer backend section)
- Set `SIGNER_BACKEND=<aws_kms|gcp_kms|azure_kv>` and restart
- Sign a new policy. Keys are created automatically in your KMS on first sign
- Confirm the chain shows `POLICY_SIGNED` with your KMS key ID
- Fetch the public key from `/.well-known/phronedge/{tenant}/keys.json`
- Run the independent verification script against the signed policy
- Rotate the signing key through Console Settings. Confirm `KEY_ROTATED` in the chain
- Sign another policy. Confirm it uses the new key version
- Verify previously signed policies still validate against the archived public key
- Confirm no private key material appears in any log
Goal: all cryptographic operations now happen inside your HSM.
Day 3: Wire to your database
- Provision Postgres (RDS, Cloud SQL, Azure Database, or self-managed)
- Set `VAULT_BACKEND=postgres` and `POSTGRES_DSN=...`
- Restart PhronEdge. Confirm tables are created automatically
- Sign a new policy. Confirm credentials land in Postgres
- Verify vault integrity: issue a credential and confirm the `VAULT_CREDENTIAL_ISSUED` event includes a `vault_hash` field
- Configure automated Postgres backups per your standard retention policy
- Configure your database's encryption-at-rest (most managed services enable this by default)
Goal: all governance state now lives in your database.
Week 2: Production readiness
- Run your standard penetration test against the gateway
- Run load tests at expected traffic (calls/second per agent)
- Configure monitoring: gateway latency, chain integrity, credential issuance rate
- Set up alerting on chain breaks, quarantine events, kill switch activations
- Document your disaster recovery runbook (key rotation during outage, vault restore from snapshot)
- Train your incident response team on the Console Observer and kill switch procedure
- Schedule periodic chain verification (daily or hourly)
Goal: production-ready with full operational ownership.
What flows outbound
In SaaS mode, the SDK calls the PhronEdge gateway at api.phronedge.com. Traffic crosses the public internet.
In self-hosted mode, the SDK calls the gateway inside your infrastructure. Nothing flows outbound to PhronEdge for normal governance operations.
The gateway may optionally call PhronEdge for:
| Operation | Optional? | What's sent |
|---|---|---|
| Telemetry (aggregate metrics) | Yes, off by default | Anonymized gateway metrics. Disable with TELEMETRY_DISABLED=true |
| Intelligence updates | Yes, off by default | Fetches new regulation mappings. Disable with INTELLIGENCE_UPDATES_DISABLED=true |
| License verification | Yes, once per 24h | Signed license token, no data |
All of these can be disabled for air-gapped deployments.
Offline operation
When PhronEdge SaaS is unreachable, the self-hosted gateway continues governing. Signed credentials are ECDSA-verified locally. Policy decisions are deterministic. The audit chain is written to your Postgres.
When your network to PhronEdge is fully offline:
- New policies can still be signed (your KMS)
- Existing credentials still verify (no external calls)
- The audit chain is still written (your database)
- The Console at `phronedge.com/brain` may be unavailable. Use the CLI for operations.
Scaling
A single gateway container handles thousands of governed calls per second on typical hardware. For higher throughput, scale horizontally behind a load balancer.
Checkpoints are stateless. Credential verification is a pure cryptographic operation. Audit chain writes are batched.
Autoscaling:
- CPU-based HPA: target 60% average
- Memory floor: 512 MB per replica
- Scale-out trigger: latency p99 above 75ms sustained for 2 minutes
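On Kubernetes, the CPU target above maps directly onto an HPA manifest. A sketch (deployment name and replica bounds are placeholders):

```yaml
# HPA sketch matching the 60% CPU target above. Names and bounds are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: phronedge-gateway
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: phronedge-gateway
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```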
Network and firewall
Inbound to the gateway:
- Port 8080 (HTTP) or 8443 (HTTPS). Terminate TLS at your ingress.
- From your agent runtime, from the Console, from any CI/CD runner invoking the CLI.
Outbound from the gateway:
- To your KMS (AWS, GCP, or Azure). Standard HTTPS.
- To your Postgres. Port 5432 (or your DB's port).
- To PhronEdge if telemetry is enabled.
Next steps
- Signing and verification. Full cryptographic details and independent verification
- Deployment runbook. Detailed operational procedures
- Threat model. What PhronEdge protects against and what it does not
- Compliance matrix. Regulation mapping for GDPR, EU AI Act, DORA, HIPAA, and more
- API reference. Full endpoint reference for custom integrations