
Governed Conversational Assistant

Seed-derived from Doc 282. ENTRACE Stack active. Empirical study (Cohen's d > 3).

Bring Your Own Key

This assistant runs on your Anthropic API key using a prepare/execute security model. When the page loads, the server generates a unique action token. When you enter your key, it is sent exactly once to bind it to that token. After binding, only the opaque token is used; your raw key is not included in any subsequent request header.
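The prepare/execute flow above can be sketched minimally. This is an illustrative model, not the repository's actual code: the store, function names, and error handling are assumptions.

```python
import secrets

# Illustrative in-memory store: action token -> bound API key (None = unbound).
_sessions = {}

def prepare():
    """Prepare step: the server mints a unique action token at page load."""
    token = secrets.token_urlsafe(32)
    _sessions[token] = None
    return token

def bind(token, api_key):
    """Bind step: the raw key is sent exactly once and attached to the token.
    A token can be bound only while it is known and still unbound."""
    if token not in _sessions or _sessions[token] is not None:
        raise PermissionError("unknown or already-bound token")
    _sessions[token] = api_key

def execute(token):
    """Execute step: later requests carry only the opaque token; the server
    looks up the key internally, so the raw key never leaves memory again."""
    key = _sessions.get(token)
    if key is None:
        raise PermissionError("token not bound")
    return key
```

Because `bind` refuses a second call for the same token, the raw key's transit is structurally limited to that single request.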

Your key and your conversation are held in server memory only: not written to disk, to a database, or to logs under the current implementation. Server restart = everything gone. Sessions auto-expire after 1 hour. The server operator's key is not used.

Get an API key
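The in-memory-only lifetime with a 1-hour auto-expiry can be sketched as follows. The class name and TTL constant are illustrative assumptions; only the 1-hour figure comes from the text above.

```python
import time

SESSION_TTL_SECONDS = 3600  # sessions auto-expire after 1 hour

class Session:
    """Holds the key and conversation in process memory only.
    Nothing here touches disk, a database, or a logger, so a
    server restart necessarily discards every session."""

    def __init__(self, api_key):
        self.api_key = api_key
        self.messages = []
        self.created = time.monotonic()

    def expired(self, now=None):
        """True once the session has outlived its TTL."""
        now = time.monotonic() if now is None else now
        return now - self.created > SESSION_TTL_SECONDS
```

A periodic sweep that drops expired sessions (and their keys) would complete the picture; the sketch shows only the expiry check itself.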

Don't trust me. See how the architecture narrows the trust required and bounds specific risks.

This security model was derived from first principles: the derivation inversion applied to API key handling via the PRESTO prepare/execute pattern. The architecture narrows specific classes of risk structurally rather than by policy. Prepare/execute bounds the key's transit to a single POST; in-memory-only storage keeps the key off disk; single-use token rotation limits the usable window of any intercepted token; and origin validation on the bind and chat endpoints closes cross-origin paths. Residual risks remain: dependency vulnerabilities, server compromise, supply-chain attacks on CDN resources, and bugs in this code. Rotate your API key after use; that is the practical mitigation the architecture cannot provide. Perform a complete security audit yourself: github.com/jaredef/jaredfoy.com