This is the security follow-up to our build write-up. Same system, now hardened with production abuse controls and verification evidence.
When we launched the diagnostic assistant, security was part of v1 from day one: fixed mission catalog, conversation boundaries, input limits, rate limiting on the AI path, and hard prompt boundaries for jailbreak and abuse attempts.
We also ran adversarial prompt testing before rollout. That included prompt injection attempts, hostile/off-topic prompts, and forcing the model to invent services or prices. If it broke constraints, it did not ship.
Need the full technical build context first? Read How we built this AI assistant →
That gave us a solid baseline. The follow-up work below hardened the operational abuse surface around email capture and public write endpoints.
All sensitive credentials (AI keys, email provider keys, token signing secrets, and challenge secrets) are stored as encrypted Netlify environment variables — never shipped to the browser and never committed to the repo.
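As one illustration of how server-side-only secret access can be enforced, a small fail-fast guard works well inside the function runtime. This is a hedged sketch: the helper and variable names here are hypothetical, not our actual configuration.

```typescript
// Illustrative sketch: guard server-side secret access in a Netlify
// function. Names like AI_API_KEY are placeholders, not real key names.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    // Fail fast at function cold start rather than mid-request,
    // so a missing secret surfaces immediately in deploy logs.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Called only inside the function runtime, so the value is never
// bundled into client JavaScript:
// const aiKey = requireEnv("AI_API_KEY");
```

Because the lookup happens only in the serverless runtime, nothing in the client bundle ever references the secret value.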
Request path for /api/diagnostic-email with explicit control points and response codes.
429 rate-limit gate → 400 validation → HMAC token check → 403 challenge gate → 200 send path
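The control sequence above can be sketched as a single handler. Everything here is illustrative: the function names, payload shape, and email regex are assumptions, not the production implementation. The HMAC comparison uses a constant-time check to avoid timing leaks.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Placeholder secret source; production reads an encrypted env var.
const TOKEN_SECRET = process.env.TOKEN_SIGNING_SECRET ?? "dev-only-secret";

function signToken(payload: string): string {
  return createHmac("sha256", TOKEN_SECRET).update(payload).digest("hex");
}

function verifyToken(payload: string, signature: string): boolean {
  const expected = Buffer.from(signToken(payload), "hex");
  const given = Buffer.from(signature, "hex");
  // timingSafeEqual requires equal lengths; mismatched length is a reject.
  return expected.length === given.length && timingSafeEqual(expected, given);
}

// Simplified request shape for the sketch.
interface EmailRequest {
  email: string;
  token: string;        // HMAC signature presented by the client
  tokenPayload: string; // data the signature covers
  challengePassed: boolean;
}

function handleDiagnosticEmail(
  req: EmailRequest,
  rateLimited: boolean
): { status: number; body: string } {
  if (rateLimited) return { status: 429, body: "Rate limited" };
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(req.email))
    return { status: 400, body: "Invalid payload" };
  if (!verifyToken(req.tokenPayload, req.token))
    return { status: 403, body: "Bad token" };
  if (!req.challengePassed)
    return { status: 403, body: "Challenge failed" };
  // ...send the recommendation email here...
  return { status: 200, body: "Sent" };
}
```

The ordering matters: the cheapest rejections (rate limit, validation) run before any cryptographic work, and the send path is only reachable after every gate passes.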
We ran controlled burst tests and challenge drills in production, then validated the outcomes in Netlify Observability:
- 429 with a "Rate limited" block reason under burst load.
- 403 from the challenge gate.
- 200 with the recommendation email sent.

Security controls only matter if they hold under traffic and still preserve conversion. We tested for both.
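A burst drill of this kind reduces to firing requests and tallying status codes, then checking that the 429 gate engaged where expected. This sketch is illustrative and stubs out the sender; it is not our actual test harness.

```typescript
// Illustrative burst driver: fire N requests via a caller-supplied
// sender and tally the response status codes.
async function burst(
  send: () => Promise<number>, // e.g. () => fetch(url).then(r => r.status)
  n: number
): Promise<Record<number, number>> {
  const counts: Record<number, number> = {};
  for (let i = 0; i < n; i++) {
    const status = await send();
    counts[status] = (counts[status] ?? 0) + 1;
  }
  return counts;
}
```

In a real drill, `send` would hit the deployed endpoint and the resulting tally would be cross-checked against the block reasons recorded in observability logs.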
After these updates, abuse resistance on diagnostics moved from high-risk to low-medium in our internal assessment. The biggest win was adding enforceable controls on public write/send endpoints and proving them with real telemetry.
Residual risk is now in predictable places: replay-hardening depth, stricter schema validation, and automated alerting quality. Those are iterative engineering tasks, not blind spots.
Full implementation notes and test evidence are documented in our internal security delta report for this rollout cycle.
We implement AI systems with measurable controls and production validation, not just demos. If you’re shipping MCP or agent workflows, we can help you move fast without cutting security corners.
See MCP Jumpstart →