c0ea1d0cb9
`messages[].content` and `pageContext` had no `.max()`: a single chat turn could ship 50 MB / 200 messages, OOM-ing JSON.parse, ballooning prompt assembly, and burning arbitrary AI-provider cost. Separately, the project-cover image-generation path concatenated user free text into the DALL-E / Gemini prompt without any injection check, so a manager could pivot the image model into "ignore previous instructions" / role-override style attacks against downstream prompt-aware infra.

- assistant-procedure-support: add `.max(10_000)` per message, `.max(2_000)` on `pageContext`, and a `.superRefine` aggregate cap (200 KB total bytes across all messages plus page context). Constants are exported so call sites and tests share one source of truth.
- project-cover.generateCover: run `checkPromptInjection` over the user-supplied `prompt` field; reject with BAD_REQUEST on a match.
- 7 schema-bound tests covering the per-message, page-context, aggregate, message-count, and happy-path cases.

Covers EAPPS 3.2.7 (input bounds) / EGAI 4.6.3.2 (prompt-injection detection on user inputs).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
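The bounds described above can be sketched as a plain TypeScript check. The constant names and the `validateChatInput` helper are illustrative only, not the project's actual exports or Zod schema; a real implementation would express the same limits declaratively via `.max()` and `.superRefine`:

```typescript
// Illustrative caps mirroring the commit's described limits
// (names are hypothetical, not the actual exported constants).
const MAX_MESSAGE_CHARS = 10_000;
const MAX_PAGE_CONTEXT_CHARS = 2_000;
const MAX_MESSAGES = 200;
const MAX_TOTAL_BYTES = 200 * 1024; // 200 KB across all messages + page context

// UTF-8 byte length, since the aggregate cap is in bytes, not characters.
const byteLength = (s: string): number => new TextEncoder().encode(s).length;

interface ChatInput {
  messages: { content: string }[];
  pageContext?: string;
}

// Returns a description of the first violated bound, or null if the
// input is within limits -- the same checks a Zod `.max()` /
// `.superRefine` pair would express declaratively.
function validateChatInput(input: ChatInput): string | null {
  if (input.messages.length > MAX_MESSAGES) return "too many messages";
  for (const m of input.messages) {
    if (m.content.length > MAX_MESSAGE_CHARS) return "message too long";
  }
  if ((input.pageContext ?? "").length > MAX_PAGE_CONTEXT_CHARS) {
    return "pageContext too long";
  }
  const total =
    input.messages.reduce((n, m) => n + byteLength(m.content), 0) +
    byteLength(input.pageContext ?? "");
  if (total > MAX_TOTAL_BYTES) return "aggregate payload too large";
  return null;
}
```

The aggregate check is the part a per-field `.max()` cannot express: each message can be individually legal while the request as a whole is still oversized, which is why the commit adds the `.superRefine` pass.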