10 Commits

Author SHA1 Message Date
Hartmut e2dddd30df security: RBAC cache cross-instance invalidation + force re-login on role/perm change (#57)
- shrink roleDefaults cache TTL from 60s to 10s (safety-net staleness bound)
- publish/subscribe on capakraken:rbac-invalidate so peer instances drop
  their local role-defaults cache on mutation (ioredis pub/sub; lazy init
  so idle test files don't open connections)
- after updateUserRole/setUserPermissions/resetUserPermissions: delete
  all ActiveSession rows for that user so the next request re-auths via
  tRPC's jti check, and invalidate the role-defaults cache
- tests: peer-instance invalidation via FakeRedis pub/sub fan-out; mutation
  side-effects assert session deletion + cache invalidation on each path

Without this, demoted admins kept their JWT valid until expiry and peer
instances kept serving stale role defaults for up to the TTL window.
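The fan-out pattern can be sketched as follows. Only the channel name (`capakraken:rbac-invalidate`) and the 10 s TTL come from the commit; the class and method names are illustrative, and an in-process bus stands in for the ioredis pub/sub connection so the sketch runs without a broker:

```typescript
type Handler = (message: string) => void;

// Minimal stand-in for Redis pub/sub (the tests in this commit use a
// similar FakeRedis fan-out).
class FakeBus {
  private subscribers = new Map<string, Handler[]>();
  subscribe(channel: string, handler: Handler): void {
    const list = this.subscribers.get(channel) ?? [];
    list.push(handler);
    this.subscribers.set(channel, list);
  }
  publish(channel: string, message: string): void {
    for (const h of this.subscribers.get(channel) ?? []) h(message);
  }
}

const CHANNEL = "capakraken:rbac-invalidate";

class RoleDefaultsCache {
  private entries = new Map<string, { value: unknown; expiresAt: number }>();
  constructor(private bus: FakeBus, private ttlMs = 10_000) {
    // Peer instances drop their local entry when any instance mutates a role.
    bus.subscribe(CHANNEL, (role) => this.entries.delete(role));
  }
  get(role: string): unknown {
    const e = this.entries.get(role);
    if (!e || Date.now() > e.expiresAt) return undefined; // TTL safety net
    return e.value;
  }
  set(role: string, value: unknown): void {
    this.entries.set(role, { value, expiresAt: Date.now() + this.ttlMs });
  }
  // Called after a role/permission mutation: fan out to every instance.
  invalidate(role: string): void {
    this.bus.publish(CHANNEL, role);
  }
}

// Two "instances" sharing one bus:
const bus = new FakeBus();
const a = new RoleDefaultsCache(bus);
const b = new RoleDefaultsCache(bus);
a.set("ADMIN", { canDelete: true });
b.set("ADMIN", { canDelete: true });
a.invalidate("ADMIN"); // mutation on instance A...
console.log(b.get("ADMIN")); // ...drops B's entry too → undefined
```

The pub/sub channel is the fast path; the 10 s TTL only bounds staleness if a publish is lost.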

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 13:01:15 +02:00
Hartmut 23c6e0e04b security: sanitise Prisma error leaks in AI-tool helpers (#53)
Five helper error mappers (timeline / project-creation / resource-creation
/ vacation-creation / task-action-execution) fell through to
`return { error: error.message }` for BAD_REQUEST and CONFLICT cases. When
the TRPCError wrapped a Prisma error, the message contained column names,
relation paths, and the offending unique-constraint value — all of which
would reach the LLM in chat context and, via audit_log.changes JSONB, the DB.

Add `sanitizeAssistantErrorMessage()` that regex-detects Prisma and raw
Postgres signatures (P2002/P2003/P2025, not-null, FK, check-constraint,
duplicate-key) and replaces them with a generic "Invalid input". Also caps
messages at 500 chars to defend against stack-trace-like payloads. Wire
the helper into all five call-sites; the developer-constructed
`AssistantVisibleError` branch in `normalizeAssistantExecutionError` is
left untouched since those strings are hand-written.
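A minimal sketch of the sanitiser — the signature regexes shown here are illustrative assumptions; the real list lives in the helper this commit adds:

```typescript
// Signatures that indicate a leaked Prisma / raw Postgres error.
// (Abridged; P2002 = unique constraint, P2003 = FK, P2025 = not found.)
const PRISMA_SIGNATURES: RegExp[] = [
  /\bP2002\b/,
  /\bP2003\b/,
  /\bP2025\b/,
  /unique constraint/i,
  /foreign key constraint/i,
  /violates not-null constraint/i,
  /violates check constraint/i,
  /duplicate key value/i,
];

function sanitizeAssistantErrorMessage(message: string): string {
  // Any database-shaped message is replaced wholesale — column names,
  // relation paths, and constraint values must not reach the LLM context.
  if (PRISMA_SIGNATURES.some((re) => re.test(message))) {
    return "Invalid input";
  }
  // Defend against stack-trace-like payloads.
  return message.length > 500 ? message.slice(0, 500) : message;
}
```

Hand-written `AssistantVisibleError` strings bypass this helper by design, as noted above.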

Coverage: 11 new tests in assistant-tools-error-sanitiser.test.ts; existing
vacation / task-action / resource-creation / project-creation error tests
(12 tests, 5 files) all remain green.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 09:40:01 +02:00
Hartmut 019702c043 security: ReDoS hardening on blueprint field validator (#52)
Admin-editable blueprint field patterns go through `new RegExp(pattern).test(userValue)`
— a classic ReDoS sink if the admin account is compromised or the
permission is ever delegated. A pattern like `^(a+)+$` against 30
'a's followed by '!' freezes the event loop for seconds per request.

Three layers of defence:
- Save-time: FieldValidationSchema.pattern now has `.max(200)` and a
  `.refine()` that rejects nested-quantifier shapes like `(x+)+`,
  `(?:x*)+`, `(x{2,})*`.
- Runtime (engine/blueprint/validator.ts):
  - isSuspectRegexPattern() runs the same heuristic. If it fires, the
    field fails validation outright — regex is never compiled.
  - Input strings are sliced to 4096 chars before .test() so even a
    benign pattern against a 10 MB payload returns in < 50 ms.
  - RegExp compile failures are caught and treated as validation
    errors rather than crashing the request.

Tests: 10 cases in packages/engine/src/__tests__/blueprint-validator-redos.test.ts,
including the canonical `^(a+)+$` attack — completes in < 50 ms.
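The runtime layer can be sketched like this (the heuristic regex is an illustrative assumption — the committed shape check may differ; the 200-char cap, 4096-char input slice, and caught-compile-failure behaviour are from the commit):

```typescript
// Flags a quantified group that is itself quantified: (a+)+, (?:x*)+, (x{2,})*.
const NESTED_QUANTIFIER =
  /\((?:\?:)?[^()]*[+*][^()]*\)\s*[+*]|\([^()]*\{\d+,\d*\}[^()]*\)\s*[+*]/;

function isSuspectRegexPattern(pattern: string): boolean {
  return pattern.length > 200 || NESTED_QUANTIFIER.test(pattern);
}

function validateField(pattern: string, userValue: string): boolean {
  // Suspect patterns fail validation outright — the regex is never compiled.
  if (isSuspectRegexPattern(pattern)) return false;
  let re: RegExp;
  try {
    re = new RegExp(pattern);
  } catch {
    return false; // compile failure is a validation error, not a crash
  }
  // Slice input so even a benign pattern against a huge payload stays bounded.
  return re.test(userValue.slice(0, 4096));
}
```

The save-time Zod `.refine()` runs the same heuristic, so suspect patterns are normally rejected before they ever reach the validator.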

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 09:33:42 +02:00
Hartmut b9040cb328 test(security): scoped-caller forwarding preserves read-only proxy (#47)
Adds a regression suite asserting that the read-only Prisma proxy is
still in effect after a tool's executor forwards ctx.db into a scoped
tRPC caller (helpers.ts::createScopedCallerContext). Covers all three
attack surfaces: model writes, raw-SQL escape hatches, and interactive
$transaction / $runCommandRaw calls.

These tests pin the behaviour enforced by 1ff5c33; any future refactor
that unwraps the proxy during forwarding will fail this suite.
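For orientation, a minimal read-only proxy of the kind this suite pins might look as follows (the actual proxy from 1ff5c33 may differ; the blocked-method list here is an assumption). The key property the tests assert is that plain property access and object forwarding never unwrap a `Proxy`:

```typescript
// Write methods, raw-SQL escape hatches, and interactive transactions throw.
const BLOCKED = new Set([
  "create", "createMany", "update", "updateMany", "upsert",
  "delete", "deleteMany",
  "$executeRaw", "$executeRawUnsafe", "$queryRaw", "$queryRawUnsafe",
  "$transaction", "$runCommandRaw",
]);

function readOnly<T extends object>(db: T): T {
  return new Proxy(db, {
    get(target, prop, receiver) {
      if (typeof prop === "string" && BLOCKED.has(prop)) {
        return () => {
          throw new Error(`read-only db: ${prop} is blocked`);
        };
      }
      const value = Reflect.get(target, prop, receiver);
      // Wrap nested model delegates (db.user.create, ...) recursively.
      return typeof value === "object" && value !== null
        ? readOnly(value)
        : value;
    },
  });
}
```

Because the guard lives in the proxy's `get` trap, passing the wrapped `ctx.db` into a scoped caller context forwards the proxy itself — which is exactly what the regression suite verifies.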

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 09:28:02 +02:00
Hartmut 3d89d7d8eb security: redact sensitive fields in audit DB entries (#46)
createAuditEntry now deep-walks before/after/metadata and replaces
values of password, newPassword, currentPassword, passwordHash, token,
accessToken, refreshToken, sessionToken, apiKey, authorization, cookie,
secret, totpSecret, backupCode(s) with "[REDACTED]" before the JSONB
write.

The pino logger already redacts these paths for stdout (see
lib/logger.ts), but DB writes had no equivalent guard — the AI chat
loop at assistant-chat-loop.ts:265 blindly stores parsedArgs from tool
calls (e.g. set_user_password, create_user) into the AuditLog table.

Matching is case-insensitive; nested objects and arrays are recursed to
a depth of 8. Diffs are computed post-redaction so UPDATE entries that
only changed a sensitive field are correctly collapsed to no-op.
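The walk itself is simple; a sketch with an abridged key list (full list above, depth cap and case-insensitive matching as described):

```typescript
// Keys compared lowercase so Password / passwordHash / ApiKey all match.
const SENSITIVE = new Set([
  "password", "newpassword", "currentpassword", "passwordhash",
  "token", "accesstoken", "refreshtoken", "sessiontoken",
  "apikey", "authorization", "cookie", "secret", "totpsecret",
  "backupcode", "backupcodes",
]);

function redact(value: unknown, depth = 0): unknown {
  // Depth cap prevents pathological/cyclic inputs from recursing forever.
  if (depth > 8 || value === null || typeof value !== "object") return value;
  if (Array.isArray(value)) return value.map((v) => redact(v, depth + 1));
  const out: Record<string, unknown> = {};
  for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
    out[k] = SENSITIVE.has(k.toLowerCase())
      ? "[REDACTED]"
      : redact(v, depth + 1);
  }
  return out;
}
```

Running before/after through this walk and diffing afterwards is what collapses sensitive-only UPDATEs to no-ops.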

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 09:25:15 +02:00
Hartmut 4ff7bc90c3 security: SSRF guard covers IPv6 + DNS-rebind defence via pinned IP (#49)
Expand the SSRF blocklist from IPv4-only to IPv6 loopback/ULA (fc00::/7)/
link-local (fe80::/10)/multicast/IPv4-mapped, plus the missing IPv4 ranges
0.0.0.0/8, 100.64.0.0/10 CGNAT, and TEST-NET/benchmark ranges. Replace the
single-lookup SSRF guard with resolveAndValidate(): resolves all DNS records
(lookup { all: true }) so a hostname returning "public + private" is
rejected, and returns the first validated address for connection pinning.

The webhook dispatcher now switches from plain fetch() to https.request()
with a custom Agent.lookup that returns the pre-validated IP. A DNS rebind
between the guard check and the TCP connect() can no longer redirect the
dial to an internal address. Hostname still flows through for SNI and
certificate validation.
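The two halves fit together roughly like this (blocklist abridged to a few ranges; the agent-level `lookup` override is a common pinning technique and the helper names are assumptions):

```typescript
import { promises as dns } from "node:dns";
import * as https from "node:https";

// Abridged — the committed blocklist also covers 0.0.0.0/8, 100.64.0.0/10,
// TEST-NET/benchmark ranges, and more IPv6 classes.
function isBlockedAddress(addr: string): boolean {
  const ip = addr.toLowerCase();
  if (/^(127\.|10\.|192\.168\.|169\.254\.)/.test(ip)) return true;
  if (/^172\.(1[6-9]|2\d|3[01])\./.test(ip)) return true; // 172.16/12
  if (ip === "::1" || ip.startsWith("fe80:")) return true; // v6 loopback/LL
  if (ip.startsWith("fc") || ip.startsWith("fd")) return true; // ULA fc00::/7
  if (ip.startsWith("::ffff:")) return isBlockedAddress(ip.slice(7)); // mapped
  return false;
}

// Resolve ALL records; reject if any is internal; return one IP to pin.
async function resolveAndValidate(hostname: string): Promise<string> {
  const records = await dns.lookup(hostname, { all: true });
  const first = records[0];
  if (!first) throw new Error(`no DNS records for ${hostname}`);
  if (records.some((r) => isBlockedAddress(r.address))) {
    throw new Error(`blocked address for ${hostname}`);
  }
  return first.address;
}

// The dispatcher's Agent resolves to the pre-validated IP, so a DNS rebind
// between guard check and connect() cannot redirect the dial. The original
// hostname still flows through for SNI and certificate validation.
function pinnedAgent(pinnedIp: string): https.Agent {
  const lookup = (
    _host: string,
    _opts: unknown,
    cb: (err: null, address: string, family: number) => void,
  ) => cb(null, pinnedIp, pinnedIp.includes(":") ? 6 : 4);
  return new https.Agent({ lookup } as https.AgentOptions);
}
```

A hostname whose record set mixes public and private addresses is rejected outright, which is the case a single-record lookup misses.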

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 09:19:07 +02:00
Hartmut 3222bec8a5 security: atomic compare-and-swap for TOTP replay window (#43, part 1)
The previous SELECT → compare → UPDATE sequence let two concurrent login
requests with the same valid 6-digit code both observe a stale lastTotpAt,
both pass the in-JS replay check, and both succeed. A stolen TOTP (shoulder-
surf, phishing-proxy replay) was usable twice within its 30 s window.

Replace the three callsites (login authorize, self-service enable, self-
service verify) with a shared consumeTotpWindow() helper: a single
updateMany() expresses "window unused" as a SQL WHERE clause, so Postgres'
row lock serialises concurrent writers and whichever commits second sees
count=0 and is treated as a replay.
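The helper reduces to one conditional UPDATE; a sketch against an assumed model shape (only `lastTotpAt` and the 30 s window are from the commit):

```typescript
const TOTP_WINDOW_MS = 30_000;

// Narrow structural type so the sketch is independent of the Prisma client.
interface TotpDb {
  user: {
    updateMany(args: {
      where: {
        id: string;
        OR: Array<{ lastTotpAt: null } | { lastTotpAt: { lt: Date } }>;
      };
      data: { lastTotpAt: Date };
    }): Promise<{ count: number }>;
  };
}

async function consumeTotpWindow(db: TotpDb, userId: string): Promise<boolean> {
  const now = new Date();
  // The WHERE clause is the "compare" half of the compare-and-swap: only a
  // row whose window is unused matches. Postgres' row lock serialises
  // concurrent writers; the loser sees count = 0.
  const { count } = await db.user.updateMany({
    where: {
      id: userId,
      OR: [
        { lastTotpAt: null },
        { lastTotpAt: { lt: new Date(now.getTime() - TOTP_WINDOW_MS) } },
      ],
    },
    data: { lastTotpAt: now },
  });
  return count === 1; // 0 ⇒ another request already consumed this window
}
```

All three callsites treat a `false` return exactly like an invalid code, so the replay path shares the existing audit/logging branch.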

Backup codes (ticket part 2) are tracked as follow-up work.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 09:11:50 +02:00
Hartmut d1075af77d security: tighten CSP — drop provider wildcards, add object/frame/worker-src (#45)
Browser code never calls OpenAI/Azure/Gemini directly; all AI traffic is
server-side tRPC. connect-src is now locked to 'self'. Added object-src 'none',
frame-src 'none', media-src 'self', and worker-src 'self' blob:. style-src
keeps 'unsafe-inline' for React + @react-pdf/renderer (documented residual
risk — script-src is nonce-based so CSS injection cannot escalate to JS).

Added three regression tests covering connect-src no-wildcards, object/frame-src
'none', and worker-src scope.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 09:08:40 +02:00
Hartmut b32160d546 security: default-deny /api middleware allowlist (#44)
Previously middleware.ts listed /api/ as a public prefix, so any new
API route added under /api/** was served without a session check
unless the developer remembered to self-authenticate it. The
middleware now returns 404 for any /api path not explicitly
allowlisted (auth, trpc, sse, cron, reports, health, ready, perf) —
adding a new API route is a deliberate allowlist edit. verifyCronSecret
was already fail-closed when CRON_SECRET is unset; added unit tests.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 09:03:24 +02:00
Hartmut d45cc00f2f security: cookie + session hardening (#41)
Three related fixes:
- Cookie secure flag now tracks AUTH_URL scheme (https → Secure),
  not NODE_ENV — staging over HTTPS with NODE_ENV!=production used
  to ship Set-Cookie without Secure. Cookie name gains __Host-
  prefix when Secure is on.
- jwt() callback no longer swallows session-registry write failures;
  concurrent-session cap is now fail-closed.
- Session callback no longer copies token.sid onto session.user.jti.
  The tRPC route handler reads the JTI directly from the encrypted
  JWT via getToken() so it stays server-side.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 09:00:54 +02:00
32 changed files with 2158 additions and 236 deletions
+12 -1
@@ -2,6 +2,7 @@ import { createTRPCContext, loadRoleDefaults } from "@capakraken/api";
import { appRouter } from "@capakraken/api/router";
import { prisma } from "@capakraken/db";
import { fetchRequestHandler } from "@trpc/server/adapters/fetch";
import { getToken } from "next-auth/jwt";
import type { NextRequest } from "next/server";
import { auth } from "~/server/auth.js";
@@ -42,9 +43,19 @@ const handler = async (req: NextRequest) => {
// Sessions kicked by concurrent-session limits or manual logout are rejected immediately.
// Fail-open: if the table doesn't exist yet (pending migration) the check is skipped.
// In E2E test mode the jwt callback skips registration, so skip validation too.
//
// We decode the JWT directly (not session.user.jti) because the session
// token is client-visible and therefore must not carry internal
// session-revocation identifiers — see security ticket #41.
const isE2eTestMode = process.env["E2E_TEST_MODE"] === "true";
if (session?.user && !isE2eTestMode) {
const jti = (session.user as typeof session.user & { jti?: string }).jti;
const secret = process.env["AUTH_SECRET"] ?? process.env["NEXTAUTH_SECRET"] ?? "";
const cookieName =
(process.env["AUTH_URL"] ?? "").startsWith("https://") || process.env["VERCEL"] === "1"
? "__Host-authjs.session-token"
: "authjs.session-token";
const jwt = secret ? await getToken({ req, secret, salt: cookieName }) : null;
const jti = (jwt?.["sid"] as string | undefined) ?? undefined;
if (jti) {
try {
const activeSession = await prisma.activeSession.findUnique({ where: { jti } });
+55
@@ -0,0 +1,55 @@
import { afterEach, describe, expect, it } from "vitest";
import { verifyCronSecret } from "./cron-auth.js";
describe("verifyCronSecret — fail-closed when CRON_SECRET missing", () => {
const original = process.env["CRON_SECRET"];
afterEach(() => {
if (original === undefined) delete process.env["CRON_SECRET"];
else process.env["CRON_SECRET"] = original;
});
it("returns 401 when CRON_SECRET is unset", async () => {
delete process.env["CRON_SECRET"];
const req = new Request("http://localhost/api/cron/x", {
headers: { Authorization: "Bearer whatever" },
});
const res = verifyCronSecret(req);
expect(res).not.toBeNull();
expect(res?.status).toBe(401);
});
it("returns 401 when CRON_SECRET is empty string", async () => {
process.env["CRON_SECRET"] = "";
const req = new Request("http://localhost/api/cron/x", {
headers: { Authorization: "Bearer whatever" },
});
const res = verifyCronSecret(req);
expect(res).not.toBeNull();
expect(res?.status).toBe(401);
});
it("returns 401 when Authorization header is missing", () => {
process.env["CRON_SECRET"] = "real-secret";
const req = new Request("http://localhost/api/cron/x");
const res = verifyCronSecret(req);
expect(res?.status).toBe(401);
});
it("returns 401 when Authorization header mismatches", () => {
process.env["CRON_SECRET"] = "real-secret";
const req = new Request("http://localhost/api/cron/x", {
headers: { Authorization: "Bearer wrong-secret" },
});
const res = verifyCronSecret(req);
expect(res?.status).toBe(401);
});
it("returns null (allow) when Authorization header matches", () => {
process.env["CRON_SECRET"] = "real-secret";
const req = new Request("http://localhost/api/cron/x", {
headers: { Authorization: "Bearer real-secret" },
});
expect(verifyCronSecret(req)).toBeNull();
});
});
+74 -2
@@ -4,8 +4,7 @@ import { NextRequest } from "next/server";
// Simulate an authenticated session so the middleware does not redirect
// and CSP headers are set on every response.
vi.mock("./server/auth-edge.js", () => ({
auth: (handler: (req: NextRequest & { auth: object | null }) => unknown) =>
(req: NextRequest) =>
auth: (handler: (req: NextRequest & { auth: object | null }) => unknown) => (req: NextRequest) =>
handler(Object.assign(req, { auth: { user: { id: "test-user", email: "test@test.com" } } })),
}));
@@ -81,4 +80,77 @@ describe("middleware — Content-Security-Policy", () => {
expect(csp).toContain("frame-ancestors 'none'");
}
});
it("connect-src has no wildcards — browser cannot call external hosts directly", async () => {
const middleware = await importMiddleware("production");
const res = await middleware(new NextRequest("http://localhost:3100/"));
const csp = res.headers.get("Content-Security-Policy") ?? "";
const connectSrc = csp.split(";").find((d: string) => d.trim().startsWith("connect-src")) ?? "";
expect(connectSrc).toMatch(/connect-src\s+'self'\s*$/);
expect(connectSrc).not.toContain("*");
expect(connectSrc).not.toContain("openai.com");
expect(connectSrc).not.toContain("azure.com");
expect(connectSrc).not.toContain("googleapis.com");
});
it("object-src, frame-src are 'none' to block legacy plugin and iframe vectors", async () => {
const middleware = await importMiddleware("production");
const res = await middleware(new NextRequest("http://localhost:3100/"));
const csp = res.headers.get("Content-Security-Policy") ?? "";
expect(csp).toContain("object-src 'none'");
expect(csp).toContain("frame-src 'none'");
});
it("worker-src restricts web workers to same-origin and blob: (for Next.js)", async () => {
const middleware = await importMiddleware("production");
const res = await middleware(new NextRequest("http://localhost:3100/"));
const csp = res.headers.get("Content-Security-Policy") ?? "";
expect(csp).toContain("worker-src 'self' blob:");
});
});
describe("middleware — API allowlist (default-deny)", () => {
afterEach(() => {
vi.unstubAllEnvs();
vi.resetModules();
});
it("allows allowlisted API routes through", async () => {
const middleware = await importMiddleware("production");
for (const url of [
"http://localhost:3100/api/trpc/project.list",
"http://localhost:3100/api/auth/signin",
"http://localhost:3100/api/sse/timeline",
"http://localhost:3100/api/cron/health-check",
"http://localhost:3100/api/reports/allocations",
"http://localhost:3100/api/health",
"http://localhost:3100/api/ready",
"http://localhost:3100/api/perf",
]) {
const res = await middleware(new NextRequest(url));
expect(res.status).not.toBe(404);
}
});
it("returns 404 for non-allowlisted /api/* routes", async () => {
const middleware = await importMiddleware("production");
for (const url of [
"http://localhost:3100/api/debug",
"http://localhost:3100/api/internal/secret",
"http://localhost:3100/api/admin/users",
]) {
const res = await middleware(new NextRequest(url));
expect(res.status).toBe(404);
}
});
});
describe("isApiAllowlisted helper", () => {
it("exported via module for testing", async () => {
const { isApiAllowlisted } = await import("./middleware.js");
expect(isApiAllowlisted("/api/trpc/foo")).toBe(true);
expect(isApiAllowlisted("/api/debug")).toBe(false);
expect(isApiAllowlisted("/api/healthz")).toBe(false);
expect(isApiAllowlisted("/api/health")).toBe(true);
});
});
+52 -14
@@ -1,33 +1,62 @@
import { NextResponse } from "next/server";
import { auth } from "./server/auth-edge.js";
// Paths that are accessible without a session.
// Everything else requires a valid JWT session.
const PUBLIC_PREFIXES = [
"/auth/", // signin, forgot-password, reset-password
"/api/", // tRPC, health, auth endpoints — these manage their own auth
"/invite/", // public invite acceptance flow
// UI routes that are accessible without a session (login page, reset flow,
// public invite acceptance). All other UI routes redirect unauthenticated
// visitors to /auth/signin.
const PUBLIC_UI_PREFIXES = ["/auth/", "/invite/"];
// API allowlist — only routes listed here are served. Everything else under
// `/api/*` returns 404. Each allowlisted route MUST perform its own
// authentication (session check via auth(), CRON_SECRET bearer header, etc.)
// because the edge middleware cannot do Node-only work like Prisma queries.
// Prefix entries must end with `/`; exact entries match only the literal
// pathname. A new /api route therefore requires a deliberate allowlist edit,
// preventing accidental default-public exposure (security ticket #44).
export const SELF_AUTH_API_PREFIXES = [
"/api/auth/",
"/api/trpc/",
"/api/sse/",
"/api/cron/",
"/api/reports/",
];
function isPublicPath(pathname: string): boolean {
return PUBLIC_PREFIXES.some((prefix) => pathname.startsWith(prefix));
export const SELF_AUTH_API_EXACT = ["/api/health", "/api/ready", "/api/perf"];
export function isApiAllowlisted(pathname: string): boolean {
if (SELF_AUTH_API_EXACT.includes(pathname)) return true;
return SELF_AUTH_API_PREFIXES.some((p) => pathname.startsWith(p));
}
function isPublicUiPath(pathname: string): boolean {
return PUBLIC_UI_PREFIXES.some((prefix) => pathname.startsWith(prefix));
}
// Browser-side code never talks to AI providers directly — every OpenAI /
// Azure / Gemini call goes through a server tRPC route. Therefore connect-src
// is locked to 'self' with no wildcards (ticket #45). If a future feature
// needs a browser-originated cross-origin request, add it explicitly here.
function buildCsp(nonce: string, isProd: boolean): string {
const scriptSrc = isProd
? `'self' 'nonce-${nonce}'`
: `'self' 'unsafe-eval' 'unsafe-inline'`;
const scriptSrc = isProd ? `'self' 'nonce-${nonce}'` : `'self' 'unsafe-eval' 'unsafe-inline'`;
const imgSrc = isProd ? "'self' data: blob:" : "'self' data: blob: https:";
return [
"default-src 'self'",
`script-src ${scriptSrc}`,
// style-src keeps 'unsafe-inline' because React inlines styles from
// component-scoped CSS and @react-pdf/renderer emits inline style blocks.
// A nonce-based style-src-elem breaks both. This is an accepted residual
// risk documented in docs/security-architecture.md §5.
"style-src 'self' 'unsafe-inline'",
`img-src ${imgSrc}`,
"font-src 'self' data:",
"connect-src 'self' https://generativelanguage.googleapis.com https://*.openai.com https://*.azure.com",
"connect-src 'self'",
"frame-ancestors 'none'",
"frame-src 'none'",
"object-src 'none'",
"media-src 'self'",
"worker-src 'self' blob:",
"base-uri 'self'",
"form-action 'self'",
].join("; ");
@@ -36,8 +65,17 @@ function buildCsp(nonce: string, isProd: boolean): string {
export default auth(function middleware(request) {
const { pathname } = request.nextUrl;
// Redirect unauthenticated requests for protected routes to signin
if (!isPublicPath(pathname) && !request.auth) {
// /api/* — default-deny. Only allowlisted routes pass; everything else 404s.
// Allowlisted routes are responsible for their own auth check (they are
// reached in the route handler, not here, because edge middleware cannot do
// Prisma queries).
if (pathname.startsWith("/api/")) {
if (!isApiAllowlisted(pathname)) {
return NextResponse.json({ error: "Not Found" }, { status: 404 });
}
// fall through — continue to add CSP headers
} else if (!isPublicUiPath(pathname) && !request.auth) {
// UI route requires a session. Redirect to signin.
const signInUrl = new URL("/auth/signin", request.url);
signInUrl.searchParams.set("callbackUrl", request.url);
return NextResponse.redirect(signInUrl);
+79
@@ -0,0 +1,79 @@
/**
* Cookie-hardening regression tests — security ticket #41.
*
* auth.config.ts uses module-level env reads, so we reset modules and stub
* the relevant variables, then import the module freshly in each test.
*/
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
function originalEnvSnapshot() {
return {
AUTH_URL: process.env["AUTH_URL"],
NEXTAUTH_URL: process.env["NEXTAUTH_URL"],
VERCEL: process.env["VERCEL"],
NODE_ENV: process.env["NODE_ENV"],
};
}
describe("auth.config cookies", () => {
let snapshot: ReturnType<typeof originalEnvSnapshot>;
beforeEach(() => {
snapshot = originalEnvSnapshot();
delete process.env["AUTH_URL"];
delete process.env["NEXTAUTH_URL"];
delete process.env["VERCEL"];
vi.resetModules();
});
afterEach(() => {
for (const [k, v] of Object.entries(snapshot)) {
if (v === undefined) delete process.env[k];
else process.env[k] = v;
}
vi.resetModules();
});
it("sets secure=true and __Host- prefix when AUTH_URL is https", async () => {
process.env["AUTH_URL"] = "https://app.example.com";
const { authConfig } = await import("./auth.config.js");
expect(authConfig.cookies?.sessionToken?.options?.secure).toBe(true);
expect(authConfig.cookies?.sessionToken?.name).toBe("__Host-authjs.session-token");
expect(authConfig.cookies?.callbackUrl?.name).toBe("__Host-authjs.callback-url");
expect(authConfig.cookies?.csrfToken?.name).toBe("__Host-authjs.csrf-token");
});
it("sets secure=false on http deployment", async () => {
process.env["AUTH_URL"] = "http://localhost:3000";
const { authConfig } = await import("./auth.config.js");
expect(authConfig.cookies?.sessionToken?.options?.secure).toBe(false);
expect(authConfig.cookies?.sessionToken?.name).toBe("authjs.session-token");
});
it("ignores NODE_ENV — secure flag tied to AUTH_URL scheme only", async () => {
// Staging: NODE_ENV=production but AUTH_URL is plain http → still insecure.
// The point is that the flag should NOT depend on NODE_ENV any more.
// (process.env.NODE_ENV is read-only in the Next.js tsconfig; force via index.)
(process.env as Record<string, string>)["NODE_ENV"] = "production";
process.env["AUTH_URL"] = "http://staging.internal";
const { authConfig } = await import("./auth.config.js");
expect(authConfig.cookies?.sessionToken?.options?.secure).toBe(false);
});
it("uses __Host- prefix on Vercel even without explicit AUTH_URL", async () => {
process.env["VERCEL"] = "1";
const { authConfig } = await import("./auth.config.js");
expect(authConfig.cookies?.sessionToken?.options?.secure).toBe(true);
expect(authConfig.cookies?.sessionToken?.name).toBe("__Host-authjs.session-token");
});
it("keeps sameSite=strict, httpOnly=true, path=/ in all configurations", async () => {
process.env["AUTH_URL"] = "https://app.example.com";
const { authConfig } = await import("./auth.config.js");
const opts = authConfig.cookies?.sessionToken?.options;
expect(opts?.sameSite).toBe("strict");
expect(opts?.httpOnly).toBe(true);
expect(opts?.path).toBe("/");
});
});
+35 -21
@@ -3,6 +3,35 @@ import type { NextAuthConfig } from "next-auth";
// Edge-safe auth config — no native modules (no argon2, no prisma).
// Used by auth-edge.ts (middleware) to verify JWT sessions without
// pulling in Node.js-only packages into the Edge runtime.
// Secure cookies whenever the deployment URL is https, not only when
// NODE_ENV === "production". Staging over HTTPS must also ship Secure
// cookies, otherwise the session token is MITM-interceptable. The check
// happens at module-eval time — that's fine because the AUTH_URL / Next.js
// deployment URL does not change between requests.
function isHttpsDeployment(): boolean {
const explicit = (process.env["AUTH_URL"] ?? process.env["NEXTAUTH_URL"] ?? "").trim();
if (explicit.startsWith("https://")) return true;
// Vercel sets VERCEL=1 and the URL is always https there.
if (process.env["VERCEL"] === "1") return true;
return false;
}
const useSecure = isHttpsDeployment();
// Cookie name with __Host- prefix when secure. The __Host- prefix is an
// additional browser-enforced hardening (RFC 6265bis §4.1.3.2) that only
// accepts the cookie if Secure=true, Path="/", and no Domain attribute —
// preventing subdomain takeover from rewriting the session cookie.
const cookiePrefix = useSecure ? "__Host-" : "";
const baseCookieOptions = {
httpOnly: true,
sameSite: "strict" as const,
path: "/",
secure: useSecure,
};
export const authConfig = {
pages: {
signIn: "/auth/signin",
@@ -15,31 +44,16 @@ export const authConfig = {
},
cookies: {
sessionToken: {
name: "authjs.session-token",
options: {
httpOnly: true,
sameSite: "strict" as const,
path: "/",
secure: process.env.NODE_ENV === "production",
},
name: `${cookiePrefix}authjs.session-token`,
options: baseCookieOptions,
},
callbackUrl: {
name: "authjs.callback-url",
options: {
httpOnly: true,
sameSite: "strict" as const,
path: "/",
secure: process.env.NODE_ENV === "production",
},
name: `${cookiePrefix}authjs.callback-url`,
options: baseCookieOptions,
},
csrfToken: {
name: "authjs.csrf-token",
options: {
httpOnly: true,
sameSite: "strict" as const,
path: "/",
secure: process.env.NODE_ENV === "production",
},
name: `${cookiePrefix}authjs.csrf-token`,
options: baseCookieOptions,
},
},
} satisfies NextAuthConfig;
+75 -1
@@ -14,12 +14,29 @@ import { beforeEach, describe, expect, it, vi } from "vitest";
// ── next-auth imports next/server without .js extension which fails in vitest
// node env. Mock the whole module so the error classes can be imported.
// Capture the config passed to NextAuth() so callbacks can be invoked.
const nextAuthCalls: Array<{
callbacks?: {
jwt?: (...args: unknown[]) => unknown;
session?: (...args: unknown[]) => unknown;
};
}> = [];
vi.mock("next-auth", () => {
class CredentialsSignin extends Error {
code = "credentials";
}
return {
default: vi.fn().mockReturnValue({ handlers: {}, auth: vi.fn() }),
default: vi.fn(
(cfg: {
callbacks?: {
jwt?: (...args: unknown[]) => unknown;
session?: (...args: unknown[]) => unknown;
};
}) => {
nextAuthCalls.push(cfg);
return { handlers: {}, auth: vi.fn() };
},
),
CredentialsSignin,
};
});
@@ -82,6 +99,63 @@ describe("MFA CredentialsSignin error classes — code property", () => {
});
});
describe("session() — does not leak JTI to client", () => {
const sessionCb = nextAuthCalls[0]?.callbacks?.session;
if (!sessionCb) {
it.skip("session callback not captured", () => {});
return;
}
it("never assigns token.sid onto session.user.jti", async () => {
const session = await sessionCb({
session: { user: { email: "x@e.com" }, expires: "2030-01-01" },
token: { sub: "u1", role: "USER", sid: "secret-session-id" },
});
const user = (session as { user: Record<string, unknown> }).user;
expect(user["jti"]).toBeUndefined();
expect(user["sid"]).toBeUndefined();
expect(user["id"]).toBe("u1");
expect(user["role"]).toBe("USER");
});
});
describe("jwt() — concurrent-session enforcement is fail-closed", () => {
const jwtCb = nextAuthCalls[0]?.callbacks?.jwt;
if (!jwtCb) {
it.skip("jwt callback not captured", () => {});
return;
}
beforeEach(() => {
prismaMock.systemSettings.findUnique.mockReset();
prismaMock.activeSession.create.mockReset();
prismaMock.activeSession.findMany.mockReset();
prismaMock.activeSession.deleteMany.mockReset();
});
it("throws if activeSession.create fails", async () => {
prismaMock.systemSettings.findUnique.mockResolvedValue({ maxConcurrentSessions: 3 });
prismaMock.activeSession.create.mockRejectedValue(new Error("db down"));
await expect(jwtCb({ token: {}, user: { id: "u1", role: "USER" } })).rejects.toThrow(
/Session registration failed/,
);
});
it("returns the token when session-registry writes succeed", async () => {
prismaMock.systemSettings.findUnique.mockResolvedValue({ maxConcurrentSessions: 3 });
prismaMock.activeSession.create.mockResolvedValue({});
prismaMock.activeSession.findMany.mockResolvedValue([]);
const result = (await jwtCb({ token: {}, user: { id: "u1", role: "USER" } })) as Record<
string,
unknown
>;
expect(result["role"]).toBe("USER");
expect(typeof result["sid"]).toBe("string");
});
});
describe("authorize() — login timing / enumeration defence", () => {
const authorize = credentialsCalls[0]?.authorize;
+20 -24
@@ -2,6 +2,7 @@ import { prisma } from "@capakraken/db";
import { authRateLimiter } from "@capakraken/api/middleware/rate-limit";
import { createAuditEntry } from "@capakraken/api/lib/audit";
import { logger } from "@capakraken/api/lib/logger";
import { consumeTotpWindow } from "@capakraken/api/lib/totp-consume";
import NextAuth, { type NextAuthConfig } from "next-auth";
import Credentials from "next-auth/providers/credentials";
import { CredentialsSignin } from "next-auth";
@@ -188,15 +189,12 @@ const config = {
throw new InvalidTotpError();
}
// Replay-attack prevention: reject if the same 30-second window was already used
const userWithTotp = (await prisma.user.findUnique({
where: { id: user.id },
select: { lastTotpAt: true },
})) as { lastTotpAt: Date | null } | null;
if (
userWithTotp?.lastTotpAt != null &&
Date.now() - userWithTotp.lastTotpAt.getTime() < 30_000
) {
// Atomic replay-guard: a single UPDATE ... WHERE lastTotpAt is null
// OR older than 30 s both serialises concurrent logins (row lock)
// and expresses the "unused window" precondition in SQL. count=0
// means another request consumed this window first → replay.
const accepted = await consumeTotpWindow(prisma, user.id);
if (!accepted) {
logger.warn({ email, reason: "totp_replay" }, "TOTP replay attack blocked");
void createAuditEntry({
db: prisma,
@@ -210,12 +208,6 @@ const config = {
});
throw new InvalidTotpError();
}
// Record successful TOTP use to prevent replay within the same window
await (prisma.user.update as Function)({
where: { id: user.id },
data: { lastTotpAt: new Date() },
});
}
// MFA enforcement: if the user's role is in requireMfaForRoles but they
@@ -267,10 +259,9 @@ const config = {
if (token.role) {
(session.user as typeof session.user & { role: string }).role = token.role as string;
}
// Use token.sid (not token.jti) to avoid conflict with Auth.js's internal JWT ID claim
if (token.sid) {
(session.user as typeof session.user & { jti: string }).jti = token.sid as string;
}
// Do NOT expose token.sid on session.user — the JTI is an internal
// session-revocation token and must stay inside the encrypted JWT.
// Server-side handlers that need it decode the JWT via getToken().
return session;
},
async jwt({ token, user }) {
@@ -289,7 +280,11 @@ const config = {
const isE2eTestMode = process.env["E2E_TEST_MODE"] === "true";
if (isE2eTestMode) return token;
// Enforce concurrent session limit (kick-oldest strategy)
// Enforce concurrent session limit (kick-oldest strategy).
// This MUST fail-closed: if session-registry writes fail we cannot
// honour the configured session cap, so we must refuse to mint a
// session. Previously this path swallowed errors and logged-only,
// which let a DB-degradation scenario bypass the session cap.
try {
const settings = await prisma.systemSettings.findUnique({
where: { id: "singleton" },
@@ -297,12 +292,10 @@ const config = {
});
const maxSessions = settings?.maxConcurrentSessions ?? 3;
// Register this new session
await prisma.activeSession.create({
data: { userId: user.id!, jti },
});
// Count active sessions and delete the oldest if over the limit
const activeSessions = await prisma.activeSession.findMany({
where: { userId: user.id! },
orderBy: { createdAt: "asc" },
@@ -320,8 +313,11 @@ const config = {
);
}
} catch (err) {
// Non-blocking: don't prevent login if session tracking fails
logger.error({ err }, "Failed to enforce concurrent session limit");
logger.error(
{ err, userId: user.id },
"Failed to register active session — refusing to mint JWT",
);
throw new Error("Session registration failed");
}
}
return token;
+27 -1
@@ -137,7 +137,9 @@ injection attempts and to surface them as audit-log entries.
## 7. HTTP Security Headers
Configured in `next.config.ts`:
Static headers are configured in `next.config.ts`. The Content-Security-Policy
is emitted per-request by `apps/web/src/middleware.ts` so it can carry a
per-request nonce.
| Header | Value |
| ------------------------- | ---------------------------------------------- |
@@ -149,6 +151,30 @@ Configured in `next.config.ts`:
| Referrer-Policy | `strict-origin-when-cross-origin` |
| Permissions-Policy | Camera, microphone, geolocation disabled |
### Content-Security-Policy directives (production)
| Directive | Value | Rationale |
| ----------------- | ------------------------- | -------------------------------------------------- |
| `default-src` | `'self'` | Baseline deny-all-cross-origin. |
| `script-src` | `'self' 'nonce-<random>'` | No `unsafe-inline` / `unsafe-eval` in prod. |
| `style-src` | `'self' 'unsafe-inline'` | Accepted residual risk — see note below. |
| `img-src` | `'self' data: blob:` | Allow base64 previews and generated blobs only. |
| `font-src` | `'self' data:` | Data URLs for inline-embedded fonts. |
| `connect-src` | `'self'` | All AI / third-party calls are server-side. |
| `frame-ancestors` | `'none'` | Clickjacking defence. |
| `frame-src` | `'none'` | No third-party iframes. |
| `object-src` | `'none'` | Blocks legacy `<object>` / Flash / applet vectors. |
| `media-src` | `'self'` | No cross-origin video / audio. |
| `worker-src` | `'self' blob:` | Next.js runtime uses blob-URL workers. |
| `base-uri` | `'self'` | Blocks `<base>` hijacks. |
| `form-action` | `'self'` | Blocks form-exfiltration to third parties. |
**Residual risk — `style-src 'unsafe-inline'`:** React inlines component-scoped
style attributes and `@react-pdf/renderer` emits inline `<style>` blocks that
cannot carry a nonce. A strict `style-src-elem` would break both. The risk is
bounded because `script-src` is nonce-based — a pure CSS-injection attack
cannot escalate to JS execution in this application.
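The per-request nonce mechanism described above can be sketched as follows. This is a hypothetical illustration, not the actual `middleware.ts` code; the directive list mirrors the table above, and the `buildCsp` helper name is an assumption.

```typescript
import { randomBytes } from "node:crypto";

// Hypothetical sketch: assemble the production CSP from the directive table
// above with a fresh per-request nonce. The real middleware.ts is not shown
// here and may differ in wiring (e.g. how the nonce reaches the renderer).
function buildCsp(nonce: string): string {
  const directives: Record<string, string> = {
    "default-src": "'self'",
    "script-src": `'self' 'nonce-${nonce}'`,
    "style-src": "'self' 'unsafe-inline'",
    "img-src": "'self' data: blob:",
    "font-src": "'self' data:",
    "connect-src": "'self'",
    "frame-ancestors": "'none'",
    "frame-src": "'none'",
    "object-src": "'none'",
    "media-src": "'self'",
    "worker-src": "'self' blob:",
    "base-uri": "'self'",
    "form-action": "'self'",
  };
  return Object.entries(directives)
    .map(([directive, value]) => `${directive} ${value}`)
    .join("; ");
}

// A fresh nonce per request; base64 keeps it header-safe.
const nonce = randomBytes(16).toString("base64");
const csp = buildCsp(nonce);
```

The same nonce must be threaded through to the rendered `<script>` tags, which is why the header cannot be static in `next.config.ts`.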
## 8. Rate Limiting
- **Per-IP rate limiting**: via middleware on all API routes
@@ -12,6 +12,7 @@
"./lib/reminder-scheduler": "./src/lib/reminder-scheduler.ts",
"./lib/logger": "./src/lib/logger.ts",
"./lib/runtime-security": "./src/lib/runtime-security.ts",
"./lib/totp-consume": "./src/lib/totp-consume.ts",
"./middleware/rate-limit": "./src/middleware/rate-limit.ts"
},
"scripts": {
@@ -0,0 +1,72 @@
import { describe, expect, it } from "vitest";
import { sanitizeAssistantErrorMessage } from "../router/assistant-tools/helpers.js";
/**
* Ticket #53 — AI-tool helpers previously returned `error.message` verbatim
* for BAD_REQUEST / CONFLICT cases. When the underlying cause was a Prisma
* error (P2002 unique, P2003 FK, P2025 missing), the text included column
* names, relation paths, and the offending value — all of which ended up
* in LLM chat context and, via audit_log.changes, in the DB.
*
* `sanitizeAssistantErrorMessage` replaces those patterns with a generic
* "Invalid input" while letting hand-crafted router messages through.
*/
describe("sanitizeAssistantErrorMessage (#53)", () => {
it("replaces P2002 unique-constraint leak with generic text", () => {
const leak =
"Invalid `prisma.user.create()` invocation in\n/app/src/router/users.ts:142:5\n\nUnique constraint failed on the fields: (`email`)";
expect(sanitizeAssistantErrorMessage(leak)).toBe("Invalid input");
});
it("replaces P2003 FK-violation leak", () => {
const leak = "Foreign key constraint failed on the field: `clientId`";
expect(sanitizeAssistantErrorMessage(leak)).toBe("Invalid input");
});
it("replaces P2025 missing-record leak", () => {
const leak =
"An operation failed because it depends on one or more records that were required but not found.";
expect(sanitizeAssistantErrorMessage(leak)).toBe("Invalid input");
});
it("replaces raw Postgres unique-violation leak", () => {
const leak =
'duplicate key value violates unique constraint "User_email_key"\nDETAIL: Key (email)=(alice@example.com) already exists.';
expect(sanitizeAssistantErrorMessage(leak)).toBe("Invalid input");
});
it("replaces raw Postgres not-null leak", () => {
const leak =
'null value in column "projectId" of relation "Allocation" violates not-null constraint';
expect(sanitizeAssistantErrorMessage(leak)).toBe("Invalid input");
});
it("replaces raw Postgres check-constraint leak", () => {
const leak = 'new row for relation "Project" violates check constraint "Project_status_check"';
expect(sanitizeAssistantErrorMessage(leak)).toBe("Invalid input");
});
it("caps excessively long messages (stack-trace dump defence)", () => {
const giant = "A".repeat(600);
expect(sanitizeAssistantErrorMessage(giant)).toBe("Invalid input");
});
it("handles empty message defensively", () => {
expect(sanitizeAssistantErrorMessage("")).toBe("Invalid input");
});
it("lets short hand-crafted router messages through unchanged", () => {
const safe = "The project must have a client assigned.";
expect(sanitizeAssistantErrorMessage(safe)).toBe(safe);
});
it("lets business-rule validation text through", () => {
const safe = "Vacation cannot be approved in its current status.";
expect(sanitizeAssistantErrorMessage(safe)).toBe(safe);
});
it("lets shortCode conflict messages through (quoted value is user-provided)", () => {
const safe = 'A project with short code "ACME01" already exists.';
expect(sanitizeAssistantErrorMessage(safe)).toBe(safe);
});
});
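Taken together, these cases pin down a contract that can be sketched roughly as below. This is a hypothetical reimplementation inferred from the tests and the commit message; the real `sanitizeAssistantErrorMessage` in `assistant-tools/helpers.ts` may use different patterns or a different cap.

```typescript
// Hypothetical sketch, inferred from the tests above; not the shipped helper.
// Signatures cover Prisma (P2002/P2003/P2025) and raw Postgres constraint
// errors; anything matching is replaced wholesale with a generic message.
const LEAK_SIGNATURES: RegExp[] = [
  /Invalid `prisma\./, // Prisma invocation preamble with file/line info
  /Unique constraint failed/i, // P2002
  /Foreign key constraint failed/i, // P2003
  /required but not found/i, // P2025
  /duplicate key value violates unique constraint/i, // Postgres 23505
  /violates not-null constraint/i, // Postgres 23502
  /violates check constraint/i, // Postgres 23514
];

function sanitizeAssistantErrorMessage(message: string): string {
  if (!message) return "Invalid input"; // defensive: empty or missing text
  if (message.length > 500) return "Invalid input"; // stack-trace dump defence
  if (LEAK_SIGNATURES.some((re) => re.test(message))) return "Invalid input";
  return message; // short hand-crafted router messages pass through
}
```

Replacing the whole message rather than masking matched substrings is the safer default: partial masking can leave enough surrounding text to reconstruct the leak.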
@@ -51,6 +51,7 @@ describe("assistant user self-service MFA tools - enable flow", () => {
totpEnabled: false,
}),
update: vi.fn().mockResolvedValue({}),
updateMany: vi.fn().mockResolvedValue({ count: 1 }),
},
auditLog: {
create: vi.fn().mockResolvedValue({ id: "audit_1" }),
@@ -75,9 +76,17 @@ describe("assistant user self-service MFA tools - enable flow", () => {
lastTotpAt: true,
},
});
// Atomic-CAS replay guard: lastTotpAt is set by updateMany with a
// conditional WHERE; the subsequent update toggles totpEnabled only.
expect(db.user.updateMany).toHaveBeenCalledWith(
expect.objectContaining({
where: expect.objectContaining({ id: "user_1" }),
data: { lastTotpAt: expect.any(Date) },
}),
);
expect(db.user.update).toHaveBeenCalledWith({
where: { id: "user_1" },
-data: { totpEnabled: true, lastTotpAt: expect.any(Date) },
+data: { totpEnabled: true },
});
expect(db.auditLog.create).toHaveBeenCalledWith({
data: expect.objectContaining({
@@ -0,0 +1,177 @@
import { describe, expect, it, vi } from "vitest";
import { __test__, createAuditEntry } from "../lib/audit.js";
const { redactSensitive } = __test__;
describe("audit log redaction", () => {
describe("redactSensitive", () => {
it("redacts top-level password fields", () => {
const result = redactSensitive({ userId: "u1", password: "hunter2" });
expect(result).toEqual({ userId: "u1", password: "[REDACTED]" });
});
it("redacts nested password fields", () => {
const result = redactSensitive({
params: { userId: "u1", password: "hunter2" },
executed: true,
});
expect(result).toEqual({
params: { userId: "u1", password: "[REDACTED]" },
executed: true,
});
});
it("redacts password inside arrays", () => {
const result = redactSensitive({
users: [
{ id: "1", password: "secret" },
{ id: "2", password: "other" },
],
});
expect(result).toEqual({
users: [
{ id: "1", password: "[REDACTED]" },
{ id: "2", password: "[REDACTED]" },
],
});
});
it("is case-insensitive", () => {
const result = redactSensitive({
Password: "x",
PASSWORD: "y",
newPassword: "z",
currentPassword: "a",
});
expect(result).toEqual({
Password: "[REDACTED]",
PASSWORD: "[REDACTED]",
newPassword: "[REDACTED]",
currentPassword: "[REDACTED]",
});
});
it("redacts tokens, secrets, and cookies", () => {
const result = redactSensitive({
token: "t",
accessToken: "a",
refreshToken: "r",
apiKey: "k",
secret: "s",
totpSecret: "ts",
authorization: "Bearer x",
cookie: "sid=abc",
});
for (const v of Object.values(result as Record<string, unknown>)) {
expect(v).toBe("[REDACTED]");
}
});
it("leaves non-sensitive fields untouched", () => {
const result = redactSensitive({ name: "Alice", email: "a@b.c", count: 42, flag: true });
expect(result).toEqual({ name: "Alice", email: "a@b.c", count: 42, flag: true });
});
it("handles null, undefined, and primitives", () => {
expect(redactSensitive(null)).toBe(null);
expect(redactSensitive(undefined)).toBe(undefined);
expect(redactSensitive("string")).toBe("string");
expect(redactSensitive(123)).toBe(123);
});
it("stops recursion at MAX_REDACT_DEPTH", () => {
// Build a ~15-deep nested object; redaction should still work near the
// top but bail past the depth limit without throwing.
let v: Record<string, unknown> = { password: "leaf" };
for (let i = 0; i < 15; i++) {
v = { nested: v };
}
expect(() => redactSensitive(v)).not.toThrow();
});
});
describe("createAuditEntry", () => {
it("redacts passwords in `after` before the DB write", async () => {
const create = vi.fn().mockResolvedValue({});
const db = { auditLog: { create } };
await createAuditEntry({
db: db as never,
entityType: "AiToolExecution",
entityId: "call_1",
action: "CREATE",
after: { params: { userId: "u1", password: "cleartext" }, executed: true },
});
expect(create).toHaveBeenCalledTimes(1);
const data = create.mock.calls[0]![0]!.data;
const changes = data.changes as { after?: { params?: { password?: string } } };
expect(changes.after?.params?.password).toBe("[REDACTED]");
expect(changes.after?.params).toMatchObject({ userId: "u1" });
});
it("redacts passwords in before/after when non-sensitive fields also changed", async () => {
const create = vi.fn().mockResolvedValue({});
const db = { auditLog: { create } };
await createAuditEntry({
db: db as never,
entityType: "User",
entityId: "u1",
action: "UPDATE",
before: { password: "old", name: "Alice" },
after: { password: "new", name: "Bob" },
});
expect(create).toHaveBeenCalledTimes(1);
const changes = create.mock.calls[0]![0]!.data.changes as {
before?: Record<string, unknown>;
after?: Record<string, unknown>;
diff?: Record<string, { old: unknown; new: unknown }>;
};
expect(changes.before?.["password"]).toBe("[REDACTED]");
expect(changes.after?.["password"]).toBe("[REDACTED]");
// The name change survives in the diff, but the password diff collapses
// (both values are the same placeholder).
expect(changes.diff).toEqual({ name: { old: "Alice", new: "Bob" } });
});
it("skips UPDATE when both snapshots redact to the same value (empty diff)", async () => {
const create = vi.fn().mockResolvedValue({});
const db = { auditLog: { create } };
await createAuditEntry({
db: db as never,
entityType: "User",
entityId: "u1",
action: "UPDATE",
before: { password: "old" },
after: { password: "new" },
});
// Both redact to [REDACTED], diff is empty, create should NOT be called.
expect(create).not.toHaveBeenCalled();
});
it("redacts sensitive fields in metadata", async () => {
const create = vi.fn().mockResolvedValue({});
const db = { auditLog: { create } };
await createAuditEntry({
db: db as never,
entityType: "Webhook",
entityId: "wh_1",
action: "CREATE",
after: { url: "https://example.com/hook" },
metadata: { signingSecret: "ss", apiKey: "leak" },
});
const changes = create.mock.calls[0]![0]!.data.changes as {
metadata?: Record<string, unknown>;
};
expect(changes.metadata?.["apiKey"]).toBe("[REDACTED]");
// signingSecret is not in the set — verify the list is intentional
expect(changes.metadata?.["signingSecret"]).toBe("ss");
});
});
});
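The redactor these tests exercise can be sketched as a depth-limited recursive walk. This is a hypothetical reconstruction; the real `redactSensitive` in `lib/audit.ts` may differ in its key list and depth bound.

```typescript
// Hypothetical sketch consistent with the tests above; key list and
// MAX_REDACT_DEPTH are assumptions, not the shipped values.
const SENSITIVE_KEY = /password|token|apikey|secret|authorization|cookie/i;
const MAX_REDACT_DEPTH = 10;

function redactSensitive(value: unknown, depth = 0): unknown {
  if (depth > MAX_REDACT_DEPTH) return value; // bail quietly past the limit
  if (Array.isArray(value)) {
    return value.map((v) => redactSensitive(v, depth + 1));
  }
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      // Substring match catches newPassword, accessToken, totpSecret, etc.
      out[k] = SENSITIVE_KEY.test(k) ? "[REDACTED]" : redactSensitive(v, depth + 1);
    }
    return out;
  }
  return value; // primitives, null, undefined pass through unchanged
}
```

Matching on key substrings (not exact names) is what makes `newPassword` and `totpSecret` fall under the same rule as `password` and `secret`.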
@@ -0,0 +1,131 @@
import { EventEmitter } from "node:events";
import { afterAll, beforeAll, beforeEach, describe, expect, it, vi } from "vitest";
/**
* Ticket #57 — verify that:
*
* 1. Publishing on RBAC_INVALIDATE_CHANNEL from node A causes node B to
* drop its local `_roleDefaultsCache`, so its next `loadRoleDefaults()`
* call re-reads from the DB (acceptance criterion:
* "2nd node sees update within 1 s" — we verify the mechanism, not the
* Redis latency).
*
* 2. `invalidateRoleDefaultsCache()` on the current node publishes on the
* same channel so peer instances receive the event.
*
* Strategy: stub `ioredis` with an EventEmitter-based fake before loading
* trpc.ts. The fake captures `publish()` calls and lets the test emit
* synthetic "message" events.
*/
// Fake Redis with two separate instances so the test mirrors the multi-node
// shape: one as subscriber, one as publisher. Both share the same module-
// level event router keyed by channel.
const channelSubscribers = new Map<string, Set<FakeRedis>>();
const publishCalls: Array<{ channel: string; message: string }> = [];
class FakeRedis extends EventEmitter {
constructor(_url: string, _opts: unknown) {
super();
}
// eslint-disable-next-line @typescript-eslint/require-await
async subscribe(channel: string): Promise<number> {
let set = channelSubscribers.get(channel);
if (!set) {
set = new Set();
channelSubscribers.set(channel, set);
}
set.add(this);
return set.size;
}
// eslint-disable-next-line @typescript-eslint/require-await
async publish(channel: string, message: string): Promise<number> {
publishCalls.push({ channel, message });
const subs = channelSubscribers.get(channel);
if (!subs) return 0;
// Fan out synchronously so the subscriber handler runs before the test
// assertion reads the cache — matches real ioredis "message" semantics
// from the subscriber's point of view.
for (const sub of subs) sub.emit("message", channel, message);
return subs.size;
}
}
vi.mock("ioredis", () => ({ Redis: FakeRedis, default: FakeRedis }));
vi.mock("../lib/logger.js", () => ({
logger: { warn: vi.fn(), error: vi.fn(), info: vi.fn(), debug: vi.fn() },
}));
// Prisma client mock — loadRoleDefaults pulls from systemRoleConfig.findMany.
const findManyCalls: number[] = [];
vi.mock("@capakraken/db", async () => {
const actual = await vi.importActual<Record<string, unknown>>("@capakraken/db");
return {
...actual,
prisma: {
systemRoleConfig: {
findMany: vi.fn().mockImplementation(async () => {
findManyCalls.push(Date.now());
return [{ role: "ADMIN", defaultPermissions: ["MANAGE_USERS"] }];
}),
},
},
};
});
// REDIS_URL is needed so trpc.ts decides to instantiate the fake Redis.
// `trpc.ts` now reads it lazily on first RBAC call, so setting it in
// beforeAll is enough; we always restore in afterAll to avoid leaking into
// other test files in the same worker.
const originalRedisUrl = process.env["REDIS_URL"];
describe("RBAC cache Redis pub/sub (#57)", () => {
beforeAll(() => {
process.env["REDIS_URL"] = "redis://fake:6379";
});
afterAll(() => {
if (originalRedisUrl === undefined) delete process.env["REDIS_URL"];
else process.env["REDIS_URL"] = originalRedisUrl;
});
beforeEach(() => {
findManyCalls.length = 0;
});
it("peer-instance invalidation: receiving a message clears the local cache", async () => {
const { loadRoleDefaults } = await import("../trpc.js");
// Warm the cache.
await loadRoleDefaults();
const hitsAfterWarm = findManyCalls.length;
expect(hitsAfterWarm).toBe(1);
// Second call within TTL should be cached — no additional findMany.
await loadRoleDefaults();
expect(findManyCalls.length).toBe(hitsAfterWarm);
// Simulate a peer instance publishing an invalidation: grab any
// subscriber on the channel and fire the event as if Redis delivered it.
const subs = channelSubscribers.get("capakraken:rbac-invalidate");
expect(subs).toBeDefined();
expect(subs!.size).toBeGreaterThanOrEqual(1);
for (const sub of subs!) sub.emit("message", "capakraken:rbac-invalidate", "1");
// Next load must hit the DB again.
await loadRoleDefaults();
expect(findManyCalls.length).toBe(hitsAfterWarm + 1);
});
it("local invalidation publishes on the RBAC channel", async () => {
const { invalidateRoleDefaultsCache } = await import("../trpc.js");
const countBefore = publishCalls.length;
invalidateRoleDefaultsCache();
// Give the microtask queue one tick (publish returns a promise).
await Promise.resolve();
const newPublishes = publishCalls.slice(countBefore);
expect(newPublishes.length).toBe(1);
expect(newPublishes[0]!.channel).toBe("capakraken:rbac-invalidate");
});
});
@@ -0,0 +1,91 @@
import { describe, expect, it } from "vitest";
import { createReadOnlyProxy } from "../lib/read-only-prisma.js";
/**
* Ticket #47 — read-only proxy must survive the scoped-caller indirection.
*
* assistant-tools.ts::executeTool swaps `ctx.db` for a read-only proxy when
* dispatching non-mutation tools. Tool executors then call
* `createScopedCallerContext(ctx)` which forwards `ctx.db` to a tRPC caller.
* If the proxy were not preserved through that forwarding, an LLM-invoked
* "read" tool could smuggle writes via the caller path.
*
* This suite asserts the proxy is not unwrapped on forwarding, and that
* every write-flavoured client method (model writes, raw SQL, interactive
* transactions, runCommandRaw) is still blocked after forwarding.
*/
describe("read-only proxy survives scoped-caller forwarding (#47)", () => {
function makeFakeClient() {
// Minimal shape that passes the Proxy's model detection (has findMany).
const user = {
findUnique: async () => ({ id: "u1" }),
findMany: async () => [],
create: async () => ({ id: "u1" }),
update: async () => ({ id: "u1" }),
};
return {
user,
$queryRaw: async () => [],
$queryRawUnsafe: async () => [],
$executeRaw: async () => 0,
$executeRawUnsafe: async () => 0,
$transaction: async () => [],
$runCommandRaw: async () => ({ ok: 1 }),
};
}
// Simulate what createScopedCallerContext does: construct a NEW object
// whose `db` key is assigned from the incoming ctx.db. This is the exact
// forwarding pattern used by helpers.ts::createScopedCallerContext.
function forwardToCaller(ctx: { db: unknown }): { db: unknown } {
return { db: ctx.db };
}
it("ctx.db retains proxy identity after forwarding", () => {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const client = makeFakeClient() as any;
const proxied = createReadOnlyProxy(client);
const forwarded = forwardToCaller({ db: proxied });
// Writes through the forwarded db must still throw.
// eslint-disable-next-line @typescript-eslint/no-explicit-any
expect(() => (forwarded.db as any).user.create({ data: {} })).toThrow(
/not permitted on read-only/,
);
});
it("raw/tx escape hatches still blocked after forwarding", () => {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const client = makeFakeClient() as any;
const proxied = createReadOnlyProxy(client);
const forwarded = forwardToCaller({ db: proxied }) as { db: Record<string, Function> };
expect(() => forwarded.db.$executeRaw!`DELETE FROM users`).toThrow(
/Raw\/escape operation "\$executeRaw" not permitted/,
);
expect(() => forwarded.db.$executeRawUnsafe!("DELETE FROM users")).toThrow(
/Raw\/escape operation "\$executeRawUnsafe" not permitted/,
);
expect(() => forwarded.db.$queryRawUnsafe!("SELECT 1")).toThrow(
/Raw\/escape operation "\$queryRawUnsafe" not permitted/,
);
expect(() => forwarded.db.$transaction!([])).toThrow(
/Raw\/escape operation "\$transaction" not permitted/,
);
expect(() => forwarded.db.$runCommandRaw!({})).toThrow(
/Raw\/escape operation "\$runCommandRaw" not permitted/,
);
});
it("reads still succeed after forwarding (positive control)", async () => {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const client = makeFakeClient() as any;
const proxied = createReadOnlyProxy(client);
const forwarded = forwardToCaller({ db: proxied }) as {
db: { user: { findUnique: (a: unknown) => Promise<unknown> } };
};
await expect(forwarded.db.user.findUnique({ where: { id: "u1" } })).resolves.toEqual({
id: "u1",
});
});
});
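A proxy satisfying these assertions can be sketched as below. This is a hypothetical reconstruction; the real `createReadOnlyProxy` in `lib/read-only-prisma.js` may block additional methods (for example `$queryRaw`) and carry tighter typing.

```typescript
// Hypothetical sketch consistent with the tests above; the blocked-method
// lists are inferred from the assertions, not copied from the shipped code.
const BLOCKED_RAW = new Set([
  "$executeRaw", "$executeRawUnsafe", "$queryRawUnsafe", "$transaction", "$runCommandRaw",
]);
const WRITE_METHODS = new Set([
  "create", "createMany", "update", "updateMany", "upsert", "delete", "deleteMany",
]);

function createReadOnlyProxy<T extends object>(client: T): T {
  return new Proxy(client, {
    get(target, prop, receiver) {
      const key = String(prop);
      if (BLOCKED_RAW.has(key)) {
        return () => {
          throw new Error(`Raw/escape operation "${key}" not permitted`);
        };
      }
      const value = Reflect.get(target, prop, receiver);
      // Model delegates (detected by having findMany) get their own guard.
      if (value && typeof value === "object" && "findMany" in value) {
        return new Proxy(value as object, {
          get(model, mProp) {
            const mKey = String(mProp);
            if (WRITE_METHODS.has(mKey)) {
              return () => {
                throw new Error(`"${mKey}" not permitted on read-only client`);
              };
            }
            return Reflect.get(model, mProp);
          },
        });
      }
      return value;
    },
  });
}
```

Because the guard lives in the Proxy's `get` trap, plain forwarding like `{ db: ctx.db }` cannot strip it, which is exactly the property the suite asserts.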
@@ -1,16 +1,17 @@
import { describe, expect, it, vi } from "vitest";
-import { assertWebhookUrlAllowed } from "../lib/ssrf-guard.js";
+import { __test__, assertWebhookUrlAllowed, resolveAndValidate } from "../lib/ssrf-guard.js";
// Mock dns.lookup so tests do not require real DNS resolution.
// The guard now calls lookup(host, { all: true }) and receives an array.
vi.mock("node:dns/promises", () => ({
lookup: vi.fn(async (hostname: string) => {
-const mapping: Record<string, string> = {
-"example.com": "93.184.216.34",
-"hooks.external.io": "52.1.2.3",
+const mapping: Record<string, Array<{ address: string; family: number }>> = {
+"example.com": [{ address: "93.184.216.34", family: 4 }],
+"hooks.external.io": [{ address: "52.1.2.3", family: 4 }],
};
-const ip = mapping[hostname];
-if (!ip) throw new Error(`ENOTFOUND ${hostname}`);
-return { address: ip, family: 4 };
+const addrs = mapping[hostname];
+if (!addrs) throw new Error(`ENOTFOUND ${hostname}`);
+return addrs;
}),
}));
@@ -18,9 +19,7 @@ describe("assertWebhookUrlAllowed — SSRF guard", () => {
// ── Allowed targets ─────────────────────────────────────────────────────────
it("allows a valid HTTPS URL that resolves to a public IP", async () => {
-await expect(
-assertWebhookUrlAllowed("https://example.com/webhook"),
-).resolves.toBeUndefined();
+await expect(assertWebhookUrlAllowed("https://example.com/webhook")).resolves.toBeUndefined();
});
it("allows an HTTPS URL with a path and query string", async () => {
@@ -32,29 +31,29 @@ describe("assertWebhookUrlAllowed — SSRF guard", () => {
// ── Rejected schemes ─────────────────────────────────────────────────────────
it("rejects an HTTP URL (only HTTPS allowed)", async () => {
-await expect(
-assertWebhookUrlAllowed("http://example.com/webhook"),
-).rejects.toMatchObject({ code: "BAD_REQUEST" });
+await expect(assertWebhookUrlAllowed("http://example.com/webhook")).rejects.toMatchObject({
+code: "BAD_REQUEST",
+});
});
it("rejects an FTP URL", async () => {
-await expect(
-assertWebhookUrlAllowed("ftp://example.com/file"),
-).rejects.toMatchObject({ code: "BAD_REQUEST" });
+await expect(assertWebhookUrlAllowed("ftp://example.com/file")).rejects.toMatchObject({
+code: "BAD_REQUEST",
+});
});
it("rejects a completely invalid URL", async () => {
-await expect(
-assertWebhookUrlAllowed("not-a-url"),
-).rejects.toMatchObject({ code: "BAD_REQUEST" });
+await expect(assertWebhookUrlAllowed("not-a-url")).rejects.toMatchObject({
+code: "BAD_REQUEST",
+});
});
// ── Blocked hostnames ────────────────────────────────────────────────────────
it("rejects localhost by hostname", async () => {
-await expect(
-assertWebhookUrlAllowed("https://localhost/callback"),
-).rejects.toMatchObject({ code: "BAD_REQUEST" });
+await expect(assertWebhookUrlAllowed("https://localhost/callback")).rejects.toMatchObject({
+code: "BAD_REQUEST",
+});
});
it("rejects the AWS cloud metadata endpoint by hostname", async () => {
@@ -72,39 +71,39 @@ describe("assertWebhookUrlAllowed — SSRF guard", () => {
// ── Blocked IP ranges (direct IP addresses as hostname) ─────────────────────
it("rejects IPv4 loopback 127.0.0.1", async () => {
-await expect(
-assertWebhookUrlAllowed("https://127.0.0.1/callback"),
-).rejects.toMatchObject({ code: "BAD_REQUEST" });
+await expect(assertWebhookUrlAllowed("https://127.0.0.1/callback")).rejects.toMatchObject({
+code: "BAD_REQUEST",
+});
});
it("rejects IPv4 loopback 127.1.2.3 (full /8 block)", async () => {
-await expect(
-assertWebhookUrlAllowed("https://127.1.2.3/callback"),
-).rejects.toMatchObject({ code: "BAD_REQUEST" });
+await expect(assertWebhookUrlAllowed("https://127.1.2.3/callback")).rejects.toMatchObject({
+code: "BAD_REQUEST",
+});
});
it("rejects RFC 1918 private address 10.0.0.1", async () => {
-await expect(
-assertWebhookUrlAllowed("https://10.0.0.1/callback"),
-).rejects.toMatchObject({ code: "BAD_REQUEST" });
+await expect(assertWebhookUrlAllowed("https://10.0.0.1/callback")).rejects.toMatchObject({
+code: "BAD_REQUEST",
+});
});
it("rejects RFC 1918 private address 172.16.0.1", async () => {
-await expect(
-assertWebhookUrlAllowed("https://172.16.0.1/callback"),
-).rejects.toMatchObject({ code: "BAD_REQUEST" });
+await expect(assertWebhookUrlAllowed("https://172.16.0.1/callback")).rejects.toMatchObject({
+code: "BAD_REQUEST",
+});
});
it("rejects RFC 1918 private address 192.168.1.100", async () => {
-await expect(
-assertWebhookUrlAllowed("https://192.168.1.100/callback"),
-).rejects.toMatchObject({ code: "BAD_REQUEST" });
+await expect(assertWebhookUrlAllowed("https://192.168.1.100/callback")).rejects.toMatchObject({
+code: "BAD_REQUEST",
+});
});
it("rejects link-local address 169.254.1.1", async () => {
-await expect(
-assertWebhookUrlAllowed("https://169.254.1.1/callback"),
-).rejects.toMatchObject({ code: "BAD_REQUEST" });
+await expect(assertWebhookUrlAllowed("https://169.254.1.1/callback")).rejects.toMatchObject({
+code: "BAD_REQUEST",
+});
});
// ── DNS fail-closed behaviour ────────────────────────────────────────────────
@@ -120,10 +119,94 @@ describe("assertWebhookUrlAllowed — SSRF guard", () => {
it("rejects a public hostname that resolves to a private IP (DNS rebinding)", async () => {
const { lookup } = await import("node:dns/promises");
-vi.mocked(lookup).mockResolvedValueOnce({ address: "192.168.0.1", family: 4 });
+vi.mocked(lookup).mockResolvedValueOnce([{ address: "192.168.0.1", family: 4 }]);
await expect(assertWebhookUrlAllowed("https://rebind.example.com/hook")).rejects.toMatchObject({
code: "BAD_REQUEST",
});
});
it("rejects if ANY of the resolved addresses is private (multi-record attack)", async () => {
const { lookup } = await import("node:dns/promises");
vi.mocked(lookup).mockResolvedValueOnce([
{ address: "93.184.216.34", family: 4 },
{ address: "10.0.0.5", family: 4 },
]);
await expect(assertWebhookUrlAllowed("https://multi.example.com/hook")).rejects.toMatchObject({
code: "BAD_REQUEST",
});
});
it("resolveAndValidate returns the first validated address for connection pinning", async () => {
const resolved = await resolveAndValidate("https://example.com/hook");
expect(resolved.address).toBe("93.184.216.34");
expect(resolved.family).toBe(4);
expect(resolved.hostname).toBe("example.com");
});
// ── IPv6 blocklist ───────────────────────────────────────────────────────────
it("rejects IPv6 loopback ::1", async () => {
await expect(assertWebhookUrlAllowed("https://[::1]/hook")).rejects.toMatchObject({
code: "BAD_REQUEST",
});
});
it("rejects IPv6 unique-local fc00::/7 (fc00::1)", async () => {
await expect(assertWebhookUrlAllowed("https://[fc00::1]/hook")).rejects.toMatchObject({
code: "BAD_REQUEST",
});
});
it("rejects IPv6 link-local fe80::/10 (fe80::1)", async () => {
await expect(assertWebhookUrlAllowed("https://[fe80::1]/hook")).rejects.toMatchObject({
code: "BAD_REQUEST",
});
});
it("rejects IPv4-mapped IPv6 (::ffff:192.168.1.1) pointing into private v4", async () => {
await expect(
-assertWebhookUrlAllowed("https://rebind.example.com/hook"),
+assertWebhookUrlAllowed("https://[::ffff:192.168.1.1]/hook"),
).rejects.toMatchObject({ code: "BAD_REQUEST" });
});
it("rejects IPv6 multicast (ff02::1)", async () => {
await expect(assertWebhookUrlAllowed("https://[ff02::1]/hook")).rejects.toMatchObject({
code: "BAD_REQUEST",
});
});
it("rejects 0.0.0.0/8", async () => {
await expect(assertWebhookUrlAllowed("https://0.0.0.0/hook")).rejects.toMatchObject({
code: "BAD_REQUEST",
});
});
it("rejects 100.64.0.0/10 CGNAT", async () => {
await expect(assertWebhookUrlAllowed("https://100.64.1.1/hook")).rejects.toMatchObject({
code: "BAD_REQUEST",
});
await expect(assertWebhookUrlAllowed("https://100.127.254.254/hook")).rejects.toMatchObject({
code: "BAD_REQUEST",
});
});
it("accepts a 100.x address outside the CGNAT /10 (100.63.x is public)", async () => {
// 100.63.x is not in 100.64.0.0/10 — it is part of the public IANA pool.
expect(__test__.isBlockedIpv4("100.63.1.1")).toBe(false);
});
it("rejects 198.18.0.0/15 benchmark and TEST-NET ranges", async () => {
expect(__test__.isBlockedIpv4("198.18.0.1")).toBe(true);
expect(__test__.isBlockedIpv4("192.0.2.1")).toBe(true);
expect(__test__.isBlockedIpv4("203.0.113.1")).toBe(true);
});
it("expandIpv6 normalises short-form addresses to full 8-group form", () => {
expect(__test__.expandIpv6("::1")).toBe("0000:0000:0000:0000:0000:0000:0000:0001");
expect(__test__.expandIpv6("fe80::1")).toBe("fe80:0000:0000:0000:0000:0000:0000:0001");
expect(__test__.expandIpv6("::ffff:192.168.1.1")).toBe(
"0000:0000:0000:0000:0000:ffff:c0a8:0101",
);
});
});
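The `__test__` helpers exercised above can be sketched as follows. This is a hypothetical reconstruction driven by the expected values in the tests; the real `ssrf-guard.ts` may normalise and block differently.

```typescript
// Hypothetical sketch of isBlockedIpv4 / expandIpv6, inferred from the
// assertions above; the CIDR list is an assumption, not the shipped one.
const BLOCKED_V4_CIDRS: Array<[string, number]> = [
  ["0.0.0.0", 8], ["10.0.0.0", 8], ["100.64.0.0", 10], ["127.0.0.0", 8],
  ["169.254.0.0", 16], ["172.16.0.0", 12], ["192.0.2.0", 24],
  ["192.168.0.0", 16], ["198.18.0.0", 15], ["203.0.113.0", 24],
];

function ipv4ToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => ((acc << 8) | Number(octet)) >>> 0, 0);
}

function isBlockedIpv4(ip: string): boolean {
  const n = ipv4ToInt(ip);
  return BLOCKED_V4_CIDRS.some(([base, bits]) => {
    const mask = (~0 << (32 - bits)) >>> 0;
    return ((n & mask) >>> 0) === ((ipv4ToInt(base) & mask) >>> 0);
  });
}

function expandIpv6(addr: string): string {
  let a = addr;
  // Rewrite a trailing dotted-quad (IPv4-mapped form) as two hex groups.
  const v4 = /:(\d+\.\d+\.\d+\.\d+)$/.exec(a);
  if (v4) {
    const o = v4[1]!.split(".").map(Number);
    const hex = (hi: number, lo: number) =>
      ((hi << 8) | lo).toString(16).padStart(4, "0");
    a = a.slice(0, v4.index) + ":" + hex(o[0]!, o[1]!) + ":" + hex(o[2]!, o[3]!);
  }
  if (a.includes("::")) {
    const [head = "", tail = ""] = a.split("::");
    const headGroups = head ? head.split(":") : [];
    const tailGroups = tail ? tail.split(":") : [];
    const fill = Array<string>(8 - headGroups.length - tailGroups.length).fill("0");
    return [...headGroups, ...fill, ...tailGroups]
      .map((g) => g.padStart(4, "0"))
      .join(":");
  }
  return a.split(":").map((g) => g.padStart(4, "0")).join(":");
}
```

Expanding to the full 8-group form before range checks is what lets a single blocklist catch `::1`, `fe80::1`, and the IPv4-mapped `::ffff:192.168.1.1` spellings uniformly.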
@@ -0,0 +1,180 @@
import { beforeEach, describe, expect, it, vi } from "vitest";
import { SystemRole } from "@capakraken/shared";
vi.mock("../lib/audit.js", () => ({ createAuditEntry: vi.fn() }));
vi.mock("../lib/audit-helpers.js", () => ({
makeAuditLogger: () => vi.fn(),
}));
const invalidateRoleDefaultsCache = vi.hoisted(() => vi.fn());
vi.mock("../trpc.js", () => ({
invalidateRoleDefaultsCache,
}));
import {
resetUserPermissions,
setUserPermissions,
updateUserRole,
} from "../router/user-procedure-support.js";
/**
* Ticket #57 — when a privileged-state mutation happens we MUST:
* 1. delete every ActiveSession for the affected user (forces next-request
* re-auth, because the tRPC route validates `jti` against ActiveSession),
* 2. call `invalidateRoleDefaultsCache()` so peer instances drop their
* 10 s cache entries via the Redis pub/sub fan-out.
*
* Without (1), a demoted admin keeps their JWT valid until it expires, so
* permissions resolved server-side still reflect the old role. Without (2),
* peer instances keep serving the old role defaults for up to the TTL.
*/
describe("RBAC mutation side effects (#57)", () => {
beforeEach(() => {
vi.clearAllMocks();
});
function makeCtx(dbOverrides: Record<string, unknown> = {}) {
const defaultDb = {
user: {
findUnique: vi.fn(),
update: vi.fn(),
},
activeSession: {
deleteMany: vi.fn().mockResolvedValue({ count: 3 }),
},
...dbOverrides,
};
return {
ctx: {
db: defaultDb as never,
dbUser: {
id: "admin_1",
systemRole: SystemRole.ADMIN,
permissionOverrides: null,
},
session: {
user: { email: "admin@example.com", name: "Admin", image: null },
expires: "2099-01-01T00:00:00.000Z",
},
},
db: defaultDb,
};
}
describe("updateUserRole", () => {
it("deletes active sessions and invalidates cache when role changes", async () => {
const { ctx, db } = makeCtx({
user: {
findUnique: vi.fn().mockResolvedValue({
id: "user_victim",
name: "Victim",
email: "victim@example.com",
systemRole: SystemRole.ADMIN,
}),
update: vi.fn().mockResolvedValue({
id: "user_victim",
name: "Victim",
email: "victim@example.com",
systemRole: SystemRole.USER,
}),
},
});
await updateUserRole(ctx as never, {
id: "user_victim",
systemRole: SystemRole.USER,
});
expect(db.activeSession.deleteMany).toHaveBeenCalledWith({
where: { userId: "user_victim" },
});
expect(invalidateRoleDefaultsCache).toHaveBeenCalledTimes(1);
});
it("does NOT delete sessions or invalidate when role is unchanged", async () => {
const { ctx, db } = makeCtx({
user: {
findUnique: vi.fn().mockResolvedValue({
id: "user_1",
name: "Alice",
email: "alice@example.com",
systemRole: SystemRole.MANAGER,
}),
update: vi.fn().mockResolvedValue({
id: "user_1",
name: "Alice",
email: "alice@example.com",
systemRole: SystemRole.MANAGER,
}),
},
});
await updateUserRole(ctx as never, {
id: "user_1",
systemRole: SystemRole.MANAGER,
});
expect(db.activeSession.deleteMany).not.toHaveBeenCalled();
expect(invalidateRoleDefaultsCache).not.toHaveBeenCalled();
});
});
describe("setUserPermissions", () => {
it("deletes active sessions and invalidates cache on every call", async () => {
const { ctx, db } = makeCtx({
user: {
findUnique: vi.fn().mockResolvedValue({
id: "user_1",
name: "Alice",
email: "alice@example.com",
permissionOverrides: null,
}),
update: vi.fn().mockResolvedValue({
id: "user_1",
name: "Alice",
email: "alice@example.com",
permissionOverrides: { granted: ["x"], denied: [] },
}),
},
});
await setUserPermissions(ctx as never, {
userId: "user_1",
overrides: { granted: ["x"], denied: [] },
});
expect(db.activeSession.deleteMany).toHaveBeenCalledWith({
where: { userId: "user_1" },
});
expect(invalidateRoleDefaultsCache).toHaveBeenCalledTimes(1);
});
});
describe("resetUserPermissions", () => {
it("deletes active sessions and invalidates cache", async () => {
const { ctx, db } = makeCtx({
user: {
findUnique: vi.fn().mockResolvedValue({
id: "user_1",
name: "Alice",
email: "alice@example.com",
permissionOverrides: { granted: ["x"], denied: [] },
}),
update: vi.fn().mockResolvedValue({
id: "user_1",
name: "Alice",
email: "alice@example.com",
permissionOverrides: null,
}),
},
});
await resetUserPermissions(ctx as never, { userId: "user_1" });
expect(db.activeSession.deleteMany).toHaveBeenCalledWith({
where: { userId: "user_1" },
});
expect(invalidateRoleDefaultsCache).toHaveBeenCalledTimes(1);
});
});
});
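The side-effect contract these tests pin down can be sketched in isolation. `applyRoleChangeSideEffects` is a hypothetical name; the real logic is inlined in `updateUserRole` / `setUserPermissions` / `resetUserPermissions` in `user-procedure-support.ts` alongside auditing.

```typescript
// Hypothetical sketch of the #57 side-effect contract, separated from the
// surrounding mutation logic for illustration.
interface Db {
  activeSession: {
    deleteMany(args: { where: { userId: string } }): Promise<{ count: number }>;
  };
}

async function applyRoleChangeSideEffects(
  db: Db,
  userId: string,
  privilegeStateChanged: boolean,
  invalidateRoleDefaultsCache: () => void,
): Promise<void> {
  if (!privilegeStateChanged) return; // no-op path: keep sessions and cache
  // 1. Drop every ActiveSession row; the next request fails the jti check
  //    and must re-authenticate under the new role/permissions.
  await db.activeSession.deleteMany({ where: { userId } });
  // 2. Fan out cache invalidation so peer instances drop their
  //    role-defaults cache instead of waiting out the 10 s TTL.
  invalidateRoleDefaultsCache();
}
```

Ordering matters: sessions are revoked before the cache fan-out, so a request racing the mutation can at worst see stale defaults briefly, never a stale session.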
@@ -49,12 +49,20 @@ vi.mock("otpauth", () => {
const createCaller = createCallerFactory(userRouter);
function createAdminCaller(db: Record<string, unknown>) {
// Provide a no-op activeSession stub by default — some mutation paths
// (setPermissions / resetPermissions / updateRole, see ticket #57) now
// invalidate active sessions to force a re-login on privilege changes.
// Individual tests can override by passing their own `activeSession` key.
const dbWithDefaults = {
activeSession: { deleteMany: vi.fn().mockResolvedValue({ count: 0 }) },
...db,
};
return createCaller({
session: {
user: { email: "admin@example.com", name: "Admin", image: null },
expires: "2099-01-01T00:00:00.000Z",
},
db: db as never,
db: dbWithDefaults as never,
dbUser: {
id: "user_admin",
systemRole: SystemRole.ADMIN,
@@ -716,19 +724,26 @@ describe("user profile and TOTP self-service", () => {
totpEnabled: false,
});
const update = vi.fn().mockResolvedValue({});
const updateMany = vi.fn().mockResolvedValue({ count: 1 });
const caller = createAdminCaller({
user: {
findUnique,
update,
updateMany,
},
});
const result = await caller.verifyAndEnableTotp({ token: "123456" });
expect(result).toEqual({ enabled: true });
// lastTotpAt is written atomically by updateMany (the replay guard);
// user.update only toggles the enabled flag after the CAS succeeds.
expect(updateMany).toHaveBeenCalledWith(
expect.objectContaining({ data: { lastTotpAt: expect.any(Date) } }),
);
expect(update).toHaveBeenCalledWith({
where: { id: "user_admin" },
data: { totpEnabled: true, lastTotpAt: expect.any(Date) },
data: { totpEnabled: true },
});
});
@@ -743,10 +758,12 @@ describe("user profile and TOTP self-service", () => {
lastTotpAt: null,
});
const update = vi.fn().mockResolvedValue({});
const updateMany = vi.fn().mockResolvedValue({ count: 1 });
const caller = createAdminCaller({
user: {
findUnique,
update,
updateMany,
},
});
@@ -757,10 +774,9 @@ describe("user profile and TOTP self-service", () => {
where: { id: "user_admin" },
select: { id: true, totpSecret: true, totpEnabled: true, lastTotpAt: true },
});
expect(update).toHaveBeenCalledWith({
where: { id: "user_admin" },
data: { lastTotpAt: expect.any(Date) },
});
expect(updateMany).toHaveBeenCalledWith(
expect.objectContaining({ data: { lastTotpAt: expect.any(Date) } }),
);
});
it("rejects invalid login-flow TOTP tokens with UNAUTHORIZED", async () => {
@@ -71,6 +71,7 @@ function makeSelfServiceCtx(dbOverrides: Record<string, unknown> = {}) {
user: {
findUnique: vi.fn(),
update: vi.fn().mockResolvedValue({}),
updateMany: vi.fn().mockResolvedValue({ count: 1 }),
...((dbOverrides.user as object | undefined) ?? {}),
},
auditLog: {
@@ -96,6 +97,7 @@ function makePublicCtx(overrides: Record<string, unknown> = {}) {
user: {
findUnique: vi.fn(),
update: vi.fn().mockResolvedValue({}),
updateMany: vi.fn().mockResolvedValue({ count: 1 }),
...((overrides.user as object | undefined) ?? {}),
},
},
@@ -152,9 +154,12 @@ describe("verifyAndEnableTotp", () => {
token: "123456",
});
expect(result).toEqual({ enabled: true });
expect(ctx.db.user.updateMany).toHaveBeenCalledWith(
expect.objectContaining({ data: { lastTotpAt: expect.any(Date) } }),
);
expect(ctx.db.user.update).toHaveBeenCalledWith({
where: { id: "user_1" },
data: { totpEnabled: true, lastTotpAt: expect.any(Date) },
data: { totpEnabled: true },
});
});
@@ -19,6 +19,24 @@ vi.mock("../lib/logger.js", () => ({
},
}));
// Dispatcher now resolves+validates DNS before opening the HTTPS socket.
// Mock node:dns/promises so tests do not require real network.
vi.mock("node:dns/promises", () => ({
lookup: vi.fn(async (_hostname: string, _opts?: unknown) => [
{ address: "93.184.216.34", family: 4 },
]),
}));
// Mock node:https so we never open a real socket. The dispatcher calls
// https.request(opts, cb); we return a minimal EventEmitter-like stub.
const { httpsRequestMock } = vi.hoisted(() => ({
httpsRequestMock: vi.fn(),
}));
vi.mock("node:https", () => ({
Agent: vi.fn(() => ({})),
request: httpsRequestMock,
}));
describe("webhook dispatcher logging", () => {
beforeEach(() => {
vi.clearAllMocks();
@@ -82,11 +100,19 @@ describe("webhook dispatcher logging", () => {
});
it("treats non-2xx HTTP webhook responses as delivery failures", async () => {
const fetchMock = vi.fn().mockResolvedValue({
ok: false,
status: 500,
});
vi.stubGlobal("fetch", fetchMock);
// Stub https.request to deliver a 500 response synchronously via the
// response callback, so the dispatcher sees a non-2xx and logs a warn.
httpsRequestMock.mockImplementation(
(_opts: unknown, cb: (res: { statusCode: number; resume: () => void }) => void) => {
queueMicrotask(() => cb({ statusCode: 500, resume: () => {} }));
return {
on: vi.fn(),
write: vi.fn(),
end: vi.fn(),
destroy: vi.fn(),
};
},
);
const db = {
webhook: {
@@ -117,6 +143,66 @@ describe("webhook dispatcher logging", () => {
);
});
expect(fetchMock).toHaveBeenCalledTimes(1);
expect(httpsRequestMock).toHaveBeenCalledTimes(1);
// The request must still dial with the real hostname for SNI/Host; the
// pinned-IP lookup override itself is exercised in the dedicated test below.
const firstCall = httpsRequestMock.mock.calls[0]![0] as {
host: string;
servername: string;
agent: { lookup?: unknown };
};
expect(firstCall.host).toBe("example.com");
expect(firstCall.servername).toBe("example.com");
});
it("pins the validated IP via the HTTPS Agent.lookup override (DNS-rebind defence)", async () => {
const { Agent } = await import("node:https");
const AgentMock = vi.mocked(Agent);
AgentMock.mockClear();
httpsRequestMock.mockImplementation(
(_opts: unknown, cb: (res: { statusCode: number; resume: () => void }) => void) => {
queueMicrotask(() => cb({ statusCode: 204, resume: () => {} }));
return {
on: vi.fn(),
write: vi.fn(),
end: vi.fn(),
destroy: vi.fn(),
};
},
);
const db = {
webhook: {
findMany: vi.fn().mockResolvedValue([
{
id: "wh_rebind_1",
name: "Pinned Webhook",
url: "https://example.com/hook",
secret: null,
events: ["project.created"],
},
]),
},
};
dispatchWebhooks(db, "project.created", { id: "p1" });
await vi.waitFor(() => expect(httpsRequestMock).toHaveBeenCalledTimes(1));
expect(AgentMock).toHaveBeenCalledTimes(1);
const agentOptions = AgentMock.mock.calls[0]![0] as {
lookup?: (
host: string,
opts: unknown,
cb: (err: null, addr: string, family: number) => void,
) => void;
};
expect(typeof agentOptions.lookup).toBe("function");
// Invoke the lookup override to confirm it returns the pre-validated IP,
// NOT whatever DNS might be returning right now.
const cb = vi.fn();
agentOptions.lookup!("example.com", {}, cb);
expect(cb).toHaveBeenCalledWith(null, "93.184.216.34", 4);
});
});
@@ -0,0 +1,58 @@
import { beforeEach, describe, expect, it, vi } from "vitest";
import { consumeTotpWindow } from "../totp-consume.js";
describe("consumeTotpWindow — atomic replay guard", () => {
let updateMany: ReturnType<typeof vi.fn>;
let db: { user: { updateMany: typeof updateMany } };
beforeEach(() => {
updateMany = vi.fn();
db = { user: { updateMany } };
});
it("returns true when the update affected a row", async () => {
updateMany.mockResolvedValue({ count: 1 });
await expect(consumeTotpWindow(db, "user-1")).resolves.toBe(true);
});
it("returns false when another concurrent request already consumed the window", async () => {
updateMany.mockResolvedValue({ count: 0 });
await expect(consumeTotpWindow(db, "user-1")).resolves.toBe(false);
});
it("issues a WHERE clause that only updates null or older-than-30-s rows", async () => {
updateMany.mockResolvedValue({ count: 1 });
const now = new Date("2026-04-17T12:00:30.000Z");
await consumeTotpWindow(db, "user-1", now);
expect(updateMany).toHaveBeenCalledTimes(1);
const call = updateMany.mock.calls[0]![0] as {
where: { id: string; OR: Array<{ lastTotpAt: unknown }> };
data: { lastTotpAt: Date };
};
expect(call.where.id).toBe("user-1");
expect(call.where.OR).toEqual([
{ lastTotpAt: null },
{ lastTotpAt: { lt: new Date("2026-04-17T12:00:00.000Z") } },
]);
expect(call.data.lastTotpAt).toEqual(now);
});
it("simulated race: two parallel calls — exactly one wins", async () => {
// Model Postgres row-lock serialisation: the first updateMany to land
// sees count=1, the second (in the same 30-s window) sees count=0.
let served = 0;
updateMany.mockImplementation(async () => {
await new Promise((r) => setTimeout(r, 1));
return { count: served++ === 0 ? 1 : 0 };
});
const [a, b] = await Promise.all([
consumeTotpWindow(db, "user-1"),
consumeTotpWindow(db, "user-1"),
]);
expect([a, b].sort()).toEqual([false, true]);
expect(updateMany).toHaveBeenCalledTimes(2);
});
});
@@ -20,6 +20,61 @@ interface CreateAuditEntryParams {
const INTERNAL_FIELDS = new Set(["id", "createdAt", "updatedAt"]);
// Field names whose values are never safe to persist into the audit log.
// Matching is case-insensitive and applied at every level of the object graph.
const SENSITIVE_FIELD_NAMES = new Set([
"password",
"newpassword",
"currentpassword",
"oldpassword",
"passwordhash",
"passwordconfirmation",
"confirmpassword",
"token",
"accesstoken",
"refreshtoken",
"sessiontoken",
"apikey",
"authorization",
"cookie",
"secret",
"totpsecret",
"backupcode",
"backupcodes",
]);
const REDACTED_PLACEHOLDER = "[REDACTED]";
const MAX_REDACT_DEPTH = 8;
/**
* Recursively strip values of fields whose names appear in SENSITIVE_FIELD_NAMES.
* Used to prevent password/token leaks into the audit log JSONB column.
*
* The pino logger has its own redact config for stdout; this function is the
* DB-write equivalent.
*/
function redactSensitive(value: unknown, depth: number = 0): unknown {
if (depth > MAX_REDACT_DEPTH) return value;
if (value === null || value === undefined) return value;
if (Array.isArray(value)) {
return value.map((v) => redactSensitive(v, depth + 1));
}
if (typeof value === "object") {
const out: Record<string, unknown> = {};
for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
if (SENSITIVE_FIELD_NAMES.has(k.toLowerCase())) {
out[k] = REDACTED_PLACEHOLDER;
} else {
out[k] = redactSensitive(v, depth + 1);
}
}
return out;
}
return value;
}
export const __test__ = { redactSensitive, SENSITIVE_FIELD_NAMES };
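The redactor above is easy to exercise in isolation. This is a trimmed, self-contained copy (shortened field list, same traversal) showing that matching is case-insensitive and applied at every level of the object graph:

```typescript
// Trimmed copy of the audit-log redactor above, for illustration only.
const SENSITIVE = new Set(["password", "totpsecret", "token"]);
const REDACTED = "[REDACTED]";
const MAX_DEPTH = 8;

function redactSensitive(value: unknown, depth = 0): unknown {
  if (depth > MAX_DEPTH) return value;
  if (value === null || value === undefined) return value;
  if (Array.isArray(value)) return value.map((v) => redactSensitive(v, depth + 1));
  if (typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[k] = SENSITIVE.has(k.toLowerCase()) ? REDACTED : redactSensitive(v, depth + 1);
    }
    return out;
  }
  return value;
}

const snapshot = {
  email: "alice@example.com",
  totpSecret: "JBSWY3DPEHPK3PXP",
  settings: { nested: { Password: "hunter2" } },
};
const safe = redactSensitive(snapshot) as typeof snapshot;
// totpSecret and the nested, differently-cased Password are replaced;
// non-sensitive fields like email survive untouched.
```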
/**
* Compare two snapshots and return only the changed fields.
* Skips internal fields (id, createdAt, updatedAt).
@@ -91,15 +146,34 @@ export function generateSummary(
*/
export async function createAuditEntry(params: CreateAuditEntryParams): Promise<void> {
try {
const { db, entityType, entityId, entityName, action, userId, before, after, source, metadata } = params;
const {
db,
entityType,
entityId,
entityName,
action,
userId,
before,
after,
source,
metadata,
} = params;
const auditLog = (db as Partial<PrismaClient>).auditLog;
if (!auditLog || typeof auditLog.create !== "function") {
return;
}
// Redact sensitive field values before anything else — diffs and summaries
// must all be derived from already-sanitised snapshots.
const safeBefore = before ? (redactSensitive(before) as Record<string, unknown>) : undefined;
const safeAfter = after ? (redactSensitive(after) as Record<string, unknown>) : undefined;
const safeMetadata = metadata
? (redactSensitive(metadata) as Record<string, unknown>)
: undefined;
// Compute diff if both snapshots are available
const diff = before && after ? computeDiff(before, after) : undefined;
const diff = safeBefore && safeAfter ? computeDiff(safeBefore, safeAfter) : undefined;
// Skip UPDATE entries where nothing actually changed
if (action === "UPDATE" && diff && Object.keys(diff).length === 0) {
@@ -111,10 +185,10 @@ export async function createAuditEntry(params: CreateAuditEntryParams): Promise<
// Build the changes JSONB payload
const changes: Record<string, unknown> = {};
if (before) changes.before = before;
if (after) changes.after = after;
if (safeBefore) changes.before = safeBefore;
if (safeAfter) changes.after = safeAfter;
if (diff) changes.diff = diff;
if (metadata) changes.metadata = metadata;
if (safeMetadata) changes.metadata = safeMetadata;
await auditLog.create({
data: {
@@ -130,6 +204,9 @@ export async function createAuditEntry(params: CreateAuditEntryParams): Promise<
});
} catch (error) {
// Fire-and-forget: log but never propagate
logger.error({ err: error, entityType: params.entityType, entityId: params.entityId }, "Failed to create audit entry");
logger.error(
{ err: error, entityType: params.entityType, entityId: params.entityId },
"Failed to create audit entry",
);
}
}
@@ -1,44 +1,131 @@
/**
* SSRF guard for outbound webhook URLs.
*
* Validates that a target URL is not pointing to internal/private infrastructure
* before allowing a webhook to be stored or dispatched.
* Blocks IPv4 RFC-1918, loopback, link-local, CGNAT, cloud-metadata IPs, as
* well as IPv6 loopback, link-local (fe80::/10), unique-local (fc00::/7), and
* IPv4-mapped IPv6 addresses (::ffff:...). Resolves the hostname with
* `all: true` so a DNS record returning multiple addresses is rejected if
* ANY of them is private — an attacker who adds a private A record alongside
* a public one cannot smuggle past by hoping the fetch picks the "good" IP.
*
* DNS-rebinding defence: callers that are about to open a connection should
* use `resolveAndValidate()` and then pass the returned `address` through
* a `lookup` override on their HTTPS agent so the TCP connect uses the
* validated IP, not a freshly-resolved one that the attacker may have
* flipped after the check. See `webhook-dispatcher.ts`.
*/
import { lookup } from "node:dns/promises";
import { lookup as dnsLookup } from "node:dns/promises";
import { isIP } from "node:net";
import { TRPCError } from "@trpc/server";
/** Regex patterns matching IP ranges that must not be targeted. */
const BLOCKED_IP_PATTERNS: RegExp[] = [
// Loopback IPv4
/^127\./,
// Loopback IPv6
/^::1$/,
// RFC 1918 private
/^10\./,
/^172\.(1[6-9]|2\d|3[01])\./,
/^192\.168\./,
// Link-local
/^169\.254\./,
// Cloud metadata (AWS, GCP, Azure)
/^100\.64\./,
const IPV4_BLOCK_PATTERNS: RegExp[] = [
/^0\./, // 0.0.0.0/8 — "this network"
/^10\./, // RFC 1918
/^100\.(6[4-9]|[7-9]\d|1[01]\d|12[0-7])\./, // 100.64.0.0/10 CGNAT
/^127\./, // loopback
/^169\.254\./, // link-local incl. AWS/Azure/GCP metadata 169.254.169.254
/^172\.(1[6-9]|2\d|3[01])\./, // RFC 1918
/^192\.0\.0\./, // RFC 6890 IETF protocol assignments
/^192\.0\.2\./, // TEST-NET-1
/^192\.168\./, // RFC 1918
/^198\.(1[89])\./, // 198.18.0.0/15 benchmarking
/^198\.51\.100\./, // TEST-NET-2
/^203\.0\.113\./, // TEST-NET-3
/^2(2[4-9]|3\d)\./, // 224.0.0.0/4 multicast
/^2(4\d|5[0-5])\./, // 240.0.0.0/4 reserved + 255.255.255.255 broadcast
];
/** Hostnames that must never be resolved or contacted. */
const BLOCKED_HOSTNAMES = new Set([
"localhost",
"metadata.google.internal",
"169.254.169.254",
]);
function isBlockedIp(ip: string): boolean {
return BLOCKED_IP_PATTERNS.some((re) => re.test(ip));
function isBlockedIpv4(ip: string): boolean {
return IPV4_BLOCK_PATTERNS.some((re) => re.test(ip));
}
/**
* Throws a TRPCError if the given URL targets internal/private infrastructure.
* Performs DNS resolution to catch attempts to bypass hostname checks.
* Expand an IPv6 address to its full 8-group form so prefix matches work
* reliably (::1 → 0000:0000:0000:0000:0000:0000:0000:0001).
*/
export async function assertWebhookUrlAllowed(urlString: string): Promise<void> {
function expandIpv6(ip: string): string {
const lower = ip.toLowerCase().replace(/%.*$/, ""); // strip zone-id
// Handle the IPv4-mapped suffix, e.g. ::ffff:192.168.0.1 → ::ffff:c0a8:1
// (groups are zero-padded to four digits below)
const ipv4MappedMatch = lower.match(/^(.*:)(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})$/);
let working = lower;
if (ipv4MappedMatch) {
const [, prefix, v4] = ipv4MappedMatch;
const parts = v4!.split(".").map((n) => Number.parseInt(n, 10));
if (parts.length === 4 && parts.every((n) => n >= 0 && n <= 255)) {
const hi = ((parts[0]! << 8) | parts[1]!).toString(16);
const lo = ((parts[2]! << 8) | parts[3]!).toString(16);
working = `${prefix}${hi}:${lo}`;
}
}
const parts = working.split("::");
const head = parts[0] === "" ? [] : parts[0]!.split(":");
const tail = parts.length > 1 ? (parts[1] === "" ? [] : parts[1]!.split(":")) : [];
const missing = 8 - head.length - tail.length;
const zeros = Array.from({ length: Math.max(0, missing) }, () => "0");
const full = parts.length === 1 ? head : [...head, ...zeros, ...tail];
return full.map((g) => g.padStart(4, "0")).join(":");
}
function isBlockedIpv6(ip: string): boolean {
const expanded = expandIpv6(ip);
// ::1 loopback
if (expanded === "0000:0000:0000:0000:0000:0000:0000:0001") return true;
// :: unspecified
if (expanded === "0000:0000:0000:0000:0000:0000:0000:0000") return true;
// IPv4-mapped ::ffff:0:0/96 — extract the embedded v4 and run the v4 check
if (expanded.startsWith("0000:0000:0000:0000:0000:ffff:")) {
const g6 = expanded.split(":")[6]!;
const g7 = expanded.split(":")[7]!;
const v4 = [
Number.parseInt(g6.slice(0, 2), 16),
Number.parseInt(g6.slice(2, 4), 16),
Number.parseInt(g7.slice(0, 2), 16),
Number.parseInt(g7.slice(2, 4), 16),
].join(".");
return isBlockedIpv4(v4);
}
// fc00::/7 unique-local — first byte starts with 1111110x → fc or fd
if (/^f[cd]/.test(expanded)) return true;
// fe80::/10 link-local — first 10 bits 1111111010 → fe80..febf
if (/^fe[89ab]/.test(expanded)) return true;
// ff00::/8 multicast
if (/^ff/.test(expanded)) return true;
// 2001:db8::/32 documentation
if (expanded.startsWith("2001:0db8:")) return true;
return false;
}
function isBlockedIp(ip: string): boolean {
const family = isIP(ip);
if (family === 4) return isBlockedIpv4(ip);
if (family === 6) return isBlockedIpv6(ip);
// Not a valid IP — err on the side of caution.
return true;
}
const BLOCKED_HOSTNAMES = new Set([
"localhost",
"ip6-localhost",
"ip6-loopback",
"metadata.google.internal",
"metadata.goog",
"169.254.169.254",
]);
export interface ResolvedHost {
hostname: string;
/** The pre-validated address to dial. */
address: string;
family: 4 | 6;
}
/**
* Resolve the given URL's hostname, validate every address against the
* SSRF blocklist, and return the first valid address for connection pinning.
* Rejects the URL if ANY resolved address is private — an attacker cannot
* evade by adding a private A record to a public-looking hostname.
*/
export async function resolveAndValidate(urlString: string): Promise<ResolvedHost> {
let parsed: URL;
try {
parsed = new URL(urlString);
@@ -50,21 +137,55 @@ export async function assertWebhookUrlAllowed(urlString: string): Promise<void>
throw new TRPCError({ code: "BAD_REQUEST", message: "Webhook URLs must use HTTPS." });
}
const hostname = parsed.hostname.toLowerCase();
const hostname = parsed.hostname.toLowerCase().replace(/^\[|\]$/g, "");
if (BLOCKED_HOSTNAMES.has(hostname)) {
throw new TRPCError({ code: "BAD_REQUEST", message: "Webhook URL target is not allowed." });
}
// Resolve hostname and validate the resulting IP address
try {
const { address } = await lookup(hostname);
if (isBlockedIp(address) || BLOCKED_HOSTNAMES.has(address)) {
throw new TRPCError({ code: "BAD_REQUEST", message: "Webhook URL target is not allowed." });
// Literal IP hostnames: validate directly without DNS.
const literalFamily = isIP(hostname);
if (literalFamily !== 0) {
if (isBlockedIp(hostname)) {
throw new TRPCError({
code: "BAD_REQUEST",
message: "Webhook URL target is not allowed.",
});
}
} catch (err) {
if (err instanceof TRPCError) throw err;
// DNS resolution failed — block by default (fail-closed)
return { hostname, address: hostname, family: literalFamily as 4 | 6 };
}
let addresses: Array<{ address: string; family: number }>;
try {
addresses = await dnsLookup(hostname, { all: true });
} catch {
throw new TRPCError({ code: "BAD_REQUEST", message: "Webhook URL could not be validated." });
}
if (addresses.length === 0) {
throw new TRPCError({ code: "BAD_REQUEST", message: "Webhook URL could not be validated." });
}
for (const { address } of addresses) {
if (isBlockedIp(address) || BLOCKED_HOSTNAMES.has(address)) {
throw new TRPCError({
code: "BAD_REQUEST",
message: "Webhook URL target is not allowed.",
});
}
}
const first = addresses[0]!;
return { hostname, address: first.address, family: first.family as 4 | 6 };
}
/**
* Throws a TRPCError if the given URL targets internal/private infrastructure.
* Preserved as a compatibility entrypoint for callers that only need the
* allow/deny decision without the pinned address.
*/
export async function assertWebhookUrlAllowed(urlString: string): Promise<void> {
await resolveAndValidate(urlString);
}
/** Exposed for unit tests. */
export const __test__ = { isBlockedIpv4, isBlockedIpv6, expandIpv6, isBlockedIp };
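For reference, the expansion logic can be checked standalone. This is a verbatim copy of `expandIpv6` from the guard above, run against a few representative inputs (loopback, an IPv4-mapped private address, and link-local):

```typescript
// Verbatim copy of expandIpv6 from ssrf-guard.ts above.
function expandIpv6(ip: string): string {
  const lower = ip.toLowerCase().replace(/%.*$/, ""); // strip zone-id
  const ipv4MappedMatch = lower.match(/^(.*:)(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})$/);
  let working = lower;
  if (ipv4MappedMatch) {
    const [, prefix, v4] = ipv4MappedMatch;
    const parts = v4!.split(".").map((n) => Number.parseInt(n, 10));
    if (parts.length === 4 && parts.every((n) => n >= 0 && n <= 255)) {
      const hi = ((parts[0]! << 8) | parts[1]!).toString(16);
      const lo = ((parts[2]! << 8) | parts[3]!).toString(16);
      working = `${prefix}${hi}:${lo}`;
    }
  }
  const parts = working.split("::");
  const head = parts[0] === "" ? [] : parts[0]!.split(":");
  const tail = parts.length > 1 ? (parts[1] === "" ? [] : parts[1]!.split(":")) : [];
  const missing = 8 - head.length - tail.length;
  const zeros = Array.from({ length: Math.max(0, missing) }, () => "0");
  const full = parts.length === 1 ? head : [...head, ...zeros, ...tail];
  return full.map((g) => g.padStart(4, "0")).join(":");
}

const loopback = expandIpv6("::1");
const mapped = expandIpv6("::ffff:192.168.0.1"); // RFC 1918 v4 embedded
const linkLocal = expandIpv6("fe80::1"); // matched by the fe80::/10 check
```

The stable full form is what makes the prefix checks in `isBlockedIpv6` (fc00::/7, fe80::/10, ff00::/8) reliable against abbreviated notations.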
@@ -0,0 +1,48 @@
// Atomic compare-and-swap for TOTP replay-window consumption.
//
// The old code path was: SELECT lastTotpAt → compare in JS → UPDATE. Two
// concurrent requests with the same valid 6-digit code both see a stale
// (or null) lastTotpAt, both pass the in-JS check, and both succeed. A
// stolen TOTP (shoulder-surf, phishing-proxy replay) is therefore usable
// twice within its 30 s window — the MFA design promise is violated.
//
// A single `updateMany` expresses the entire precondition in SQL: the WHERE
// clause guarantees the row has not been consumed in the last 30 s, and the
// SET writes the new timestamp. PostgreSQL's row-level lock serialises the two
// racing writes; whichever commits second sees rows-affected = 0 and the
// caller treats it as a replay.
//
// The 30 000 ms window matches the TOTP period (RFC 6238) — codes are
// validated with `window: 1` so adjacent periods are still accepted; the
// anti-replay check is the tighter per-code, per-user bound.
// Intentionally loose structural type — Prisma's generated signature is a
// deeply-inferred generic that does not simplify to a friendly shape; we only
// need updateMany() with the documented args and a `{ count }` result.
// Keeping the internal cast isolated here means every callsite stays
// strictly typed.
interface TotpConsumeDb {
user: {
updateMany: (args: {
where: { id: string; OR: Array<{ lastTotpAt: Date | { lt: Date } | null }> };
data: { lastTotpAt: Date };
}) => Promise<{ count: number }>;
};
}
export async function consumeTotpWindow(
db: { user: { updateMany: (...args: never[]) => unknown } },
userId: string,
now: Date = new Date(),
): Promise<boolean> {
const typed = db as unknown as TotpConsumeDb;
const windowStart = new Date(now.getTime() - 30_000);
const result = await typed.user.updateMany({
where: {
id: userId,
OR: [{ lastTotpAt: null }, { lastTotpAt: { lt: windowStart } }],
},
data: { lastTotpAt: now },
});
return result.count > 0;
}
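The compare-and-swap semantics above can be sketched against a hypothetical in-memory stand-in for `updateMany` (synchronous here for brevity; the real serialisation comes from Postgres row locks, which a mock cannot reproduce):

```typescript
// Hypothetical in-memory stand-in for prisma.user.updateMany that applies
// the same WHERE precondition the real query expresses in SQL.
interface Row {
  id: string;
  lastTotpAt: Date | null;
}

function makeDb(row: Row) {
  return {
    user: {
      updateMany(args: {
        where: { id: string; OR: [{ lastTotpAt: null }, { lastTotpAt: { lt: Date } }] };
        data: { lastTotpAt: Date };
      }): { count: number } {
        const windowStart = args.where.OR[1].lastTotpAt.lt;
        const eligible =
          args.where.id === row.id &&
          (row.lastTotpAt === null || row.lastTotpAt < windowStart);
        if (eligible) row.lastTotpAt = args.data.lastTotpAt;
        return { count: eligible ? 1 : 0 };
      },
    },
  };
}

// Same WHERE/SET shape as consumeTotpWindow above, minus the async plumbing.
function consumeTotpWindow(db: ReturnType<typeof makeDb>, userId: string, now: Date): boolean {
  const windowStart = new Date(now.getTime() - 30_000);
  const result = db.user.updateMany({
    where: { id: userId, OR: [{ lastTotpAt: null }, { lastTotpAt: { lt: windowStart } }] },
    data: { lastTotpAt: now },
  });
  return result.count > 0;
}

const t0 = new Date("2026-04-17T12:00:00Z");
const db = makeDb({ id: "user-1", lastTotpAt: null });
// First use consumes the window; a replay 1 s later is rejected;
// 31 s later the window has expired and a fresh code is accepted.
```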
@@ -7,9 +7,10 @@
* Fire-and-forget — errors are logged, never thrown.
*/
import { createHmac } from "node:crypto";
import { Agent, request } from "node:https";
import { logger } from "./logger.js";
import { sendSlackNotification } from "./slack-notify.js";
import { assertWebhookUrlAllowed } from "./ssrf-guard.js";
import { resolveAndValidate } from "./ssrf-guard.js";
/** Available webhook event types. */
export const WEBHOOK_EVENTS = [
@@ -27,9 +28,7 @@ export type WebhookEvent = (typeof WEBHOOK_EVENTS)[number];
interface MinimalDb {
webhook: {
findMany: (args: {
where: { isActive: boolean; events: { has: string } };
}) => Promise<
findMany: (args: { where: { isActive: boolean; events: { has: string } } }) => Promise<
Array<{
id: string;
name: string;
@@ -68,9 +67,7 @@ async function _dispatch(
const timestamp = new Date().toISOString();
const body = JSON.stringify({ event, timestamp, payload });
const promises = webhooks.map((wh) =>
_sendToWebhook(wh, event, body, timestamp, payload),
);
const promises = webhooks.map((wh) => _sendToWebhook(wh, event, body, timestamp, payload));
await Promise.allSettled(promises);
} catch (err) {
@@ -86,7 +83,12 @@ async function _sendToWebhook(
payload: Record<string, unknown>,
): Promise<void> {
try {
await assertWebhookUrlAllowed(wh.url);
// Resolve + validate ALL DNS records in a single pass and capture the
// first validated IP. The IP is then pinned at TCP-connect time via a
// custom `lookup` override on the HTTPS agent so a DNS rebind between
// the guard check and the socket `connect()` cannot redirect the dial
// to an internal address.
const resolved = await resolveAndValidate(wh.url);
// Slack-specific path: use the Slack notification helper.
// Use strict hostname match to prevent bypass via "hooks.slack.com.attacker.example.com".
@@ -101,32 +103,15 @@ async function _sendToWebhook(
"Content-Type": "application/json",
"X-Webhook-Event": event,
"X-Webhook-Timestamp": timestamp,
"Content-Length": Buffer.byteLength(body).toString(),
};
if (wh.secret) {
const signature = createHmac("sha256", wh.secret)
.update(body)
.digest("hex");
const signature = createHmac("sha256", wh.secret).update(body).digest("hex");
headers["X-Webhook-Signature"] = signature;
}
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 5_000);
try {
const response = await fetch(wh.url, {
method: "POST",
headers,
body,
signal: controller.signal,
});
if (!response.ok) {
throw new Error(`Webhook responded with HTTP ${response.status}`);
}
} finally {
clearTimeout(timeout);
}
await dispatchHttpsRequest(wh.url, resolved, headers, body);
} catch (err) {
logger.warn(
{ err, event, webhookId: wh.id, webhookName: wh.name, webhookUrl: wh.url },
@@ -135,13 +120,58 @@ async function _sendToWebhook(
}
}
/**
* Dispatch a POST to the resolved+validated target using a custom
* `https.Agent` whose DNS lookup is pinned to the address the guard
* already approved. The real hostname is still used for SNI/Host so
* certificate validation works unchanged.
*/
async function dispatchHttpsRequest(
url: string,
resolved: { address: string; family: 4 | 6 },
headers: Record<string, string>,
body: string,
): Promise<void> {
const parsed = new URL(url);
const pinnedAgent = new Agent({
keepAlive: false,
lookup: (_hostname, _opts, cb) => cb(null, resolved.address, resolved.family),
});
await new Promise<void>((resolve, reject) => {
const req = request(
{
host: parsed.hostname,
port: parsed.port || 443,
path: parsed.pathname + parsed.search,
method: "POST",
headers,
agent: pinnedAgent,
timeout: 5_000,
servername: parsed.hostname,
},
(res) => {
res.resume();
if (res.statusCode && res.statusCode >= 200 && res.statusCode < 300) {
resolve();
} else {
reject(new Error(`Webhook responded with HTTP ${res.statusCode}`));
}
},
);
req.on("timeout", () => {
req.destroy(new Error("Webhook request timed out"));
});
req.on("error", (err) => reject(err));
req.write(body);
req.end();
});
}
/**
* Format a human-readable Slack message from a webhook event.
*/
function formatSlackMessage(
event: string,
payload: Record<string, unknown>,
): string {
function formatSlackMessage(event: string, payload: Record<string, unknown>): string {
const label = event.replace(/\./g, " ").replace(/\b\w/g, (c) => c.toUpperCase());
const id = (payload["id"] as string) ?? (payload["projectId"] as string) ?? "";
const name = (payload["name"] as string) ?? "";
@@ -19,6 +19,43 @@ export class AssistantVisibleError extends Error {
}
}
// Signatures of raw Prisma / database errors that must never reach the LLM.
// We'd rather surface a generic "Invalid input" than leak column names, FK
// relation paths, or the offending value from a unique-constraint failure
// (which can include user PII on a second write attempt).
const PRISMA_LEAK_SIGNATURES = [
/Invalid\s+`prisma\./i,
/Unique constraint failed on the fields?:/i,
/Foreign key constraint failed on the field/i,
/An operation failed because it depends on one or more records/i,
/The column\s+`[^`]+`\s+does not exist/i,
/relation\s+"[^"]+"\s+does not exist/i,
/duplicate key value violates unique constraint/i,
/null value in column\s+"/i,
/violates (?:check|not-null|foreign key) constraint/i,
];
const SAFE_ERROR_FALLBACK = "Invalid input";
const MAX_ASSISTANT_ERROR_LENGTH = 500;
/**
* Sanitises a TRPCError / downstream error message before it's handed back
* to the LLM. Hand-written BAD_REQUEST / CONFLICT messages in routers are
* user-safe, but a subset of error paths pass raw Prisma text straight
* through — that would leak schema details (column names, relation paths,
* offending values) into chat context and, transitively, into audit JSONB.
*
* Strategy: regex-detect Prisma-flavoured signatures and replace with a
* generic fallback. Also hard-cap length as a belt-and-suspenders defence
* against stack-trace-like payloads.
*/
export function sanitizeAssistantErrorMessage(message: string): string {
if (!message) return SAFE_ERROR_FALLBACK;
if (message.length > MAX_ASSISTANT_ERROR_LENGTH) return SAFE_ERROR_FALLBACK;
if (PRISMA_LEAK_SIGNATURES.some((re) => re.test(message))) return SAFE_ERROR_FALLBACK;
return message;
}
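For illustration, here is a self-contained copy of the sanitiser (trimmed signature list, same logic) run on representative inputs — a leaked Prisma constraint message, a hand-written router message, and an oversized payload:

```typescript
// Trimmed copy of the sanitiser above; full signature list lives in the module.
const PRISMA_LEAK_SIGNATURES = [
  /Invalid\s+`prisma\./i,
  /Unique constraint failed on the fields?:/i,
  /Foreign key constraint failed on the field/i,
  /duplicate key value violates unique constraint/i,
];
const SAFE_ERROR_FALLBACK = "Invalid input";
const MAX_ASSISTANT_ERROR_LENGTH = 500;

function sanitizeAssistantErrorMessage(message: string): string {
  if (!message) return SAFE_ERROR_FALLBACK;
  if (message.length > MAX_ASSISTANT_ERROR_LENGTH) return SAFE_ERROR_FALLBACK;
  if (PRISMA_LEAK_SIGNATURES.some((re) => re.test(message))) return SAFE_ERROR_FALLBACK;
  return message;
}

// A leaked Prisma constraint message is replaced wholesale…
const leaked = sanitizeAssistantErrorMessage(
  "Unique constraint failed on the fields: (`email`)",
);
// …a hand-written router message passes through untouched…
const handWritten = sanitizeAssistantErrorMessage("Start date must be before end date.");
// …and anything stack-trace-sized trips the length cap.
const oversized = sanitizeAssistantErrorMessage("x".repeat(501));
```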
export function assertPermission(ctx: ToolContext, perm: PermissionKey): void {
if (!ctx.permissions.has(perm)) {
throw new AssistantVisibleError(
@@ -293,7 +330,7 @@ export function toAssistantTimelineMutationError(
}
if (error.code === "BAD_REQUEST" || error.code === "CONFLICT") {
return { error: error.message };
return { error: sanitizeAssistantErrorMessage(error.message) };
}
}
@@ -369,7 +406,7 @@ export function toAssistantProjectCreationError(
}
if (error.code === "BAD_REQUEST" || error.code === "UNPROCESSABLE_CONTENT") {
return { error: error.message };
return { error: sanitizeAssistantErrorMessage(error.message) };
}
}
@@ -612,7 +649,7 @@ export function toAssistantResourceCreationError(error: unknown): AssistantToolE
}
if (error.code === "BAD_REQUEST" || error.code === "UNPROCESSABLE_CONTENT") {
return { error: error.message };
return { error: sanitizeAssistantErrorMessage(error.message) };
}
if (error.code === "NOT_FOUND") {
@@ -770,7 +807,7 @@ export function toAssistantVacationCreationError(error: unknown): AssistantToolE
}
if (error.code === "BAD_REQUEST") {
return { error: error.message };
return { error: sanitizeAssistantErrorMessage(error.message) };
}
}
@@ -1219,7 +1256,7 @@ export function toAssistantTaskActionError(error: unknown): AssistantToolErrorRe
if (error.message === "Assignment is already CONFIRMED") {
return { error: "Assignment is already confirmed." };
}
return { error: error.message };
return { error: sanitizeAssistantErrorMessage(error.message) };
}
if (error instanceof TRPCError && error.code === "FORBIDDEN") {
@@ -5,6 +5,7 @@ import { z } from "zod";
import { findUniqueOrThrow } from "../db/helpers.js";
import { makeAuditLogger } from "../lib/audit-helpers.js";
import type { TRPCContext } from "../trpc.js";
import { invalidateRoleDefaultsCache } from "../trpc.js";
export const CreateUserInputSchema = z.object({
email: z.string().email(),
@@ -205,6 +206,16 @@ export async function updateUserRole(
select: { id: true, name: true, email: true, systemRole: true },
});
// Force re-login: a role change (especially a demotion) must revoke
// currently-issued JWTs. Our JWT middleware checks the jti against
// ActiveSession on every tRPC call, so wiping these rows invalidates
// every outstanding session for this user on the next request.
if (before.systemRole !== updated.systemRole) {
await ctx.db.activeSession.deleteMany({ where: { userId: updated.id } });
// Also nuke the per-instance role-defaults cache (cross-node via pub/sub).
invalidateRoleDefaultsCache();
}
audit({
entityType: "User",
entityId: updated.id,
@@ -385,6 +396,12 @@ export async function setUserPermissions(
select: { id: true, name: true, email: true, permissionOverrides: true },
});
// Permission overrides can remove access — force affected sessions to
// re-authenticate so the new override set is applied immediately rather
// than waiting for the TTL. Cross-node cache invalidation via pub/sub.
await ctx.db.activeSession.deleteMany({ where: { userId: input.userId } });
invalidateRoleDefaultsCache();
audit({
entityType: "User",
entityId: input.userId,
@@ -422,6 +439,11 @@ export async function resetUserPermissions(
select: { id: true, name: true, email: true, permissionOverrides: true },
});
// Reset may remove privileges that were `granted` via override — force
// re-login so the regression applies on the next request.
await ctx.db.activeSession.deleteMany({ where: { userId: input.userId } });
invalidateRoleDefaultsCache();
audit({
entityType: "User",
entityId: input.userId,
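The session-deletion calls above rely on the per-request jti check the comments refer to. Not part of the diff — a hedged sketch of that guard, where the claim and store shapes are assumptions and only the ActiveSession-by-jti idea comes from the source:

```typescript
// Sketch only: the real middleware lives in the tRPC layer.
interface JwtClaims { sub: string; jti: string; exp: number }

interface SessionStore {
  has(jti: string): Promise<boolean>; // row still present in ActiveSession?
}

// A signature-valid, unexpired token is still rejected once its
// ActiveSession row is gone — which is why deleteMany({ where: { userId } })
// acts as an immediate revocation rather than waiting for `exp`.
async function assertSessionAlive(claims: JwtClaims, sessions: SessionStore): Promise<void> {
  if (claims.exp * 1000 < Date.now()) throw new Error("UNAUTHORIZED: token expired");
  if (!(await sessions.has(claims.jti))) throw new Error("UNAUTHORIZED: session revoked");
}
```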
@@ -5,6 +5,7 @@ import { TRPCError } from "@trpc/server";
import { z } from "zod";
import { findUniqueOrThrow } from "../db/helpers.js";
import { createAuditEntry } from "../lib/audit.js";
import { consumeTotpWindow } from "../lib/totp-consume.js";
import { totpRateLimiter } from "../middleware/rate-limit.js";
import type { TRPCContext } from "../trpc.js";
@@ -235,8 +236,10 @@ export async function verifyAndEnableTotp(
throw new TRPCError({ code: "BAD_REQUEST", message: "Invalid TOTP token." });
}
-// Replay-attack prevention: reject if the same 30-second window was already used
-if (user.lastTotpAt != null && Date.now() - user.lastTotpAt.getTime() < 30_000) {
+// Atomic replay-guard: single UPDATE with WHERE-guard on lastTotpAt. See
+// packages/api/src/lib/totp-consume.ts for rationale.
+const accepted = await consumeTotpWindow(ctx.db, user.id);
+if (!accepted) {
throw new TRPCError({
code: "BAD_REQUEST",
message: "TOTP code already used. Wait for the next code.",
@@ -245,7 +248,7 @@ export async function verifyAndEnableTotp(
await (ctx.db.user.update as Function)({
where: { id: user.id },
-data: { totpEnabled: true, lastTotpAt: new Date() },
+data: { totpEnabled: true },
});
void createAuditEntry({
@@ -309,17 +312,12 @@ export async function verifyTotp(
throw new TRPCError({ code: "UNAUTHORIZED", message: "Invalid TOTP token." });
}
-// Replay-attack prevention: reject if the same 30-second window was already used
-if (user.lastTotpAt != null && Date.now() - user.lastTotpAt.getTime() < 30_000) {
+// Atomic replay-guard — see packages/api/src/lib/totp-consume.ts.
+const accepted = await consumeTotpWindow(ctx.db, user.id);
+if (!accepted) {
throw new TRPCError({ code: "UNAUTHORIZED", message: "Invalid TOTP token." });
}
-// Record successful TOTP use to prevent replay within the same window
-await (ctx.db.user.update as Function)({
-where: { id: user.id },
-data: { lastTotpAt: new Date() },
-});
return { valid: true };
}
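`totp-consume.ts` itself is not part of this diff. A hedged sketch of the WHERE-guarded single-UPDATE idea it is described as implementing — the Prisma-ish surface below is faked in memory, and every name is an assumption:

```typescript
const TOTP_WINDOW_MS = 30_000;

interface UserRow { id: string; lastTotpAt: Date | null }

// Minimal stand-in for the Prisma client. The point is the shape of the
// call: one updateMany whose WHERE clause only matches when the window is
// still open, so "check" and "record" cannot be interleaved by a racer.
interface DbLike {
  user: {
    updateMany(args: {
      where: { id: string; OR: [{ lastTotpAt: null }, { lastTotpAt: { lt: Date } }] };
      data: { lastTotpAt: Date };
    }): Promise<{ count: number }>;
  };
}

// Returns true iff this user's 30-second window had not been consumed yet.
async function consumeTotpWindow(db: DbLike, userId: string): Promise<boolean> {
  const threshold = new Date(Date.now() - TOTP_WINDOW_MS);
  const res = await db.user.updateMany({
    where: { id: userId, OR: [{ lastTotpAt: null }, { lastTotpAt: { lt: threshold } }] },
    data: { lastTotpAt: new Date() },
  });
  return res.count === 1;
}

// In-memory fake honouring the same WHERE semantics, for demonstration only.
function makeFakeDb(rows: UserRow[]): DbLike {
  return {
    user: {
      async updateMany({ where, data }) {
        const row = rows.find((r) => r.id === where.id);
        if (!row) return { count: 0 };
        const lt = where.OR[1].lastTotpAt.lt;
        const open = row.lastTotpAt === null || row.lastTotpAt.getTime() < lt.getTime();
        if (!open) return { count: 0 };
        row.lastTotpAt = data.lastTotpAt;
        return { count: 1 };
      },
    },
  };
}
```

Under the old read-then-write version, two concurrent requests could both read a stale `lastTotpAt` and both pass; with the guard, exactly one UPDATE matches.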
@@ -1,7 +1,9 @@
import { prisma, Prisma } from "@capakraken/db";
import { resolvePermissions, PermissionKey, SystemRole } from "@capakraken/shared";
import { initTRPC, TRPCError } from "@trpc/server";
import { Redis } from "ioredis";
import { ZodError } from "zod";
import { logger } from "./lib/logger.js";
import { assertNoDevBypassInProduction, isE2eBypassActive } from "./lib/runtime-security.js";
import { loggingMiddleware } from "./middleware/logging.js";
import { apiRateLimiter } from "./middleware/rate-limit.js";
@@ -24,12 +26,87 @@ export interface TRPCContext {
clientIp: string | null;
}
-// Cache role defaults for 60 seconds to avoid DB hit on every request
+// Cache role defaults for 10 seconds. Short TTL is the fail-safe in case the
+// Redis pub/sub invalidation below is down — even without cross-node
+// invalidation the staleness window is bounded to 10 s for any revocation.
let _roleDefaultsCache: Record<string, PermissionKey[]> | null = null;
let _roleDefaultsCacheTime = 0;
-const ROLE_DEFAULTS_TTL = 60_000;
+const ROLE_DEFAULTS_TTL = 10_000;
// ─── Cross-instance cache invalidation via Redis pub/sub ──────────────────────
// Without this, `invalidateRoleDefaultsCache()` only clears the in-memory cache
// on the node that invoked it. Other nodes keep serving stale permissions for
// up to ROLE_DEFAULTS_TTL after a revocation, which is a real RBAC risk in
// multi-instance deployments (admin demotion, permission-override removal).
//
// We publish a single invalidate message per change; every node subscribes and
// clears its local cache on receipt. Failure to publish/subscribe is logged
// but never thrown — the TTL above is the fall-back.
const RBAC_INVALIDATE_CHANNEL = "capakraken:rbac-invalidate";
let _rbacPublisher: Redis | null = null;
let _rbacSubscriber: Redis | null = null;
let _rbacSubscriberInitialized = false;
function rbacRedisUrl(): string | null {
return process.env["REDIS_URL"] ?? null;
}
function getRbacPublisher(): Redis | null {
const url = rbacRedisUrl();
if (!url) return null;
if (!_rbacPublisher) {
try {
_rbacPublisher = new Redis(url, { lazyConnect: false, enableReadyCheck: false });
_rbacPublisher.on("error", (err: unknown) => {
logger.warn({ err, channel: RBAC_INVALIDATE_CHANNEL }, "RBAC Redis publisher error");
});
} catch (err) {
logger.warn(
{ err },
"RBAC Redis publisher init failed; cache invalidation will be local-only",
);
_rbacPublisher = null;
}
}
return _rbacPublisher;
}
function ensureRbacSubscriber(): void {
if (_rbacSubscriberInitialized) return;
const url = rbacRedisUrl();
if (!url) return;
_rbacSubscriberInitialized = true;
try {
_rbacSubscriber = new Redis(url, { lazyConnect: false, enableReadyCheck: false });
_rbacSubscriber.on("error", (err: unknown) => {
logger.warn({ err, channel: RBAC_INVALIDATE_CHANNEL }, "RBAC Redis subscriber error");
});
void _rbacSubscriber.subscribe(RBAC_INVALIDATE_CHANNEL).catch((err: unknown) => {
logger.warn({ err, channel: RBAC_INVALIDATE_CHANNEL }, "RBAC Redis subscribe failed");
});
_rbacSubscriber.on("message", (_channel: string, _message: string) => {
// Any message on this channel means "someone mutated role/permission
// state — drop our local view now". Body is ignored; the next request
// re-reads from DB.
_roleDefaultsCache = null;
_roleDefaultsCacheTime = 0;
});
} catch (err) {
logger.warn(
{ err },
"RBAC Redis subscriber init failed; cache invalidation will be local-only",
);
}
}
export async function loadRoleDefaults(): Promise<Record<string, PermissionKey[]>> {
// Lazy-init the peer-invalidation subscriber on first use. Doing this at
// first call (not module load) means test files that never touch RBAC never
// open a Redis connection, and env changes set up by specific tests are
// observed rather than snapshotted at import time.
ensureRbacSubscriber();
const now = Date.now();
if (_roleDefaultsCache && now - _roleDefaultsCacheTime < ROLE_DEFAULTS_TTL) {
return _roleDefaultsCache;
@@ -46,10 +123,28 @@ export async function loadRoleDefaults(): Promise<Record<string, PermissionKey[]
return map;
}
-/** Invalidate the role defaults cache (call after updating SystemRoleConfig) */
+/**
+* Invalidate the role defaults cache on every running instance.
+*
+* Clears the local cache immediately and publishes a Redis message so peer
+* instances clear theirs too. If Redis is unavailable, only the local cache
+* is cleared — the 10 s TTL caps staleness on other nodes.
+*
+* Call this after mutating SystemRoleConfig, User.systemRole, or
+* User.permissionOverrides.
+*/
export function invalidateRoleDefaultsCache(): void {
_roleDefaultsCache = null;
_roleDefaultsCacheTime = 0;
const pub = getRbacPublisher();
if (!pub) return;
void pub.publish(RBAC_INVALIDATE_CHANNEL, "1").catch((err: unknown) => {
logger.warn(
{ err, channel: RBAC_INVALIDATE_CHANNEL },
"RBAC invalidation publish rejected — peer instances will rely on TTL",
);
});
}
export function createTRPCContext(opts: {
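The moving parts above reduce to a small pattern. Here it is with an in-memory bus standing in for Redis — a simplified sketch, not the real wiring:

```typescript
// Two "instances" sharing a bus: publish() plays the role of Redis PUBLISH,
// subscribe() the channel subscription. Every node drops its local cache on
// any message; the body is ignored, exactly as in the real handler.
type Handler = (msg: string) => void;

class Bus {
  private handlers: Handler[] = [];
  subscribe(h: Handler): void { this.handlers.push(h); }
  publish(msg: string): void { for (const h of this.handlers) h(msg); }
}

class Instance {
  cache: string[] | null = null;
  private bus: Bus;
  constructor(bus: Bus) {
    this.bus = bus;
    bus.subscribe(() => { this.cache = null; });
  }
  load(): string[] {
    // Stands in for the DB read in loadRoleDefaults().
    if (this.cache === null) this.cache = ["projects:read", "projects:write"];
    return this.cache;
  }
  invalidate(): void {
    this.cache = null;     // local clear is immediate
    this.bus.publish("1"); // peers clear on receipt; the TTL is the fallback
  }
}
```

If the publish fails, only the local node clears and peers fall back to the 10 s TTL — the same degradation the real code logs and tolerates.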
@@ -0,0 +1,117 @@
import { describe, expect, it } from "vitest";
import { FieldType, type BlueprintFieldDefinition } from "@capakraken/shared";
import {
isSuspectRegexPattern,
validateCustomFields,
MAX_PATTERN_LENGTH,
MAX_REGEX_INPUT_LENGTH,
} from "../blueprint/validator.js";
describe("blueprint validator — ReDoS hardening (#52)", () => {
describe("isSuspectRegexPattern", () => {
it("flags classic nested-quantifier shapes", () => {
expect(isSuspectRegexPattern("(a+)+")).toBe(true);
expect(isSuspectRegexPattern("(a*)*")).toBe(true);
expect(isSuspectRegexPattern("(a+)*")).toBe(true);
expect(isSuspectRegexPattern("(a*)+")).toBe(true);
expect(isSuspectRegexPattern("(.+)*")).toBe(true);
expect(isSuspectRegexPattern("(.*)+")).toBe(true);
});
it("flags grouped bounded-quantifier shapes", () => {
expect(isSuspectRegexPattern("(a{2,})+")).toBe(true);
expect(isSuspectRegexPattern("(a{2,5})*")).toBe(true);
});
it("flags the canonical ReDoS sample ^(a+)+$", () => {
expect(isSuspectRegexPattern("^(a+)+$")).toBe(true);
});
it("flags non-capturing groups too", () => {
expect(isSuspectRegexPattern("(?:a+)+")).toBe(true);
});
it("flags over-long patterns (DoS via compile cost)", () => {
const long = "a".repeat(MAX_PATTERN_LENGTH + 1);
expect(isSuspectRegexPattern(long)).toBe(true);
});
it("allows common safe patterns", () => {
expect(isSuspectRegexPattern("^[a-z]+$")).toBe(false);
expect(isSuspectRegexPattern("^\\d{3}-\\d{4}$")).toBe(false);
expect(isSuspectRegexPattern("[A-Z0-9_]+")).toBe(false);
expect(isSuspectRegexPattern("^https?://")).toBe(false);
expect(isSuspectRegexPattern("^[^\\s@]+@[^\\s@]+\\.[^\\s@]+$")).toBe(false);
});
});
describe("validateCustomFields with ReDoS pattern", () => {
const fieldDefs: BlueprintFieldDefinition[] = [
{
id: "f1",
label: "Test Field",
key: "test",
type: FieldType.TEXT,
required: false,
order: 0,
validation: { pattern: "^(a+)+$" },
} as BlueprintFieldDefinition,
];
it("rejects a suspect pattern immediately without running RegExp", () => {
// Craft the classic ReDoS input: many 'a's followed by a non-matching
// char. If the code ran RegExp.test unguarded, this would hang for
// seconds. Because the pattern is rejected at validation time, we
// get a fast failure.
const attackInput = "a".repeat(30) + "!";
const t0 = Date.now();
const errors = validateCustomFields(fieldDefs, { test: attackInput });
const elapsed = Date.now() - t0;
expect(errors).toHaveLength(1);
expect(errors[0]?.key).toBe("test");
// Must complete in < 50 ms — well below the budget set by the
// ticket's acceptance criteria.
expect(elapsed).toBeLessThan(50);
});
it("still validates benign patterns correctly", () => {
const safeFieldDefs: BlueprintFieldDefinition[] = [
{
...fieldDefs[0]!,
validation: { pattern: "^[a-z]+$" },
} as BlueprintFieldDefinition,
];
expect(validateCustomFields(safeFieldDefs, { test: "hello" })).toEqual([]);
const errors = validateCustomFields(safeFieldDefs, { test: "HELLO" });
expect(errors).toHaveLength(1);
});
it("caps input length before regex.test() (belt-and-suspenders)", () => {
// Even with a benign pattern, a 10 MB input would be slow to match.
// The validator slices to MAX_REGEX_INPUT_LENGTH first.
const safeFieldDefs: BlueprintFieldDefinition[] = [
{
...fieldDefs[0]!,
validation: { pattern: "^[a-z]+$" },
} as BlueprintFieldDefinition,
];
const huge = "a".repeat(MAX_REGEX_INPUT_LENGTH * 3);
const t0 = Date.now();
const errors = validateCustomFields(safeFieldDefs, { test: huge });
const elapsed = Date.now() - t0;
expect(errors).toEqual([]);
expect(elapsed).toBeLessThan(50);
});
it("handles syntactically-invalid patterns without throwing", () => {
const badFieldDefs: BlueprintFieldDefinition[] = [
{
...fieldDefs[0]!,
validation: { pattern: "[unclosed" },
} as BlueprintFieldDefinition,
];
const errors = validateCustomFields(badFieldDefs, { test: "any" });
expect(errors).toHaveLength(1);
});
});
});
@@ -5,6 +5,35 @@ export interface CustomFieldValidationError {
message: string;
}
// ReDoS hardening: the blueprint field `pattern` is admin-editable. A
// catastrophic-backtracking pattern like `^(a+)+$` against a crafted input
// can freeze the event loop for multiple seconds per request. We bound the
// attack surface on both axes:
//
// 1. Pattern length capped at 200 chars (see blueprint.schema.ts too).
// 2. Input length capped at 4096 chars before regex.test() — even a bad
// pattern on a short input completes in < 50 ms.
// 3. A cheap heuristic rejects obvious nested-quantifier shapes at
// validation time so malicious patterns simply don't match.
const MAX_PATTERN_LENGTH = 200;
const MAX_REGEX_INPUT_LENGTH = 4_096;
// Heuristic: reject grouped subexpressions that contain a quantifier AND
// are themselves wrapped in an outer quantifier — the shape of the
// classical ReDoS patterns ((a+)+, (.*)+, (?:a{2,})* etc.). This
// over-approximates: it may reject some benign patterns that happen to
// look this way, which is acceptable for admin-side form validation.
export function isSuspectRegexPattern(pattern: string): boolean {
if (pattern.length > MAX_PATTERN_LENGTH) return true;
// Match: open paren, any non-close-paren chars containing an unbounded
// quantifier (+, *, or {n,}), then close paren, then an outer quantifier
// (+, *, ?, or {).
const nestedQuantifier = /\([^)]*(?:[+*]|\{\d+,\d*\})[^)]*\)[+*?{]/;
return nestedQuantifier.test(pattern);
}
export { MAX_PATTERN_LENGTH, MAX_REGEX_INPUT_LENGTH };
/**
* Validates a `dynamicFields` record against an array of BlueprintFieldDefinitions.
* Returns an array of errors (empty = valid).
@@ -35,10 +64,16 @@ export function validateCustomFields(
if (validation) {
const num = Number(value);
if (validation.min !== undefined && num < validation.min) {
-errors.push({ key: def.key, message: `${def.label} must be at least ${validation.min}` });
+errors.push({
+key: def.key,
+message: `${def.label} must be at least ${validation.min}`,
+});
}
if (validation.max !== undefined && num > validation.max) {
-errors.push({ key: def.key, message: `${def.label} must be at most ${validation.max}` });
+errors.push({
+key: def.key,
+message: `${def.label} must be at most ${validation.max}`,
+});
}
}
break;
@@ -65,7 +100,10 @@ export function validateCustomFields(
const validSet = new Set(def.options.map((o) => o.value));
const invalid = (value as string[]).filter((v) => !validSet.has(v));
if (invalid.length > 0) {
-errors.push({ key: def.key, message: `${def.label} contains invalid values: ${invalid.join(", ")}` });
+errors.push({
+key: def.key,
+message: `${def.label} contains invalid values: ${invalid.join(", ")}`,
+});
}
}
break;
@@ -90,13 +128,46 @@ export function validateCustomFields(
const v = def.validation;
if (v) {
if (v.minLength !== undefined && strVal.length < v.minLength) {
-errors.push({ key: def.key, message: v.message ?? `${def.label} must be at least ${v.minLength} characters` });
+errors.push({
+key: def.key,
+message: v.message ?? `${def.label} must be at least ${v.minLength} characters`,
+});
}
if (v.maxLength !== undefined && strVal.length > v.maxLength) {
-errors.push({ key: def.key, message: v.message ?? `${def.label} must be at most ${v.maxLength} characters` });
+errors.push({
+key: def.key,
+message: v.message ?? `${def.label} must be at most ${v.maxLength} characters`,
+});
}
+if (v.pattern !== undefined) {
+// ReDoS defence: reject suspect patterns OUTRIGHT (counts as
+// validation failure so the admin sees a clear error) and cap
+// the input before regex.test() to bound runtime even if an
+// unsafe pattern somehow slipped through save-time validation.
+if (isSuspectRegexPattern(v.pattern)) {
+errors.push({
+key: def.key,
+message: v.message ?? `${def.label} pattern rejected (unsafe)`,
+});
+} else {
+const capped =
+strVal.length > MAX_REGEX_INPUT_LENGTH
+? strVal.slice(0, MAX_REGEX_INPUT_LENGTH)
+: strVal;
+let matched = false;
+try {
+matched = new RegExp(v.pattern).test(capped);
+} catch {
+// Invalid regex syntax — treat as validation failure.
+matched = false;
+}
+if (!matched) {
+errors.push({
+key: def.key,
+message: v.message ?? `${def.label} has an invalid format`,
+});
+}
+}
-if (v.pattern !== undefined && !new RegExp(v.pattern).test(strVal)) {
-errors.push({ key: def.key, message: v.message ?? `${def.label} has an invalid format` });
-}
}
break;
@@ -110,10 +181,20 @@ export function validateCustomFields(
const v = def.validation;
if (v) {
if (v.min !== undefined && dateVal.getTime() < new Date(v.min).getTime()) {
-errors.push({ key: def.key, message: v.message ?? `${def.label} must not be before ${new Date(v.min).toLocaleDateString()}` });
+errors.push({
+key: def.key,
+message:
+v.message ??
+`${def.label} must not be before ${new Date(v.min).toLocaleDateString()}`,
+});
}
if (v.max !== undefined && dateVal.getTime() > new Date(v.max).getTime()) {
-errors.push({ key: def.key, message: v.message ?? `${def.label} must not be after ${new Date(v.max).toLocaleDateString()}` });
+errors.push({
+key: def.key,
+message:
+v.message ??
+`${def.label} must not be after ${new Date(v.max).toLocaleDateString()}`,
+});
}
}
}
@@ -30,19 +30,37 @@ export const FieldOptionSchema = z.object({
color: z.string().optional(),
});
// ReDoS defence: patterns are admin-editable and get passed to `new RegExp`
// at field-validation time. Cap the length and reject obviously-unsafe
// shapes at save time. Same heuristic as
// @capakraken/engine::isSuspectRegexPattern; kept in-sync to avoid a
// shared→engine dep cycle.
const RE_DOS_SAFE_PATTERN = /\([^)]*(?:[+*]|\{\d+,\d*\})[^)]*\)[+*?{]/;
export const FieldValidationSchema = z.object({
min: z.number().optional(),
max: z.number().optional(),
minLength: z.number().int().optional(),
maxLength: z.number().int().optional(),
-pattern: z.string().optional(),
-message: z.string().optional(),
+pattern: z
+.string()
+.max(200, "Pattern too long (max 200 chars) — ReDoS defence")
+.refine(
+(p) => !RE_DOS_SAFE_PATTERN.test(p),
+"Pattern has nested quantifiers and could cause catastrophic backtracking",
+)
+.optional(),
+message: z.string().max(500).optional(),
});
export const BlueprintFieldDefinitionSchema = z.object({
id: z.string().min(1),
label: z.string().min(1).max(200),
-key: z.string().min(1).max(100).regex(/^[a-z_][a-z0-9_]*$/, "Must be snake_case"),
+key: z
+.string()
+.min(1)
+.max(100)
+.regex(/^[a-z_][a-z0-9_]*$/, "Must be snake_case"),
type: z.nativeEnum(FieldType),
required: z.boolean().default(false),
description: z.string().optional(),
@@ -60,12 +78,16 @@ export const CreateBlueprintSchema = z.object({
description: z.string().optional(),
fieldDefs: z.array(BlueprintFieldDefinitionSchema).default([]),
defaults: z.record(z.string(), z.unknown()).default({}),
-validationRules: z.array(z.object({
+validationRules: z
+.array(
+z.object({
field: z.string(),
rule: z.enum(["required_if", "unique", "min", "max"]),
params: z.unknown().optional(),
message: z.string().optional(),
-})).default([]),
+}),
+)
+.default([]),
});
export const UpdateBlueprintSchema = CreateBlueprintSchema.partial();
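The schema-side guard above reduces to a single predicate. A zod-free restatement for illustration — the length limit and regex mirror the schema, but the function name is made up:

```typescript
// Same shape-detection regex as RE_DOS_SAFE_PATTERN in the schema.
const SUSPECT = /\([^)]*(?:[+*]|\{\d+,\d*\})[^)]*\)[+*?{]/;

// A pattern is persisted only if it is short enough AND not an obvious
// nested-quantifier shape — the same two checks the zod .max()/.refine()
// pair performs at save time.
function isPatternAcceptedAtSaveTime(pattern: string): boolean {
  return pattern.length <= 200 && !SUSPECT.test(pattern);
}
```

Keeping this check at save time means a malicious pattern never reaches `validateCustomFields`, whose own `isSuspectRegexPattern` is the belt-and-suspenders repeat of the same test.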