AI Excellence Due Diligence And Roadmap
Date: 2026-03-30
Purpose: Frank assessment of the current codebase plus a pragmatic roadmap to turn CapaKraken into a reference project for disciplined AI-assisted software engineering.
Executive Summary
CapaKraken is already well beyond a prototype. The repository shows a real domain model, a non-trivial bounded-context split, a meaningful automated test baseline, and active delivery discipline.
At the same time, the codebase still carries several risks that are typical of fast-moving AI-assisted development:
- some critical cross-cutting concerns are only partially productized
- several files and routers have grown beyond comfortable ownership size
- runtime configuration and secret handling are still too application-database centric
- the current operational model is improving, but not yet fully standardized
- production-grade multi-instance safeguards are not complete yet
The project feels strong enough to build on, but it is not yet a showcase of "how AI-built software should look" without another cleanup and hardening pass.
Current Strengths
- Clear monorepo and package split across `api`, `application`, `db`, `engine`, `shared`, `staffing`, `ui`, and `web`, with shared tooling through `turbo` and `pnpm`.
- Product scope is substantial and business-oriented rather than CRUD-only: estimating, planning, demand/assignment, chargeability, import/export, dashboards, report building, and admin surfaces.
- CI already enforces typecheck, lint, unit tests, build, and E2E with PostgreSQL and Redis in the loop.
- Application-layer use cases exist and are not just thin router wrappers.
- Documentation coverage is materially better than average for a fast-moving product.
Status Update Since Initial Review
The highest-risk quick wins from the original review are now closed:
- SSE delivery is now audience-scoped with architecture guardrails in CI
- browser-side spreadsheet parsing now has focused regression coverage in `apps/web`
- the route access matrix is in place and the ready-now audience-hardening slices were completed
- comment visibility is now entity-scoped across API policy, assistant metadata, web consumers, and mention autocomplete
Due Diligence Findings
Critical
No currently open item in this review remains in the earlier "critical quick fix" class. The previously critical SSE and browser parser coverage issues were addressed during the hardening batch.
High
- Router and UI module size is now an operational risk. Evidence: `assistant-tools.ts`, `resource.ts`, `allocation.ts`, `timeline.ts`, `vacation.ts`, and large frontend files such as `SystemSettingsClient.tsx` and `TimelineProjectPanel.tsx` are each well past the size where safe ownership stays easy. Risk: AI-generated changes become harder to review, humans lose local reasoning context, and regressions become more likely.
- Secret handling is still application-database centric. Evidence: system settings mutate and persist API keys and SMTP credentials in `settings.ts`. Risk: operational secrets remain too coupled to the main app data plane for a gold-standard project. Update: runtime resolution is now env-first for the active secret consumers, but persistence is still transitional and should be reduced further.
- Least-privilege is materially better documented now, but it still needs long-lived enforcement rather than relying mainly on one hardening batch. Evidence: the route audience model is now explicit in `route-access-matrix.md` and backed by multiple focused auth tests, but the remaining guarantee still depends on continuing test coverage and architecture guardrails as new routes evolve. Risk: future feature work can slowly widen access again if the matrix and tests are not treated as an enforced contract.
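The env-first runtime resolution mentioned in the secrets finding can be sketched as follows. This is a minimal illustration, not the repository's actual API: `resolveSecret` and `SettingsLookup` are invented names for the pattern.

```typescript
// Hypothetical env-first secret resolver: environment variables always win,
// and the legacy settings-table lookup is only a transitional fallback.
type SettingsLookup = (key: string) => string | undefined;

function resolveSecret(
  envKey: string,
  legacyLookup: SettingsLookup,
  env: Record<string, string | undefined>,
): string | undefined {
  // Operators control secrets via the environment, outside the app database.
  const fromEnv = env[envKey];
  if (fromEnv && fromEnv.length > 0) return fromEnv;
  // Transitional fallback to persisted settings; slated for removal once
  // compatibility persistence is eliminated or encrypted.
  return legacyLookup(envKey);
}
```

In practice `env` would be `process.env` and `legacyLookup` a read against the settings table; the point of the pattern is that deleting the fallback line completes the migration.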
Medium
- Rate limiting is process-local and not deployment-grade. Evidence: `rate-limit.ts` uses an in-memory `Map` and explicitly notes that multi-instance deployments need a Redis-backed replacement. Risk: protections weaken as soon as the app scales horizontally.
- Performance hotspots are well understood but not yet structurally solved. Evidence: the current performance review identifies repeated in-memory filtering, broad invalidation, and heavyweight timeline/report derivations in `performance-optimization-review-2026-03-18.md`. Risk: user experience and infrastructure cost will degrade as data volume grows.
- Production delivery is still in transition. Evidence: the current repo now has a target CI/CD path, but the old manual production path still coexists with the new image-based deploy model in `cicd-target-architecture.md`. Risk: the operational source of truth is not yet singular.
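The in-memory limiter flagged above could move to a shared fixed-window counter. A minimal sketch under assumptions: `CounterStore` is an invented abstraction that a Redis client would implement with `INCR` plus `EXPIRE`, so every app instance sees the same count per key.

```typescript
// Assumed abstraction over Redis INCR/EXPIRE; any shared store works.
interface CounterStore {
  incr(key: string): Promise<number>;
  expire(key: string, ttlSeconds: number): Promise<void>;
}

async function allowRequest(
  store: CounterStore,
  clientId: string,
  limit: number,
  windowSeconds: number,
): Promise<boolean> {
  // Fixed window: all requests in the same window share one counter key.
  const windowId = Math.floor(Date.now() / 1000 / windowSeconds);
  const key = `ratelimit:${clientId}:${windowId}`;
  const count = await store.incr(key);
  if (count === 1) {
    // First hit in this window: let the key expire with the window.
    await store.expire(key, windowSeconds);
  }
  return count <= limit;
}
```

A fixed window is the simplest shared design; a sliding-window or token-bucket variant would smooth bursts at window boundaries if that matters for these endpoints.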
Overall Rating
Product Engineering Quality
8/10
This is materially better than a typical startup CRUD app and already has the bones of a serious internal platform or vertical SaaS.
Security Posture
7/10
There are good foundations, and the most obvious real-time and comment-visibility gaps were closed, but secrets policy and long-lived least-privilege enforcement still need structural work.
Maintainability
6.5/10
The architecture is promising, but file size, router density, and compatibility residue will eventually slow everyone down unless addressed deliberately.
Operational Maturity
7/10
Good CI and improving deploy discipline are in place, but production standardization still needs one more step.
AI-Excellence Readiness
7/10
The project already proves that AI can help build serious software fast. It does not yet prove that AI-assisted development can stay consistently clean, minimal, and audit-friendly at scale.
What A Showcase AI Project Should Demonstrate
To be a true showcase for AI-assisted development, this repository should visibly demonstrate:
- small, composable files with clear ownership boundaries
- explicit security and permission models at every boundary
- deterministic build and deploy flow
- measurable quality gates beyond "tests pass"
- strong documentation that explains not only what exists, but why the structure is this way
- low-friction reviewability, so humans can still govern AI speed
Roadmap
Phase 1: Close the Dangerous Gaps
Status: substantially completed
Goals:
- Keep SSE audience scoping under test and CI guardrails.
- Keep hardened spreadsheet parser boundaries under regression coverage.
- Treat the route access matrix and narrowed auth slices as maintained architecture contracts.
- Move production secrets out of regular application settings, or add an interim encrypted-secrets layer with clear migration path. Status: in progress. Runtime consumers now prefer environment overrides; the remaining gap is eliminating or encrypting compatibility persistence in the admin settings path.
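The interim encrypted-secrets layer mentioned in the last goal could look like the sketch below: values persisted through the admin settings path are encrypted with a key held only in the environment, never stored alongside the data. The function names and storage format are illustrative assumptions.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a secret for at-rest storage using AES-256-GCM; the 32-byte key
// comes from the environment, not from the application database.
function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // standard GCM nonce size
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Persist iv + auth tag + ciphertext together as one opaque string.
  return [iv, cipher.getAuthTag(), ciphertext].map((b) => b.toString("base64")).join(".");
}

function decryptSecret(stored: string, key: Buffer): string {
  const [iv, tag, ciphertext] = stored.split(".").map((p) => Buffer.from(p, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

This keeps the migration path clear: once env-only resolution is complete, the encrypted rows can simply be dropped.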
Definition of done:
- standard users cannot subscribe to unrelated real-time planning events
- file import paths stay covered by focused regression tests
- every sensitive router remains explicitly classified by audience
- secret storage policy is documented and enforced
Phase 2: Cut Down Complexity
Target window: 2 to 4 weeks
Goals:
- Split oversized routers into bounded router modules by feature slice.
- Split oversized React components into container, state, and presentational layers.
- Introduce file-size and complexity guardrails for new code.
- Create "AI review rules" for generated patches: max file growth, required tests, required docs for cross-cutting changes.
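The file-size guardrail goal can be enforced with a small CI script along these lines; the 500-line budget and the CLI wiring are assumptions, not an existing repo check.

```typescript
import { readFileSync } from "node:fs";

// Assumed budget, matching the Phase 2 definition of done.
const MAX_LINES = 500;

function countLines(source: string): number {
  return source.split("\n").length;
}

// Returns human-readable violations; CI fails when the list is non-empty.
function checkBudget(paths: string[], maxLines: number = MAX_LINES): string[] {
  const violations: string[] = [];
  for (const path of paths) {
    const lines = countLines(readFileSync(path, "utf8"));
    if (lines > maxLines) violations.push(`${path}: ${lines} lines > ${maxLines}`);
  }
  return violations;
}

// Example CI entry point, fed changed files from `git diff --name-only`:
// if (checkBudget(process.argv.slice(2)).length > 0) process.exit(1);
```

An explicit allowlist file for grandfathered modules would implement the "no new source file over 500 lines without an explicit exception" rule.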
Priority candidates:
- `packages/api/src/router/assistant-tools.ts`
- `packages/api/src/router/resource.ts`
- `packages/api/src/router/allocation.ts`
- `packages/api/src/router/timeline.ts`
- `apps/web/src/components/admin/SystemSettingsClient.tsx`
- `apps/web/src/components/timeline/*`
Definition of done:
- no new source file over 500 lines without an explicit exception
- top 10 largest business-critical source files are materially reduced
- patch reviews become narrower and easier to reason about
Phase 3: Make Quality Measurable
Target window: 2 to 3 weeks
Goals:
- Add architecture fitness checks, not just lint/tests.
- Add API authorization tests for all sensitive routers.
- Add bundle-size and route-size monitoring for the web app.
- Add mutation-path audit coverage checks where business-critical state changes occur.
- Add a dependency and unsafe-library policy.
Suggested checks:
- role/permission regression tests
- SSE audience contract tests
- import abuse tests with oversized files
- max file size / max router size lint or CI checks
- coverage thresholds for critical packages
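The SSE audience contract check above reduces to a simple property: no event may leave the stream for a subscriber outside its declared audience. A sketch of the logic a contract test would pin down; the event and role shapes here are illustrative assumptions, not the repository's actual types.

```typescript
type Audience = "admin" | "planner" | "standard";

interface PlanningEvent {
  topic: string;
  audience: Audience[]; // every event declares who may see it
  payload: unknown;
}

// Contract under test: the stream for a subscriber only carries events
// whose declared audience includes that subscriber's role.
function visibleEvents(events: PlanningEvent[], subscriberRole: Audience): PlanningEvent[] {
  return events.filter((e) => e.audience.includes(subscriberRole));
}
```

The corresponding regression test then asserts that a standard subscriber never receives an admin-only planning event, which keeps the guarantee alive as new event topics are added.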
Definition of done:
- the repo can fail CI for architectural regressions, not only syntax or unit failures
- critical security assumptions are test-backed
Phase 4: Standardize Operations
Target window: 1 to 2 weeks
Goals:
- complete the move to image-based deploys as the canonical path
- document staging and production bootstrap as code, not tribal knowledge
- replace in-memory rate limits with Redis-backed limits where appropriate
- define rollback drills and incident response playbooks
Definition of done:
- one production deployment path
- one rollback path
- one source of truth for runtime configuration
Phase 5: Turn It Into A Reference Project
Target window: ongoing
Goals:
- add a concise engineering doctrine for AI-assisted development in this repo
- publish coding heuristics for humans and AI: file size limits, change budgets, ownership boundaries, review expectations
- maintain a "why this is structured this way" architecture guide
- log selected before/after refactors to demonstrate how AI was used responsibly
Artifacts to add:
- `docs/engineering-doctrine.md`
- `docs/architecture-decision-records/`
- `docs/ai-collaboration-standards.md`
- a small set of "reference slices" that show exemplary patterns end to end
Suggested Order Of Execution
- secrets policy
- router/component decomposition
- architecture fitness checks in CI
- full operational standardization
- production-grade rate limiting
- performance hotspot reduction
Success Criteria For The Next 60 Days
- no critical or high-severity known security gap remains open without an owner and due date
- no core router continues to grow unchecked
- at least one major domain slice is refactored into a clear "reference implementation" pattern
- production deployment uses the same artifact that passed CI
- the repo gains explicit AI-development rules that improve reviewability instead of just increasing output
Bottom Line
CapaKraken is already good enough to justify further investment. It is not a cleanup disaster.
The opportunity is not to rebuild it. The opportunity is to harden the weak edges, reduce oversized ownership surfaces, and make the engineering standards visible enough that the repository becomes evidence that AI can accelerate serious software without normalizing architectural debt.