# Performance Optimization Review

Date: 2026-03-18

Scope: analysis only. No runtime behavior was changed in this pass.

## Executive Summary

The biggest performance costs in Planarchy currently come from three patterns:

1. Broad data fetches followed by repeated in-memory filtering/grouping.
2. Expensive client-side derivations on screens with large datasets, especially Timeline.
3. Broad cache invalidation that forces heavy queries to refetch and recompute after small mutations.

The highest-value optimization target is the Timeline stack. After that, the best wins are server-side aggregation for chargeability/dashboard workloads and narrowing resource/report queries to visible scope.

## Main Hotspots

### 1. Timeline

Relevant files:

- [TimelineContext.tsx](/home/hartmut/Documents/Copilot/planarchy/apps/web/src/components/timeline/TimelineContext.tsx)
- [TimelineView.tsx](/home/hartmut/Documents/Copilot/planarchy/apps/web/src/components/timeline/TimelineView.tsx)
- [TimelineResourcePanel.tsx](/home/hartmut/Documents/Copilot/planarchy/apps/web/src/components/timeline/TimelineResourcePanel.tsx)
- [TimelineProjectPanel.tsx](/home/hartmut/Documents/Copilot/planarchy/apps/web/src/components/timeline/TimelineProjectPanel.tsx)
- [timeline.ts](/home/hartmut/Documents/Copilot/planarchy/packages/api/src/router/timeline.ts)

Observed issues:

- `TimelineContext` derives multiple large collections in the client from the same raw payload.
- Resource and project filtering is partly pushed to the backend, but large read-model payloads are still constructed and further filtered/grouped in the browser.
- Timeline mutations invalidate multiple heavy queries at once.
- Timeline SSE events also trigger broad invalidations.

Impact:

- Slow initial render.
- Lag on interaction and hover.
- Slow recovery after edits because the full timeline dataset is often fetched again.
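The invalidation issue can be made concrete with a framework-free model. This is a minimal sketch, not the actual client code (the app presumably manages query keys through React Query via tRPC; the class and key strings below are hypothetical): invalidating by a coarse `timeline.` prefix marks every timeline query stale after a single edit, while a scoped key touches only the affected query.

```typescript
// Minimal stand-in for a client-side query cache. All names are
// hypothetical; the real app presumably uses React Query keys via tRPC.
type QueryKey = string;

class QueryCache {
  private stale = new Set<QueryKey>();
  private keys = new Set<QueryKey>();

  register(key: QueryKey): void {
    this.keys.add(key);
  }

  // Broad invalidation: every registered key under the prefix is marked stale.
  invalidatePrefix(prefix: string): number {
    let count = 0;
    for (const key of this.keys) {
      if (key.startsWith(prefix)) {
        this.stale.add(key);
        count++;
      }
    }
    return count;
  }

  // Scoped invalidation: only the exact key is marked stale.
  invalidateExact(key: QueryKey): number {
    if (this.keys.has(key)) {
      this.stale.add(key);
      return 1;
    }
    return 0;
  }

  staleCount(): number {
    return this.stale.size;
  }
}

const broad = new QueryCache();
const scoped = new QueryCache();
for (const cache of [broad, scoped]) {
  cache.register("timeline.getEntriesView:{projectId:1}");
  cache.register("timeline.getEntriesView:{projectId:2}");
  cache.register("timeline.getProjectContext:{projectId:1}");
  cache.register("timeline.getBudgetStatus:{projectId:1}");
}

// The same small edit in project 1, handled two ways:
const broadStale = broad.invalidatePrefix("timeline."); // 4 queries refetch
const scopedStale = scoped.invalidateExact(
  "timeline.getEntriesView:{projectId:1}",
); // 1 query refetches
```

The counts diverge further as more timeline queries are registered, which is why broad invalidation gets worse precisely on the heaviest screens.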
### 2. Chargeability Report

Relevant files:

- [ChargeabilityReportClient.tsx](/home/hartmut/Documents/Copilot/planarchy/apps/web/src/components/reports/ChargeabilityReportClient.tsx)
- [chargeability-report.ts](/home/hartmut/Documents/Copilot/planarchy/packages/api/src/router/chargeability-report.ts)

Observed issues:

- The server loads matching resources, then all bookings in range, then repeatedly filters bookings and vacations per resource and per month.
- The client renders a wide report grid and still performs filtering/grouping locally.
- Public holiday support is still marked as TODO in the route, so future correctness work will likely add more cost unless the data model is optimized first.

Impact:

- CPU-heavy report generation.
- Larger-than-needed payloads.
- Increased render cost on wide month ranges.

### 3. Dashboard Chargeability / Analytics Widgets

Relevant files:

- [dashboard.ts](/home/hartmut/Documents/Copilot/planarchy/packages/api/src/router/dashboard.ts)
- [get-chargeability-overview.ts](/home/hartmut/Documents/Copilot/planarchy/packages/application/src/use-cases/dashboard/get-chargeability-overview.ts)
- [get-overview.ts](/home/hartmut/Documents/Copilot/planarchy/packages/application/src/use-cases/dashboard/get-overview.ts)

Observed issues:

- `getDashboardChargeabilityOverview()` fetches all relevant resources and all bookings for the month, then filters the full booking set again for each resource.
- Similar patterns exist in dashboard summary use cases where relatively small widgets are backed by broad read logic.

Impact:

- Avoidable server CPU and latency.
- Widgets become more expensive as the database grows even if the UI remains small.
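The per-resource scan described above has quadratic shape: for R resources and B bookings, filtering the full booking array once per resource costs O(R × B), while a single grouping pass costs O(R + B). A sketch under assumed types (the field names below are illustrative, not the actual read model):

```typescript
// Hypothetical booking shape; the real read model's fields will differ.
interface Booking {
  resourceId: string;
  hours: number;
}

// O(R x B): re-filters the whole booking array for every resource.
function totalsByResourceSlow(
  resourceIds: string[],
  bookings: Booking[],
): Map<string, number> {
  const totals = new Map<string, number>();
  for (const id of resourceIds) {
    const own = bookings.filter((b) => b.resourceId === id);
    totals.set(id, own.reduce((sum, b) => sum + b.hours, 0));
  }
  return totals;
}

// O(R + B): one pass to group, then constant-time lookups per resource.
function totalsByResourceFast(
  resourceIds: string[],
  bookings: Booking[],
): Map<string, number> {
  const totals = new Map<string, number>(
    resourceIds.map((id): [string, number] => [id, 0]),
  );
  for (const b of bookings) {
    const current = totals.get(b.resourceId);
    if (current !== undefined) totals.set(b.resourceId, current + b.hours);
  }
  return totals;
}

const bookings: Booking[] = [
  { resourceId: "r1", hours: 8 },
  { resourceId: "r1", hours: 4 },
  { resourceId: "r2", hours: 6 },
];
const slow = totalsByResourceSlow(["r1", "r2"], bookings);
const fast = totalsByResourceFast(["r1", "r2"], bookings);
```

Both variants produce identical totals, which makes the swap easy to verify against existing output before removing the slow path.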
### 4. Resources Screen

Relevant files:

- [ResourcesClient.tsx](/home/hartmut/Documents/Copilot/planarchy/apps/web/src/app/(app)/resources/ResourcesClient.tsx)
- [resource.ts](/home/hartmut/Documents/Copilot/planarchy/packages/api/src/router/resource.ts)

Observed issues:

- Infinite scrolling is in place, which is good, but the screen still does substantial client-side sort/filter/order work.
- Separate global stats queries can be expensive when only the currently visible rows need enriched values.
- Large dropdown/filter datasets are fully loaded to support selection UIs.

Impact:

- Good enough at small scale, but it will degrade with larger teams and imported planning data.

### 5. Shared Booking Read Model

Relevant files:

- [list-assignment-bookings.ts](/home/hartmut/Documents/Copilot/planarchy/packages/application/src/use-cases/allocation/list-assignment-bookings.ts)

Observed issues:

- This is a clean shared entry point, but it returns broad booking rows and leaves most grouping/aggregation to callers.
- Several heavy endpoints call this and then repeatedly scan the returned array.

Impact:

- Logic is easy to reuse, but performance scales poorly because each consumer re-aggregates independently.

## Current Database Index Baseline

Relevant schema:

- [schema.prisma](/home/hartmut/Documents/Copilot/planarchy/packages/db/prisma/schema.prisma)

Good coverage already exists for:

- `Assignment`: `resourceId`, `projectId`, `status`, date ranges, and composite indices.
- `DemandRequirement`: `projectId`, `status`, date ranges.
- `Vacation`: `resourceId`, `status`, date ranges.
- `Resource`: `chapter`, `isActive`, `countryId`, `orgUnitId`, `resourceType`.
- `Project`: `status`, date range, `orderType`, `clientId`.

Likely gaps for current query patterns:

- `Resource` filters used together in analytics screens: `isActive`, `chgResponsibility`, `departed`, `rolledOff`, `countryId`, `managementLevelGroupId`.
- `Project` filters used in timeline/report contexts: `clientId` combined with active date windows.
- Potential overlap-heavy access paths where PostgreSQL may benefit more from specialized query shapes or materialized read models than from adding more conventional B-tree indexes.

Conclusion: the schema is not under-indexed. The main issue is query shape and repeated in-memory work, not missing indexes. Index work should support query redesign, not replace it.

## Recommended Optimizations

### A. Move Timeline Derivations Closer to the Backend

Proposal:

- Add a timeline read endpoint that returns data already grouped for the active view mode and active filters.
- Return normalized maps and compact slices instead of large repeated nested objects.
- Keep the raw endpoint only if another screen truly needs it.

Pros:

- Largest likely user-visible win.
- Reduces browser CPU, garbage collection, and hover lag.
- Shrinks payload size if redundant nested fields are removed.

Cons:

- More backend read-model code.
- Slightly higher coupling between API shape and timeline UI needs.
- Requires careful migration to avoid breaking existing interactions.

Recommended priority: P0
Estimated effort: medium to large

### B. Build Indexed Maps Once Instead of Repeated Array Scans

Proposal:

- In timeline/report/dashboard code, replace repeated `.filter()`, `.find()`, and `.some()` passes with prebuilt maps keyed by `resourceId`, `projectId`, `monthKey`, and status where appropriate.
- Standardize this as shared helper utilities for read-model construction.

Pros:

- Low-risk improvement.
- Can be introduced incrementally.
- Helps both client and server hot paths.

Cons:

- Does not solve oversized payloads by itself.
- Adds some code complexity if done ad hoc instead of through shared helpers.

Recommended priority: P0
Estimated effort: small to medium
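Recommendation B lends itself to a small shared utility. A sketch of generic `indexBy`/`groupBy` helpers, including a composite resource-plus-month key of the kind mentioned above (the helper names and `BookingRow` shape are suggestions, not existing Planarchy code):

```typescript
// Generic helpers for building lookup maps once instead of rescanning
// arrays per consumer. Names and key shapes are illustrative only.
function indexBy<T, K>(items: T[], key: (item: T) => K): Map<K, T> {
  const map = new Map<K, T>();
  for (const item of items) map.set(key(item), item);
  return map;
}

function groupBy<T, K>(items: T[], key: (item: T) => K): Map<K, T[]> {
  const map = new Map<K, T[]>();
  for (const item of items) {
    const k = key(item);
    const bucket = map.get(k);
    if (bucket) bucket.push(item);
    else map.set(k, [item]);
  }
  return map;
}

// Hypothetical row shape for demonstration.
interface BookingRow {
  resourceId: string;
  monthKey: string; // e.g. "2026-03"
  hours: number;
}

const rows: BookingRow[] = [
  { resourceId: "r1", monthKey: "2026-03", hours: 8 },
  { resourceId: "r1", monthKey: "2026-04", hours: 8 },
  { resourceId: "r2", monthKey: "2026-03", hours: 6 },
];

// Build once, look up many times: keyed by resourceId + monthKey.
const byResourceMonth = groupBy(rows, (r) => `${r.resourceId}:${r.monthKey}`);
```

Centralizing these in one shared module avoids the ad hoc complexity the Cons list warns about, and gives profiling a single place to instrument.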
### C. Replace Broad Invalidations with Scoped Cache Updates

Proposal:

- Audit timeline mutations and SSE handlers that currently invalidate `getEntries`, `getEntriesView`, `getProjectContext`, and `getBudgetStatus` together.
- Invalidate only the affected query keys.
- Where practical, patch the query cache locally after small edits instead of refetching everything.

Pros:

- Faster feedback after drag/drop and edit flows.
- Less network churn.
- Lower server load during active planning sessions.

Cons:

- Cache correctness becomes more complex.
- Needs strong regression coverage around drag/drop and split allocation flows.

Recommended priority: P0
Estimated effort: medium

### D. Push Chargeability Aggregation into Dedicated Server Read Models

Proposal:

- Replace repeated per-resource filtering of a flat booking array with server-side grouped structures.
- Precompute `bookingsByResource` and `vacationsByResource` once before month iteration.
- Consider a dedicated monthly chargeability read model for reports and dashboard widgets.

Pros:

- Strong improvement for reports and widgets.
- Eliminates duplicate aggregation logic across dashboard/resource/report endpoints.
- Better foundation for public holiday integration.

Cons:

- Requires consolidating overlapping business logic.
- Read-model duplication is possible if not designed carefully.

Recommended priority: P1
Estimated effort: medium

### E. Narrow Data Fetches to Visible Scope

Proposal:

- Avoid loading large supporting datasets such as `resource.list({ limit: 500 })` or "all projects" queries when only label resolution for selected IDs is required.
- Add lightweight lookup endpoints like `getByIds()` or `resolveLabels()`.
- On the resources page, fetch chargeability/stat enrichments only for the visible page or current filtered result slice.

Pros:

- Smaller payloads.
- Faster filter bar and screen startup.
- Simple, targeted changes.

Cons:

- More API endpoints or variants.
- Some UI components become slightly more stateful.
Recommended priority: P1
Estimated effort: small to medium

### F. Add Read-Optimized Analytics Endpoints or Materialized Views

Proposal:

- For expensive monthly analytics, introduce a read-optimized table or materialized view refreshed on schedule or after import batches.
- Use this for dashboard and chargeability summaries, not for transactional edit screens.

Pros:

- Best long-term scalability for imported dispo-scale data.
- Predictable response times for reports.
- Reduces repeated heavy joins and calculations.

Cons:

- More infrastructure and refresh logic.
- Risk of stale analytics if refresh semantics are unclear.
- Higher implementation complexity than direct query improvements.

Recommended priority: P2
Estimated effort: large

### G. Add Composite Indexes Only Where Query Patterns Justify Them

Proposal:

- Review actual PostgreSQL query plans for:
  - resource analytics filters
  - timeline date-overlap filters
  - project/client/date combinations
- Add composite indexes only after measuring the slow plans.

Pros:

- Cheap improvement when aligned to real plans.
- Complements query redesign.

Cons:

- Limited value if broad in-memory processing remains.
- Too many indexes can slow writes/imports.

Recommended priority: P1
Estimated effort: small

Suggested candidates to measure first:

- `Resource(isActive, departed, rolledOff, chgResponsibility, countryId)`
- `Resource(isActive, managementLevelGroupId, countryId)`
- `Project(clientId, status, startDate, endDate)`

### H. Consider Row Virtualization for Heavy Tables

Proposal:

- Add virtualization for very large row-based screens if profiling confirms DOM/render cost remains high after data/query optimizations.
- Candidate screens: resources table, chargeability report table, timeline side panels.

Pros:

- Strong render win when many rows are mounted.
- Independent of backend changes.

Cons:

- More UI complexity.
- Can complicate drag/drop, sticky headers, and keyboard navigation.
- Should not be the first fix for timeline if the main problem is data churn.

Recommended priority: P2
Estimated effort: medium

## Priority Order

### Phase 1

- A. Move timeline derivations closer to the backend.
- B. Replace repeated array scans with indexed maps.
- C. Scope cache invalidation for timeline mutations and SSE.

Expected result:

- Best chance of making timeline interaction materially faster without removing functionality.

### Phase 2

- D. Server-side chargeability read models.
- E. Narrow data fetches to visible scope.
- G. Add measured composite indexes.

Expected result:

- Faster reports, dashboard widgets, and filter-heavy screens.

### Phase 3

- F. Materialized analytics/read models.
- H. Virtualization where profiling still shows render bottlenecks.

Expected result:

- Better scalability once data volume grows further.

## Suggested Measurement Plan

Before implementation:

- Capture browser performance traces for:
  - timeline initial load
  - timeline hover/selection
  - timeline mutation save path
  - chargeability report load
  - resources page load and scroll
- Capture server timings for:
  - `timeline.getEntriesView`
  - `chargeabilityReport.getReport`
  - `dashboard.getChargeabilityOverview`
  - `resource.getChargeabilityStats`
- Record PostgreSQL query plans for the slowest endpoints.

During implementation:

- Compare payload sizes before and after endpoint changes.
- Track React commit duration on Timeline and Resources screens.
- Measure query invalidation counts after common edits.

Success criteria:

- Timeline hover and selection feel immediate under imported dispo-scale data.
- Timeline mutation flows no longer trigger full-screen refetch storms.
- Chargeability report load time drops materially for multi-month ranges.
- Dashboard widgets stop scanning large booking arrays per resource.

## No-Regression Guidance

Do not trade away any of the following:

- Existing planning semantics for assignments, proposed work, and demand visibility.
- Timeline drag/drop correctness.
- Tooltip, right-click, and selection behavior.
- Anonymization behavior.
- Filter parity between resources, timeline, and reports.

Recommended safeguards:

- Add endpoint-level benchmarks for the heavy routes.
- Add UI regression checks around timeline interaction paths.
- Roll out timeline changes behind a temporary feature flag if the payload shape changes significantly.
- Compare old and new aggregation outputs on the same imported dataset before removing legacy paths.

## Suggested Ticket Breakdown

1. Profile and baseline the four slowest endpoints plus the timeline render path.
2. Refactor the timeline read model to return pre-grouped data for current filters/view.
3. Replace timeline broad invalidations with scoped invalidation and selective cache patching.
4. Refactor chargeability report aggregation into grouped server-side structures.
5. Narrow lookup/filter support queries to selected IDs or visible pages.
6. Review PostgreSQL query plans and add only measured composite indexes.
7. Re-profile and decide whether virtualization or materialized analytics are still needed.

## Bottom Line

This codebase can be made substantially faster without removing functionality. The main gains will not come from cosmetic micro-optimizations. They will come from changing where data is shaped, reducing repeated scans, and stopping broad cache churn on heavy views.