Demand And Assignment Migration Cutover

Date: 2026-03-13
Purpose: Canonical go/no-go, artifact, and execution guide for the additive Allocation → DemandRequirement / Assignment persistence split.

Scope

This cutover document governs the additive migration from mixed legacy Allocation rows to first-class DemandRequirement and Assignment rows.

It assumes:

  • legacy compatibility API paths remain available during the migration window
  • the backfill is additive and idempotent
  • cleanup of legacy rows happens only after readiness is green and signoff is recorded

Canonical Commands

Use the readiness command as the primary gate:

pnpm db:readiness:demand-assignment --write-artifacts

Supporting commands:

pnpm --filter @capakraken/db db:audit:demand-assignment --json --fail-on-blockers
pnpm --filter @capakraken/db db:backfill:demand-assignment --json --fail-on-blockers
pnpm --filter @capakraken/db db:backfill:demand-assignment --apply

pnpm db:readiness:demand-assignment fails with a non-zero exit code when the workspace is not ready for --apply. Use --allow-blockers only when collecting review artifacts before remediation.

Generated artifacts default to docs/migration-artifacts/demand-assignment/ with deterministic filenames:

  • workspace-audit.json
  • workspace-dry-run.json
  • workspace-readiness.json

Project-scoped runs use project-<projectId>-*.json.
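For scripting around these artifacts, the naming convention can be captured in a small helper. This is a hypothetical sketch (artifactPath and ArtifactKind are illustrative names, not part of the actual tooling); the readiness script itself remains the source of truth for output paths.

```typescript
// Illustrative helper encoding the deterministic artifact naming documented
// above: workspace-<kind>.json for workspace runs, project-<projectId>-<kind>.json
// for project-scoped runs, both under the default artifact directory.
type ArtifactKind = "audit" | "dry-run" | "readiness";

const ARTIFACT_DIR = "docs/migration-artifacts/demand-assignment";

function artifactPath(kind: ArtifactKind, projectId?: string): string {
  const scope = projectId ? `project-${projectId}` : "workspace";
  return `${ARTIFACT_DIR}/${scope}-${kind}.json`;
}
```

For example, artifactPath("readiness") yields docs/migration-artifacts/demand-assignment/workspace-readiness.json.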

Go/No-Go Criteria

--apply is allowed only when all of the following are true in the readiness report:

  1. goNoGo is go
  2. audit.pendingDemandBackfills is 0
  3. audit.pendingAssignmentBackfills is 0
  4. audit.invalidStaffedAllocationsWithoutResource is 0
  5. audit.orphanedDemandRequirementsWithLegacyLink is 0
  6. audit.orphanedAssignmentsWithLegacyLink is 0

Warnings are review items, not automatic blockers. In particular, placeholder rows with a resource and staffed rows with headcount > 1 must be acknowledged in the cutover review, but they do not by themselves block --apply.
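Tooling that consumes the readiness artifact can check these six criteria mechanically. The following is a sketch assuming the report fields are shaped exactly as listed above; the readiness script's own exit code remains the authoritative gate.

```typescript
// Sketch of the go/no-go gate over a readiness report. Field names mirror the
// criteria documented above; the real report shape may carry more fields.
interface ReadinessReport {
  goNoGo: "go" | "no-go";
  audit: {
    pendingDemandBackfills: number;
    pendingAssignmentBackfills: number;
    invalidStaffedAllocationsWithoutResource: number;
    orphanedDemandRequirementsWithLegacyLink: number;
    orphanedAssignmentsWithLegacyLink: number;
  };
}

function applyAllowed(report: ReadinessReport): boolean {
  const a = report.audit;
  return (
    report.goNoGo === "go" &&
    a.pendingDemandBackfills === 0 &&
    a.pendingAssignmentBackfills === 0 &&
    a.invalidStaffedAllocationsWithoutResource === 0 &&
    a.orphanedDemandRequirementsWithLegacyLink === 0 &&
    a.orphanedAssignmentsWithLegacyLink === 0
  );
}
```

Note that warnings are deliberately absent from this gate: per the policy above, they require acknowledgement in review but do not block --apply.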

Required Artifacts And Signoff

Before the first production --apply, attach all of the following to the cutover review:

  1. readiness artifact set from pnpm db:readiness:demand-assignment --write-artifacts
  2. exact command transcript or CI job link for the readiness run
  3. environment/date stamp for the target database
  4. operator signoff confirming no blockers remain
  5. product/engineering signoff approving the migration window

No production backfill should run from an ad hoc shell session without the saved artifact set.

Staged Sequence

Stage 0: Compatibility Freeze

  • keep compatibility facades enabled
  • avoid schema or router changes that introduce new legacy-only write paths
  • verify the latest application/API tests for demand-assignment compatibility are green

Stage 1: Readiness Review

  • run pnpm db:readiness:demand-assignment --write-artifacts
  • review blockers and warnings from the readiness artifact
  • remediate invalid legacy rows or orphaned legacy links before continuing

Stage 2: Dry-Run Signoff

  • rerun readiness until goNoGo is go
  • confirm the dry-run artifact shows zero pending creates and zero invalid skips
  • record operator and product/engineering approval against the saved artifacts

Stage 3: Apply Backfill

  • run pnpm --filter @capakraken/db db:backfill:demand-assignment --apply
  • immediately rerun pnpm db:readiness:demand-assignment --write-artifacts
  • require the post-apply readiness report to remain go

Stage 4: Observe And Stabilize

  • keep legacy facades active during the observation window
  • monitor create/update/fill flows for stale-link compatibility behavior
  • treat the remaining raw legacy writes as an explicit allowlist only:
    • compatibility delete paths that still accept legacy allocation ids
    • true legacy-id update mirrors that preserve old allocation.update behavior
  • do not remove legacy rows or legacy-compatible reads until the post-apply observation window is complete

Stage 5: Legacy Cleanup

  • plan legacy Allocation cleanup only after a separate review
  • retire the Stage 4 allowlist branches individually and verify each removal with focused application/API compatibility tests plus a Docker smoke check
  • preserve deterministic linkage or archived evidence for any rows cleaned up
  • keep ambiguity rules below in force during cleanup

Ambiguity And Orphan Policy

The migration is intentionally strict. When data is ambiguous, stop and remediate instead of guessing.

  • If a legacy allocation id resolves to both a demand and an assignment after cleanup, treat it as an error and resolve it manually.
  • If a DemandRequirement or Assignment has a legacyAllocationId that no longer exists, treat it as an orphan and block cutover until reviewed.
  • If a staffed legacy allocation has no resourceId, do not synthesize an assignment. Fix or retire the row first.
  • If a placeholder legacy allocation still has a resourceId, preserve it as demand metadata only; do not auto-create an assignment from that signal.
  • If a staffed legacy allocation has headcount > 1, preserve one assignment row and treat the headcount discrepancy as a documented warning, not inferred fan-out.
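The row-level rules above (the last three bullets) can be expressed as a small classifier. This is an illustrative sketch: the field names isPlaceholder, resourceId, and headcount follow this document's terminology and may not match the audit script's actual data shapes.

```typescript
// Illustrative classifier for the row-level ambiguity rules:
// staffed without a resource is an error; placeholder-with-resource and
// headcount > 1 are documented warnings, never auto-synthesized assignments.
interface LegacyAllocationRow {
  isPlaceholder: boolean;
  resourceId: string | null;
  headcount: number;
}

type Verdict =
  | { kind: "error"; reason: string }   // stop and remediate before cutover
  | { kind: "warning"; reason: string } // acknowledge in the cutover review
  | { kind: "ok" };

function classify(row: LegacyAllocationRow): Verdict {
  if (!row.isPlaceholder && row.resourceId === null) {
    return { kind: "error", reason: "staffed allocation without resourceId" };
  }
  if (row.isPlaceholder && row.resourceId !== null) {
    return { kind: "warning", reason: "placeholder with resourceId; keep as demand metadata only" };
  }
  if (!row.isPlaceholder && row.headcount > 1) {
    return { kind: "warning", reason: "headcount > 1; preserve one assignment, no inferred fan-out" };
  }
  return { kind: "ok" };
}
```

The first two bullets (dual resolution, dangling legacyAllocationId) are cross-table checks and are not representable per row; they stay with the audit tooling.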

Current Readiness Baseline

The latest workspace readiness run on 2026-03-13T20:10:43.050Z reported:

  • 132 legacy allocations in scope
  • 0 pending demand backfills
  • 0 pending assignment backfills
  • 0 orphaned demand links
  • 0 orphaned assignment links
  • 0 invalid staffed allocations without a resource
  • dry-run creates: 0 demand, 0 assignment
  • goNoGo: "go"

The first real pnpm --filter @capakraken/db db:backfill:demand-assignment --apply --json was executed on 2026-03-13 and completed as a no-op:

  • demandCreates: 0
  • assignmentCreates: 0
  • demandSkips: 2
  • assignmentSkips: 130
  • errors: 0
  • warnings: 0

Current interpretation: the workspace is already fully backfilled for the additive split, and future cutover work is now primarily about final compatibility-branch retirement and operational signoff rather than data creation.
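Under the assumption that every in-scope legacy row is counted as exactly one skip in a fully backfilled workspace (an interpretation of the artifacts, not a documented invariant), the recorded numbers are internally consistent:

```typescript
// Consistency check on the recorded baseline. All numbers are copied from the
// readiness and apply artifacts cited above; the partition assumption
// (skips cover the whole scope) is this document's interpretation.
const legacyAllocationsInScope = 132;
const applyResult = {
  demandCreates: 0,
  assignmentCreates: 0,
  demandSkips: 2,
  assignmentSkips: 130,
  errors: 0,
  warnings: 0,
};

const noOp =
  applyResult.demandCreates === 0 && applyResult.assignmentCreates === 0;
const skipsCoverScope =
  applyResult.demandSkips + applyResult.assignmentSkips === legacyAllocationsInScope;

console.log(noOp && skipsCoverScope); // true for the 2026-03-13 run
```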

Migration Completion Status (2026-03-14)

As of 2026-03-14, the split persistence migration is fully complete through Stage 5:

  • All reads are split-authoritative. No runtime code queries db.allocation.findMany() or db.allocation.findUnique(). buildSplitAllocationReadModel works exclusively from demandRequirements and assignments.
  • All writes are split-authoritative. Create, update, delete, and fill flows route through DemandRequirement/Assignment paths.
  • Legacy Allocation table dropped. The Prisma model and database table have been removed. 132 legacy rows were dropped.
  • legacyAllocationId columns removed. Both DemandRequirement and Assignment no longer carry legacy link columns. All @unique constraints and @@index entries have been dropped.
  • getAllocationCompatibilityId removed. SSE event emissions now use entity IDs directly. The helper function and all ~20 call sites have been replaced with direct .id access.
  • getLegacyAllocationLinks removed. findAllocationFacadeEntry no longer falls back to legacy link resolution. IDs that don't match a current demand/assignment return null.
  • isPlaceholder is a derived read-model property. Demand vs. assignment intent is derived from entity type at read-model build time (DemandRequirement → isPlaceholder: true, Assignment → isPlaceholder: false). The shared Allocation type exposes isPlaceholder as a computed property for frontend consumption.
  • fillPlaceholder API procedure removed. Open demand fills use fillOpenDemandByAllocation. The UI component has been renamed from FillPlaceholderModal to FillOpenDemandModal.
  • kind: "legacy" facade resolution removed. AllocationFacadeResolution only has kind: "demand" and kind: "assignment".
  • All legacy allocation write-back removed. No code touches the legacy Allocation table.
  • Migration tooling deleted. Backfill, audit, readiness scripts, and the role-string migration script have been removed along with their npm script entries.
  • Seed data cleaned. All allocation.create and allocation.deleteMany calls removed from seed.
  • Deleted files (complete list):
    • allocation-compatibility-id.ts — getAllocationCompatibilityId helper
    • get-legacy-allocation-links.ts — legacy link resolution
    • legacy-allocation-record-store.ts — low-level db.allocation CRUD wrapper
    • ensure-legacy-allocation-split-persistence.ts — backfill helper
    • ensure-demand-requirement-for-legacy-placeholder.ts — placeholder demand backfill
    • has-backing-legacy-allocation.ts, delete-allocation-with-compatibility.ts — dead code
    • create-allocation.ts — legacy allocation creation
    • create-legacy-allocation-with-compatibility.ts, create-legacy-allocation-compatibility-bundle.ts — dead compatibility wrappers
    • create-demand-requirement-with-compatibility.ts, create-assignment-with-compatibility.ts, create-demand-and-assignment-with-compatibility.ts — dead compatibility wrappers
    • create-split-records-from-legacy-allocation.ts — legacy-to-split record factory
    • FillPlaceholderModal.tsx — replaced by FillOpenDemandModal.tsx
    • legacy-allocation-links.test.ts — tests for deleted getLegacyAllocationLinks
    • backfill-demand-assignment.ts, audit-demand-assignment-split.ts, run-demand-assignment-readiness.ts — migration tooling
    • demand-assignment-readiness.ts, demand-assignment-readiness.test.ts — readiness checks
    • demand-assignment-backfill.ts, demand-assignment-backfill.test.ts — backfill logic
    • migrate-role-strings.ts — one-time Phase 5 migration
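The derived isPlaceholder contract noted above can be sketched as follows; the types are simplified stand-ins for the real read-model shapes, not the actual buildSplitAllocationReadModel signature.

```typescript
// Sketch of deriving isPlaceholder at read-model build time: the flag comes
// from the entity type, not from a stored column.
type SplitEntity =
  | { kind: "demand"; id: string }
  | { kind: "assignment"; id: string; resourceId: string };

function toAllocationReadModel(entity: SplitEntity): { id: string; isPlaceholder: boolean } {
  return {
    id: entity.id,
    // DemandRequirement → isPlaceholder: true; Assignment → isPlaceholder: false
    isPlaceholder: entity.kind === "demand",
  };
}
```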

Facade Retirement (Complete)

The former compatibility facades have been renamed to clean domain names:

  • update-allocation-with-compatibility → update-allocation-entry
  • delete-allocation-facade-entry → delete-allocation-entry
  • fill-open-demand-with-compatibility → fill-open-demand
  • load-allocation-facade-entry → load-allocation-entry

All AllocationFacade* type prefixes have been replaced with AllocationEntry*. All compatibilityId and source fields have been removed from booking interfaces. Variable names such as demandRequirementByCompatibilityId and existingFacade have been renamed to demandRequirementById and resolved, respectively. Test descriptions referencing "compatibility" have been updated.

No legacy compatibility naming (e.g. compatibilityId, AllocationFacade*, WithCompatibility, FillPlaceholder) remains in the codebase. The isPlaceholder property and "placeholder" strategy values are intentionally retained as derived read-model contracts — they describe the demand-vs-assignment distinction at the frontend consumption layer, not legacy migration artifacts.

Parallel Workstream Note

As of 2026-03-13, the Dispo v2 chargeability and resource planning stream runs in parallel with this migration. That stream adds new Prisma models (Country, MetroCity, OrgUnit, UtilizationCategory, Client, ManagementLevelGroup, ManagementLevel) and extends the Resource model with new fields. It does not touch the Allocation, DemandRequirement, or Assignment models or any compatibility facades. The only serialization point is schema.prisma: the two streams must not edit the schema file concurrently. See samples/Dispov2/plan-overview.md and docs/product-roadmap.md for the Dispo v2 plans.