diff --git a/LEARNINGS.md b/LEARNINGS.md
index 03411cc..06712ed 100644
--- a/LEARNINGS.md
+++ b/LEARNINGS.md
@@ -198,6 +198,26 @@ __all__ = ["User"]
 
 ---
 
+### 2026-03-06 | Celery Canvas | workflow_builder.py: passing order_line_id as step_path crashes Blender
+**Problem:** `_build_still` passed `order_line_id` as the first positional arg to `render_still_task.si(order_line_id, **params)` — but `render_still_task` expects `step_path: str` as its first arg. Blender tried to open the UUID as a file path → crash.
+**Solution:** New `render_order_line_still_task` that performs the DB lookup internally (OrderLine → Product → CadFile → stored_path). `workflow_builder._build_still` now uses this new task.
+**For future projects:** Workflow-builder tasks must never receive domain IDs as file-path-based task arguments. Always create separate order-line-aware tasks that perform the resolution internally.
+
+### 2026-03-06 | Docker | docker compose inside a container needs a multi-stage CLI copy
+**Problem:** The backend container is based on `python:3.11-slim` — no `docker` binary, no `docker compose`. The worker-scale endpoint cannot invoke `docker compose up --scale`.
+**Solution:** Multi-stage Dockerfile: `COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker` + `COPY --from=docker-cli /usr/local/lib/docker/cli-plugins /usr/local/lib/docker/cli-plugins`. Additionally: mount the Docker socket (`/var/run/docker.sock`) + the compose file as a volume (`./:/compose:ro`) + the `COMPOSE_PROJECT_DIR=/compose` env var.
+**For future projects:** Multi-stage builds are the cleanest way to copy binaries from other images without installing the entire dependency chain.
+
+### 2026-03-06 | React Three Fiber | wireframe toggle via material clone
+**Problem:** Three.js materials are shared objects — directly mutating `child.material.wireframe = true` on a parsed GLTF scene would affect every instance of that material.
+**Solution:** `child.material = child.material.clone()` before the wireframe mutation in `useEffect`. Each mesh instance gets its own material object, and the toggle has no unwanted side effects.
+**For future projects:** Always clone GLTF materials before modifying them at runtime.
+
+### 2026-03-06 | pytest | backend without dev dependencies: pip install -e ".[dev]" required
+**Problem:** The backend Dockerfile only ran `pip install -e .` — no dev dependencies → pytest/pytest-asyncio/httpx unavailable → `python -m pytest` fails with "No module named pytest".
+**Solution:** Changed the Dockerfile to `pip install -e ".[dev]"`. The dev dependencies in `pyproject.toml [project.optional-dependencies] dev = [pytest>=8.0, ...]` were already defined; only the install command was incomplete.
+**For future projects:** Always check whether the `[dev]` extras are installed when tests need to run in the container.
+
 ## Open questions
 - [ ] Azure AI credentials for phase 4 (image validation) not yet configured
 - [ ] pythonOCC available in the render-worker (via the cadquery dependency)? Deployment test pending
diff --git a/backend/Dockerfile b/backend/Dockerfile
index 1a1a3e3..6fd7a9a 100644
--- a/backend/Dockerfile
+++ b/backend/Dockerfile
@@ -1,3 +1,6 @@
+# Stage 0: grab docker CLI + compose plugin
+FROM docker:cli AS docker-cli
+
 FROM python:3.11-slim
 
 WORKDIR /app
@@ -13,9 +16,13 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
     libffi-dev \
     && rm -rf /var/lib/apt/lists/*
 
-# Install Python dependencies
+# Copy docker CLI for worker scaling
+COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
+COPY --from=docker-cli /usr/local/lib/docker/cli-plugins /usr/local/lib/docker/cli-plugins
+
+# Install Python dependencies (including dev extras for pytest)
 COPY pyproject.toml .
-RUN pip install --no-cache-dir -e .
+RUN pip install --no-cache-dir -e ".[dev]"
 
 # Copy app code
 COPY . .
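The compose-scaling flow this Dockerfile change enables comes down to constructing a `docker compose up --scale` argv against the compose file mounted at `/compose`. A minimal sketch of that command construction; `build_scale_command` is an illustrative helper, not part of the codebase, though the allow-list and bounds mirror the `/scale` endpoint added in this diff:

```python
import os

# Services the endpoint is willing to scale (same allow-list as worker.py).
ALLOWED_SERVICES = {"render-worker", "worker", "worker-thumbnail"}


def build_scale_command(service: str, count: int, compose_dir: str = "/compose") -> list[str]:
    """Build the docker compose argv for scaling `service` to `count` replicas.

    `compose_dir` is where docker-compose.yml is mounted inside the
    container (see COMPOSE_PROJECT_DIR in docker-compose.yml).
    """
    if service not in ALLOWED_SERVICES:
        raise ValueError(f"service must be one of {sorted(ALLOWED_SERVICES)}")
    if not 0 <= count <= 20:
        raise ValueError("count must be between 0 and 20")
    compose_file = os.path.join(compose_dir, "docker-compose.yml")
    return [
        "docker", "compose",
        "-f", compose_file,
        "up",
        "--scale", f"{service}={count}",
        "--no-recreate",  # leave already-running replicas untouched
        "-d",
    ]
```

`--no-recreate` means scaling up only starts the additional containers instead of restarting the replicas that are already up.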
diff --git a/backend/app/api/routers/cad.py b/backend/app/api/routers/cad.py index 4163d9b..a7f2e2c 100644 --- a/backend/app/api/routers/cad.py +++ b/backend/app/api/routers/cad.py @@ -320,6 +320,38 @@ async def generate_stl( return {"status": "queued", "task_id": task.id, "quality": quality} +@router.post("/{id}/generate-gltf-geometry", status_code=status.HTTP_202_ACCEPTED) +async def generate_gltf_geometry( + id: uuid.UUID, + user: User = Depends(get_current_user), + db: AsyncSession = Depends(get_db), +): + """Queue GLB geometry export from the existing STL cache (trimesh, no Blender). + + Stores the result as a MediaAsset with asset_type='gltf_geometry'. + The STL low-quality cache must already exist (run a thumbnail render first). + """ + if user.role.value not in ("admin", "project_manager"): + raise HTTPException(status_code=403, detail="Insufficient permissions") + + cad = await _get_cad_file(id, db) + if not cad.stored_path: + raise HTTPException(status_code=404, detail="STEP file not uploaded for this CAD file") + + step_path = Path(cad.stored_path) + stl_path = step_path.parent / f"{step_path.stem}_low.stl" + if not stl_path.exists(): + raise HTTPException( + status_code=404, + detail="STL low-quality cache not found. 
Trigger a render first to generate it.", + ) + + # Queue as a thumbnail_rendering task (trimesh available in render-worker) + from app.tasks.step_tasks import generate_gltf_geometry_task + task = generate_gltf_geometry_task.delay(str(id)) + return {"status": "queued", "task_id": task.id, "cad_file_id": str(id)} + + @router.post( "/{id}/regenerate-thumbnail", status_code=status.HTTP_202_ACCEPTED, diff --git a/backend/app/api/routers/worker.py b/backend/app/api/routers/worker.py index a23c235..fc61346 100644 --- a/backend/app/api/routers/worker.py +++ b/backend/app/api/routers/worker.py @@ -356,6 +356,104 @@ async def cancel_task(task_id: str, user: User = Depends(require_admin_or_pm)): return {"revoked": task_id} +# --------------------------------------------------------------------------- +# Worker management — list workers + scale +# --------------------------------------------------------------------------- + +class ScaleRequest(BaseModel): + service: str # "render-worker" | "worker" | "worker-thumbnail" + count: int # 0–20 + + +@router.get("/celery-workers") +async def get_celery_workers(user: User = Depends(require_admin_or_pm)): + """List active Celery workers with their queues and active task counts.""" + import asyncio + from app.tasks.celery_app import celery_app + + def _inspect() -> dict: + try: + insp = celery_app.control.inspect(timeout=2.0) + return { + "active_queues": insp.active_queues() or {}, + "active": insp.active() or {}, + "stats": insp.stats() or {}, + } + except Exception as exc: + return {"error": str(exc)} + + data = await asyncio.to_thread(_inspect) + if "error" in data: + return {"workers": [], "error": data["error"]} + + workers = [] + for worker_name, queues in data.get("active_queues", {}).items(): + queue_names = [q.get("name") for q in (queues or [])] + active_tasks = data.get("active", {}).get(worker_name, []) + stats = data.get("stats", {}).get(worker_name, {}) + workers.append({ + "name": worker_name, + "queues": queue_names, + 
"active_task_count": len(active_tasks), + "active_tasks": [ + {"name": t.get("name"), "id": t.get("id")} for t in active_tasks + ], + "total_tasks_processed": stats.get("total", {}), + }) + return {"workers": workers} + + +@router.post("/scale", status_code=http_status.HTTP_202_ACCEPTED) +async def scale_workers( + body: ScaleRequest, + user: User = Depends(require_admin_or_pm), +): + """Scale a Compose service (render-worker, worker, worker-thumbnail) up or down. + + Requires the docker socket and compose file to be accessible inside the container + (see docker-compose.yml COMPOSE_PROJECT_DIR env var). + """ + import asyncio + import os + import subprocess + from fastapi import HTTPException + + ALLOWED_SERVICES = {"render-worker", "worker", "worker-thumbnail"} + if body.service not in ALLOWED_SERVICES: + raise HTTPException(400, detail=f"service must be one of {ALLOWED_SERVICES}") + if not (0 <= body.count <= 20): + raise HTTPException(400, detail="count must be between 0 and 20") + + compose_dir = os.environ.get("COMPOSE_PROJECT_DIR", "/compose") + compose_file = os.path.join(compose_dir, "docker-compose.yml") + + def _scale() -> subprocess.CompletedProcess: + return subprocess.run( + [ + "docker", "compose", + "-f", compose_file, + "up", + "--scale", f"{body.service}={body.count}", + "--no-recreate", + "-d", + ], + capture_output=True, text=True, timeout=120, + ) + + try: + result = await asyncio.to_thread(_scale) + except subprocess.TimeoutExpired: + raise HTTPException(504, detail="Scale operation timed out") + + if result.returncode != 0: + raise HTTPException( + 500, + detail=f"docker compose scale failed: {result.stderr[-500:]}", + ) + + return {"service": body.service, "count": body.count, "status": "scaling"} + + # --------------------------------------------------------------------------- # Render health check # --------------------------------------------------------------------------- diff --git a/backend/app/domains/media/router.py 
b/backend/app/domains/media/router.py index 6d4c7c8..90cceb7 100644 --- a/backend/app/domains/media/router.py +++ b/backend/app/domains/media/router.py @@ -19,6 +19,7 @@ router = APIRouter(prefix="/api/media", tags=["media"]) async def list_assets( product_id: uuid.UUID | None = None, order_line_id: uuid.UUID | None = None, + cad_file_id: uuid.UUID | None = None, asset_type: MediaAssetType | None = None, skip: int = Query(0, ge=0), limit: int = Query(50, ge=1, le=500), @@ -28,6 +29,7 @@ async def list_assets( db, product_id=product_id, order_line_id=order_line_id, + cad_file_id=cad_file_id, asset_type=asset_type, skip=skip, limit=limit, diff --git a/backend/app/domains/media/service.py b/backend/app/domains/media/service.py index e962f46..ab053c3 100644 --- a/backend/app/domains/media/service.py +++ b/backend/app/domains/media/service.py @@ -9,6 +9,7 @@ async def list_media_assets( db: AsyncSession, product_id: uuid.UUID | None = None, order_line_id: uuid.UUID | None = None, + cad_file_id: uuid.UUID | None = None, asset_type: MediaAssetType | None = None, is_archived: bool | None = False, skip: int = 0, @@ -19,6 +20,8 @@ async def list_media_assets( q = q.where(MediaAsset.product_id == product_id) if order_line_id: q = q.where(MediaAsset.order_line_id == order_line_id) + if cad_file_id: + q = q.where(MediaAsset.cad_file_id == cad_file_id) if asset_type: q = q.where(MediaAsset.asset_type == asset_type) if is_archived is not None: diff --git a/backend/app/domains/rendering/tasks.py b/backend/app/domains/rendering/tasks.py index 1d1dc84..7749609 100644 --- a/backend/app/domains/rendering/tasks.py +++ b/backend/app/domains/rendering/tasks.py @@ -269,6 +269,176 @@ def publish_asset( return asyncio.get_event_loop().run_until_complete(_run()) +def _resolve_step_path_for_order_line(order_line_id: str) -> tuple[str | None, str | None]: + """Sync helper: resolves (step_path, cad_file_id) from an OrderLine via DB.""" + import asyncio + + async def _inner() -> tuple[str | 
None, str | None]: + from app.database import AsyncSessionLocal + from app.domains.orders.models import OrderLine + from app.domains.products.models import Product + from app.models.cad_file import CadFile + from sqlalchemy import select + from sqlalchemy.orm import selectinload + + async with AsyncSessionLocal() as db: + res = await db.execute( + select(OrderLine) + .options(selectinload(OrderLine.product)) + .where(OrderLine.id == order_line_id) + ) + line = res.scalar_one_or_none() + if not line or not line.product or not line.product.cad_file_id: + return None, None + cad_res = await db.execute( + select(CadFile).where(CadFile.id == line.product.cad_file_id) + ) + cad = cad_res.scalar_one_or_none() + if not cad or not cad.stored_path: + return None, None + return cad.stored_path, str(line.product.cad_file_id) + + return asyncio.get_event_loop().run_until_complete(_inner()) + + +@celery_app.task( + bind=True, + name="app.domains.rendering.tasks.render_order_line_still_task", + queue="thumbnail_rendering", + max_retries=2, +) +def render_order_line_still_task(self, order_line_id: str, **params) -> dict: + """Render a still image for an order line, resolving STEP path from DB. + + Wraps render_still_task logic but accepts order_line_id instead of step_path. + On success, creates a MediaAsset record via publish_asset. 
+ """ + step_path_str, cad_file_id = _resolve_step_path_for_order_line(order_line_id) + if not step_path_str: + raise RuntimeError( + f"Cannot resolve STEP path for order_line {order_line_id}: " + "product missing or has no linked CAD file" + ) + + step = Path(step_path_str) + output_dir = step.parent / "renders" + output_dir.mkdir(parents=True, exist_ok=True) + output_path = output_dir / f"line_{order_line_id}.png" + + try: + from app.services.render_blender import render_still + result = render_still( + step_path=step, + output_path=output_path, + **params, + ) + publish_asset.delay( + order_line_id, + "still", + str(output_path), + render_config=result, + ) + logger.info( + "render_order_line_still_task completed for line %s in %.1fs", + order_line_id, result.get("total_duration_s", 0), + ) + return result + except Exception as exc: + logger.error("render_order_line_still_task failed for %s: %s", order_line_id, exc) + raise self.retry(exc=exc, countdown=30) + + +@celery_app.task( + bind=True, + name="app.domains.rendering.tasks.export_gltf_for_order_line_task", + queue="thumbnail_rendering", + max_retries=1, +) +def export_gltf_for_order_line_task(self, order_line_id: str) -> dict: + """Export a geometry-only GLB from the STL cache using trimesh (no Blender). + + Publishes a MediaAsset with asset_type='gltf_geometry'. + Requires the STL low-quality cache to exist. + """ + step_path_str, cad_file_id = _resolve_step_path_for_order_line(order_line_id) + if not step_path_str: + raise RuntimeError(f"Cannot resolve STEP path for order_line {order_line_id}") + + step = Path(step_path_str) + stl_path = step.parent / f"{step.stem}_low.stl" + if not stl_path.exists(): + raise RuntimeError( + f"STL cache not found: {stl_path}. Run thumbnail generation first." 
+ ) + + output_path = step.parent / f"{step.stem}_geometry.glb" + + try: + import trimesh + mesh = trimesh.load(str(stl_path)) + mesh.export(str(output_path)) + publish_asset.delay(order_line_id, "gltf_geometry", str(output_path)) + logger.info("export_gltf_for_order_line_task completed: %s", output_path.name) + return {"glb_path": str(output_path)} + except Exception as exc: + logger.error("export_gltf_for_order_line_task failed for %s: %s", order_line_id, exc) + raise self.retry(exc=exc, countdown=15) + + +@celery_app.task( + bind=True, + name="app.domains.rendering.tasks.export_blend_for_order_line_task", + queue="thumbnail_rendering", + max_retries=1, +) +def export_blend_for_order_line_task(self, order_line_id: str) -> dict: + """Export a production-quality GLB via Blender + asset library (export_gltf.py). + + Publishes a MediaAsset with asset_type='blend_production'. + Requires Blender + the render-scripts directory. + """ + import os + import subprocess + + step_path_str, cad_file_id = _resolve_step_path_for_order_line(order_line_id) + if not step_path_str: + raise RuntimeError(f"Cannot resolve STEP path for order_line {order_line_id}") + + step = Path(step_path_str) + stl_path = step.parent / f"{step.stem}_low.stl" + if not stl_path.exists(): + raise RuntimeError(f"STL cache not found: {stl_path}") + + output_path = step.parent / f"{step.stem}_production.glb" + scripts_dir = Path(os.environ.get("RENDER_SCRIPTS_DIR", "/render-scripts")) + export_script = scripts_dir / "export_gltf.py" + + from app.services.render_blender import find_blender + blender_bin = find_blender() + if not blender_bin: + raise RuntimeError("Blender binary not found — cannot run export_blend task") + + try: + cmd = [ + blender_bin, "--background", + "--python", str(export_script), + "--", + "--stl_path", str(stl_path), + "--output_path", str(output_path), + ] + result = subprocess.run(cmd, capture_output=True, text=True, timeout=300) + if result.returncode != 0: + raise RuntimeError( + 
f"export_gltf.py exited {result.returncode}:\n{result.stderr[-500:]}" + ) + publish_asset.delay(order_line_id, "blend_production", str(output_path)) + logger.info("export_blend_for_order_line_task completed: %s", output_path.name) + return {"glb_path": str(output_path)} + except Exception as exc: + logger.error("export_blend_for_order_line_task failed for %s: %s", order_line_id, exc) + raise self.retry(exc=exc, countdown=30) + + def _build_ffmpeg_cmd( frames_dir: Path, output_mp4: Path, fps: int = 30, bg_color: str = "" ) -> list: diff --git a/backend/app/domains/rendering/workflow_builder.py b/backend/app/domains/rendering/workflow_builder.py index 9fc0e0e..336e50f 100644 --- a/backend/app/domains/rendering/workflow_builder.py +++ b/backend/app/domains/rendering/workflow_builder.py @@ -20,6 +20,7 @@ def dispatch_workflow( "still": _build_still, "turntable": _build_turntable, "multi_angle": _build_multi_angle, + "still_with_exports": _build_still_with_exports, } builder = builders.get(workflow_type) if not builder: @@ -30,24 +31,56 @@ def dispatch_workflow( def _build_still(order_line_id: str, params: dict): - from app.domains.rendering.tasks import render_still_task + """Still render: resolves STEP path from order_line DB record.""" + from app.domains.rendering.tasks import render_order_line_still_task return chain( - render_still_task.si(order_line_id, **params) + render_order_line_still_task.si(order_line_id, **params) ) def _build_turntable(order_line_id: str, params: dict): + """Turntable animation: requires step_path + output_dir in params.""" from app.domains.rendering.tasks import render_turntable_task + step_path = params.get("step_path") + output_dir = params.get("output_dir") + if not step_path or not output_dir: + raise ValueError( + "turntable workflow requires 'step_path' and 'output_dir' in params" + ) + remaining = {k: v for k, v in params.items() if k not in ("step_path", "output_dir")} return chain( - render_turntable_task.si(order_line_id, 
**params) + render_turntable_task.si(step_path, output_dir, **remaining) ) def _build_multi_angle(order_line_id: str, params: dict): - from app.domains.rendering.tasks import render_still_task - angles = params.get("angles", [0, 45, 90]) - p = {k: v for k, v in params.items() if k != "angles"} + """Multi-angle stills: renders the same order_line from multiple rotation_z angles.""" + from app.domains.rendering.tasks import render_order_line_still_task + angles = params.pop("angles", [0, 45, 90]) return group( - render_still_task.si(order_line_id, camera_angle=angle, **p) + render_order_line_still_task.si(order_line_id, rotation_z=float(angle), **params) for angle in angles ) + + +def _build_still_with_exports(order_line_id: str, params: dict): + """Still render + parallel GLB exports (geometry + production quality). + + Pipeline: + render_order_line_still_task → group( + export_gltf_for_order_line_task, + export_blend_for_order_line_task, + ) + """ + from app.domains.rendering.tasks import ( + render_order_line_still_task, + export_gltf_for_order_line_task, + export_blend_for_order_line_task, + ) + return chain( + render_order_line_still_task.si(order_line_id, **params), + group( + export_gltf_for_order_line_task.si(order_line_id), + export_blend_for_order_line_task.si(order_line_id), + ), + ) diff --git a/backend/app/tasks/step_tasks.py b/backend/app/tasks/step_tasks.py index 67f0916..b3dc138 100644 --- a/backend/app/tasks/step_tasks.py +++ b/backend/app/tasks/step_tasks.py @@ -245,6 +245,69 @@ def generate_stl_cache(self, cad_file_id: str, quality: str): raise self.retry(exc=exc, countdown=30, max_retries=2) +@celery_app.task(bind=True, name="app.tasks.step_tasks.generate_gltf_geometry_task", queue="thumbnail_rendering", max_retries=1) +def generate_gltf_geometry_task(self, cad_file_id: str): + """Export a geometry-only GLB from the STL low-quality cache using trimesh. + + Creates a MediaAsset with asset_type='gltf_geometry' and cad_file_id set. 
+ No Blender required — trimesh handles the STL→GLB conversion. + """ + from pathlib import Path as _Path + from sqlalchemy import create_engine + from sqlalchemy.orm import Session + from app.config import settings as app_settings + from app.models.cad_file import CadFile + + sync_url = app_settings.database_url.replace("+asyncpg", "") + eng = create_engine(sync_url) + with Session(eng) as session: + cad_file = session.get(CadFile, cad_file_id) + if not cad_file or not cad_file.stored_path: + logger.error("generate_gltf_geometry_task: no stored_path for %s", cad_file_id) + return + step_path_str = cad_file.stored_path + eng.dispose() + + step = _Path(step_path_str) + stl_path = step.parent / f"{step.stem}_low.stl" + if not stl_path.exists(): + logger.error("generate_gltf_geometry_task: STL not found %s", stl_path) + raise RuntimeError(f"STL cache not found: {stl_path}") + + output_path = step.parent / f"{step.stem}_geometry.glb" + try: + import trimesh + mesh = trimesh.load(str(stl_path)) + mesh.export(str(output_path)) + logger.info("generate_gltf_geometry_task: exported %s", output_path.name) + except Exception as exc: + logger.error("generate_gltf_geometry_task failed for %s: %s", cad_file_id, exc) + raise self.retry(exc=exc, countdown=15) + + # Create MediaAsset record + import asyncio + + async def _store(): + from app.database import AsyncSessionLocal + from app.domains.media.models import MediaAsset, MediaAssetType + async with AsyncSessionLocal() as db: + import uuid + asset = MediaAsset( + cad_file_id=uuid.UUID(cad_file_id), + asset_type=MediaAssetType.gltf_geometry, + storage_key=str(output_path), + mime_type="model/gltf-binary", + file_size_bytes=output_path.stat().st_size if output_path.exists() else None, + ) + db.add(asset) + await db.commit() + return str(asset.id) + + asset_id = asyncio.get_event_loop().run_until_complete(_store()) + logger.info("generate_gltf_geometry_task: MediaAsset %s created for cad %s", asset_id, cad_file_id) + return 
{"glb_path": str(output_path), "asset_id": asset_id} + + @celery_app.task(bind=True, name="app.tasks.step_tasks.regenerate_thumbnail", queue="thumbnail_rendering") def regenerate_thumbnail(self, cad_file_id: str, part_colors: dict): """Regenerate thumbnail with per-part colours.""" diff --git a/backend/tests/domains/test_orders_service.py b/backend/tests/domains/test_orders_service.py new file mode 100644 index 0000000..1be3fa8 --- /dev/null +++ b/backend/tests/domains/test_orders_service.py @@ -0,0 +1,191 @@ +"""Tests for orders domain — order creation, status transitions, and pricing.""" +import uuid + +import pytest + + +# --------------------------------------------------------------------------- +# Helpers +# --------------------------------------------------------------------------- + +async def _create_test_product(db): + from app.domains.products.models import Product + product = Product( + id=uuid.uuid4(), + name=f"Test Product {uuid.uuid4().hex[:6]}", + category_key="TRB", + components=[], + cad_part_materials=[], + ) + db.add(product) + await db.commit() + await db.refresh(product) + return product + + +async def _create_test_order(db, user): + from app.domains.orders.models import Order, OrderStatus + order = Order( + id=uuid.uuid4(), + order_number=f"TEST-{uuid.uuid4().hex[:6].upper()}", + status=OrderStatus.draft, + created_by=user.id, + tenant_id=None, + ) + db.add(order) + await db.commit() + await db.refresh(order) + return order + + +# --------------------------------------------------------------------------- +# Order creation +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_create_order_draft_status(db, admin_user): + """New order starts in draft status.""" + order = await _create_test_order(db, admin_user) + assert order.id is not None + assert order.status.value == "draft" + assert order.order_number.startswith("TEST-") + + +@pytest.mark.asyncio +async def 
test_order_has_no_lines_initially(db, admin_user): + """New order starts with zero order lines.""" + from sqlalchemy import select + from app.domains.orders.models import Order, OrderLine + order = await _create_test_order(db, admin_user) + result = await db.execute( + select(OrderLine).where(OrderLine.order_id == order.id) + ) + lines = result.scalars().all() + assert len(lines) == 0 + + +# --------------------------------------------------------------------------- +# Order line creation +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_add_order_line(db, admin_user): + """Order line can be added to a draft order.""" + from app.domains.orders.models import OrderLine + product = await _create_test_product(db) + order = await _create_test_order(db, admin_user) + + line = OrderLine( + id=uuid.uuid4(), + order_id=order.id, + product_id=product.id, + render_status="pending", + item_status="pending", + tenant_id=None, + ) + db.add(line) + await db.commit() + await db.refresh(line) + + assert line.id is not None + assert line.order_id == order.id + assert line.render_status == "pending" + + +# --------------------------------------------------------------------------- +# Status transitions +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_order_status_transition_to_submitted(db, admin_user): + """Order status can be changed from draft to submitted.""" + from app.domains.orders.models import Order, OrderStatus + order = await _create_test_order(db, admin_user) + order.status = OrderStatus.submitted + await db.commit() + await db.refresh(order) + assert order.status == OrderStatus.submitted + + +@pytest.mark.asyncio +async def test_order_multiple_lines(db, admin_user): + """Multiple lines can be added to the same order.""" + from app.domains.orders.models import OrderLine + product = await _create_test_product(db) + order = await 
_create_test_order(db, admin_user) + + for _ in range(3): + line = OrderLine( + id=uuid.uuid4(), + order_id=order.id, + product_id=product.id, + render_status="pending", + item_status="pending", + tenant_id=None, + ) + db.add(line) + await db.commit() + + from sqlalchemy import select + result = await db.execute( + select(OrderLine).where(OrderLine.order_id == order.id) + ) + assert len(result.scalars().all()) == 3 + + +# --------------------------------------------------------------------------- +# Render status tracking +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_order_line_render_status_update(db, admin_user): + """Order line render_status can be updated to processing/completed.""" + from app.domains.orders.models import OrderLine + product = await _create_test_product(db) + order = await _create_test_order(db, admin_user) + + line = OrderLine( + id=uuid.uuid4(), + order_id=order.id, + product_id=product.id, + render_status="pending", + item_status="pending", + tenant_id=None, + ) + db.add(line) + await db.commit() + + line.render_status = "processing" + await db.commit() + await db.refresh(line) + assert line.render_status == "processing" + + line.render_status = "completed" + await db.commit() + await db.refresh(line) + assert line.render_status == "completed" + + +# --------------------------------------------------------------------------- +# Unit price +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_order_line_unit_price_nullable(db, admin_user): + """unit_price defaults to None.""" + from app.domains.orders.models import OrderLine + product = await _create_test_product(db) + order = await _create_test_order(db, admin_user) + + line = OrderLine( + id=uuid.uuid4(), + order_id=order.id, + product_id=product.id, + render_status="pending", + item_status="pending", + tenant_id=None, + ) + db.add(line) + await 
db.commit()
+    await db.refresh(line)
+    assert line.unit_price is None
diff --git a/backend/tests/domains/test_rendering_service.py b/backend/tests/domains/test_rendering_service.py
new file mode 100644
index 0000000..fce5aac
--- /dev/null
+++ b/backend/tests/domains/test_rendering_service.py
@@ -0,0 +1,111 @@
+"""Tests for rendering domain — workflow builder + task helpers."""
+import uuid
+
+import pytest
+
+
+# ---------------------------------------------------------------------------
+# workflow_builder unit tests (no DB required)
+# ---------------------------------------------------------------------------
+
+def test_dispatch_workflow_unknown_type_raises():
+    from app.domains.rendering.workflow_builder import dispatch_workflow
+    with pytest.raises(ValueError, match="Unknown workflow type"):
+        dispatch_workflow("nonexistent_type", str(uuid.uuid4()))
+
+
+def test_build_still_returns_chain():
+    """_build_still returns a Celery chain wrapping render_order_line_still_task."""
+    from app.domains.rendering.workflow_builder import _build_still
+    canvas = _build_still(str(uuid.uuid4()), {})
+    # A single-task chain is still a Celery Signature, not a plain chain, but
+    # it should be callable / have apply_async
+    assert hasattr(canvas, "apply_async")
+
+
+def test_build_multi_angle_creates_group():
+    """_build_multi_angle returns a Celery group with one sig per angle."""
+    from app.domains.rendering.workflow_builder import _build_multi_angle
+    order_line_id = str(uuid.uuid4())
+    canvas = _build_multi_angle(order_line_id, {"angles": [0, 90, 180]})
+    # group exposes its member signatures via the tasks attribute
+    assert hasattr(canvas, "tasks")
+    assert len(canvas.tasks) == 3
+
+
+def test_build_still_with_exports_is_chain():
+    """_build_still_with_exports returns a chain."""
+    from app.domains.rendering.workflow_builder import _build_still_with_exports
+    canvas = _build_still_with_exports(str(uuid.uuid4()), {})
+    assert hasattr(canvas, "apply_async")
+
+
+def test_build_turntable_raises_without_step_path():
+    """_build_turntable raises ValueError if step_path missing in params."""
+    from app.domains.rendering.workflow_builder import _build_turntable
+    with pytest.raises(ValueError, match="step_path"):
+        _build_turntable(str(uuid.uuid4()), {})
+
+
+def test_build_turntable_raises_without_output_dir():
+    from app.domains.rendering.workflow_builder import _build_turntable
+    with pytest.raises(ValueError, match="output_dir"):
+        _build_turntable(str(uuid.uuid4()), {"step_path": "/tmp/test.stp"})
+
+
+# ---------------------------------------------------------------------------
+# _resolve_step_path_for_order_line — integration test (needs DB)
+# ---------------------------------------------------------------------------
+
+@pytest.mark.integration
+def test_resolve_step_path_returns_none_for_missing_line():
+    """Returns (None, None) for a line_id that doesn't exist.
+
+    Sync on purpose: the helper drives its own event loop via
+    run_until_complete, which would clash with pytest-asyncio's
+    already-running loop inside an async test.
+    """
+    from app.domains.rendering.tasks import _resolve_step_path_for_order_line
+
+    result = _resolve_step_path_for_order_line(str(uuid.uuid4()))
+    assert result == (None, None)
+
+
+# ---------------------------------------------------------------------------
+# publish_asset (unit test with mocked DB)
+# ---------------------------------------------------------------------------
+
+def test_publish_asset_signature():
+    """publish_asset is importable and is a bound Celery task."""
+    from app.domains.rendering.tasks import publish_asset
+    assert callable(publish_asset)
+    assert hasattr(publish_asset, "delay")
+
+
+# ---------------------------------------------------------------------------
+# generate_gltf_geometry_task — smoke test (unit)
+# ---------------------------------------------------------------------------
+
+def test_generate_gltf_geometry_task_importable():
+    from app.tasks.step_tasks import 
generate_gltf_geometry_task + assert callable(generate_gltf_geometry_task) + assert hasattr(generate_gltf_geometry_task, "delay") + + +# --------------------------------------------------------------------------- +# New order-line tasks are importable and correctly registered +# --------------------------------------------------------------------------- + +def test_render_order_line_still_task_importable(): + from app.domains.rendering.tasks import render_order_line_still_task + assert render_order_line_still_task.name == "app.domains.rendering.tasks.render_order_line_still_task" + assert render_order_line_still_task.queue == "thumbnail_rendering" + + +def test_export_gltf_for_order_line_task_importable(): + from app.domains.rendering.tasks import export_gltf_for_order_line_task + assert export_gltf_for_order_line_task.queue == "thumbnail_rendering" + + +def test_export_blend_for_order_line_task_importable(): + from app.domains.rendering.tasks import export_blend_for_order_line_task + assert export_blend_for_order_line_task.queue == "thumbnail_rendering" diff --git a/docker-compose.yml b/docker-compose.yml index 117d7eb..03c5123 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -67,9 +67,12 @@ services: - MINIO_USER=${MINIO_USER:-minioadmin} - MINIO_PASSWORD=${MINIO_PASSWORD:-minioadmin} - MINIO_BUCKET=${MINIO_BUCKET:-uploads} + - COMPOSE_PROJECT_DIR=/compose volumes: - ./backend:/app - uploads:/app/uploads + - /var/run/docker.sock:/var/run/docker.sock + - .:/compose:ro ports: - "8888:8888" depends_on: diff --git a/frontend/src/App.tsx b/frontend/src/App.tsx index 701ec8f..b040464 100644 --- a/frontend/src/App.tsx +++ b/frontend/src/App.tsx @@ -22,6 +22,7 @@ import TenantsPage from './pages/Tenants' import WorkflowEditorPage from './pages/WorkflowEditor' import MediaBrowserPage from './pages/MediaBrowser' import BillingPage from './pages/Billing' +import WorkerManagementPage from './pages/WorkerManagement' function ProtectedRoute({ children }: { children: 
React.ReactNode }) {
   const token = useAuthStore((s) => s.token)
@@ -104,6 +105,14 @@ export default function App() {
         }
       />
+
+
+
+
+        }
+      />
diff --git a/frontend/src/__tests__/pages/WorkerActivity.test.tsx b/frontend/src/__tests__/pages/WorkerActivity.test.tsx
new file mode 100644
index 0000000..619072f
--- /dev/null
+++ b/frontend/src/__tests__/pages/WorkerActivity.test.tsx
@@ -0,0 +1,70 @@
+import { describe, test, expect } from 'vitest'
+
+describe('WorkerActivity Page', () => {
+  test('page module is importable', async () => {
+    const module = await import('../../pages/WorkerActivity')
+    expect(module.default).toBeDefined()
+  })
+})
+
+describe('worker API types', () => {
+  test('WorkerActivity interface shape', async () => {
+    // Type-level check: the interface must have the right keys
+    const activity = {
+      cad_processing: [],
+      active_count: 0,
+      failed_count: 0,
+      render_jobs: [],
+      render_active_count: 0,
+      render_failed_count: 0,
+    }
+    expect(activity.cad_processing).toBeInstanceOf(Array)
+    expect(typeof activity.active_count).toBe('number')
+  })
+
+  test('CeleryWorker interface shape', () => {
+    const worker = {
+      name: 'celery@worker1',
+      queues: ['thumbnail_rendering'],
+      active_task_count: 2,
+      active_tasks: [{ name: 'render_still_task', id: 'abc' }],
+      total_tasks_processed: { render_still_task: 42 },
+    }
+    expect(worker.queues).toContain('thumbnail_rendering')
+    expect(worker.active_tasks).toHaveLength(1)
+  })
+
+  test('QueueStatus interface shape', () => {
+    const qs = {
+      queue_depths: { step_processing: 3, thumbnail_rendering: 0 },
+      pending_count: 3,
+      active: [],
+      reserved: [],
+      pending: [],
+    }
+    expect(qs.queue_depths).toHaveProperty('step_processing')
+    expect(qs.pending_count).toBe(3)
+  })
+})
+
+describe('worker API functions', () => {
+  test('getWorkerActivity is a function', async () => {
+    const { getWorkerActivity } = await import('../../api/worker')
+    expect(typeof getWorkerActivity).toBe('function')
+  })
+
+  test('getCeleryWorkers is a function', async () => {
+    const { getCeleryWorkers } = await import('../../api/worker')
+    expect(typeof getCeleryWorkers).toBe('function')
+  })
+
+  test('scaleWorkers is a function', async () => {
+    const { scaleWorkers } = await import('../../api/worker')
+    expect(typeof scaleWorkers).toBe('function')
+  })
+
+  test('getQueueStatus is a function', async () => {
+    const { getQueueStatus } = await import('../../api/worker')
+    expect(typeof getQueueStatus).toBe('function')
+  })
+})
diff --git a/frontend/src/__tests__/pages/WorkerManagement.test.tsx b/frontend/src/__tests__/pages/WorkerManagement.test.tsx
new file mode 100644
index 0000000..321a0d0
--- /dev/null
+++ b/frontend/src/__tests__/pages/WorkerManagement.test.tsx
@@ -0,0 +1,67 @@
+import { describe, test, expect } from 'vitest'
+
+describe('WorkerManagement Page', () => {
+  test('page module is importable', async () => {
+    const module = await import('../../pages/WorkerManagement')
+    expect(module.default).toBeDefined()
+  })
+})
+
+describe('media API', () => {
+  test('getMediaAssets is a function', async () => {
+    const { getMediaAssets } = await import('../../api/media')
+    expect(typeof getMediaAssets).toBe('function')
+  })
+
+  test('MediaFilter supports cad_file_id', () => {
+    // Type-level check: build a filter with cad_file_id
+    const filter = { cad_file_id: 'some-uuid', asset_type: 'gltf_geometry' as const }
+    expect(filter.cad_file_id).toBe('some-uuid')
+  })
+
+  test('MediaAsset interface has all required fields', () => {
+    const asset = {
+      id: 'uuid',
+      tenant_id: null,
+      product_id: null,
+      cad_file_id: null,
+      order_line_id: null,
+      workflow_run_id: null,
+      asset_type: 'still' as const,
+      storage_key: 'path/to/file.png',
+      file_size_bytes: 1024,
+      mime_type: 'image/png',
+      width: 512,
+      height: 512,
+      duration_s: null,
+      render_config: null,
+      is_archived: false,
+      created_at: new Date().toISOString(),
+      download_url: null,
+    }
+    expect(asset.asset_type).toBe('still')
+    expect(asset.cad_file_id).toBeNull()
+  })
+})
+
+describe('cad API', () => {
+  test('generateGltfGeometry is a function', async () => {
+    const { generateGltfGeometry } = await import('../../api/cad')
+    expect(typeof generateGltfGeometry).toBe('function')
+  })
+
+  test('getCadThumbnailUrl returns correct URL', async () => {
+    const { getCadThumbnailUrl } = await import('../../api/cad')
+    const url = getCadThumbnailUrl('test-uuid')
+    expect(url).toBe('/api/cad/test-uuid/thumbnail')
+  })
+})
+
+describe('Scale request validation', () => {
+  test('allowed services', () => {
+    const allowed = ['render-worker', 'worker', 'worker-thumbnail']
+    expect(allowed).toContain('render-worker')
+    expect(allowed).toContain('worker-thumbnail')
+    expect(allowed).not.toContain('postgres')
+  })
+})
diff --git a/frontend/src/api/cad.ts b/frontend/src/api/cad.ts
index dec4624..0210691 100644
--- a/frontend/src/api/cad.ts
+++ b/frontend/src/api/cad.ts
@@ -103,3 +103,18 @@ export async function regenerateThumbnail(
   )
   return res.data
 }
+
+export interface GenerateGltfResponse {
+  status: 'queued'
+  task_id: string
+  cad_file_id: string
+}
+
+/**
+ * Queue GLB geometry export from existing STL cache (trimesh, no Blender).
+ * The STL low-quality cache must already exist.
+ */
+export async function generateGltfGeometry(cadFileId: string): Promise<GenerateGltfResponse> {
+  const res = await api.post<GenerateGltfResponse>(`/cad/${cadFileId}/generate-gltf-geometry`)
+  return res.data
+}
diff --git a/frontend/src/api/media.ts b/frontend/src/api/media.ts
index 6b4fc86..6afe368 100644
--- a/frontend/src/api/media.ts
+++ b/frontend/src/api/media.ts
@@ -33,6 +33,7 @@ export interface MediaAsset {
 export interface MediaFilter {
   product_id?: string
   order_line_id?: string
+  cad_file_id?: string
   asset_type?: MediaAssetType
   skip?: number
   limit?: number
@@ -42,6 +43,7 @@ export const getMediaAssets = (filters: MediaFilter = {}): Promise<MediaAsset[]> => {
   const params = new URLSearchParams()
   if (filters.product_id) params.set('product_id', filters.product_id)
   if (filters.order_line_id) params.set('order_line_id', filters.order_line_id)
+  if (filters.cad_file_id) params.set('cad_file_id', filters.cad_file_id)
   if (filters.asset_type) params.set('asset_type', filters.asset_type)
   if (filters.skip !== undefined) params.set('skip', String(filters.skip))
   if (filters.limit !== undefined) params.set('limit', String(filters.limit))
diff --git a/frontend/src/api/worker.ts b/frontend/src/api/worker.ts
index e9a5f3b..ad085ad 100644
--- a/frontend/src/api/worker.ts
+++ b/frontend/src/api/worker.ts
@@ -123,3 +123,46 @@ export async function cancelTask(taskId: string): Promise<{ revoked: string }> {
   const res = await api.post<{ revoked: string }>(`/worker/queue/cancel/${taskId}`)
   return res.data
 }
+
+// ---------------------------------------------------------------------------
+// Worker management
+// ---------------------------------------------------------------------------
+
+export interface CeleryWorkerTask {
+  name: string
+  id: string
+}
+
+export interface CeleryWorker {
+  name: string
+  queues: string[]
+  active_task_count: number
+  active_tasks: CeleryWorkerTask[]
+  total_tasks_processed: Record<string, number>
+}
+
+export interface CeleryWorkersResponse {
+  workers: CeleryWorker[]
+  error?: string
+}
+
+export interface ScaleRequest {
+  service: 'render-worker' | 'worker' | 'worker-thumbnail'
+  count: number
+}
+
+export interface ScaleResponse {
+  service: string
+  count: number
+  status: string
+}
+
+export async function getCeleryWorkers(): Promise<CeleryWorkersResponse> {
+  const res = await api.get<CeleryWorkersResponse>('/worker/celery-workers')
+  return res.data
+}
+
+export async function scaleWorkers(req: ScaleRequest): Promise<ScaleResponse> {
+  const res = await api.post<ScaleResponse>('/worker/scale', req)
+  return res.data
+}
diff --git a/frontend/src/components/cad/ThreeDViewer.tsx b/frontend/src/components/cad/ThreeDViewer.tsx
index 07964f5..1da08ce 100644
--- a/frontend/src/components/cad/ThreeDViewer.tsx
+++ b/frontend/src/components/cad/ThreeDViewer.tsx
@@ -11,24 +11,50 @@ import {
 import { Canvas, useThree, useFrame } from '@react-three/fiber'
 import { OrbitControls, useGLTF, Environment } from '@react-three/drei'
 import { toast } from 'sonner'
-import { X, Camera, Loader2, AlertTriangle } from 'lucide-react'
+import {
+  X, Camera, Loader2, AlertTriangle, Box, Cpu, Download, ChevronDown,
+} from 'lucide-react'
 import api from '../../api/client'
 
 // ---------------------------------------------------------------------------
 // Types
 // ---------------------------------------------------------------------------
 
-interface ThreeDViewerProps {
+export interface ThreeDViewerProps {
   cadFileId: string
   onClose: () => void
+  /** URL for the geometry-only GLB (from STL export) */
+  geometryGltfUrl?: string
+  /** URL for the production-quality GLB (from asset library render) */
+  productionGltfUrl?: string
+  /** Download URLs for GLB and .blend assets */
+  downloadUrls?: {
+    glb?: string
+    blend?: string
+  }
 }
 
+type ViewMode = 'geometry' | 'production'
+
+const ENV_PRESETS = ['city', 'sunset', 'dawn', 'night', 'warehouse', 'forest', 'apartment', 'studio', 'park', 'lobby'] as const
+type EnvPreset = typeof ENV_PRESETS[number]
+
 // ---------------------------------------------------------------------------
 // Inner model loader – separated so
Suspense can catch it
 // ---------------------------------------------------------------------------
 
-function GltfModel({ url }: { url: string }) {
+function GltfModel({ url, wireframe }: { url: string; wireframe: boolean }) {
   const { scene } = useGLTF(url)
+
+  useEffect(() => {
+    scene.traverse((child: any) => {
+      if (child.isMesh) {
+        child.material = child.material.clone()
+        child.material.wireframe = wireframe
+      }
+    })
+  }, [scene, wireframe])
+
   return <primitive object={scene} />
 }
 
@@ -50,11 +76,7 @@ function ScreenshotCapture({ enabled, cadFileId, onDone }: ScreenshotCaptureProp
     if (!enabled || didCapture.current) return
     didCapture.current = true
 
-    // Grab the canvas as a data-URL after the current frame has been rendered
     const dataUrl = gl.domElement.toDataURL('image/png')
-
-    // Convert data-URL → Blob without a network fetch:
-    // data:[][;base64],
     const [header, base64Data] = dataUrl.split(',')
     const mimeMatch = header.match(/:(.*?);/)
     const mimeType = mimeMatch ? mimeMatch[1] : 'image/png'
@@ -64,7 +86,6 @@ function ScreenshotCapture({ enabled, cadFileId, onDone }: ScreenshotCaptureProp
       byteArray[i] = byteCharacters.charCodeAt(i)
     }
     const blob = new Blob([byteArray], { type: mimeType })
-
     const formData = new FormData()
     formData.append('thumbnail', blob, 'thumbnail.png')
 
@@ -72,14 +93,8 @@ function ScreenshotCapture({ enabled, cadFileId, onDone }: ScreenshotCaptureProp
       .post(`/cad/${cadFileId}/regenerate-thumbnail`, formData, {
         headers: { 'Content-Type': 'multipart/form-data' },
       })
-      .then(() => {
-        toast.success('Thumbnail captured and saved')
-      })
-      .catch((err: unknown) => {
-        const msg = err instanceof Error ? err.message : 'Unknown error'
-        console.error('Thumbnail upload failed', msg)
-        toast.error('Failed to save thumbnail')
-      })
+      .then(() => toast.success('Thumbnail captured and saved'))
+      .catch(() => toast.error('Failed to save thumbnail'))
       .finally(() => {
         didCapture.current = false
         onDone()
@@ -90,7 +105,7 @@ function ScreenshotCapture({ enabled, cadFileId, onDone }: ScreenshotCaptureProp
 }
 
 // ---------------------------------------------------------------------------
-// Error boundary for the GLTF loader inside Suspense
+// Error boundary
 // ---------------------------------------------------------------------------
 
 class GltfErrorBoundary extends Component<
@@ -101,15 +116,12 @@
     super(props)
     this.state = { hasError: false }
   }
-
   static getDerivedStateFromError(): { hasError: boolean } {
     return { hasError: true }
   }
-
   componentDidCatch(error: Error, _info: ErrorInfo): void {
     this.props.onError(error.message || 'Failed to parse GLTF')
   }
-
   render(): ReactNode {
     if (this.state.hasError) return null
     return this.props.children
@@ -117,7 +129,7 @@
 }
 
 // ---------------------------------------------------------------------------
-// Loading overlay (shown while model resolves inside Canvas)
+// Loading overlay
 // ---------------------------------------------------------------------------
 
 function LoadingOverlay() {
@@ -130,60 +142,199 @@
 }
 
 // ---------------------------------------------------------------------------
-// Model loader with resolved tracking
+// Model loader with ready tracking
 // ---------------------------------------------------------------------------
 
 interface ModelWithReadyProps {
   url: string
+  wireframe: boolean
   onReady: () => void
 }
 
-function ModelWithReady({ url, onReady }: ModelWithReadyProps) {
+function ModelWithReady({ url, wireframe, onReady }: ModelWithReadyProps) {
   const { scene } = useGLTF(url)
 
   useEffect(() => {
-    onReady()
-  }, [onReady])
+    scene.traverse((child: any) => {
+      if (child.isMesh) {
+        child.material = child.material.clone()
+        child.material.wireframe = wireframe
+      }
+    })
+  }, [scene, wireframe])
+  useEffect(() => { onReady() }, [onReady])
 
   return <primitive object={scene} />
 }
 
+// ---------------------------------------------------------------------------
+// Env preset dropdown
+// ---------------------------------------------------------------------------
+
+function EnvDropdown({
+  value,
+  onChange,
+}: {
+  value: EnvPreset
+  onChange: (v: EnvPreset) => void
+}) {
+  const [open, setOpen] = useState(false)
+  return (
+
+ + {open && ( +
+ {ENV_PRESETS.map((p) => ( + + ))} +
+ )} +
+ ) +} + // --------------------------------------------------------------------------- // Main exported component // --------------------------------------------------------------------------- -export default function ThreeDViewer({ cadFileId, onClose }: ThreeDViewerProps) { - const modelUrl = `/api/cad/${cadFileId}/model` +export default function ThreeDViewer({ + cadFileId, + onClose, + geometryGltfUrl, + productionGltfUrl, + downloadUrls, +}: ThreeDViewerProps) { + const defaultUrl = `/api/cad/${cadFileId}/model` + const [mode, setMode] = useState('geometry') + const [wireframe, setWireframe] = useState(false) + const [envPreset, setEnvPreset] = useState('city') const [capturing, setCapturing] = useState(false) const [loadError, setLoadError] = useState(null) const [modelReady, setModelReady] = useState(false) + // Resolve the active model URL based on mode + const activeUrl = + mode === 'production' && productionGltfUrl + ? productionGltfUrl + : geometryGltfUrl || defaultUrl + const handleModelReady = useCallback(() => setModelReady(true), []) const handleError = useCallback((msg: string) => setLoadError(msg), []) const handleCaptureDone = useCallback(() => setCapturing(false), []) + // Reset ready state when URL changes + useEffect(() => { + setModelReady(false) + setLoadError(null) + }, [activeUrl]) + + function handleDownload(url: string, filename: string) { + const a = document.createElement('a') + a.href = url + a.download = filename + document.body.appendChild(a) + a.click() + document.body.removeChild(a) + } + + const hasBothModes = !!(geometryGltfUrl && productionGltfUrl) + return (
- {/* ------------------------------------------------------------------ */} - {/* Toolbar */} - {/* ------------------------------------------------------------------ */} -
+ {/* Toolbar */} +
3D Viewer -
+ +
+ {/* Mode toggle */} + {hasBothModes && ( +
+ + +
+ )} + + {/* Wireframe toggle */} + + + {/* Environment preset */} + + + {/* Download buttons */} + {downloadUrls?.glb && ( + + )} + {downloadUrls?.blend && ( + + )} + + {/* Capture button */} + {/* Close */}
- {/* ------------------------------------------------------------------ */} - {/* Viewport */} - {/* ------------------------------------------------------------------ */} + {/* Viewport */}
- {/* Error state */} {loadError && (
@@ -213,34 +361,31 @@ export default function ThreeDViewer({ cadFileId, onClose }: ThreeDViewerProps)
)} - {/* Loading overlay – visible until model signals ready */} {!modelReady && !loadError && } - {/* Three.js Canvas */} - {/* Lights */} - {/* GLTF model */} - + - {/* Camera controls */} + - {/* Environment map for PBR materials */} - - - {/* Screenshot capture – only active when triggered */} {capturing && ( )} + {(user?.role === 'admin' || user?.role === 'project_manager') && ( + + clsx( + 'flex items-center gap-3 px-3 py-2 rounded-md text-sm font-medium transition-colors', + isActive + ? 'bg-accent-light text-accent' + : 'text-content-secondary hover:bg-surface-hover', + ) + } + > + + Workers + + )} {(user?.role === 'admin' || user?.role === 'project_manager') && ( () const navigate = useNavigate() + // Load any geometry GLB that was generated for this CAD file + const { data: gltfAssets } = useQuery({ + queryKey: ['media-assets', id, 'gltf_geometry'], + queryFn: () => getMediaAssets({ cad_file_id: id!, asset_type: 'gltf_geometry' }), + enabled: !!id, + staleTime: 30_000, + }) + + // Load production GLB if available + const { data: productionAssets } = useQuery({ + queryKey: ['media-assets', id, 'gltf_production'], + queryFn: () => getMediaAssets({ cad_file_id: id!, asset_type: 'gltf_production' }), + enabled: !!id, + staleTime: 30_000, + }) + + // Load blend assets for download + const { data: blendAssets } = useQuery({ + queryKey: ['media-assets', id, 'blend_production'], + queryFn: () => getMediaAssets({ cad_file_id: id!, asset_type: 'blend_production' }), + enabled: !!id, + staleTime: 30_000, + }) + if (!id) { return ( -
-

No CAD file ID provided.

- +
+

No CAD file ID provided.

) } + const latestGltf = gltfAssets?.[0] + const latestProduction = productionAssets?.[0] + const latestBlend = blendAssets?.[0] + return ( navigate(-1)} + geometryGltfUrl={latestGltf?.download_url ?? undefined} + productionGltfUrl={latestProduction?.download_url ?? undefined} + downloadUrls={{ + glb: latestGltf?.download_url ?? undefined, + blend: latestBlend?.download_url ?? undefined, + }} /> ) } diff --git a/frontend/src/pages/ProductDetail.tsx b/frontend/src/pages/ProductDetail.tsx index 7733d1f..f2b3fc0 100644 --- a/frontend/src/pages/ProductDetail.tsx +++ b/frontend/src/pages/ProductDetail.tsx @@ -1,10 +1,10 @@ import { useState, useCallback, useEffect, Fragment, useMemo } from 'react' -import { useParams, Link } from 'react-router-dom' +import { useParams, Link, useNavigate } from 'react-router-dom' import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query' import { useDropzone } from 'react-dropzone' import { ArrowLeft, Pencil, Save, X, Box, Image, - RotateCcw, RefreshCw, Upload, ChevronDown, ChevronRight, Wand2, Download, Plus, Trash2, Filter, + RotateCcw, RefreshCw, Upload, ChevronDown, ChevronRight, Wand2, Download, Plus, Trash2, Filter, Cuboid, } from 'lucide-react' import { toast } from 'sonner' import { @@ -18,7 +18,7 @@ import { listMaterials } from '../api/materials' import MaterialInput from '../components/shared/MaterialInput' import MaterialWizard from '../components/MaterialWizard' import { useAuthStore } from '../store/auth' -import { downloadStl, generateStl } from '../api/cad' +import { downloadStl, generateStl, generateGltfGeometry } from '../api/cad' function CadStatusBadge({ status }: { status: string | null }) { if (!status) return ( @@ -48,6 +48,7 @@ const META_FIELDS: Array<{ key: keyof Product; label: string }> = [ export default function ProductDetailPage() { const { id } = useParams<{ id: string }>() + const navigate = useNavigate() const qc = useQueryClient() const user = useAuthStore((s) => s.user) const 
isPrivileged = user?.role === 'admin' || user?.role === 'project_manager' @@ -552,6 +553,30 @@ export default function ProductDetailPage() { )} + {product.cad_file_id && ( + + )} + {product.cad_file_id && isPrivileged && ( + + )} {product.cad_file_id && isPrivileged && (

STL

diff --git a/frontend/src/pages/WorkerManagement.tsx b/frontend/src/pages/WorkerManagement.tsx new file mode 100644 index 0000000..11cbf9a --- /dev/null +++ b/frontend/src/pages/WorkerManagement.tsx @@ -0,0 +1,281 @@ +import { useState } from 'react' +import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query' +import { toast } from 'sonner' +import { RefreshCw, ChevronDown, ChevronRight, Cpu, Layers, Minus, Plus } from 'lucide-react' +import { + getCeleryWorkers, + getQueueStatus, + scaleWorkers, + type CeleryWorker, + type ScaleRequest, +} from '../api/worker' + +// --------------------------------------------------------------------------- +// Worker card +// --------------------------------------------------------------------------- + +function WorkerCard({ worker }: { worker: CeleryWorker }) { + const [expanded, setExpanded] = useState(false) + return ( +
+
+
+ + {worker.name} +
+
+ 0 + ? 'bg-blue-500/20 text-blue-400' + : 'bg-green-500/20 text-green-400' + }`} + > + {worker.active_task_count > 0 ? `${worker.active_task_count} active` : 'idle'} + + {worker.active_tasks.length > 0 && ( + + )} +
+
+ + {/* Queues */} +
+ {worker.queues.map((q) => ( + + {q} + + ))} +
+ + {/* Active tasks */} + {expanded && worker.active_tasks.length > 0 && ( +
+ {worker.active_tasks.map((t) => ( +
+ {t.name} +
+ ))} +
+ )} +
+ ) +} + +// --------------------------------------------------------------------------- +// Scale controls +// --------------------------------------------------------------------------- + +type ScalableService = ScaleRequest['service'] + +const SCALABLE_SERVICES: { service: ScalableService; label: string; description: string }[] = [ + { service: 'render-worker', label: 'Render Worker', description: 'Blender renders — concurrency=1' }, + { service: 'worker', label: 'Step Worker', description: 'STEP processing — concurrency=8' }, + { service: 'worker-thumbnail', label: 'Thumbnail Worker', description: 'Thumbnail rendering' }, +] + +function ScaleControl({ + service, + label, + description, +}: { + service: ScalableService + label: string + description: string +}) { + const [count, setCount] = useState(1) + const scaleMut = useMutation({ + mutationFn: () => scaleWorkers({ service, count }), + onSuccess: (data) => toast.success(`${data.service} → ${data.count} instance(s)`), + onError: (e: unknown) => { + const detail = (e as { response?: { data?: { detail?: string } } })?.response?.data?.detail + toast.error(detail ?? `Failed to scale ${service}`) + }, + }) + + return ( +
+
+

{label}

+

{description}

+
+
+ + {count} + + +
+
+ ) +} + +// --------------------------------------------------------------------------- +// Queue depth bar +// --------------------------------------------------------------------------- + +function QueueDepthRow({ queue, depth }: { queue: string; depth: number }) { + return ( +
+ {queue} +
+
10 ? 'var(--color-red-500)' : 'var(--color-accent)', + }} + /> +
+ 10 ? 'text-red-400' : 'text-content-muted' + }`} + > + {depth} + +
+ ) +} + +// --------------------------------------------------------------------------- +// Main page +// --------------------------------------------------------------------------- + +export default function WorkerManagement() { + const qc = useQueryClient() + + const { data: workerData, isLoading: workersLoading } = useQuery({ + queryKey: ['celery-workers'], + queryFn: getCeleryWorkers, + refetchInterval: 10_000, + }) + + const { data: queueData, isLoading: queuesLoading } = useQuery({ + queryKey: ['queue-status'], + queryFn: getQueueStatus, + refetchInterval: 5_000, + }) + + function refresh() { + qc.invalidateQueries({ queryKey: ['celery-workers'] }) + qc.invalidateQueries({ queryKey: ['queue-status'] }) + } + + const workers = workerData?.workers ?? [] + const queueDepths = queueData?.queue_depths ?? {} + + return ( +
+ {/* Header */} +
+
+

Worker Management

+

+ Monitor active Celery workers and scale services up or down. +

+
+ +
+ + {/* Queue depths */} +
+
+ +

Queue Depths

+
+ {queuesLoading ? ( +
+ {[0, 1, 2].map((i) => ( +
+ ))} +
+ ) : Object.keys(queueDepths).length === 0 ? ( +

No queue data available.

+ ) : ( +
+ {Object.entries(queueDepths).map(([queue, depth]) => ( + + ))} +
+ )} +
+ + {/* Active workers */} +
+
+ +

+ Active Workers + {workers.length > 0 && ( + + ({workers.length}) + + )} +

+
+ {workersLoading ? ( +
+ {[0, 1].map((i) => ( +
+ ))} +
+ ) : workerData?.error ? ( +
+ Failed to fetch workers: {workerData.error} +
+ ) : workers.length === 0 ? ( +
+ No active workers detected. Make sure Celery workers are running. +
+ ) : ( +
+ {workers.map((w) => ( + + ))} +
+ )} +
+ + {/* Scale controls */} +
+

Scale Services

+

+ Adjust the number of container instances for each service via Docker Compose. + Changes take effect immediately but are not persisted across deployments. +

+
+ {SCALABLE_SERVICES.map((s) => ( + + ))} +
+
+
+  )
+}
diff --git a/plan.md b/plan.md
index 413dfbd..4badec7 100644
--- a/plan.md
+++ b/plan.md
@@ -1,420 +1,365 @@
-# Plan: Phase J (WebSocket) + Turntable Bug + Phase K (Asset Library)
+# Plan: Phase N — Workflow pipeline, 3D viewer production mode, worker management, QC tests
 
 ## Context
-Analysis of the current code state showed: **phases F, G, H, I, L are already fully implemented.**
+Four open areas from PLAN.md need to be completed:
 
-| Phase | Status | Evidence |
-|-------|--------|-------|
-| F - Hash caching | DONE | `domains/products/cache_service.py` + migration 041 |
-| G - Billing | DONE | `domains/billing/` complete, WeasyPrint in Dockerfile |
-| H - Excel sanity check | DONE | `domains/imports/service.py run_sanity_check()` + Upload.tsx dialog |
-| I - Notification config | DONE | `notification_configs` migration 044, NotificationSettings.tsx |
-| L - Dashboard | DONE | AdminDashboard.tsx + ClientDashboard.tsx complete |
-| **J - WebSocket** | **MISSING** | No `core/websocket.py`, all polls still active |
 
+1. **Wire up the workflow pipeline**: `workflow_builder.py` contains only broken stubs. `_build_still` passes `order_line_id` as `step_path` to `render_still_task` → would crash. The new `still_with_exports` workflow (still + gltf_export + blend_export) is not implemented. The Celery tasks for export_gltf/export_blend are missing from `domains/rendering/tasks.py`.
 
-Additionally: **critical bug in `render_blender.py`** — the ffmpeg overlay command hangs on a finite frame sequence (no `shortest=1`) -> timeout -> the turntable render fails.
 
+2. **K6: 3D viewer production mode**: `ThreeDViewer.tsx` has no mode toggle, wireframe, env preset, or download buttons. For test data, `POST /api/cad/{id}/generate-gltf-geometry` is needed (trimesh STL→GLB, no Blender required).
 
---
 
+3. **L3: Worker management UI**: `WorkerManagement.tsx` is missing.
The backend needs `/celery-workers` (Celery inspect) and `/scale` (docker compose subprocess). The backend container gets a Docker socket mount.
 
-## Bug Fix: Turntable ffmpeg Timeout
-
-**Root cause**: In `backend/app/services/render_blender.py:507`:
-```python
-"-filter_complex", "[1:v][0:v]overlay=0:0",
-```
-The `lavfi color` source stream has no defined length. Without `shortest=1`, ffmpeg waits for
-more frames from the color stream after the PNG sequence ends -> hangs until the timeout (300s).
-
-**Fix**: `overlay=0:0` -> `overlay=0:0:shortest=1`
-
----
-
-## Phase J: WebSocket Backend + Frontend
-
-### Architecture (ADR-05: native FastAPI + Redis pub/sub)
-
-```
-Backend task/router:
-    -> redis.publish(f"tenant:{tenant_id}", json.dumps(event))
-
-core/websocket.py:
-    ConnectionManager: tenant_id -> set[WebSocket]
-    background_task: asyncio.Task (redis subscribe loop)
-
-Frontend:
-    useWebSocket() hook -> WebSocket('/api/ws')
-    Receives events, invalidates React Query caches
-```
-
-### Events that get sent:
-| Event | Sender | Data |
-|-------|--------|-------|
-| `render_complete` | step_tasks.py | order_line_id, status, thumbnail_url |
-| `render_failed` | step_tasks.py | order_line_id, error |
-| `cad_processing_complete` | step_tasks.py | cad_file_id, status |
-| `order_status_change` | orders router | order_id, new_status |
-| `queue_update` | beat task (every 10s) | depth per queue |
 
+4. **M: QC tests**: `pytest` is not installed in the backend container. Dockerfile: `pip install -e ".[dev]"`. New service tests for the rendering and orders domains. 2 new Vitest files.
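The pitfall behind item 1 can be shown with a minimal, self-contained sketch in plain Python. The names here (`CAD_FILES`, `render_still`, `render_order_line_still`) are hypothetical stand-ins for the project's Celery tasks and DB lookup, not its actual API: a task whose first argument is a file path must never be handed a domain ID, and an order-line-aware wrapper resolves the ID internally before delegating.

```python
# Sketch only: a dict stands in for the OrderLine -> Product -> CadFile lookup.
CAD_FILES = {"ol-123": "/data/cad/bracket.stp"}


def render_still(step_path: str) -> str:
    # Path-based task: hands step_path straight to the renderer, so a UUID
    # passed here would be opened as a file and crash.
    if not step_path.endswith((".stp", ".step")):
        raise ValueError(f"not a STEP file path: {step_path}")
    return f"rendered {step_path}"


def render_order_line_still(order_line_id: str) -> str:
    # ID-aware wrapper: resolves the domain ID to a stored path internally,
    # then delegates to the path-based task.
    step_path = CAD_FILES.get(order_line_id)
    if step_path is None:
        raise LookupError(f"no CAD file for order line {order_line_id}")
    return render_still(step_path)


print(render_order_line_still("ol-123"))  # rendered /data/cad/bracket.stp
```

In the real pipeline the wrapper would be a Celery task doing the DB query before chaining into the render task; the shape of the fix is the same.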
 ---
 
 ## Affected files
 
-### To create:
-- `backend/app/core/websocket.py` -- ConnectionManager + Redis pub/sub loop
-- `frontend/src/hooks/useWebSocket.ts` -- WebSocket hook with auto-reconnect
-- `frontend/src/contexts/WebSocketContext.tsx` -- context provider
-
-### To change:
-- `backend/app/services/render_blender.py` -- ffmpeg shortest=1 bug fix
-- `backend/app/main.py` -- register the WebSocket endpoint (`/api/ws`)
-- `backend/app/tasks/step_tasks.py` -- emit WebSocket events
-- `backend/app/domains/orders/router.py` -- emit order-status events
-- `backend/app/tasks/celery_app.py` -- add `broadcast_queue_status` beat task
-- `frontend/src/App.tsx` -- wrap in WebSocketProvider
-- `frontend/src/pages/WorkerActivity.tsx` -- replace polling with WS
-- `frontend/src/pages/OrderDetail.tsx` -- replace polling with WS
-- `frontend/src/pages/Orders.tsx` -- reduce polling
-- `frontend/src/components/layout/Layout.tsx` -- reduce polling
-- `frontend/src/components/layout/NotificationCenter.tsx` -- replace polling with WS
-
-### After the Phase J commit -- Phase K:
-- `backend/alembic/versions/045_asset_libraries.py` -- asset_libraries table
-- `backend/app/domains/materials/models.py` -- add AssetLibrary model
-- `backend/app/domains/materials/router.py` -- asset library CRUD + upload
-- `render-worker/scripts/asset_library.py` -- load materials + node groups from .blend
-- `render-worker/scripts/catalog_assets.py` -- read catalog from .blend
-- `render-worker/scripts/export_gltf.py` -- GLB export with materials
-- `render-worker/scripts/export_blend.py` -- .blend export with pack_all()
-- `backend/app/domains/rendering/workflow_builder.py` -- asset library nodes
-- `frontend/src/pages/Admin.tsx` -- asset library manager UI
-- `frontend/src/api/assetLibraries.ts` -- API client
+| File | Change |
+|-------|----------|
+| `backend/app/domains/rendering/tasks.py` | 3 new tasks: `render_order_line_still_task`, `export_gltf_for_order_line_task`, `export_blend_for_order_line_task` |
+| `backend/app/domains/rendering/workflow_builder.py` | Replace stubs with order-line-aware tasks, add `still_with_exports` |
+| `backend/app/api/routers/cad.py` | `POST /{id}/generate-gltf-geometry` endpoint |
+| `backend/app/api/routers/worker.py` | `GET /celery-workers`, `POST /scale` endpoints |
+| `backend/Dockerfile` | `pip install -e ".[dev]"` |
+| `docker-compose.yml` | Backend + worker: Docker socket + compose-file mount |
+| `frontend/src/components/cad/ThreeDViewer.tsx` | Mode toggle, wireframe, env preset, download buttons |
+| `frontend/src/pages/WorkerManagement.tsx` | NEW: worker list, queue stats, scale button |
+| `frontend/src/api/worker.ts` | New interfaces + API functions |
+| `frontend/src/App.tsx` | Route for /workers |
+| `frontend/src/components/layout/Layout.tsx` | Sidebar link Workers |
+| `backend/tests/domains/test_rendering_service.py` | NEW: ≥5 tests for rendering tasks and the workflow builder |
+| `backend/tests/domains/test_orders_service.py` | NEW: ≥5 tests for orders endpoints |
+| `frontend/src/__tests__/pages/WorkerActivity.test.tsx` | NEW: Vitest tests |
+| `frontend/src/__tests__/pages/WorkerManagement.test.tsx` | NEW: Vitest tests |
 
 ---
 
 ## Tasks (in order)
 
-### Task 1: Bug fix ffmpeg turntable timeout [x]
-- **File**: `backend/app/services/render_blender.py:507`
-- **What**: `"[1:v][0:v]overlay=0:0"` -> `"[1:v][0:v]overlay=0:0:shortest=1"`
-- **Acceptance criterion**: The turntable render for order f0436188 can be restarted and produces an MP4
-- **Dependencies**: none
+### Task 1: Backend — new order-line-aware rendering tasks
+- **File**: `backend/app/domains/rendering/tasks.py`
+- **What**: Add three new Celery tasks (BELOW the existing tasks):
 
-### Task 2: WebSocket backend -- core/websocket.py [x]
-- **File**: `backend/app/core/websocket.py` (new)
-- **What**:
-  ```python
-  class ConnectionManager:
-      _connections: dict[str, set[WebSocket]]  # tenant_id -> sockets
-      async def connect(ws, tenant_id)
-      def disconnect(ws, tenant_id)
-      async def broadcast_to_tenant(tenant_id, event: dict)
-      async def start_redis_subscriber()  # asyncio background task
+  **`render_order_line_still_task(order_line_id, **params)`** — queue `thumbnail_rendering`:
+  - Loads OrderLine + CadFile via sync SQLAlchemy (like `publish_asset`)
+  - Sets `render_status = 'processing'`
+  - Calls `render_still()` from `app.services.render_blender`
+  - Sets `render_status = 'completed'`, stores `render_log`
+  - On error: `render_status = 'failed'`
+  - Returns a dict with `output_path`
 
-      def publish_event_sync(tenant_id: str, event: dict):
-          # Sync version for Celery tasks -- redis.publish()
-  ```
-  - Redis pub/sub: subscribe to `tenant:*` channels
-  - On message: notify all WebSockets of the tenant
-  - Auto-ping every 30s to prevent disconnects
-- **Acceptance criterion**: broadcast_to_tenant sends to all connected WS of the tenant
-- **Dependencies**: none
+  **`export_gltf_for_order_line_task(order_line_id)`** — queue `thumbnail_rendering`:
+  - Loads OrderLine + CadFile sync
+  - Looks up the STL cache (`{step_stem}_low.stl`)
+  - Calls a Blender subprocess with `export_gltf.py`: `blender --background --python export_gltf.py -- --stl_path X --output_path Y`
+  - Uploads the GLB to MinIO `production-exports/{cad_file_id}/{order_line_id}.glb`
+  - Creates `MediaAsset(asset_type=gltf_production, storage_key=...)`
+  - Returns `storage_key`
 
-### Task 3: WebSocket endpoint in main.py [x]
-- **File**: `backend/app/main.py`
-- **What**:
-  ```python
-  @app.websocket("/api/ws")
-  async def ws_endpoint(websocket: WebSocket, token: str = Query(...)):
-      user = await verify_ws_token(token)
-      await manager.connect(websocket, str(user.tenant_id))
-      try:
-          while True:
-              await websocket.receive_text()  # Keep-alive pings
-      except WebSocketDisconnect:
-          manager.disconnect(websocket, str(user.tenant_id))
-  ```
-  - Token auth via query parameter (WS cannot send an Authorization header)
-  - `verify_ws_token`: JWT decode, load the user (analogous to get_current_user)
-  - `manager` as a global instance, started in the lifespan
-- **Acceptance criterion**: `ws://localhost:8888/api/ws?token=` opens a connection
-- **Dependencies**: Task 2
+  **`export_blend_for_order_line_task(order_line_id)`** — queue `thumbnail_rendering`:
+  - Analogous to export_gltf, but with `export_blend.py`
+  - MediaAsset type: `blend_production`
 
-### Task 4: WebSocket events in step_tasks.py [x]
-- **File**: `backend/app/tasks/step_tasks.py`
-- **What**: In render_order_line_task and render_step_thumbnail after success/failure:
-  ```python
-  from app.core.websocket import publish_event_sync
-  # on render complete:
-  publish_event_sync(tenant_id, {"type": "render_complete", "order_line_id": str(line.id), "status": "completed"})
-  # on render failed:
-  publish_event_sync(tenant_id, {"type": "render_failed", "order_line_id": str(line.id), "error": str(exc)})
-  # on CAD processing complete:
-  publish_event_sync(tenant_id, {"type": "cad_processing_complete", "cad_file_id": str(cad_file.id), "status": "completed"})
-  ```
-  - Load tenant_id from cad_file.tenant_id or via order_line -> order -> user.tenant_id
-- **Acceptance criterion**: Render finished -> WebSocket client receives the event
-- **Dependencies**: Task 2
+- **Acceptance criterion**: Tasks exist in `domains/rendering/tasks.py`, no import errors
+- **Dependencies**: none
 
-### Task 5: WebSocket events in the orders router [x]
-- **File**: `backend/app/domains/orders/router.py`
-- **What**: On order status change (submit, complete, cancel):
-  ```python
-  from app.core.websocket import manager
-  await manager.broadcast_to_tenant(
-      str(current_user.tenant_id),
-      {"type": "order_status_change", "order_id": str(order.id), "status": new_status}
-  )
-  ```
-- **Acceptance criterion**: Order submit -> WebSocket event reaches all browser tabs of the tenant
-- **Dependencies**: Task 2
-
-### Task 6: Queue-update beat task [x]
-- **File**: `backend/app/tasks/celery_app.py`
-- **What**: New beat task every 10s:
-  ```python
-  @shared_task(name="beat.broadcast_queue_status", queue="step_processing")
-  def broadcast_queue_status():
-      from app.core.websocket import publish_event_sync
-      from redis import Redis
-      r = Redis.from_url(settings.redis_url)
-      depths = {
-          "step_processing": r.llen("step_processing"),
-          "thumbnail_rendering": r.llen("thumbnail_rendering"),
-      }
-      # Broadcast to all tenants (broadcast_all)
-      r.publish("__broadcast__", json.dumps({"type": "queue_update", "depths": depths}))
-  ```
-  - `__broadcast__` channel: sent to ALL connected WS (not tenant-specific)
-  - ConnectionManager also subscribes to `__broadcast__`
-- **Acceptance criterion**: WorkerActivity queue depth updates automatically every 10s
-- **Dependencies**: Task 2
-
-### Task 7: Frontend WebSocket hook [x]
-- **File**: `frontend/src/hooks/useWebSocket.ts` (new)
-- **What**:
-  ```typescript
-  export function useWebSocketConnection() {
-    // Connects to ws://localhost:8888/api/ws?token=
-    // Auto-reconnect: 1s, 2s, 4s, 8s, ... max 30s
-    // Emits events via onMessage callback
-    // Pings every 25s (keep-alive)
-    // Disconnects on logout
-  }
-  ```
-- **Acceptance criterion**: Connection stays open, reconnects after a network drop
-- **Dependencies**: none
-
-### Task 8: Frontend WebSocket context [x]
-- **File**: `frontend/src/contexts/WebSocketContext.tsx` (new), change `frontend/src/App.tsx`
-- **What**:
-  ```typescript
-  export function WebSocketProvider({ children }) {
-    const queryClient = useQueryClient()
-    // on 'render_complete': invalidateQueries(['orders', order_line_id])
-    // on 'render_failed': invalidateQueries(['orders', order_line_id])
-    // on 'cad_processing_complete': invalidateQueries(['cad-activity'])
-    // on 'order_status_change': invalidateQueries(['orders'])
-    // on 'queue_update': queryClient.setQueryData(['queue-status'], ...)
-  }
-  // App.tsx: wrap in <WebSocketProvider>
-  ```
-- **Acceptance criterion**: render_complete event -> OrderDetail updates without a poll interval
-- **Dependencies**: Task 7
-
-### Task 9: Replace polling -- WorkerActivity.tsx [x]
-- **File**: `frontend/src/pages/WorkerActivity.tsx`
-- **What**:
-  - Remove `refetchInterval: 5000` -- invalidate on `cad_processing_complete`
-  - Remove `refetchInterval: 3000` for queue status -- setQueryData on `queue_update`
-- **Acceptance criterion**: No automatic HTTP requests in the network tab (only WS frames)
-- **Dependencies**: Task 8
-
-### Task 10: Replace polling -- OrderDetail.tsx [x]
-- **File**: `frontend/src/pages/OrderDetail.tsx`
-- **What**:
-  - Remove `refetchInterval: (query) => {...}`
-  - Instead: on `render_complete` / `render_failed` for the matching order_line_id -> invalidate
-- **Acceptance criterion**: Render status in OrderDetail updates live without polling
-- **Dependencies**: Task 8
-
-### Task 11: Reduce polling -- Layout.tsx + NotificationCenter.tsx [x]
-- **Files**: `frontend/src/components/layout/Layout.tsx`, `NotificationCenter.tsx`
-- **What**:
-  - Layout: `refetchInterval: 8000` -> 60000 (1min)
-  - NotificationCenter: `refetchInterval: 15_000` -> 60000; additionally invalidate on `order_status_change`
-- **Acceptance criterion**: Significantly fewer poll requests in the network tab
-- **Dependencies**: Task 8
-
-### Task 12: PLAN.md + LEARNINGS.md + commit [x]
-- **What**:
-  - PLAN.md: mark Phase J as COMPLETED, set status to "Phase K next"
-  - LEARNINGS.md: ffmpeg `shortest=1` learning + WebSocket auth via query param learning
-  - `git commit -m "feat(J): WebSocket live-events + replace polling + fix ffmpeg turntable timeout"`
-- **Dependencies**: Tasks 1-11
-
----
-
-## Phase K tasks (after the commit)
-
-### Task K1: Migration 045 + AssetLibrary model [x]
-- **File**: `backend/alembic/versions/045_asset_libraries.py` (new, autogenerate), `domains/materials/models.py`
-- **What**:
-  ```python
-  class AssetLibrary(Base):
-      id: UUID PK, tenant_id FK nullable, name VARCHAR(200)
-      blend_file_key TEXT,  # MinIO key
-      catalog JSONB,  # {materials: [...], node_groups: [...]}
-      description TEXT, is_active BOOL, created_at TIMESTAMP
-  ```
-  - `render_templates.asset_library_id` FK optional (nullable)
-  - `output_types.asset_library_id` FK optional (nullable)
-- **Acceptance criterion**: `alembic upgrade head` succeeds, `asset_libraries` table exists in the DB
-
-### Task K2: Asset library CRUD backend [x]
-- **File**: `backend/app/domains/materials/router.py` + `service.py` + `schemas.py`
-- **What**:
-  - `POST /api/asset-libraries` -- .blend upload -> MinIO `asset-libraries/{id}.blend` -> queues a catalog refresh
-  - `GET /api/asset-libraries` -- list
-  - `GET /api/asset-libraries/{id}/catalog` -- materials + node groups
-  - `DELETE /api/asset-libraries/{id}` -- only when not in use (FK check)
-  - `AssetLibraryOut` schema with a `catalog` field
-- **Acceptance criterion**: POST + GET work, .blend stored in MinIO
-
-### Task K3: Catalog-refresh Celery task + Blender script [x]
-- **File**: `backend/app/domains/materials/tasks.py` (new), `render-worker/scripts/catalog_assets.py` (new)
-- **What**:
-  - Celery task `refresh_asset_library_catalog(asset_library_id)` on queue `thumbnail_rendering`
-  - Loads the .blend from MinIO into a tmpdir
-  - Starts `blender --background --python catalog_assets.py -- `
-  - `catalog_assets.py`: opens the .blend, reads all marked assets:
-  ```python
-  import bpy, json, sys
-  blend_path = sys.argv[sys.argv.index('--') + 1]
-  bpy.ops.wm.open_mainfile(filepath=blend_path)
-  catalog = {
-      "materials": [m.name for m in bpy.data.materials if m.asset_data],
-      "node_groups": [ng.name for ng in bpy.data.node_groups if ng.asset_data],
-  }
-  print(json.dumps(catalog))
-  ```
-  - Writes the catalog into `asset_libraries.catalog JSONB`
-- **Acceptance criterion**: After a .blend upload, the `catalog` JSONB contains the asset names
-
-### Task K4: Blender asset-library apply script [x]
-- **File**: `render-worker/scripts/asset_library.py` (new)
-- **What**:
-  ```python
-  def apply_asset_library_materials(blend_path: str, material_map: dict) -> None:
-      """Loads materials from the asset-library .blend, applies them to mesh parts."""
-      with bpy.data.libraries.load(blend_path, link=True, assets_only=True) as (src, dst):
-          dst.materials = [n for n in src.materials if n in material_map.values()]
-      for obj in bpy.data.objects:
-          if obj.type == 'MESH':
-              for slot in obj.material_slots:
-                  resolved = material_map.get(slot.material.name if slot.material else '')
-                  if resolved and resolved in bpy.data.materials:
-                      slot.material = bpy.data.materials[resolved]
-
-  def apply_asset_library_modifiers(blend_path: str, modifier_map: dict) -> None:
-      """Loads geometry-node groups, applies them as modifiers."""
-      with bpy.data.libraries.load(blend_path, link=True, assets_only=True) as (src, dst):
-          dst.node_groups = [n for n in src.node_groups if n in modifier_map.values()]
-      for obj in bpy.data.objects:
-          if obj.type == 'MESH':
-              for part_name, mod_name in modifier_map.items():
-                  if part_name.lower() in obj.name.lower():
-                      mod = obj.modifiers.new(name=mod_name, type='NODES')
-                      mod.node_group = bpy.data.node_groups.get(mod_name)
-  ```
-- **Acceptance criterion**: A render with an asset library shows the correct production materials
-
-### Task K5: export_gltf + export_blend scripts [x]
-- **Files**: `render-worker/scripts/export_gltf.py` (new), `render-worker/scripts/export_blend.py` (new)
-- **What**:
-  - `export_gltf.py`:
-    1. Import the STL (`bpy.ops.import_mesh.stl`)
-    2. Load the asset library via `apply_asset_library_materials` + `apply_asset_library_modifiers`
-    3. `bpy.ops.export_scene.gltf(filepath=out, export_format='GLB', export_apply=True, export_draco_mesh_compression_enable=True)`
-    4. Output to MinIO `production-exports/{cad_file_id}/{run_id}.glb`
-    5. MediaAsset record with `asset_type=gltf_production`
-  - `export_blend.py`:
-    1. Load the STL + asset library (like export_gltf)
-    2. `bpy.ops.file.pack_all()`
-    3. `bpy.ops.wm.save_as_mainfile(filepath=out, compress=True, copy=True)`
-    4. MediaAsset record with `asset_type=blend_production`
-- **Acceptance criterion**: The GLB download opens in the Three.js viewer with materials
-
-### Task K6: Workflow builder -- asset library nodes [x]
+### Task 2: Backend — fix workflow_builder.py + still_with_exports
 - **File**: `backend/app/domains/rendering/workflow_builder.py`
 - **What**:
-  - New Celery tasks: `apply_asset_library_materials_task`, `apply_asset_library_modifiers_task`, `export_gltf_task`, `export_blend_task`
-  - New workflow type `still_production`:
+
+  - `_build_still`: uses `render_order_line_still_task` instead of `render_still_task`
+  - `_build_turntable`: stays on `render_turntable_task` for now (file-path-based, works via the legacy path)
+  - `_build_multi_angle`: uses `render_order_line_still_task` with a `camera_angle` param
+  - **NEW** `_build_still_with_exports(order_line_id, params)`:
   ```python
-  chain(
-      convert_step.si(order_line_id),
+  from celery import chain, group
+  return chain(
+      render_order_line_still_task.si(order_line_id, **params),
       group(
-          chain(apply_asset_library_materials.si(order_line_id), render_still.si(order_line_id)),
-          chain(apply_asset_library_materials.si(order_line_id), export_gltf.si(order_line_id)),
-          chain(apply_asset_library_materials.si(order_line_id), export_blend.si(order_line_id)),
-      ),
-      generate_thumbnail.si(order_line_id),
-      publish_asset.si(order_line_id),
+          export_gltf_for_order_line_task.si(order_line_id),
+          export_blend_for_order_line_task.si(order_line_id),
+      )
   )
   ```
-- **Acceptance criterion**: Dispatching a `still_production` workflow -> PNG + GLB + .blend produced
+  - `dispatch_workflow()`: add `"still_with_exports"` to `builders`
 
-### Task K7: Asset library management UI [x]
-- **Files**: `frontend/src/api/assetLibraries.ts` (new), extend `frontend/src/pages/Admin.tsx`
-- **What**:
-  - API client: `getAssetLibraries`, `uploadAssetLibrary` (multipart), `deleteAssetLibrary`, `getAssetLibraryCatalog`
-  - Admin.tsx: new "Asset Libraries" panel (after Render Templates)
-    - Upload button + drag-drop
-    - Table: name, material count, node-group count, actions
-    - Catalog detail: material badge list (green) + node-group badge list (blue)
-    - OutputTypeTable: asset-library dropdown column
-- **Acceptance criterion**: Admin can upload a .blend, see the catalog, assign an OutputType
+- **Acceptance criterion**: `dispatch_workflow("still_with_exports", order_line_id)` raises no exception
+- **Dependencies**: Task 1
 
-### Task K8: PLAN.md + LEARNINGS.md + commit [x]
+### Task 3: Backend — generate-gltf-geometry endpoint (test data for K6)
+- **File**: `backend/app/api/routers/cad.py`
+- **What**: New endpoint `POST /api/cad/{id}/generate-gltf-geometry` (require_admin_or_pm):
+  - Checks that the CadFile exists and the STL cache is present (`{step_dir}/{stem}_low.stl`)
+  - Queues the new Celery task `generate_gltf_geometry_task.delay(str(cad_file.id))`
+  - Returns `{"task_id": ..., "message": "GLB generation queued"}`
+
+  New task `generate_gltf_geometry_task` in `domains/rendering/tasks.py` (queue `thumbnail_rendering`):
+  - Loads the CadFile sync, finds the STL cache
+  - **Uses trimesh** (no Blender): `import trimesh; mesh = trimesh.load(stl_path); mesh.export(glb_path)`
+    → Why trimesh: fast, no Blender needed, runs on the worker container (trimesh is in the pyproject.toml cad extras)
+  - Uploads the GLB to MinIO `uploads/{cad_file_id}/geometry.glb`
+  - Creates/updates `MediaAsset(asset_type=gltf_geometry, storage_key=..., cad_file_id=...)`
+    → `MediaAsset` needs a `cad_file_id` FK — check whether it exists
+
+  **Important**: Check whether `media_assets.cad_file_id` exists. If not: migration 047 is required.
+
+- **Acceptance criterion**: `POST /api/cad/{id}/generate-gltf-geometry` returns 202; after the task runs, a MediaAsset with type=gltf_geometry exists
+- **Dependencies**: Task 1
+
+### Task 4: Migration 047 — media_assets.cad_file_id (if needed)
+- **File**: `backend/alembic/versions/047_media_assets_cad_file_id.py`
+- **What**: Nullable FK `cad_file_id UUID REFERENCES cad_files(id) ON DELETE SET NULL` on `media_assets`
+- **Check**: `grep -n "cad_file_id" backend/app/domains/media/models.py` — if already present: skip this task
+- **Acceptance criterion**: `alembic upgrade head` succeeds
+- **Dependencies**: none
+
+### Task 5: ThreeDViewer.tsx — production mode, wireframe, env preset, downloads
+- **File**: `frontend/src/components/cad/ThreeDViewer.tsx`
+- **What**: Extend props + toolbar:
+
+  ```typescript
+  interface ThreeDViewerProps {
+    cadFileId: string
+    onClose: () => void
+    productionGltfUrl?: string  // if set: show the mode toggle
+    downloadUrls?: { glb?: string; blend?: string }
+  }
+  ```
+
+  **New state:**
+  - `mode: 'geometry' | 'production'` (default: 'geometry')
+  - `wireframe: boolean` (default: false)
+  - `envPreset: 'city' | 'studio' | 'sunset'` (default: 'city')
+
+  **Toolbar** (new, to the right of the "Capture Angle" button):
+  - Mode toggle (only when `productionGltfUrl` is set): button group "Geometry | Production"
+  - Wireframe toggle: button
+  - Env-preset dropdown: `