feat(M5-M7): embed canonical material names in USD via customData + pxr direct read

- export_step_to_usd.py: accept --material_map CLI arg, write
  schaeffler:canonicalMaterialName as customData on each Mesh prim,
  fix geometry transform (strip shape Location before face exploration,
  apply both face_loc and shape_loc sequentially)
- import_usd.py: after Blender USD import, use pxr to read customData
  directly from the USD file — builds {part_key: material_name} lookup
  (Blender ignores STRING primvars and customData, but pxr reads both)
- _blender_materials.py: add apply_material_library_direct() for exact
  dict-based material assignment without name-matching heuristics
- _blender_scene_setup.py: prefer direct USD lookup, fall back to
  name-matching for legacy USD files without material metadata
- export_glb.py (generate_usd_master_task): resolve material_map via
  material_service.resolve_material_map() and pass to subprocess;
  include material hash in cache key for invalidation
- ROADMAP.md: update P5 status, add M5-M7 milestones
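
The pxr-based lookup described above (reading customData that Blender's importer drops) could be sketched roughly like this. The `schaeffler:canonicalMaterialName` key comes from the export step in this commit; the function name and the choice of keying by prim name are illustrative, not the actual implementation:

```python
MATERIAL_KEY = "schaeffler:canonicalMaterialName"

def build_material_lookup(usd_path):
    """Sketch: read canonical material names from customData on Mesh prims.

    Requires the OpenUSD `pxr` Python bindings; Blender's USD importer
    discards customData, so the file is re-opened directly.
    """
    from pxr import Usd, UsdGeom  # OpenUSD bindings, not stdlib

    stage = Usd.Stage.Open(usd_path)
    lookup = {}
    for prim in stage.Traverse():
        if prim.IsA(UsdGeom.Mesh):
            name = prim.GetCustomDataByKey(MATERIAL_KEY)
            if name:
                # Hypothetical keying: prim name as the part key.
                lookup[prim.GetName()] = str(name)
    return lookup
```

The import is deferred into the function body so the module can be loaded in environments without OpenUSD installed.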

Tested: 3/3 parts matched (ans_lfs120), 172/175 parts matched
(F-802007.TR4-D1-H122AG). Previous: 0/25 matched.
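
The "material hash in cache key" idea from the export_glb.py bullet could look roughly like this (function name and key layout are illustrative, not the project's actual cache-key scheme):

```python
import hashlib
import json

def usd_master_cache_key(cad_file_id: str, material_map: dict[str, str]) -> str:
    # Serialize with sorted keys so the hash is stable regardless of dict
    # insertion order, and changes whenever any part->material assignment changes.
    payload = json.dumps(material_map, sort_keys=True).encode("utf-8")
    material_hash = hashlib.sha256(payload).hexdigest()[:12]
    return f"usd_master:{cad_file_id}:{material_hash}"
```

Any edit to the material map then produces a different key, invalidating the cached USD master without explicit bookkeeping.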

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
commit cc3071297b (parent 1321ef2bd4)
2026-03-12 23:04:26 +01:00
15 changed files with 488 additions and 246 deletions
@@ -12,7 +12,7 @@ process_step_file [queue: step_processing, worker container]
 → parses STEP objects, stores parsed_objects
 → queues render_step_thumbnail
-render_step_thumbnail [queue: thumbnail_rendering, render-worker container]
+render_step_thumbnail [queue: asset_pipeline, render-worker container]
 → backend/app/domains/pipeline/tasks/render_thumbnail.py
 → subprocess: export_step_to_gltf.py (OCC/GMSH tessellation → geometry GLB)
 → subprocess: export_gltf.py (Blender: materials, seams, sharp edges → production GLB)
@@ -123,7 +123,7 @@ from app.tasks.celery_app import celery_app
 celery_app.send_task(
     'app.domains.pipeline.tasks.render_thumbnail.render_step_thumbnail',
     args=['[cad_file_id]'],
-    queue='thumbnail_rendering'
+    queue='asset_pipeline'
 )"
 ```
@@ -62,7 +62,7 @@ docker compose exec frontend npx tsc --noEmit
 ### Celery / Tasks
 - `step_processing` queue: fast tasks only (< 5s) — metadata extraction, dispatch
-- `thumbnail_rendering` queue: ALL Blender/render-worker calls — **never queue Blender on step_processing**
+- `asset_pipeline` queue: ALL Blender/render-worker calls — **never queue Blender on step_processing**
 - Task location: `backend/app/domains/pipeline/tasks/` — not `backend/app/tasks/`
 - `step_tasks.py` is a 23-line shim — do not add logic there
 - Write `self.request.id` to `render_job_doc.celery_task_id` at task start (for cancellation)
@@ -91,7 +91,7 @@ No HTTP blender-renderer service — everything goes through Celery:
 step_processing queue:
   backend/app/domains/pipeline/tasks/extract_metadata.py (OCC parsing)
-thumbnail_rendering queue (render-worker container):
+asset_pipeline queue (render-worker container):
   backend/app/domains/pipeline/tasks/render_thumbnail.py
   → subprocess: render-worker/scripts/export_step_to_gltf.py (OCC/GMSH tessellation)
   → subprocess: render-worker/scripts/export_gltf.py (Blender: materials, seams, sharp)
@@ -72,7 +72,7 @@ What is unclear? What could go wrong?
 | Queue | Worker | Concurrency | Use for |
 |---|---|---|---|
 | `step_processing` | `worker` | 8 | metadata extraction, dispatch, fast tasks (< 5s) |
-| `thumbnail_rendering` | `render-worker` | 1 | ALL Blender calls — never queue Blender on step_processing |
+| `asset_pipeline` | `render-worker` | 1 | ALL Blender calls — never queue Blender on step_processing |
 ### New DB Fields
 - Migration required → list as a separate task with migration filename
@@ -86,7 +86,7 @@ Every new backend response schema needs a TypeScript interface in `frontend/src/
 process_step_file (step_processing)
   → domains/pipeline/tasks/extract_metadata.py
   → queues render_step_thumbnail
-render_step_thumbnail (thumbnail_rendering)
+render_step_thumbnail (asset_pipeline)
   → domains/pipeline/tasks/render_thumbnail.py
   → render-worker: export_step_to_gltf.py (OCC/GMSH tessellation)
   → render-worker: export_gltf.py (Blender: materials, seams, sharp edges)
@@ -5,7 +5,7 @@ You are a specialist for the render script chain in the Schaeffler Automat proje
 ## Pipeline Overview
 ```
-Celery task: render_step_thumbnail [queue: thumbnail_rendering]
+Celery task: render_step_thumbnail [queue: asset_pipeline]
 ├─ subprocess: export_step_to_gltf.py (OCC/GMSH → geometry GLB)
 │  └─ _extract_sharp_edge_pairs() (GCPnts curve sampling)
@@ -23,7 +23,7 @@ Celery task: render_step_thumbnail [queue: thumbnail_rendering]
 └─ _activate_gpu() × 3 (before file, after file, after engine)
 └─ Cycles render → PNG thumbnail
-Celery task: render_order_line_task [queue: thumbnail_rendering]
+Celery task: render_order_line_task [queue: asset_pipeline]
 ├─ subprocess: still_render.py (order-line PNG)
 └─ subprocess: turntable_render.py (order-line MP4)
 ```
@@ -25,7 +25,7 @@ You are the reviewer for the Schaeffler Automat project. You check implemented c
 - [ ] `storage_key` values are relative (never start with `/`)
 ### Celery / Tasks
-- [ ] Task is on the correct queue? (`thumbnail_rendering` for ALL Blender/render-worker calls)
+- [ ] Task is on the correct queue? (`asset_pipeline` for ALL Blender/render-worker calls)
 - [ ] No Blender/subprocess call on `step_processing` queue
 - [ ] `self.request.id` written to `render_job_doc.celery_task_id` at task start
 - [ ] `PipelineLogger` used for step start/done/error events
@@ -114,7 +114,7 @@ grep -n "\.delay\|\.apply_async" backend/app/domains/pipeline/tasks/*.py | grep
 ```python
 # In the Celery task function:
-@celery_app.task(bind=True, queue='thumbnail_rendering')
+@celery_app.task(bind=True, queue='asset_pipeline')
 def render_step_thumbnail(self, cad_file_id: str, tenant_id: str | None = None):
     from app.database import SyncSessionLocal
     from app.utils.tenant import set_tenant_context
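
The diffs above pin the queue per task via the decorator. The same rename could equivalently be expressed as an explicit routing table; this is only an illustrative form, and the `render_order_line` module path is a guess, not taken from the repository:

```python
# Hypothetical Celery task_routes mapping for the renamed queue.
task_routes = {
    "app.domains.pipeline.tasks.render_thumbnail.render_step_thumbnail": {
        "queue": "asset_pipeline"
    },
    "app.domains.pipeline.tasks.render_order_line.render_order_line_task": {
        "queue": "asset_pipeline"
    },
}
```

Either way, the invariant stays the same: every Blender/render-worker task routes to `asset_pipeline`, never to `step_processing`.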