Backend:
- Migration 002: product_recognition_jobs table with JSONB images column
and job_type CHECK ('receipt' | 'products')
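
  For orientation, a sketch of the table this implies, written as an
  embedded Go constant (only the images JSONB column and the job_type
  CHECK are stated above; every other column is an assumption):

    // Hypothetical sketch of migration 002. Only the images JSONB column
    // and the job_type CHECK come from this change; the rest is assumed.
    const migration002 = `
    CREATE TABLE product_recognition_jobs (
        id         UUID PRIMARY KEY DEFAULT gen_random_uuid(), -- assumed
        job_type   TEXT NOT NULL CHECK (job_type IN ('receipt', 'products')),
        images     JSONB NOT NULL,
        status     TEXT NOT NULL DEFAULT 'pending',            -- assumed
        created_at TIMESTAMPTZ NOT NULL DEFAULT now()          -- assumed
    );`
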
- New Kafka topics: ai.products.paid / ai.products.free
- ProductJob model, ProductJobRepository (mirroring the dish job pattern)
- itemEnricher extracted from Handler; shared by the HTTP handler and the worker
- ProductSSEBroker: PG LISTEN on product_job_update channel
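
  A minimal sketch of that loop, assuming pgx v5 (the listen method and
  the broadcast fan-out are hypothetical names):

    // Sketch only: one dedicated connection LISTENs and fans notifications
    // out to subscribed SSE clients. Assumes pgx v5 (jackc/pgx); the
    // broadcast method is a hypothetical name.
    func (b *ProductSSEBroker) listen(ctx context.Context, conn *pgx.Conn) error {
        if _, err := conn.Exec(ctx, "LISTEN product_job_update"); err != nil {
            return err
        }
        for {
            n, err := conn.WaitForNotification(ctx) // blocks until NOTIFY arrives
            if err != nil {
                return err // ctx cancelled or connection dropped
            }
            b.broadcast(n.Payload) // e.g. job ID + status, pushed to SSE clients
        }
    }
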
- ProductWorkerPool: 5 workers; each branches on job_type, calling
  RecognizeReceipt or RecognizeProducts per image in parallel
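
  Per-job processing then looks roughly like this (a sketch; field names,
  the errgroup fan-out, and RecognizeReceipt's signature are assumptions):

    // Sketch of one worker's job handling. The job_type branch and the
    // per-image parallel RecognizeProducts calls match the description
    // above; names and signatures are assumed.
    func (p *ProductWorkerPool) process(ctx context.Context, job ProductJob) error {
        switch job.Type {
        case "receipt":
            _, err := p.ai.RecognizeReceipt(ctx, job.Images)
            return err
        case "products":
            g, gctx := errgroup.WithContext(ctx) // golang.org/x/sync/errgroup
            for _, img := range job.Images {
                img := img // per-iteration copy for Go < 1.22
                g.Go(func() error {
                    _, err := p.ai.RecognizeProducts(gctx, img)
                    return err
                })
            }
            return g.Wait() // first error cancels the remaining calls
        default:
            return fmt.Errorf("unknown job_type %q", job.Type)
        }
    }
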
- Handler: RecognizeReceipt and RecognizeProducts now return 202 Accepted
  instead of blocking; 4 new endpoints: GET /ai/product-jobs,
  /ai/product-jobs/history, /ai/product-jobs/{id}, /ai/product-jobs/{id}/stream
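
  The submit path reduces to roughly this (a sketch; the jobs repository,
  parseImages, and the response shape are hypothetical):

    // Sketch: persist the job, hand it to Kafka (ai.products.paid or
    // ai.products.free depending on plan), and return 202 immediately
    // instead of blocking on the AI call. Helper names are assumed.
    func (h *Handler) RecognizeProducts(w http.ResponseWriter, r *http.Request) {
        job, err := h.jobs.Create(r.Context(), parseImages(r))
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.Header().Set("Location", "/ai/product-jobs/"+job.ID)
        w.WriteHeader(http.StatusAccepted) // poll the job or follow .../stream
        _ = json.NewEncoder(w).Encode(job)
    }
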
- cmd/worker: extended to run ProductWorkerPool alongside dish WorkerPool
- cmd/server: wires productJobRepository + productSSEBroker; both SSE
brokers started in App.Start()
Flutter client:
- ProductJobCreated, ProductJobResult, ProductJobSummary, ProductJobEvent
models + submitReceiptRecognition/submitProductsRecognition/stream methods
- Shared _openSseStream helper eliminates duplicate SSE parsing loop
- ScanScreen: replaces blocking AI calls with an async submit and navigation
  to ProductJobWatchScreen
- ProductJobWatchScreen: watches the SSE stream, navigates to /scan/confirm
  on completion, and shows an error on failure
- ProductsScreen: prepends _RecentScansSection (hidden when empty); a compact
  horizontal list of recent scans with "See all" → history
- ProductJobHistoryScreen: full history of product recognition jobs
- New routes: /scan/product-job-watch, /products/job-history
- L10n: 7 new keys in all 12 ARB files
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Replace dual-consumer priority WorkerPool with a single consumer per
worker process. WORKER_PLAN=paid|free selects the Kafka topic and
consumer group ID (dish-recognition-paid / dish-recognition-free).
docker-compose now runs worker-paid and worker-free as separate services
for independent scaling. Makefile dev target launches both workers locally.
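
  Topic and group selection reduces to an env lookup at startup, roughly
  (the consumer group IDs are the ones above; the dish topic names are an
  assumption):

    // Sketch: WORKER_PLAN picks the Kafka topic and consumer group once
    // at startup. Group IDs are from this change; topic names are assumed.
    func planConfig() (topic, groupID string) {
        plan := os.Getenv("WORKER_PLAN") // "paid" or "free"
        if plan != "paid" && plan != "free" {
            log.Fatalf("WORKER_PLAN must be paid or free, got %q", plan)
        }
        topic = "ai.dishes." + plan          // assumed topic naming
        groupID = "dish-recognition-" + plan // dish-recognition-paid|free
        return topic, groupID
    }
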
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Kafka consumers and WorkerPool are moved out of the server process into
a dedicated worker binary. The server now handles HTTP + SSE only; the
worker handles Kafka consumption and AI processing.
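
In outline, the new binary boots roughly like this (constructor names are
hypothetical; the three env vars are the ones listed below):

    // Sketch of cmd/worker/main.go: no HTTP server, just DB + Kafka + AI.
    // NewWorkerPool and its Run method are hypothetical names.
    func main() {
        ctx := context.Background()
        db, err := pgxpool.New(ctx, os.Getenv("DATABASE_URL")) // pgx v5 pool
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        pool := NewWorkerPool(
            db,
            os.Getenv("OPENAI_API_KEY"),
            strings.Split(os.Getenv("KAFKA_BROKERS"), ","),
        )
        if err := pool.Run(ctx); err != nil { // blocks: consume, process, repeat
            log.Fatal(err)
        }
    }
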
- cmd/worker/main.go + init.go: new binary with minimal config
(DATABASE_URL, OPENAI_API_KEY, KAFKA_BROKERS)
- cmd/server: remove WorkerPool, paidConsumer, freeConsumer
- Dockerfile: builds both server and worker binaries
- docker-compose.yml: add worker service
- Makefile: add run-worker and docker-logs-worker targets
- README.md: document worker startup and env vars
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>