
Media Pipeline Prototype Implementation Plan

1. Overview

This plan outlines the implementation of a prototype media processing pipeline, following the "non-blocking" principle described in ARCHITECTURE.md. The goal is to move from file upload to background transcoding using MinIO, BullMQ, and FFmpeg.

2. Data Flow

User Upload → Next.js Server Action (multipart form data) → MinIO (store original file; create DB record with status: PENDING) → BullMQ Producer (add job to queue with metadata) → Redis (job persistence) → FFmpeg Worker (consumes job, downloads from MinIO, transcodes) → MinIO (upload processed variants) → Database Update (status: COMPLETED; store variant URLs).

3. Proposed File Structure

src/
├── app/
│   └── api/
│       └── media/
│           └── upload/          # Route handler for large file uploads (optional fallback)
├── actions/
│   └── media.ts                 # Server Actions: uploadMedia, getProcessingStatus
├── lib/
│   ├── minio.ts                 # MinIO client singleton & utility functions
│   ├── queue/
│   │   ├── connection.ts        # Redis connection config
│   │   ├── producer.ts          # BullMQ job addition logic
│   │   └── worker-types.ts      # Shared interfaces for jobs
│   └── ffmpeg/                  # (Shared types/utils for FFmpeg)
├── workers/
│   └── media-processor.ts       # Independent Node.js process consuming BullMQ jobs
└── db/
    └── schema/
        └── media.ts             # New schema: Media files, variants, processing status

4. Key Components & Implementation Details

A. Storage & Database (MinIO + Drizzle)

  • Schema: media table tracking id, original_key, mime_type, status (pending, processing, completed, failed), and a JSONB column for variants (resolutions, URLs).
  • Direct Uploads: the Server Action receives the file, streams it to MinIO via the minio SDK, then enqueues the transcoding job.
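As a sketch of what the schema tracks, the record and variant shapes could look like the following plain TypeScript types, with a small helper deriving a stable MinIO object key. All names here (MediaRecord, originalObjectKey, the originals/ prefix) are illustrative assumptions, not decisions made by this plan:

```typescript
// Illustrative shapes for the media table and its JSONB variants column.
type MediaStatus = 'pending' | 'processing' | 'completed' | 'failed';

interface MediaVariant {
  resolution: string; // e.g. "720p"
  key: string;        // MinIO object key of the transcoded file
  url: string;        // public or presigned URL recorded after upload
}

interface MediaRecord {
  id: string;
  originalKey: string;
  mimeType: string;
  status: MediaStatus;
  variants: MediaVariant[]; // serialized into the JSONB column
}

// Derive the original_key from the record id and the uploaded filename,
// stripping any path components the client may have sent.
function originalObjectKey(mediaId: string, filename: string): string {
  const base = filename.split(/[\\/]/).pop() ?? filename;
  return `originals/${mediaId}/${base}`;
}

console.log(originalObjectKey('abc123', 'holiday.mp4')); // → originals/abc123/holiday.mp4
```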

B. Asynchronous Queue (BullMQ + Redis)

  • Job Payload: { mediaId: string, originalKey: string, filename: string }.
  • Progress Tracking: The worker will use BullMQ's job.updateProgress(n) to report percentage. This can be polled via a Server Action or pushed via WebSockets/SSE in the future.
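The percentage handed to job.updateProgress(n) has to be computed from FFmpeg's own progress output. A minimal sketch of that mapping, assuming the worker can parse seconds processed and knows the total duration (function and parameter names are hypothetical):

```typescript
// The payload shape from the plan.
interface TranscodeJob {
  mediaId: string;
  originalKey: string;
  filename: string;
}

// Map seconds of media processed to an integer percentage suitable for
// BullMQ's job.updateProgress(n). Clamped to [0, 100]; a zero or unknown
// duration reports 0 rather than dividing by zero.
function transcodeProgress(processedSec: number, totalSec: number): number {
  if (totalSec <= 0) return 0;
  return Math.min(100, Math.max(0, Math.round((processedSec / totalSec) * 100)));
}

console.log(transcodeProgress(30, 120)); // → 25
```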

C. FFmpeg Worker (The "Heavy Lifter")

  • A standalone script (src/workers/media-processor.ts) that runs as a separate process.
  • Steps:
    1. Pull job from queue.
    2. Download original file from MinIO to /tmp.
    3. Run the FFmpeg command (e.g., ffmpeg -i input -vf scale=-2:720 ...; -2 keeps the scaled width even, which libx264 requires).
    4. Upload resulting .mp4 or HLS segments back to MinIO.
    5. Update PostgreSQL via Drizzle with the new variant metadata and set status to completed.
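The five steps above can be sketched as follows. To keep the example self-contained, the MinIO calls are behind a hypothetical injected Storage interface (the real worker would call the minio SDK), FFmpeg runs via Node's child_process, and the DB update is left to the caller; the flags and key layout are plausible defaults, not requirements from this plan:

```typescript
import { spawn } from 'node:child_process';
import { once } from 'node:events';

// Build the argument list for a 720p H.264/AAC variant. scale=-2:720 keeps
// the width even, which libx264 requires; -y overwrites stale temp files.
function ffmpegArgs(input: string, output: string): string[] {
  return ['-y', '-i', input, '-vf', 'scale=-2:720', '-c:v', 'libx264', '-c:a', 'aac', output];
}

// Hypothetical storage abstraction standing in for the MinIO client.
interface Storage {
  download(key: string, localPath: string): Promise<void>;
  upload(localPath: string, key: string): Promise<void>;
}

async function processJob(
  job: { mediaId: string; originalKey: string },
  storage: Storage,
): Promise<string> {
  const input = `/tmp/${job.mediaId}-in`;
  const output = `/tmp/${job.mediaId}-720p.mp4`;

  await storage.download(job.originalKey, input);          // step 2
  const ff = spawn('ffmpeg', ffmpegArgs(input, output));   // step 3
  const [code] = await once(ff, 'close');
  if (code !== 0) throw new Error(`ffmpeg exited with code ${code}`);

  const variantKey = `variants/${job.mediaId}/720p.mp4`;
  await storage.upload(output, variantKey);                // step 4
  return variantKey; // step 5: caller records this and sets status: completed
}
```

Injecting the storage operations also makes the unit tests in section 5 straightforward, since the worker logic can run against in-memory stubs.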

5. Testing Strategy

  1. Unit Tests: Verify MinIO upload utility and BullMQ job creation logic using mocks.
  2. Integration Test (Local):
    • Spin up local Docker containers for MinIO, Redis, and Postgres.
    • Execute the Server Action with a sample .mp4.
    • Monitor Redis to ensure the job is queued.
    • Run the worker and verify:
      • FFmpeg successfully generates files in /tmp.
      • Files appear in MinIO.
      • Database record status changes from pending → processing → completed.
  3. Failure Resilience: Simulate a worker crash during transcoding to ensure BullMQ retries the job or marks it as failed.
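The three backing services for the local integration test can be brought up with a single compose file. A minimal sketch; the image tags, ports, and credentials are illustrative defaults, not project settings:

```yaml
# Hypothetical docker-compose.yml for local integration testing only.
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports: ["9000:9000", "9001:9001"]
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
  redis:
    image: redis:7
    ports: ["6379:6379"]
  postgres:
    image: postgres:16
    ports: ["5432:5432"]
    environment:
      POSTGRES_PASSWORD: postgres
```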

6. Success Criteria

  • User receives immediate "Upload Successful" after file reaches MinIO.
  • Database shows the file record with status: pending.
  • Background worker picks up the task and updates status to processing.
  • Final output variants are accessible via MinIO URLs and recorded in the DB.