Personalized Feed Ranking
sequenceDiagram
participant User
participant API
participant Orchestrator
participant ML as ML Service
participant Qdrant
participant Thompson as Thompson Sampler
User->>API: Request feed
API->>Orchestrator: Get ranked feed for user
Orchestrator->>ML: Get user embedding
ML->>ML: Load interaction history
ML->>ML: Run foundation model
ML->>Orchestrator: Return user_embedding
Orchestrator->>Qdrant: Search similar content
Qdrant->>Orchestrator: Return candidates (100)
Orchestrator->>Thompson: Rank candidates
Thompson->>Thompson: Sample from Beta distributions
Thompson->>Orchestrator: Return ranked IDs
Orchestrator->>API: Return ranked feed
API->>User: Display personalized content
User->>API: Interaction (view/save/share)
API->>Orchestrator: Record interaction
Orchestrator->>Thompson: Update reward
Thompson->>Thompson: Update alpha/beta
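The last two steps (sample from Beta distributions to rank, then fold the observed reward back into alpha/beta) can be sketched as a minimal Thompson sampler. This is an illustrative sketch, not the actual service code: the class name, the `[alpha, beta]` storage, and the Beta(1, 1) prior are all assumptions.

```python
import random

class ThompsonRanker:
    """Per-item Beta(alpha, beta) posteriors; rank candidates by a sampled reward."""

    def __init__(self):
        # Every unseen item starts at Beta(1, 1), i.e. a uniform prior.
        self.params = {}  # item_id -> [alpha, beta]

    def rank(self, candidate_ids):
        # Draw one sample per candidate from its posterior, sort descending.
        draws = {
            cid: random.betavariate(*self.params.setdefault(cid, [1.0, 1.0]))
            for cid in candidate_ids
        }
        return sorted(candidate_ids, key=draws.__getitem__, reverse=True)

    def update(self, item_id, reward):
        # reward in [0, 1]: success mass goes to alpha, failure mass to beta.
        a_b = self.params.setdefault(item_id, [1.0, 1.0])
        a_b[0] += reward
        a_b[1] += 1.0 - reward


ranker = ThompsonRanker()
feed = ranker.rank(["post_a", "post_b", "post_c"])
ranker.update(feed[0], reward=1.0)  # e.g. the user saved the top item
```

Because ranking samples from the posterior rather than using its mean, low-evidence items still occasionally surface near the top, which is what gives the bandit its exploration behavior.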
Content Discovery → Database
sequenceDiagram
participant Instagram
participant Discovery as Discovery Agent
participant Quality as Quality Guardian
participant Qdrant
participant Supabase
Discovery->>Instagram: Fetch curator feed
Instagram->>Discovery: Return posts
loop For each post
    Discovery->>Quality: Evaluate content quality
    Quality->>Quality: Multi-modal analysis
    Quality->>Discovery: Quality score (0-1)
    alt Quality > 0.7
        Discovery->>Qdrant: Store content embedding
        Discovery->>Supabase: Store metadata
    else Quality <= 0.7
        Discovery->>Discovery: Skip low-quality content
    end
end
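The per-post quality gate above reduces to a simple threshold filter. A minimal sketch, assuming a `score_fn` callback standing in for the Quality Guardian's multi-modal analysis (the function and threshold constant names are illustrative; only the 0.7 cutoff comes from the diagram):

```python
QUALITY_THRESHOLD = 0.7  # cutoff from the diagram's alt branch

def triage(posts, score_fn):
    """Split fetched posts into (accepted, skipped) by quality score.

    score_fn stands in for the Quality Guardian and must
    return a float in [0, 1] for each post.
    """
    accepted, skipped = [], []
    for post in posts:
        if score_fn(post) > QUALITY_THRESHOLD:
            accepted.append(post)   # would continue to Qdrant + Supabase
        else:
            skipped.append(post)    # low-quality content is dropped
    return accepted, skipped


# Usage with a stubbed scorer that reads a precomputed score:
posts = [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.4}]
keep, drop = triage(posts, lambda p: p["score"])  # post 1 kept, post 2 skipped
```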
Model Training Flow
graph LR
A[Interaction Logs] --> B[Tokenizer]
B --> C[Sequence Dataset]
C --> D[Foundation Model]
D --> E[Training Loop]
E --> F[Validation]
F --> G{Converged?}
G -->|No| E
G -->|Yes| H[Save Model]
H --> I[Deploy to Fly.io]
I --> J[Inference Service]
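The Converged? decision node in the training graph is typically implemented as an early-stopping loop. A generic sketch under stated assumptions: `model_step` and `validate` are hypothetical callbacks for one training epoch and one validation pass, and the patience/delta values are placeholders, not the project's real hyperparameters.

```python
def train(model_step, validate, max_epochs=100, patience=3, min_delta=1e-4):
    """Early-stopping loop mirroring the graph's Converged? check.

    model_step() runs one training epoch; validate() returns a
    validation loss. Stops once the loss fails to improve by at
    least min_delta for `patience` consecutive epochs.
    """
    best, stale = float("inf"), 0
    for epoch in range(max_epochs):
        model_step()
        loss = validate()
        if best - loss > min_delta:
            best, stale = loss, 0    # "No" branch: keep training
        else:
            stale += 1
        if stale >= patience:        # "Yes" branch: converged, save model
            break
    return best


# Usage with a simulated loss curve that plateaus at 0.5:
losses = iter([1.0, 0.5, 0.5, 0.5, 0.5, 0.2])
best = train(lambda: None, lambda: next(losses))
```

Once the loop exits via the "Yes" branch, the checkpoint would be saved and shipped to the Fly.io inference service as in the graph.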