Vows Social AI - The Sentient Curator

An AI-First wedding content platform that learns your taste and shows you exactly what you'll love.


What is Vows Social?

Imagine Pinterest or Instagram recommendations, but for your wedding. Every couple has unique taste—some love rustic barns, others prefer elegant ballrooms. Vows Social uses cutting-edge AI to understand YOUR specific vision and deliver a perfectly personalized feed of ideas, inspiration, and vendors.

No more endless scrolling. Just content you'll actually save.


How It Works (In Plain English)

1️⃣ You Interact

Browse wedding content and save what you love. Every save, view, and share teaches our AI about your taste.

2️⃣ AI Learns Your Style

Our Two-Tower Model (the same architecture Pinterest uses) learns the patterns in what you love:

  • Rustic vs modern
  • Outdoor vs indoor
  • Color palettes
  • Vendor styles

3️⃣ Agents Find Perfect Matches

Five specialized AI agents work together to curate your feed:

  • 🔍 Discovery Agent - Finds exceptional vendors before they're popular
  • ✨ Quality Guardian - Ensures only stunning, professional content
  • 📖 Personal Archivist - Remembers your journey and preferences
  • 🎲 Serendipity Engine - Introduces variety so you don't miss gems
  • ⏰ Engagement Forecaster - Predicts the perfect time to notify you
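
Conceptually, the agents above act like stages in a curation pipeline. The sketch below is purely illustrative — the agent functions, `Item` fields, thresholds, and sample data are hypothetical stand-ins, not the real LangGraph implementation:

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    quality: float      # 0..1, e.g. from a Quality Guardian scorer
    taste_match: float  # 0..1, similarity to the couple's saved content
    is_fresh: bool      # True if the Discovery Agent surfaced it recently

def quality_guardian(items):
    # Drop anything below a professional-quality threshold.
    return [i for i in items if i.quality >= 0.7]

def personal_archivist(items):
    # Rank by how closely each item matches the couple's history.
    return sorted(items, key=lambda i: i.taste_match, reverse=True)

def serendipity_engine(items, slots=4):
    # Reserve the last feed slot for something fresh and unexpected.
    fresh = [i for i in items if i.is_fresh]
    feed = [i for i in items if not i.is_fresh][: slots - 1]
    return feed + fresh[:1]

candidates = [
    Item("Rustic barn venue", 0.9, 0.8, False),
    Item("Blurry phone snap", 0.3, 0.9, False),
    Item("Modern rooftop florist", 0.8, 0.5, True),
    Item("Elegant ballroom", 0.85, 0.7, False),
]
feed = serendipity_engine(personal_archivist(quality_guardian(candidates)))
print([i.title for i in feed])
```

The low-quality item is filtered out, the rest are ranked by taste match, and one fresh discovery is mixed in at the end — the same shape of hand-off the real agents perform.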

4️⃣ Thompson Sampling Ranks Everything

Our ranking algorithm balances two goals:

  • Show you proven winners (content similar to what you've saved)
  • Explore new possibilities (discover content you might love but haven't tried)

This is the same family of bandit algorithms used in recommendation systems at companies like Instagram, Pinterest, and TikTok.

5️⃣ Feed Gets Smarter Every Day

The more you use Vows, the better it gets:

  • Real-time learning - Every interaction updates rankings instantly
  • Nightly training - AI model retrains on your behavior patterns
  • Agent optimization - Multi-agent system learns to work together better


The Technology Behind It

All AI/ML runs on Modal - a modern Python ML platform with:

  • GPU access for multimodal understanding (images + text)
  • Serverless scaling - $0 when idle, scales to thousands of users
  • Production-ready - Used by AI companies worldwide

🧠 Two-Tower Model

Like Pinterest and YouTube, we use a dual-encoder architecture:

  • User Tower - Learns your taste from interaction history
  • Content Tower - Understands wedding content (images, vendors, styles)
  • Match Score - Dot product between the two embeddings finds best matches
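
To make the matching step concrete, here is a toy sketch of the dot-product scoring — using random NumPy vectors as stand-ins for the real trained PyTorch towers, so the numbers themselves mean nothing:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def normalize(v):
    # Unit-normalize so the dot product becomes cosine similarity.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-ins for tower outputs: in production these would come from the
# trained User Tower and Content Tower networks.
user_embedding = normalize(rng.standard_normal(64))             # one couple
content_embeddings = normalize(rng.standard_normal((500, 64)))  # 500 items

# Match score = dot product between user and content embeddings.
scores = content_embeddings @ user_embedding

# Recommend the five highest-scoring items.
top5 = np.argsort(scores)[::-1][:5]
print(top5, scores[top5])
```

Because both towers map into the same embedding space, ranking a feed reduces to one matrix-vector product — which is what makes this architecture fast enough to serve in real time.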

🤖 Multi-Agent Crew

Five specialized agents coordinated by Multi-Agent PPO (Ray RLlib):

  • Industry-proven reinforcement learning
  • Agents learn to collaborate
  • Full observability with LangSmith

🎯 Thompson Sampling ✓

Beta-Bernoulli bandit for exploration/exploitation:

  • NOT removed - This is our core ranking algorithm
  • Balances showing proven content vs exploring new options
  • Used by Instagram, Pinterest, and TikTok for recommendations
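
A minimal, self-contained sketch of Beta-Bernoulli Thompson Sampling in plain Python — the item names and "true" save rates are invented purely to simulate user behavior:

```python
import random

random.seed(42)

# Each item keeps a Beta posterior over its save rate:
# alpha = saves + 1, beta = skips + 1 (uniform prior).
posteriors = {"rustic_barn": [1, 1], "ballroom": [1, 1], "beach": [1, 1]}

# Hidden "true" save rates, used only to simulate user reactions here.
true_rates = {"rustic_barn": 0.6, "ballroom": 0.3, "beach": 0.1}

def pick_item():
    # Sample a plausible save rate from each posterior; show the best draw.
    # This naturally balances exploiting winners and exploring the rest.
    draws = {item: random.betavariate(a, b) for item, (a, b) in posteriors.items()}
    return max(draws, key=draws.get)

for _ in range(2000):
    item = pick_item()
    saved = random.random() < true_rates[item]  # simulated user reaction
    if saved:
        posteriors[item][0] += 1  # one more save  -> alpha
    else:
        posteriors[item][1] += 1  # one more skip  -> beta

# How often each item was shown (subtract the prior counts).
impressions = {i: a + b - 2 for i, (a, b) in posteriors.items()}
print(impressions)
```

Early on, wide posteriors make every item a plausible winner, so the bandit explores; as evidence accumulates, the posteriors sharpen and impressions concentrate on what the couple actually saves.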

🔬 LangSmith Observability

Full visibility into every AI decision:

  • Trace every agent interaction
  • Debug user complaints
  • Monitor training performance
  • 5K traces/month free tier


Project Status

Current Phase: Foundation Setup → Modal Migration

✅ Completed

  • Architecture designed with industry-proven approaches
  • Comprehensive visual documentation
  • Platform decision: Modal (unified Python stack)
  • Admin console built (console.vows.social)
  • Legacy cleanup complete

🚧 In Progress

  • Modal platform setup
  • GPU embedding pipeline (SigLIP 2)
  • Thompson Sampling migration
  • Two-Tower model training

⏳ Next Steps

  • Multi-agent system (Phase 2 - if validated)
  • Mobile app (Flutter)
  • Scale optimization

Technology Stack

Compute & ML

  • 🐍 Modal - Python ML platform (GPU serverless)
  • ⚡ FastAPI - API microservices (async)
  • 🧠 PyTorch - Deep learning framework
  • 🎨 SigLIP 2 - Multimodal embeddings (400M params, state-of-the-art)
  • 🗣️ Jina CLIP v2 - Multilingual support (0.9B params, 89 languages)

Agents & Coordination

  • 🤖 LangGraph - Multi-agent orchestration framework
  • 🎓 Ray RLlib - Multi-Agent PPO training
  • 🔬 LangSmith - Agent observability (5K free traces/month)

Data & Storage

  • 🔍 Qdrant - Vector database for semantic search
  • 💾 Supabase - PostgreSQL for user data and interactions
  • 📦 Modal Volumes - Persistent storage for model checkpoints

Frontend

  • 🌐 Vercel - Next.js hosting (user app + admin console)
  • ⚛️ React - UI framework
  • 🎨 Tailwind CSS - Styling

CDN & Email

  • ⚡ Cloudflare - CDN and email workers only

Documentation

🚀 Getting Started

New to the project? Start here to understand the system and get running.

🏗️ How It Works

Plain-English explanation with visual flows of how everything ties together.

🏛️ Architecture

Technical deep dive into system design, ML components, and data flows.

📋 Implementation Roadmap

Phase-by-phase development plan from foundation to scale.

🔧 Components

Detailed documentation for each system component.

👨‍💻 Development Guides

Git workflow, testing, deployment, and development practices.

📡 API Reference

Complete API documentation for all endpoints.


Philosophy

  1. 🧠 AI-First - Foundation model as source of truth for personalization
  2. 🤖 Multi-Agent - Specialized intelligence coordinated by Multi-Agent PPO
  3. 🎯 Thompson Sampling - Proven exploration/exploitation algorithm (KEPT)
  4. 🐍 Unified Stack - Python everywhere for ML/AI (Modal platform)
  5. 🔬 Observability - Full visibility into AI decisions (LangSmith)
  6. 💰 Free Tier - Validate product-market fit before scaling costs
  7. 📊 Data-Driven - Every decision backed by user behavior
  8. 📚 Comprehensive Docs - Beautiful, visual, plain-English explanations

Key Architectural Decisions

We've made several critical decisions documented in ADRs:

  • ADR-0001 - AI-First architecture with Multi-Agent PPO
  • ADR-0005 - Modal platform choice (replaces Cloudflare Workers + Fly.io)

Current open question:

  • RFC-0002 - Do we need multi-agent complexity before PMF?



Ready to understand the magic? Start with How It Works for a beautiful visual explanation! 🎨