Triggers
- startup MVP
- MVP build
- minimum viable product
- rapid product build
- startup launch
- product-market fit
- quick launch
- lean startup
- fast build
- 4-week build
- 6-week build
- idea to product
- validate idea
- startup sprint
- rapid development
- MVP validation
Instructions
This runbook covers building a startup MVP: a new product that must validate product-market fit fast. Speed matters, but so does quality. Go from idea to a product in production with real users in 4-6 weeks.
Agent Roster
Core Team (Always Active), spawned with organization_spawn:
- Agents Orchestrator: Pipeline controller
- Senior Project Manager: Spec-to-task conversion
- Sprint Prioritizer: Backlog management
- UX Architect: Design foundation
- Frontend Developer: UI implementation
- Backend Architect: API and database
- DevOps Automator: CI/CD and deployment
- Evidence Collector: QA for every task
- Reality Checker: Final quality gate
Growth Team (Activated Week 3+):
- Growth Hacker: Acquisition strategy
- Content Creator: Launch content
- Social Media Strategist: Social media campaign
Support Team (As Needed):
- Brand Guardian: Brand identity
- Analytics Reporter: Metrics and dashboards
- Rapid Prototyper: Quick validation experiments
- AI Engineer: If the product includes AI features
- Performance Benchmarker: Load testing before launch
- Infrastructure Maintainer: Production configuration
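The roster above can be expressed as data and spawned in one pass. A minimal sketch, with one loud caveat: organization_spawn is the tool this runbook names, but its real signature is not documented here, so it is stubbed below as an assumption; only the roster structure is load-bearing.

```python
# Staffing sketch: the team roster as data, spawned through a stubbed
# organization_spawn (the real tool's signature is an assumption).

ROSTER = {
    "core": {       # always active
        "Agents Orchestrator": "pipeline controller",
        "Senior Project Manager": "spec-to-task conversion",
        "Sprint Prioritizer": "backlog management",
        "UX Architect": "design foundation",
        "Frontend Developer": "UI implementation",
        "Backend Architect": "API and database",
        "DevOps Automator": "CI/CD and deployment",
        "Evidence Collector": "QA for every task",
        "Reality Checker": "final quality gate",
    },
    "growth": {     # activated week 3+
        "Growth Hacker": "acquisition strategy",
        "Content Creator": "launch content",
        "Social Media Strategist": "social media campaign",
    },
    "support": {    # as needed
        "Brand Guardian": "brand identity",
        "Analytics Reporter": "metrics and dashboards",
        "Rapid Prototyper": "quick validation experiments",
        "AI Engineer": "AI features, if any",
        "Performance Benchmarker": "pre-launch load testing",
        "Infrastructure Maintainer": "production configuration",
    },
}

def organization_spawn(agent, role, team):
    """Stub standing in for the real tool call (assumed signature)."""
    return {"agent": agent, "role": role, "team": team}

def spawn_team(team):
    return [organization_spawn(a, r, team) for a, r in ROSTER[team].items()]

core_agents = spawn_team("core")  # growth and support teams spawn later
```

Keeping the roster as data means the Week 3 growth activation is just another `spawn_team("growth")` call rather than a separate hand-written step.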
Week 1: Discovery + Architecture (Phase 0 + 1 Compressed)
Days 1-2 (Compressed Discovery):
- Trend Researcher: Quick competitive scan (1 day, no full report)
- UX Architect: Wireframe the key user flows
- Senior Project Manager: Convert the spec into a task list
Days 3-4 (Architecture):
- UX Architect: CSS design system + component architecture
- Backend Architect: System architecture + database schema
- Brand Guardian: Quick brand foundation (colors, typography, voice)
- Sprint Prioritizer: RICE-scored backlog + sprint plan
Day 5 (Foundation Setup):
- DevOps Automator: CI/CD pipeline + environments
- Frontend Developer: Project scaffolding
- Backend Architect: Database + API scaffold
- Quality Gate: Architecture Package approved
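The RICE-scored backlog from Days 3-4 uses the standard formula: score = (reach × impact × confidence) ÷ effort. A minimal sketch of how the Sprint Prioritizer could rank items; the feature names and numbers are illustrative, not from this runbook.

```python
# RICE scoring: reach (users/period), impact (e.g. 0.25-3), confidence
# (0-1), effort (person-weeks). Higher score = higher priority.

def rice(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

# Illustrative backlog entries (hypothetical features and estimates).
backlog = [
    ("email auth", rice(reach=500, impact=3, confidence=0.8, effort=2)),
    ("dark mode", rice(reach=200, impact=1, confidence=0.5, effort=3)),
]
backlog.sort(key=lambda item: item[1], reverse=True)
```

Sorting by score gives the sprint plan its ordering; anything below a cut line falls into the MoSCoW "Won't" bucket for the MVP.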
Weeks 2-3: Core Build (Phase 2 + 3 Compressed)
Sprint 1 (Week 2), using the agency-phase-3-build skill mechanics:
- Agents Orchestrator runs the Dev-QA loop
- Frontend Developer: Core UI (auth, main views, navigation)
- Backend Architect: Core API (auth, CRUD, business logic)
- Evidence Collector: QA each completed task
- AI Engineer: ML features if applicable
- Sprint review at end of week
Sprint 2 (Week 3):
- Continue the Dev-QA loop for the remaining features
- Growth Hacker: Design viral mechanics + referral system
- Content Creator: Begin creating launch content
- Analytics Reporter: Set up tracking and dashboards
- Sprint review at end of week
Week 4: Polish + Hardening (Phase 4 Compressed)
Days 1-2 (Quality Sprint):
- Evidence Collector: Full screenshot suite
- Performance Benchmarker: Load testing
- Frontend Developer: Fix QA issues
- Backend Architect: Fix API issues
- Brand Guardian: Brand consistency audit
Days 3-4 (Reality Check):
- Reality Checker: Final integration testing
- Infrastructure Maintainer: Production readiness
- DevOps Automator: Production deployment prep
Day 5 (Gate Decision):
- Reality Checker verdict
- IF NEEDS WORK: Quick fix cycle (2-3 days)
- IF READY: Proceed to launch
- Executive Summary Generator: Stakeholder briefing
Weeks 5-6: Launch + Growth (Phase 5)
Week 5 (Launch):
- DevOps Automator: Production deployment
- Growth Hacker: Activate acquisition channels
- Content Creator: Publish launch content
- Social Media Strategist: Cross-platform campaign
- Analytics Reporter: Real-time monitoring
- Support Responder: User support active
Week 6 (Optimize):
- Growth Hacker: Analyze and optimize channels
- Feedback Synthesizer: Collect early user feedback
- Experiment Tracker: Launch A/B tests
- Analytics Reporter: Week 1 analysis
- Sprint Prioritizer: Plan iteration sprint
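For the Week 6 A/B tests, one standard way the Experiment Tracker could call a winner is a two-proportion z-test on conversion counts. This is a generic statistics sketch, not a tool from this runbook; it assumes independent samples and reasonably large counts.

```python
import math

def ab_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test.

    conv_* = conversions, n_* = visitors in each variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function; two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

A p-value below the chosen threshold (commonly 0.05) suggests the variants genuinely differ; otherwise, keep the test running or call it inconclusive.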
Key Decisions
| Decision Point | When | Who Decides |
|---|---|---|
| Go/No-Go on the concept | End of Day 2 | Studio Producer |
| Architecture approval | End of Day 4 | Senior Project Manager |
| Feature scope for MVP | Sprint planning | Sprint Prioritizer |
| Production readiness | Week 4, Day 5 | Reality Checker |
| Launch timing | After READY verdict | Studio Producer |
Common Pitfalls & Mitigations
- Scope creep: Sprint Prioritizer enforces MoSCoW; "Won't" means won't
- Over-engineering: Rapid Prototyper mindset, validate first and scale later
- Skipping QA: Evidence Collector runs on EVERY task, no exceptions
- No monitoring: Infrastructure Maintainer sets up monitoring in Week 1
- No feedback mechanism: analytics + feedback are built into Sprint 1
Use knowledge_write to persist architecture decisions, sprint results, and launch metrics.
Deliverables
- [ ] Competitive scan and validated concept
- [ ] Architecture Package (compressed)
- [ ] Working CI/CD pipeline
- [ ] Core features implemented and QA'd
- [ ] Reality Checker READY verdict
- [ ] Production deployment
- [ ] Acquisition channels active
- [ ] Week 1 analytics report
- [ ] User feedback collection active
Success Metrics
| Metric | Target |
|---|---|
| Time to live product | <= 6 weeks |
| Core features complete | 100% of MVP scope |
| First users onboarded | Within 48 hours of launch |
| System uptime | > 99% in first week |
| User feedback collected | >= 50 responses in first 2 weeks |
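For reference, the 99% first-week uptime target translates into a concrete downtime budget:

```python
# 99% uptime over the first week leaves a small downtime budget.
hours_in_week = 7 * 24                        # 168 hours
downtime_budget = hours_in_week * (1 - 0.99)  # hours of allowed downtime
print(f"{downtime_budget:.2f} hours")         # 1.68 hours
```

Roughly 1.7 hours total, which is why monitoring has to be live before launch, not added afterwards.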
Verify
- The deploy command was actually run and the build/log output (or deploy URL) is captured
- The deployed URL was opened and returned a 2xx; key routes were sampled, not just the index
- Environment variables required by the app are present in the target environment; missing-var failures were ruled out
- A rollback plan (previous deployment ID, git SHA, or one-line revert command) is documented before promoting to production
- Health/observability check (logs, error tracker, status page) was inspected post-deploy; baseline error rate is recorded
- DNS / domain / SSL configuration was confirmed, not assumed to carry over from previous deploys
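The route and environment-variable checks above can be scripted. A minimal sketch using only the standard library; the base URL, route paths, and variable names are placeholders you would replace with your deployment's real values.

```python
# Post-deploy smoke checks: required env vars present, key routes
# answering 2xx. URL and variable names below are placeholders.
import os
import urllib.request

def missing_vars(required):
    """Return the names in `required` that are unset or empty."""
    return [v for v in required if not os.environ.get(v)]

def check_route(base_url, path):
    """Return True if base_url + path answers with a 2xx status."""
    req = urllib.request.Request(base_url + path, method="GET")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return 200 <= resp.status < 300

# Example run (placeholder URL); sample more than just the index:
# for path in ("/", "/login", "/api/health"):
#     assert check_route("https://app.example.com", path), path
# assert not missing_vars(["DATABASE_URL", "SECRET_KEY"])
```

Running this as a post-deploy CI step turns the checklist into an automatic gate instead of a manual one.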