UI/UX Doctor
Overview
UI/UX Doctor is an AI-powered friction detection platform that helps product teams identify, prioritize, and fix UX issues before they impact users. The tool analyzes session recordings, clickstreams, and user behavior data to automatically detect high-friction UI patterns, generate actionable prescriptions, and track team learning over time.
Built as a design-engineering hybrid project, I led both the UX design and frontend development using Next.js 16, React 19, TypeScript, and Tailwind CSS. The project addresses a real pain point: product teams spend 40+ hours monthly on manual UX audits that could be automated with AI.
User Use Cases
Based on interviews with 35 product managers, UX designers, and frontend engineers across 12 product teams.
Automated UX Audit
Actor: UX Designer or Product Manager
Goal: Identify UI friction points without manual session review
Scenario: PM needs to audit checkout flow before quarterly review. Has 500 session recordings but no time to watch them all. Needs to know: Where do users drop off? What confuses them? Which issues should be fixed first?
Key Features Designed:
- Session Upload: Drag-drop JSON session data or connect analytics API
- AI Analysis: GPT-4 powered detection of rage clicks, dead clicks, excessive scrolling
- Friction Timeline: Visual timeline showing frustration peaks per session
- Heatmap Overlays: Click and scroll heatmaps on screenshots
- Severity Scoring: Auto-calculated impact score based on frequency and user impact
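As an illustration of what friction detection over raw clickstream data can look like, here is a minimal TypeScript sketch that groups rapid, spatially clustered clicks into "rage click" bursts. The event shape and thresholds are assumptions for illustration, not the product's actual pipeline.

```typescript
// Hypothetical clickstream event: coordinates plus a session-relative timestamp.
interface ClickEvent {
  x: number;
  y: number;
  timestamp: number; // ms since session start
}

// Group clicks into bursts when consecutive clicks are close in time and space;
// bursts of minBurst+ clicks are flagged as rage clicks. Thresholds are illustrative.
function detectRageClicks(
  events: ClickEvent[],
  maxGapMs = 500,  // max time between clicks in one burst
  maxDistPx = 30,  // max distance between clicks in one burst
  minBurst = 3     // clicks needed to count as "rage"
): ClickEvent[][] {
  const bursts: ClickEvent[][] = [];
  let current: ClickEvent[] = [];

  for (const e of events) {
    const prev = current[current.length - 1];
    const close =
      prev !== undefined &&
      e.timestamp - prev.timestamp <= maxGapMs &&
      Math.hypot(e.x - prev.x, e.y - prev.y) <= maxDistPx;

    if (close) {
      current.push(e);
    } else {
      if (current.length >= minBurst) bursts.push(current);
      current = [e];
    }
  }
  if (current.length >= minBurst) bursts.push(current);
  return bursts;
}
```

The same sliding-window idea extends to dead clicks (clicks with no resulting DOM change) and excessive scrolling (repeated direction reversals).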
Generate UI Prescriptions
Actor: Frontend Engineer receiving UX feedback
Goal: Get actionable, code-ready UI fix recommendations
Scenario: Engineer is assigned 8 UX issues from the audit. Needs to understand what's wrong, why it matters, and how to fix it. Wants code examples, not abstract design advice.
Key Features Designed:
- Prescription Cards: Each issue gets a card with: Problem, Impact, Solution, Code
- React + Tailwind Code: Copy-paste ready component fixes
- Risk Notes: Potential edge cases and accessibility concerns
- Before/After Comparison: Visual diff showing current vs. recommended state
- Accept/Reject Flow: Engineer marks prescriptions as accepted, rejected, or needs discussion
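The card anatomy above can be sketched as a small data model. The field names (`problem`, `impact`, `solution`, `codeFix`, `riskNotes`) mirror the card sections described here but are hypothetical, not the app's real schema.

```typescript
// Review states an engineer can assign to a prescription.
type PrescriptionStatus = "pending" | "accepted" | "rejected" | "needs-discussion";

// One prescription card: problem, impact, solution, code, risks, review state.
interface Prescription {
  id: string;
  problem: string;      // what's wrong
  impact: string;       // why it matters
  solution: string;     // how to fix it
  codeFix: string;      // copy-paste ready React + Tailwind snippet
  riskNotes: string[];  // edge cases and accessibility concerns
  status: PrescriptionStatus;
}

// Immutable status update, so the feedback loop can diff before/after states.
function review(p: Prescription, status: PrescriptionStatus): Prescription {
  return { ...p, status };
}
```

Keeping the update immutable makes it trivial to log every accept/reject event for the learning loop described later.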
Team Analytics Dashboard
Actor: Design Lead or VP of Product
Goal: Track UX health metrics and team learning over time
Scenario: Leadership needs to report UX improvements to stakeholders. Wants to show: Are we finding fewer issues over time? Is the team accepting AI recommendations? What's the ROI of fixing these issues?
Key Features Designed:
- Acceptance Trend: Line chart showing % of AI prescriptions accepted over time
- Remaining Issues: Kanban-style board of open, in-progress, fixed issues
- Estimated Recovery: Dollar value of potential revenue recovery from fixes
- Learning Memory: Heatmap showing which issue types team has learned to prevent
- Quick Actions: Click any issue to jump to detailed results view
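The acceptance trend can be computed as a per-month percentage of accepted prescriptions. A minimal sketch, assuming a flat review-record shape (the record fields are illustrative):

```typescript
// One review event, tagged with the month it happened in.
interface ReviewRecord {
  month: string; // e.g. "2024-06"
  accepted: boolean;
}

// Group reviews by month and compute the % accepted, sorted chronologically.
function acceptanceTrend(
  records: ReviewRecord[]
): { month: string; pctAccepted: number }[] {
  const byMonth: Record<string, { accepts: number; total: number }> = {};
  for (const r of records) {
    if (!byMonth[r.month]) byMonth[r.month] = { accepts: 0, total: 0 };
    byMonth[r.month].accepts += r.accepted ? 1 : 0;
    byMonth[r.month].total += 1;
  }
  return Object.keys(byMonth)
    .sort()
    .map((month) => ({
      month,
      pctAccepted: Math.round((byMonth[month].accepts / byMonth[month].total) * 100),
    }));
}
```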
Feature Prioritization (Lead Surgeon)
Actor: Product Manager deciding roadmap
Goal: Decide whether to build a new feature using 4-check framework
Scenario: PM has 15 feature ideas but limited engineering resources. Needs framework to prioritize. Should we build "Smart CTA auto-layout" or focus on checkout optimization?
Key Features Designed:
- 4-Check Framework: validates four gates: Demos in 30s? Shows defensibility? Visible business impact? Buildable in under 1 hour?
- API Endpoint: /api/prioritize accepts project/feature data
- Build Now vs. Park Decision: Clear go/no-go recommendation with reasoning
- Integration: Works with Linear, Jira, Notion for ticket creation
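A minimal sketch of the decision logic a /api/prioritize endpoint might run, assuming the simple rule that a feature must pass all four checks to earn a "build now" verdict. The field and verdict names are illustrative, not the actual API contract.

```typescript
// The four gates of the prioritization framework, as boolean checks.
interface FeatureChecks {
  demosIn30s: boolean;      // can it be demoed in 30 seconds?
  defensible: boolean;      // does it show defensibility?
  visibleImpact: boolean;   // visible business impact?
  buildableInHour: boolean; // buildable in under 1 hour?
}

interface Decision {
  verdict: "build-now" | "park";
  reasoning: string[];
}

// All four checks must pass for a go; otherwise park, with each failure listed.
function prioritize(name: string, checks: FeatureChecks): Decision {
  const failed: string[] = [];
  if (!checks.demosIn30s) failed.push(`${name} fails check: demos in 30s`);
  if (!checks.defensible) failed.push(`${name} fails check: defensibility`);
  if (!checks.visibleImpact) failed.push(`${name} fails check: visible business impact`);
  if (!checks.buildableInHour) failed.push(`${name} fails check: buildable in under 1 hour`);
  return failed.length === 0
    ? { verdict: "build-now", reasoning: [`${name} passes all four checks`] }
    : { verdict: "park", reasoning: failed };
}
```

The reasoning array maps directly onto a ticket description when pushing the decision to Linear, Jira, or Notion.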
Team Learning & Preferences
Actor: AI System learning from team behavior
Goal: Improve prescription accuracy based on accept/reject patterns
Scenario: Team consistently rejects prescriptions about "button color" but accepts "form validation" fixes. AI needs to learn team preferences and weight future recommendations accordingly.
Key Features Designed:
- Feedback Loop: Every accept/reject trains the preference model
- Preference Categories: Team-level preferences by issue type, severity, component
- Smart Sorting: Issues sorted by predicted acceptance probability
- Confidence Scores: AI indicates how confident it is team will accept
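One simple way to implement this feedback loop is a smoothed accept rate per issue type, which then drives the sorting. This is an illustrative sketch with a Laplace-smoothed frequency model, not the production preference model.

```typescript
// One accept/reject event for a given issue type.
interface IssueFeedback {
  issueType: string; // e.g. "form-validation", "button-color"
  accepted: boolean;
}

// Laplace-smoothed accept rate per issue type: (accepts + 1) / (total + 2).
// Smoothing keeps sparse categories near 0.5 instead of snapping to 0 or 1.
function acceptanceRates(history: IssueFeedback[]): Record<string, number> {
  const counts: Record<string, { accepts: number; total: number }> = {};
  for (const f of history) {
    if (!counts[f.issueType]) counts[f.issueType] = { accepts: 0, total: 0 };
    counts[f.issueType].accepts += f.accepted ? 1 : 0;
    counts[f.issueType].total += 1;
  }
  const rates: Record<string, number> = {};
  for (const type of Object.keys(counts)) {
    rates[type] = (counts[type].accepts + 1) / (counts[type].total + 2);
  }
  return rates;
}

// Sort issue types by predicted acceptance, defaulting unseen types to 0.5.
function sortByPredictedAcceptance(
  types: string[],
  rates: Record<string, number>
): string[] {
  return [...types].sort((a, b) => (rates[b] ?? 0.5) - (rates[a] ?? 0.5));
}
```

The per-type rate doubles as the confidence score shown next to each prescription.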
Usage Quota Management
Actor: Finance/Operations managing tool costs
Goal: Control AI analysis costs with plan-based credits
Scenario: Startup has 5,000 sessions/month but limited budget. Needs usage controls to prevent surprise bills. Wants to allocate "analysis credits" and "UI/UX credits" across team.
Key Features Designed:
- Dual Credit System: Separate credits for AI analysis vs. prescription generation
- Usage Warnings: Alerts at 50%, 80%, 95% of monthly quota
- Team Allocation: Distribute credits across projects/team members
- Cost Estimator: Pre-run cost preview before analysis
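The warning thresholds and pre-run cost preview could be sketched like this. The threshold messages follow the bullets above; the exact semantics of a credit pool are assumed for illustration.

```typescript
// One credit pool (e.g. "analysis" or "UI/UX" credits in the dual system).
interface CreditPool {
  used: number;
  monthlyQuota: number;
}

// Return the highest applicable warning, or null below the 50% threshold.
function quotaWarning(pool: CreditPool): string | null {
  const pct = (pool.used / pool.monthlyQuota) * 100;
  if (pct >= 95) return "95% of monthly quota used: analysis may be blocked soon";
  if (pct >= 80) return "80% of monthly quota used";
  if (pct >= 50) return "50% of monthly quota used";
  return null;
}

// Pre-run cost preview: refuse a run that would push the pool past its quota.
function canAfford(pool: CreditPool, estimatedCost: number): boolean {
  return pool.used + estimatedCost <= pool.monthlyQuota;
}
```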
Our Process
Research
35 interviews across 12 companies, 5 UX audit sessions, pain point analysis
Define
"AI that speaks engineer" value prop, 4-check prioritization framework
Design
5 iterations: Chat → Cards → Heatmaps → Before/After → Dashboard (80% retention)
Test
Design-developer rapid prototyping, real-time testing, continuous refinement
Deploy
GitHub open-source, comprehensive docs, MIT license for adoption
- Low-fidelity: Diagnosis concept sketches
- Mid-fidelity: UX audit interface
- High-fidelity: Diagnosis & prescriptions
- Prototype: UX Doctor dashboard
Key Design Solutions
Developer-First UI
Dark theme, monospace fonts for code, keyboard shortcuts throughout. Information density optimized for power users. No marketing fluff, just actionable data.
Contextual AI Prescriptions
AI doesn't just say "button is too small." It generates a concrete fix, e.g. <button class="px-6 py-3 text-base">, along with an explanation of touch target size guidelines (44x44px minimum). Engineers copy, paste, commit.
Learning Memory Visualization
A heatmap of "issue types we've learned to prevent" that fades as issues stop recurring. This creates a positive reinforcement loop: teams compete to keep their heatmap "cool" (fewer recurring issues).
Business Value Metrics
Dashboard translates UX fixes into revenue impact. "Fixing checkout friction could recover $12K/month in abandoned carts." Speaks stakeholder language.
Results & Impact
Headline stats (figures not captured in this export): time savings vs. manual audits, hours saved per design team, and detection accuracy of AI vs. human review.
Adoption: The tool was adopted by 8 product teams within the first month. The GitHub repo reached 500+ stars, and the project was featured in the Product Hunt daily newsletter.
Team Impact: One SaaS company reported a 70% reduction in UX-related support tickets after 3 months of use. Another team increased feature release velocity by 25% because the UX feedback loop shortened from 2 weeks to 2 days.
Personal Learning: This project deepened my full-stack design skills: balancing AI prompt engineering, React performance optimization, and UX writing for technical audiences.