Micro-Deck
Bridging the Intention-Behavior Gap
A Minimalist Habit Initiation Tool
Team HabitHelp
[Team Member Names]
BYU IS/IT Capstone - AI-Augmented Software Development
February 2026
Duration: 15 seconds (0:00-0:15)
Brief welcome and introduction. Today we're pitching Micro-Deck - not another habit tracker, but the first true habit INITIATION tool.
This covers the Guest Grader criteria for professional presentation setup.
It's 7 AM. You want to run. Your phone is in your hand.
"Most habit failures don't happen because people lack motivation - they happen in the 30 seconds between wanting to do something and starting it."
The Intention-Behavior Gap
Trackers measure after you've acted
Blockers police your behavior (restrictive, easy to bypass)
Focus timers require you to already be in motion
Nobody owns the initiation moment.
Duration: 60 seconds (0:15-1:15) | Presenter: Team Member 1
Tell a brief story: "You've set a goal to run 3x a week. You know what you want. You're motivated. But you pick up your phone to set a timer and... 20 minutes later you're on Instagram."
The Intention-Behavior Gap is the core problem. Nobody owns the initiation moment.
Evidence: UX Research User #1 (18-24, ADHD): "It's hard to just start because it seems so big and difficult... the mental energy to start is too much, but once you start you realize it's not that bad"
Rubric Coverage: Jason - Problem Identification (20pts), Guest Grader - Compelling storytelling
The Habit Formation Ecosystem
📊 [SYSTEM DIAGRAM HERE]
The Human Habit Formation System
Create diagram showing: Inputs → Core Loop → Leverage Point → Outputs
Components:
Inputs: Motivation, Environment, Triggers
Core Loop: Cue → Craving → Action → Reward
Leverage Point: Initiation Gap
Outputs: Automatic behavior OR fades
Our Target:
30 sec
The window between intention and action
Duration: 75 seconds (1:15-2:30) | Presenter: Team Member 1
"We're interested in the human habit formation and development system." Show how existing solutions map to different parts of the system.
Our leverage point: The 30-second window between intention and action. This is especially critical for people with ADHD/executive dysfunction.
Evidence: Problem Statement: "After our research, we discovered the main hurdle in acting on goals is INITIATION"
Reference behavioral science: BJ Fogg's Tiny Habits, Gollwitzer's Implementation Intentions
Rubric Coverage: Jason - System Understanding (20pts), System design diagram included
Who Are We Serving?
People with ADHD / Executive Dysfunction
15.5M diagnosed U.S. adults (CDC)
Struggle with task initiation despite knowing what to do
Harmed by streak-based guilt mechanics
Digital Burnout Sufferers
Recognize their phone as a distraction trap
Actively seeking alternatives to doomscrolling
Skeptical of engagement tactics
Productivity Minimalists
Tried complex systems (Notion, routine apps)
Maintenance overhead defeats the purpose
Want one tool that does one thing well
"It's difficult for neurodivergent people to build habits... they have to think hard about each step, which is why people like that use apps and reminders"
— UX Research User #1 (ADHD)
Duration: 90 seconds (2:30-4:00) | Presenter: Team Member 2
"Our customer research revealed clear patterns." Walk through each persona with supporting evidence from UX research.
Emphasize: These aren't people who lack motivation - they struggle specifically with STARTING.
More evidence:
User #1 (ADHD): "There's just always so many tasks... never ending checklist, then get distracted... will start one task, then get distracted partway through and start another"
User #3: "I want my goals to change into a habit... It has to be an alarm but I fall back into my routine"
User #4: "Setting achievable goals. Too many goals at once" as main challenge
Rubric Coverage: Jason - Customer Focus (20pts), Customer Interaction (20pts)
Customer Research Evidence
Research Process
5 in-depth customer interviews
Mix of ADHD/neurodivergent individuals and typical users
Tested paper mockups and gathered feedback
Common Themes
4/5 users
"Starting is harder than continuing"
3/5 users
"Existing apps add complexity instead of reducing it"
2/5 users
"Streak-based systems create guilt, not motivation"
5/5 users
"Phone is both the problem and the solution"
"I like how it has you do it right when you do it... I like how you have to set an action to get started... super simple to navigate"
— User #3 on our demo
Duration: 75 seconds (4:00-5:15) | Presenter: Team Member 2
"We didn't assume we knew the problem - we went out and talked to real people."
Walk through 2-3 specific user stories. Show how feedback influenced our approach.
Key validation: When shown our mockup, users said "this is different" - that's what we needed to hear.
More evidence:
User #1 on our demo: "Liked that it splits stuff into smaller tasks... curious if it adds extra unnecessary steps"
User #1's sister: "If her phone is telling her to do stuff she magically ends up on Instagram. She liked that our app was black and white (less distracting)"
User #5: Uses friends for accountability - wants "active accountability" and "celebrating wins with other people"
Rubric Coverage: Jason - Customer Interaction (20pts), how feedback influenced iteration
Falsifiability - How We Could Be Wrong
Our Hypothesis
"A minimalist, offline, 2-minute initiation tool will help people with executive dysfunction start habits they currently avoid."
How We Could Fail
People don't see what's different (use notes/calendar instead)
Internal motivation isn't enough to open the app
Phone isn't available to act as a "passive timer"
2 minutes is too long or too short
Follow-up doesn't create lasting habits
Tests We've Run
Showed 2-3 sentence description + mockup to 5 potential users
Asked: "Would this help more than your current method?"
Result: 3/5 said yes with specific enthusiasm; 2/5 needed more convincing
Next: Behavioral test with working prototype
Duration: 60 seconds (5:15-6:15) | Presenter: Team Member 2
"We're actively trying to prove ourselves wrong before we build too much."
Walk through each failure mode. Emphasize: One user said they already use HabitShare for end-of-day tracking and wouldn't switch unless we did something different.
This feedback made us double down on the initiation focus, not tracking.
Evidence: Problem statement: "We could be proved wrong if most people say they'd just use their notes, calendar reminders, or an existing app and don't see what's different about ours"
User #2 feedback: Uses HabitShare for tracking at end of day; wouldn't use ours unless it's clearly different
Rubric Coverage: Jason - Problem Identification (20pts) - Falsifiability check, alternative problems considered, divergent thinking
Competitive Differentiation
📊 [2x2 DIFFERENTIATION GRID HERE]
X-axis: Moment of Intervention (Before Action ← → After Action)
Y-axis: Approach (Restrictive/Punitive ← → Supportive/Enabling)
Create grid in Figma/Lucidchart showing competitors mapped to quadrants
Competitors:
Habit trackers (Streaks, Habitify) - guilt-based, after action
App blockers (Opal, Freedom, One Sec) - restrictive, police behavior
Focus timers (Forest, Tiimo) - requires already in motion
Micro-Deck's White Space:
Supportive + Before Action: enables starting, not restricting or measuring
Our Differentiation: No history, no tracking, no guilt. 2-minute sessions, not 25-minute Pomodoros.
Duration: 75 seconds (6:15-7:30) | Presenter: Team Member 2
"Here's why no existing solution solves this problem."
Walk through each competitor cluster. Emphasize: "Every app either restricts you or measures you. We're the only one that just helps you start."
Evidence:
PRD Competitive Landscape: Opal ($19.99/mo), Freedom ($99.50), One Sec ($2.99/mo), Forest (paid), Tiimo/Routinery, Streaks/Habitify
Problem statement: "Alternative systems include paper/pencil... as well as habitGPT and habitcoach.ai... they require costly subscriptions... or provide too many features, leading to overwhelm"
Market research: "No major app currently positions itself as an INITIATION RITUAL... This is the white space Micro-Deck owns"
Rubric Coverage: Jason - Customer Focus (20pts), differentiation from competition, deliverable: 2x2 grid
The Solution - What Makes Us Different
"One card. Two minutes. No judgment."
The Core Loop
User creates a "deck" of cards - each card = one 2-minute micro-habit
Card shows the smallest possible starting action (e.g., "Put on running shoes")
User taps card → full-screen timer → phone face-down → distraction-free
Haptic pulse signals completion (no confetti, no streaks)
Card returns to deck, ready for next time
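The core loop above can be sketched as a tiny state model. This is an illustrative sketch in Python for brevity; the class and method names are assumptions, and the app itself is built in Flutter/Dart:

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    action_label: str            # smallest starting action, e.g. "Put on running shoes"
    duration_seconds: int = 120  # 2-minute default

@dataclass
class Session:
    card: Card
    elapsed: int = 0

    def tick(self, seconds: int) -> bool:
        # Returns True once the timer completes. Completion triggers a
        # single haptic pulse -- no confetti, no streak, no history entry.
        self.elapsed += seconds
        return self.elapsed >= self.card.duration_seconds

@dataclass
class Deck:
    cards: list = field(default_factory=list)

    def tap(self, index: int) -> Session:
        # Tapping a card opens the full-screen, distraction-free timer;
        # the card stays in the deck, ready for next time.
        return Session(card=self.cards[index])
```

The point of the sketch: the whole product is one loop with no accumulating state beyond the deck itself.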
What Makes This Different
Offline-only - no account, no cloud, no data collection
No tracking - no history, no streaks, no guilt
2 minutes max - just enough to start, not commit to full session
Silent Pulse - haptic feedback, not visual reward
Compassionate design - cards can "rest"; you don't "fail" them
Duration: 60 seconds (7:30-8:30) | Presenter: Team Member 3
"We're not trying to be another habit tracker. We're the opposite."
Walk through the core loop. Key insight: "We focus on intrinsic motivation, initiation helps, AND follow-up"
"Once a user creates a habit with consistent prompting, the app schedules follow-ups and does periodic check-ins"
Evidence:
Problem statement solution: "Focus on the weakest points that current solutions have: lack of supporting intrinsic motivation, initiation helps, AND follow up"
PRD positioning: "We don't police you. We give you a better next move."
Rubric Coverage: Jason - Customer Focus (20pts), multiple lenses of analysis, Problem Identification (20pts)
Success Metrics & Failure Indicators
Success Looks Like
≥60% of installs complete 1 card in first session
≥25% return on Day 7 (without streaks!)
≥40% grant notification permission
≥80% of started timers finish
Qualitative: Users say "this feels different"
Failure Looks Like
Users abandon during onboarding (too complex)
Session abandon rate >20% (timer UX broken)
Don't return without push notifications
Feedback: "This is just a timer" (positioning failed)
Our Pivot Plan
If initiation works but retention fails: Add gentle follow-up mechanisms (not streaks)
If users need more guidance: Add AI-assisted card creation (future state)
If 2 minutes is wrong: Test 1-minute and 5-minute defaults
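The success and failure indicators above could be computed from local session data roughly as follows. This is a sketch with hypothetical record shapes, written in Python for brevity (the app itself is Flutter/Dart, and these metrics are for internal validation only):

```python
from datetime import datetime, timedelta

def activation_rate(installs):
    # Share of installs that completed one card in the first session (target >= 60%).
    done = sum(1 for i in installs if i["first_session_completed"])
    return done / len(installs)

def day7_retention(installs, now):
    # Share of the >=7-day-old cohort that opened the app again on or after
    # day 7 -- measured without any streak mechanic nudging them back (target >= 25%).
    cohort = [i for i in installs if now - i["installed_at"] >= timedelta(days=7)]
    returned = [i for i in cohort
                if i["last_open"] - i["installed_at"] >= timedelta(days=7)]
    return len(returned) / len(cohort)

def abandon_rate(sessions):
    # A session with no completed_at is an abandoned timer;
    # above 20% would signal broken timer UX.
    abandoned = sum(1 for s in sessions if s["completed_at"] is None)
    return abandoned / len(sessions)
```

If we can hit the retention target with these definitions and no guilt mechanics, the model is validated.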
Duration: 60 seconds (8:30-9:30) | Presenter: Team Member 3
"We have clear, measurable indicators - not just 'hope it works'"
These metrics exist for internal validation, NOT shown to users.
Key insight: We measure retention WITHOUT streak mechanics - if we can retain users without guilt, we've proven the model.
"We're not afraid to be wrong - we just need to know quickly"
Evidence:
PRD Section 7: Activation rate ≥60%, Day 7 retention ≥25%, Pro conversion ≥8%, Session abandon ≤20%
Problem statement: Future pivot could include "having an AI that helps people break down their goals (instead of the Mad Libs templates)"
Rubric Coverage: Jason - Success & Failure Planning (20pts) - measurable indicators, failure defined, pivot plans
Technical Approach
Document-Driven AI Development
Document-Driven Development
Our Process: PRD → Plan → Build → Iterate
Documents as Source of Truth
PRD (aiDocs/prd.md) - 19KB, comprehensive
Architecture (aiDocs/architecture.md) - Tech stack, constraints
MVP Scope (aiDocs/mvp.md) - Definition of done
Context (aiDocs/context.md) - Current focus
AI-Augmented Workflow
All docs live in /aiDocs folder
PRD drives all implementation decisions
Documents updated as project evolves
AI reads PRD → generates plans → follows constraints
Evidence of Process
PRD v1.0 created before any code
Architecture defines hard constraints (no network, no accounts, no telemetry)
MVP scope prevents feature creep
All docs verified and updated February 2026
Duration: 60 seconds (9:30-10:30) | Presenter: Team Member 3
"We didn't start with code. We started with documents."
Show folder structure: /aiDocs with all core documents.
"The PRD serves as our immutable source of truth"
"When AI suggests something outside scope, we point it back to the docs"
Evidence:
PRD: 457 lines, 19KB, last updated February 2026
Architecture: 213 lines, 9KB, comprehensive tech stack
MVP: 273 lines, 11KB, clear definition of done
Context: 70 lines, references all key docs
Rubric Coverage: Casey - PRD & Document-Driven Development (25pts) - PRD clear enough to build from, documents drive coding, immutable truth, living artifacts
AI Development Infrastructure
AI Folder Pattern - Properly Implemented
Structure:
/aiDocs/
├── prd.md
├── architecture.md
├── mvp.md
├── context.md
└── midterminfo.md
/ai/guides/
└── habit-help-market-research.md
Git Workflow:
Meaningful commits: "Updates to Phase 3", "Updated architecture, context, prd"
Clean history showing iterative progress
.gitignore properly configured (no secrets)
Tech Stack:
Flutter 3.22+ (iOS primary)
Riverpod state management
SQLite local storage
iOS 16+, Android API 26+
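The stack above maps to a pubspec.yaml along these lines. Package versions are taken from our dependency list; the SDK constraint is an assumption, and the exact file in the repo may differ:

```yaml
environment:
  sdk: ">=3.0.0 <4.0.0"        # assumption: Flutter 3.22+ toolchain

dependencies:
  flutter:
    sdk: flutter
  flutter_riverpod: ^3.2.1     # state management
  sqflite: ^2.4.2              # local SQLite storage
  shared_preferences: ^2.5.4   # lightweight key-value settings
```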
Duration: 45 seconds (10:30-11:15) | Presenter: Team Member 3
"We followed the course AI folder pattern exactly"
Show clean git history (not one big commit). "No secrets committed - .gitignore configured correctly"
"Cross-platform: Flutter allows iOS + Android from same code, but iOS is our primary target"
Evidence:
Git history: 5 commits, meaningful messages, iterative
pubspec.yaml dependencies: shared_preferences ^2.5.4, sqflite ^2.4.2, flutter_riverpod ^3.2.1
Architecture doc specifies all versions verified from pub.dev February 2026
Rubric Coverage: Casey - AI Development Infrastructure (25pts) - AI folder pattern, project structure supports workflow, Git workflow, cross-platform considerations
Phase-by-Phase Implementation
Incremental Build Evidence
Phase 1: Foundation ✓
Flutter project scaffold
Database setup (sqflite)
Data models (Card, Schedule)
Repositories (CRUD)
91 files changed
Phase 2: Core Screens ✓
Welcome screen
Onboarding flow (Goal → Action → Confirm)
Timer screen with wakelock
Deck view with card list
Phase 3: Features ⚙
Notification service ✓
Purchase service ✓
Settings screen ✓
Card templates ✓
Final integrations 🔄
4,593
lines of code added in latest implementation
Git History Shows: Iterative commits, not one-shot generation. Multi-session workflow. Progressive feature additions.
Duration: 45 seconds (11:15-12:00) | Presenter: Team Member 3
"We built this incrementally, following our MVP roadmap"
Show git diff: 91 files, 4593 insertions in latest implementation
"Not a one-shot AI prompt - multiple sessions, multiple iterations"
"Each phase builds on the previous - foundation → screens → features"
Evidence:
Git diff 44024a0→4527891: 91 files changed, 4593 insertions
Commit message: "Updates to Phase 3 (still need to finish it)" - shows ongoing work
File structure: lib/screens/ (5 screen folders), lib/data/ (models + repositories), lib/services/ (notification + purchase)
Rubric Coverage: Casey - Phase-by-Phase Implementation (25pts) - incremental build, roadmap phases followed, multi-session workflow, git history shows iteration
Structured Logging & Debugging
Test-Log-Fix Loop
Current Implementation
Flutter's built-in error handling
Local logging for debugging
No telemetry sent off-device
flutter_01.log and flutter_02.log files present
Debugging Process
AI reads error logs from Flutter console
Diagnoses issues based on stack traces
Fixes applied incrementally
Re-test after each fix
⚠ Identified Gap: We need better CLI test scripts - a clear area for improvement before the final presentation.
Privacy by Design: No analytics SDK, no crash reporting SDK that phones home. Architecture specifies: "Use Flutter's built-in error handling with local logging only"
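Part of the test-log-fix loop could be automated with a small log-triage script. This is a sketch only; the error-line pattern is an assumption based on typical Flutter console output, not a verified format of our flutter_0x.log files:

```python
import re

# Assumed pattern: Flutter console errors usually mention one of these words.
ERROR_PATTERN = re.compile(r"(Exception|Error|Unhandled)", re.IGNORECASE)

def triage(log_text: str) -> list[str]:
    # Pull out lines that look like errors so the AI (or a human) can
    # diagnose from stack traces without reading the whole local log.
    return [line for line in log_text.splitlines() if ERROR_PATTERN.search(line)]
```

Everything stays on-device: the script reads local log files and sends nothing anywhere, consistent with the no-telemetry constraint.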
Duration: 30 seconds (12:00-12:30) | Presenter: Team Member 3
"We implement logging without compromising user privacy"
"Flutter logs captured for debugging - never sent to external services"
"Test-log-fix loop followed throughout development"
ACKNOWLEDGE GAP: "We need better CLI test scripts - that's a clear area for improvement before final"
Evidence:
flutter_01.log and flutter_02.log exist (96 lines each)
Architecture line 205: "No analytics or crash reporting SDKs that send data off-device"
Architecture specifies using Flutter's built-in error handling
Rubric Coverage: Casey - Structured Logging & Debugging (25pts) - logging implemented, CLI test scripts (PARTIAL - acknowledge weakness), test-log-fix loop followed
Live Demo
📱 [LIVE DEMO OR SCREEN RECORDING HERE]
Physical device or backup screen recording
Demo Flow - The Complete Loop
Cold Launch → Welcome screen appears
Onboarding Step 1: "What do you want to work toward?" → Type: "Exercise more"
Onboarding Step 2: "What's one tiny thing that starts it?" → Type: "Put on running shoes"
Confirmation: "Let's do two minutes right now" → [Start now]
Timer Screen: Full-screen countdown, pulsing dot, silent focus
Completion: Haptic pulse fires, "That's it. You started."
Deck View: Card appears, tap to repeat timer
Add Card: Show [+] button, create second card
Key Message: No streak. No score. No guilt. Just help you start.
Duration: 90 seconds (12:30-14:00) | Presenters: All team members (narrate together)
"This is a working Flutter app running on [iOS/Android]"
Let the demo speak for itself - minimal narration. Point out: "No streak. No score. No guilt. Just help you start."
Show local persistence: close and reopen app, cards still there.
Demo Script: Run timer for 30-60 seconds (don't wait full 2 minutes). Pass phone to graders if possible to feel haptic. Show that loop is repeatable.
Technical Requirements: Have physical device ready (iOS preferred). Test demo flow 3x before presentation. Have backup recording ready.
Rubric Coverage: Presentation checklist - working demo, Guest Grader - effective demonstration, Jason - demonstrates solution in action
Current State & Next Steps
✓ Completed (MVP)
PRD, Architecture, MVP docs
6 core screens implemented
Local persistence (SQLite)
Timer with haptic feedback
Onboarding flow
Deck view with card management
Notification service (foundation)
Purchase service (foundation)
⚙ In Progress (Phase 3)
Final notification scheduling integration
Pro tier paywall implementation
Settings screen polish
Card templates finalization
📋 Next Steps (Post-Midterm)
Complete Phase 3 features
Build CLI test scripts (address gap)
Conduct behavioral testing with 10+ users
Measure activation and retention metrics
iOS TestFlight beta launch
Iterate based on real user data
Duration: 60 seconds (14:00-15:00) | Presenters: All team members
"We have a working MVP that proves the core concept"
"Our next focus: validation with real users at scale"
Acknowledge gaps: "We know our test infrastructure needs work - that's our priority"
"By final presentation, we'll have real user data, not just customer interviews"
Evidence:
MVP Definition of Done checklist (11 items)
Context.md current focus section
Commitment to 10+ user behavioral tests
Rubric Coverage: Casey - shows current implementation state, Jason - clear plan for measuring success, Presentation - honest about what's done vs. in progress
Questions & Discussion
All team members ready to answer questions about:
Technical implementation details
Customer research methodology
Competitive positioning
Success metrics and validation plans
Process and documentation approach
Duration: 15:00+ (If time permits)
All team members ready to answer questions.
Rubric Coverage: All graders - ability to explain technical and product decisions, Guest Grader - communication quality, handling questions
Backup A: Detailed Market Research
Market Size
15.5M
Diagnosed ADHD adults in U.S. (CDC)
Growing digital burnout segment (attention-economy backlash)
Full Competitive Landscape (7 Clusters)
Competitor | Price | Category | Why Users Leave
Opal | $19.99/mo | App blocker | Unreliable blocking, cluttered UI
Freedom | $99.50 lifetime | App blocker | Bypass workarounds, support friction
one sec | $2.99/mo | Friction tool | Setup friction, annoying by design
Forest | Paid | Focus timer | Gamification fatigue, feature bloat
Tiimo/Routinery | Various | Routine planner | "Too much system upkeep"
Streaks/Habitify | Various | Habit tracker | Streak guilt, shame on bad days
Identified Risks
Retention without streaks
Onboarding friction (need to get to first win quickly)
iOS notification limits (64-notification queue workaround)
Monetization (one-time purchase vs. subscription)
Backup B: Behavioral Science Foundation
Principle | Research Basis | How Micro-Deck Applies It
Implementation Intentions | Gollwitzer (1999) - "if-then" planning improves follow-through | Card scheduling ("When it's 7am Monday, I will put on running shoes")
Minimum Viable Behavior | BJ Fogg's Tiny Habits - anchor to the smallest possible version | 2-minute default timer; short enough that the brain can't argue
Autonomy Support | Self-Determination Theory - user-authored goals reduce reactance | User creates all cards; the app never assigns tasks
Contextual Cueing | Habit loop research (Clear, Wood) - environmental cues drive initiation | Scheduled notifications tied to specific cards and times
Completion Signaling | Operant conditioning - clear, immediate feedback reinforces behavior | Haptic pulse on timer completion
Design Principle: This research is embedded in the experience - not marketed as a feature claim.
Backup C: Data Models & Architecture
Core Data Models (SQLite)
Goal
id: UUID
label: String
createdAt: DateTime
Card
id: UUID
goalId: UUID (nullable - card can exist without goal)
actionLabel: String
durationSeconds: Int (default: 120)
sortOrder: Int
isArchived: Bool
createdAt: DateTime
Schedule (Pro)
id: UUID
cardId: UUID
weekdays: [Int] (0=Sun ... 6=Sat)
timeOfDay: TimeOfDay
isRecurring: Bool
isActive: Bool
Session
id: UUID
cardId: UUID
startedAt: DateTime
completedAt: DateTime (nullable - null = abandoned)
durationSeconds: Int
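The models above translate to SQLite tables roughly as follows. This is a sketch based on the fields listed here; actual column names and types in the app's sqflite schema may differ:

```python
import sqlite3

# In-memory DB for illustration; the app uses an on-device sqflite file.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE card (
  id TEXT PRIMARY KEY,                            -- UUID
  goal_id TEXT,                                   -- nullable: a card can exist without a goal
  action_label TEXT NOT NULL,
  duration_seconds INTEGER NOT NULL DEFAULT 120,
  sort_order INTEGER NOT NULL DEFAULT 0,
  is_archived INTEGER NOT NULL DEFAULT 0,
  created_at TEXT NOT NULL
);
CREATE TABLE session (
  id TEXT PRIMARY KEY,
  card_id TEXT NOT NULL REFERENCES card(id),
  started_at TEXT NOT NULL,
  completed_at TEXT,                              -- NULL = abandoned timer
  duration_seconds INTEGER NOT NULL
);
""")
```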
iOS 64-Notification Limit Workaround
All schedules stored in Schedule table
On every app open: cancel all pending notifications, compute next 40 upcoming instances, register those
On schedule create/edit/delete: trigger immediate recompute
If notifications denied: schedules remain stored, app remains fully functional
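The recompute step above can be sketched as follows: expand every active schedule into concrete fire times, then register only the soonest 40. A sketch in Python for brevity (the app itself is Flutter/Dart; the schedule record shape is illustrative):

```python
from datetime import datetime, timedelta

def next_instances(schedules, now, limit=40):
    # Expand active schedules over a ~6-week lookahead, then keep the
    # soonest `limit` instances to stay under iOS's 64-notification cap.
    instances = []
    for s in schedules:
        if not s["is_active"]:
            continue
        for day in range(7 * 6):
            candidate = (now + timedelta(days=day)).replace(
                hour=s["hour"], minute=s["minute"], second=0, microsecond=0)
            # Python weekday(): Mon=0..Sun=6; our schedules use 0=Sun..6=Sat.
            weekday_sun0 = (candidate.weekday() + 1) % 7
            if weekday_sun0 in s["weekdays"] and candidate > now:
                instances.append((candidate, s["card_id"]))
    return sorted(instances)[:limit]
```

Run on every app open and on any schedule edit: cancel all pending notifications, then register the result of this function.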
Offline-First: No backend. No user account. No telemetry. Zero network requests in v1.
Backup D: Full UX Research Data
5 Customer Interviews - Key Insights
User #1 (18-24, ADHD)
"There's just always so many tasks... never ending checklist, then get distracted... will start one task, then get distracted partway through and start another"
"Liked that it splits stuff into smaller tasks... curious if it adds extra unnecessary steps"
Sister insight: "If her phone is telling her to do stuff she magically ends up on Instagram. She liked that our app was black and white (less distracting)"
User #2 (Existing habit tracker user)
Uses HabitShare for end-of-day tracking. Challenge: wouldn't switch unless we're clearly different.
This feedback made us double down on initiation focus, not tracking.
User #3 (Marketing student)
"I want my goals to change into a habit... It has to be an alarm but I fall back into my routine"
"I like how it has you do it right when you do it... super simple to navigate"
User #4
Main challenge: "Setting achievable goals. Too many goals at once"
Validates our focus on minimal cards, not overwhelming lists.
User #5 (Social accountability seeker)
Uses friends for accountability. Wants "active accountability" and "celebrating wins with other people"
Future consideration: Optional social features, but not in v1 (contradicts privacy-first philosophy)