5 exercises — read and discuss DAU/MAU ratios, conversion funnels, churn, MRR, and API performance metrics with the precise vocabulary used in product and engineering teams.
Key metrics vocabulary quick reference
DAU/MAU ratio — engagement / "stickiness" of a product (higher = users return more often)
Churn rate — % of users or customers lost in a period; monthly × 12 ≈ annual
Conversion rate — % of users who complete a target action (purchase, signup)
p50/p95/p99 — latency percentiles; p99 = the latency that 99% of requests stay under (the slowest 1% exceed it)
Percentage points (pp) vs percent (%) — use "pp" for the absolute difference between two percentages (e.g. 12% → 15% is +3 pp, but a 25% relative increase)
1 / 5
A product report shows:
Monthly Active Users (MAU): 142,000
Daily Active Users (DAU): 18,500
New users this month: 9,200
Churned users this month: 3,100
How would you describe user engagement to a stakeholder?
Calculation breakdown:
• DAU/MAU ratio: 18,500 / 142,000 = 0.13 = 13% → "Users are active on average 0.13 × 30 ≈ 4 days per month"
• Net growth: 9,200 − 3,100 = +6,100 users
• Churn rate: 3,100 / 142,000 × 100 ≈ 2.2% monthly churn
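The same arithmetic as a minimal Python sketch (the variable names are illustrative; the figures are from the report above):

```python
# Engagement and churn figures from the product report above.
mau = 142_000          # Monthly Active Users
dau = 18_500           # Daily Active Users
new_users = 9_200
churned_users = 3_100

stickiness = dau / mau                # DAU/MAU ratio
active_days = stickiness * 30         # rough days active per month
net_growth = new_users - churned_users
monthly_churn = churned_users / mau

print(f"DAU/MAU: {stickiness:.0%}")                # ~13%
print(f"~{active_days:.0f} active days/month per user")
print(f"Net growth: {net_growth:+,}")              # +6,100
print(f"Monthly churn: {monthly_churn:.1%}")       # ~2.2%
```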
Key vocabulary:
• DAU — Daily Active Users: unique users who interact with the product in a given day
• MAU — Monthly Active Users: unique users who interact at least once in a given month
• DAU/MAU ratio — engagement ratio; how "sticky" the product is
• churn — users who stopped using the product
• net growth — new users minus churned users
• stickiness — informal term for how often users return to a product
2 / 5
A conversion funnel report shows:
Total visitors: 54,000
Added to cart: 8,100
Checkout started: 3,240
Purchases: 1,620
Where is the biggest drop-off, and what does it suggest?
Funnel calculation:
• 54,000 total visitors
• 8,100 add to cart = 8,100 / 54,000 = 15% of visitors
• 3,240 checkout started = 3,240 / 8,100 = 40% of cart adds
• 1,620 purchases = 1,620 / 3,240 = 50% of checkouts
• Overall conversion: 1,620 / 54,000 = 3%
Funnel analysis insight: The largest drop-off is at the FIRST step (visitors → cart). This means the problem is likely product discovery, relevance, or page content — not the checkout process (which converts at 50%, a strong rate).
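The same funnel arithmetic as a minimal Python sketch (stage names and counts come from the exercise; nothing here assumes a specific analytics tool):

```python
# Funnel stages and counts from the exercise above.
funnel = [
    ("visitors", 54_000),
    ("add to cart", 8_100),
    ("checkout started", 3_240),
    ("purchase", 1_620),
]

top_of_funnel = funnel[0][1]
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    # Conversion from each stage to the next.
    print(f"{prev_name} → {name}: {n / prev_n:.0%} convert")

print(f"Overall conversion: {funnel[-1][1] / top_of_funnel:.1%}")  # 3.0%
```

Printing the step-by-step rates makes the insight above obvious: the 15% first step is the outlier, not the 50% checkout step.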
Key vocabulary:
• conversion rate — % of users who complete a desired action (purchase, signup, etc.)
• conversion funnel — the series of steps from first visit to final action; shaped like a funnel because each stage has fewer users
• drop-off / drop-off rate — users who exit the funnel at a given stage
• bounce rate — % of visitors who leave after viewing only one page
• checkout abandonment — users who started checkout but didn't purchase
• cart abandonment rate — users who added items to cart but didn't checkout
3 / 5
A SaaS company presents these quarterly metrics:
Monthly Recurring Revenue (MRR): $184,000
Customer count: 460
Monthly churn rate: 2.8%
Average Contract Value (ACV): $4,800/year
A new engineer asks: "What does 2.8% monthly churn actually mean in practice?"
Calculation:
• Monthly churn: 460 × 0.028 ≈ 13 customers per month
• Annual churn (approximate): 2.8% × 12 = 33.6% per year
• Revenue impact per customer: $4,800 ACV → each churned customer = $4,800 lost revenue per year
• 13 customers/month × $4,800 ACV ≈ $62,400 of annualised revenue lost each month
Why 2.8% monthly sounds small but is significant:
• "2.8%" feels like nothing, but annualised it's ~33% — the company must replace a third of its customer base every year just to stay flat
• Good SaaS benchmarks: <1% monthly churn (elite) / <2% (good) / 2–5% (acceptable for early-stage) / >5% (problem)
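A minimal Python sketch of the churn arithmetic, including the compounded annualisation that the ×12 rule approximates (inputs are the exercise's figures):

```python
customers = 460
monthly_churn = 0.028
acv = 4_800  # Annual Contract Value per customer, $

churned_per_month = customers * monthly_churn      # ~12.9 → ~13 customers
simple_annual = monthly_churn * 12                 # 33.6% (approximation)
compound_annual = 1 - (1 - monthly_churn) ** 12    # ~28.9% (exact)
lifetime_months = 1 / monthly_churn                # ~36 months
ltv = acv * lifetime_months / 12                   # ACV × lifetime in years

print(f"~{churned_per_month:.0f} customers lost/month")
print(f"Annual churn: {simple_annual:.1%} simple, {compound_annual:.1%} compounded")
print(f"Average lifetime: ~{lifetime_months:.0f} months, LTV ≈ ${ltv:,.0f}")
```

The compounded figure is lower because each month's 2.8% applies to an already-shrunken base, so the ×12 shortcut slightly overstates annual churn.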
Key vocabulary:
• churn rate — % of customers or revenue lost in a given period
• MRR — Monthly Recurring Revenue: predictable monthly subscription revenue
• ACV — Annual Contract Value: the average annual revenue per customer
• annualised churn — monthly churn rate × 12 (approximate; compound is slightly different)
• customer lifetime — 1 / monthly churn rate (months); at 2.8%, average customer stays ≈ 36 months
• LTV — Lifetime Value: ACV × average customer lifetime in years
4 / 5
An analytics report for a mobile app shows week-over-week trends:
Sessions: −5.2%
Crash-free rate: 99.1% → 97.3%
Session duration: −14.6%
Day 7 retention: +3 pp
How would you interpret these mixed signals?
Negative signals:
• Sessions: −5.2% (fewer opens — possibly stability-related)
• Crash-free rate: 99.1% → 97.3% = −1.8 percentage points — a significant drop; at 97.3%, approximately 1 in 37 sessions crashes
• Session duration: −14.6% — users who do open the app are leaving faster
Positive signal:
• Day 7 retention: +3 pp — of users who installed 7 days ago, more are still active. This contradicts an across-the-board decline — retained users are engaging better.
Likely hypothesis: A stability regression (a new crash bug) is causing some users to stop opening the app (fewer sessions) and others to close it sooner (shorter duration), while core retained users are unaffected.
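The percentage-point arithmetic behind the crash-free numbers, as a small Python sketch (values from the report; "1 in 37" is just the reciprocal of the crash rate):

```python
crash_free_before = 0.991   # 99.1%
crash_free_after = 0.973    # 97.3%

pp_change = (crash_free_after - crash_free_before) * 100  # percentage points
crash_rate = 1 - crash_free_after                         # share of sessions that crash
sessions_per_crash = 1 / crash_rate                       # "1 in N" framing

print(f"Crash-free rate: {pp_change:+.1f} pp")            # -1.8 pp
print(f"~1 in {sessions_per_crash:.0f} sessions crashes") # ~1 in 37
```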
Key vocabulary:
• crash-free rate — % of sessions that complete without a crash (Firebase/Crashlytics metric)
• Day N retention — % of users who return N days after first install
• percentage points (pp) — absolute difference between two percentage values
• stability regression — a new code change that introduced crashes
• mixed signals — when some metrics improve and others decline simultaneously
5 / 5
A backend engineering manager reviews API performance metrics for the quarter:
p50 latency: 42ms
p95 latency: 380ms
p99 latency: 1,240ms
Throughput: 8,400 req/s average, 14,200 req/s peak
Uptime: 99.94%
How would you summarise these metrics in a brief stakeholder report?
Why p50/p95/p99 all matter:
• p50 (median): 42ms — half of requests complete in 42ms or less. "Average" users experience this.
• p95: 380ms — 5% of requests take longer than 380ms. If you serve 8,400 req/s, that's 420 requests per second experiencing 380ms+.
• p99: 1,240ms — 1% of requests. At 8,400 req/s, that's 84 requests per second taking over 1.2 seconds.
• At peak (14,200 req/s), 142 req/s are in the 1.2s tail.
Why "average is 42ms" is misleading: The p50 hides the tail. Real users experience the full distribution — and the "long tail" (slow 1–5%) is often the source of user complaints and SLA breaches.
Key vocabulary:
• percentile (p50/p95/p99) — the value below which N% of observations fall
• tail latency — latency at the slow end of the distribution (p95/p99)
• throughput — number of requests processed per second (req/s or RPS)
• error rate — % of requests that return an error (4xx/5xx)
• uptime — % of time the service is available; 99.94% = ~5.3 hours downtime/year
• SLA — Service Level Agreement: the contractual uptime/latency commitment
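And one last sketch for the uptime entry, converting an availability percentage into downtime per year (the 99.94% figure is from the vocabulary above):

```python
uptime = 0.9994                    # 99.94% availability
hours_per_year = 24 * 365          # 8,760 hours
downtime_hours = (1 - uptime) * hours_per_year

print(f"{uptime:.2%} uptime ≈ {downtime_hours:.1f} hours of downtime/year")  # ~5.3
```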