1 / 5
In a product review, the team discusses: "DAU/MAU ratio is at 42% — what does that tell us?" A new engineer asks what DAU/MAU means and why the ratio matters.
DAU, MAU, and the stickiness ratio:
DAU (Daily Active Users) — unique users who perform at least one meaningful action in a single day
MAU (Monthly Active Users) — unique users who perform at least one meaningful action in a 30-day period
DAU/MAU ratio (stickiness): Formula: DAU ÷ MAU × 100%
• 100% = every monthly user visits every single day (impossible in practice)
• 50%+ = extremely sticky (messaging apps, core productivity tools; Facebook historically ~50-65%)
• 25-40% = healthy consumer engagement
• 10-20% = typical for many B2B tools
• <10% = low engagement; most users are not forming a habit
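The ratio above can be sketched in a few lines. A minimal example, assuming per-day sets of active user IDs (the `daily_actives` data below is hypothetical, not from the card):

```python
# Hypothetical sample: date -> set of user IDs active that day.
daily_actives = {
    "2024-01-01": {"u1", "u2", "u3"},
    "2024-01-02": {"u1", "u2"},
    "2024-01-03": {"u1", "u4"},
}

# MAU: unique users active on ANY day in the window.
mau = set().union(*daily_actives.values())

# DAU: average daily unique actives over the window.
avg_dau = sum(len(users) for users in daily_actives.values()) / len(daily_actives)

# Stickiness = DAU / MAU x 100%.
stickiness = avg_dau / len(mau) * 100
print(f"DAU/MAU stickiness: {stickiness:.0f}%")
```

Note that averaging DAU over the window (rather than picking one day) is a common convention, but teams vary; the choice should be stated alongside the metric.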
What "active" means matters: Teams define what counts as "active" — this definition dramatically affects the metric:
• Weak: "logged in"
• Strong: "completed one core action" (sent a message, ran a query, closed a task)
• The definition should reflect the product's North Star Metric
WAU (Weekly Active Users) — weekly variant; useful for products designed for weekly cadences (code review tools, weekly report generators)
Related vocabulary:
• stickiness — how often users return; habit formation
• activation rate — % of new sign-ups who complete the "aha moment" (first valuable action)
• churned user — a user who was active but stopped using the product
• resurrection — a churned user who returns and becomes active again
2 / 5
A growth dashboard shows: "Monthly churn: 3%. LTV: $1,200. CAC: $180. LTV/CAC ratio: 6.7." The CEO says this is healthy. Why?
LTV, CAC, churn — the unit economics vocabulary:
CAC (Customer Acquisition Cost) — total spend to acquire one paying customer: (sales + marketing spend) ÷ new customers acquired
• $180 CAC = the company spent $180 in sales/marketing to get one paying customer
Churn Rate — the percentage of customers who stop using the product each period.
• 3% monthly churn = 3% of customers leave each month
• Annual churn ≈ 1 - (1 - 0.03)¹² ≈ 31% (the naive shortcut, monthly × 12 = 36%, overestimates because it ignores the shrinking customer base)
• Churn directly limits LTV: lower churn = customers stay longer = more revenue per customer
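The compounding step above is easy to verify directly. A quick sketch using the card's 3% figure:

```python
monthly_churn = 0.03

# Compounded: fraction of the original cohort gone after 12 months.
annual_churn = 1 - (1 - monthly_churn) ** 12

# Naive monthly x 12 overestimates: it keeps churning the original base,
# ignoring that each month's 3% applies to a smaller remaining cohort.
naive = monthly_churn * 12

print(f"compounded: {annual_churn:.1%}, naive: {naive:.0%}")
```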
LTV (Customer Lifetime Value) — total revenue expected from one customer over their entire lifecycle.
Simple formula: LTV = ARPU ÷ Churn Rate
• Where ARPU = Average Revenue Per User per month
• If ARPU = $36/month, churn = 3% → LTV = $36 ÷ 0.03 = $1,200 ✓
LTV/CAC ratio — the health benchmark:
• <1: losing money on every customer acquired
• 1-3: danger zone; margins too thin to fund growth
• 3:1: minimum healthy threshold
• 3-5: solid
• >5: excellent; strong margin and room to invest in growth
• Too high (>10): possibly under-investing in growth
Vocabulary:
• unit economics — the revenue and cost metrics for a single customer
• retention rate — the inverse of churn; 3% monthly churn = 97% monthly retention
• ARPU — Average Revenue Per User (monthly)
• payback period — CAC ÷ monthly ARPU; months to recover acquisition cost
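Putting the card's formulas together reproduces the dashboard from the prompt. A sketch using the worked-example inputs ($36 ARPU, 3% churn, $180 CAC):

```python
arpu = 36.0           # $/month, from the LTV worked example
monthly_churn = 0.03
cac = 180.0

ltv = arpu / monthly_churn      # simple LTV = ARPU / churn
ratio = ltv / cac               # LTV/CAC health benchmark
payback_months = cac / arpu     # months to recover acquisition cost

print(f"LTV=${ltv:,.0f}  LTV/CAC={ratio:.1f}  payback={payback_months:.0f} months")
```

This matches the dashboard exactly: LTV $1,200, LTV/CAC 6.7, and a 5-month payback period.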
3 / 5
A data analyst presents cohort retention data: "Week-1 cohort retained at 60%; Week-4 at 35%; Week-8 at 28%; Week-12 at 27% — the curve is flattening." What does this mean for the product?
Cohort retention curves — reading and interpreting retention data:
What is a cohort? A cohort is a group of users who signed up (or started) in the same time period — for example, "all users who signed up in January 2024."
The retention curve shape — what it tells you:
Declining then flattening (this example):
• Many users drop off early (normal — discovering the product isn't for them)
• A core group stabilises and remains long-term
• Flattening = product has found its core audience
• The flatline retention % is the "Long-Term Retention" benchmark
Still declining at the end (no flatline):
• Indicates no retained core; even committed users eventually leave
• Product does not have product-market fit yet
• Engineering focus: find the "aha moment" and make it happen faster for more users
Reading the numbers in this exercise:
• Week 1: 60% retained — a large early drop is normal as new sign-ups decide whether the product fits them
• Week 4: 35% — typical early attrition
• Week 8: 28% — the decline is slowing
• Week 12: 27% — essentially flat; this is the retained core
These 27% are the power users. Understanding them — who they are, why they stay, what their workflow is — is the key to improving the product for the 73% who left.
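The "flattening" judgment can be automated. A minimal sketch using the card's numbers; the 1-point week-over-week threshold is an illustrative assumption, not a standard value:

```python
# week -> % of cohort still active, from the card's example.
retention = {1: 60, 4: 35, 8: 28, 12: 27}

weeks = sorted(retention)
flat_from = None
for prev, cur in zip(weeks, weeks[1:]):
    # Treat a drop of <= 1 point between measurements as "flat".
    if retention[prev] - retention[cur] <= 1:
        flat_from = cur
        break

print(f"curve flattens around week {flat_from} at {retention[flat_from]}%")
```

On real data you would smooth noise first and compare several consecutive intervals, but the idea is the same: the flatline value (here 27%) is the long-term retention benchmark.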
Vocabulary:
• cohort — a group of users sharing a starting time period
• cohort retention curve — a chart showing % of a cohort who remain active over time
• flatline — when the retention curve stops declining (found stable retained users)
• D1/D7/D30 — retention at 1 day, 7 days, 30 days after sign-up
• power user — a highly engaged user who uses the product deeply and frequently
4 / 5
A growth meeting agenda item: "NPS score dropped from 52 to 38 this quarter — we need to review our detractors." What is NPS and what do detractors, passives, and promoters mean?
NPS (Net Promoter Score) — the customer satisfaction vocabulary:
NPS was created by Fred Reichheld (2003) and became a standard business metric. The single survey question: "How likely are you to recommend [Product] to a friend or colleague? (0-10)"
Scoring segments:
• Promoters (9-10) — enthusiastic fans; actively refer others; low churn
• Passives (7-8) — satisfied but unenthusiastic; vulnerable to competition
• Detractors (0-6) — unhappy; may churn; may actively warn others against the product
Benchmarks:
• Above 0: more promoters than detractors (positive)
• Above 30: good
• Above 50: excellent
• Above 70: world-class (Apple, Tesla range)
• B2B SaaS industry average: ~30-40
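The segment rules above translate directly into a score. A sketch with hypothetical survey responses (not from the card):

```python
# Hypothetical 0-10 answers to "How likely are you to recommend...?"
responses = [10, 9, 9, 8, 7, 6, 3, 10, 9, 5]

promoters = sum(1 for r in responses if r >= 9)   # 9-10
detractors = sum(1 for r in responses if r <= 6)  # 0-6
# Passives (7-8) count in the denominator but not the numerator.

nps = (promoters - detractors) / len(responses) * 100
print(f"NPS: {nps:.0f}")
```

Note that NPS is reported as a whole number from -100 to +100, not a percentage.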
A drop from 52 to 38: A 14-point drop in one quarter is significant. Reviewing detractor responses (the verbatim comments from 0-6 scorers) reveals the specific pain points. Common triggers: degraded performance, a major UX change, a pricing change, a competitor launch.
Vocabulary:
• NPS (Net Promoter Score) — loyalty metric; % promoters - % detractors
• CSAT (Customer Satisfaction Score) — satisfaction at a specific interaction: "How satisfied were you with this support response?" (1-5 or 1-10)
• CSAT vs NPS: CSAT measures a transaction; NPS measures the overall relationship
• churn survey / exit survey — asked when a customer cancels
5 / 5
A company's OKR for Q3: "Objective: Grow into the enterprise market. Key Result 1: Sign 5 enterprise logos (ACV > $50K). Key Result 2: Reduce enterprise onboarding time from 8 weeks to 4 weeks. Key Result 3: Achieve 90% CSAT for enterprise accounts." At quarter end, KR1: 3/5, KR2: 6 weeks (not 4), KR3: 88% CSAT. How should the engineering team read these results?
OKRs (Objectives and Key Results) — the goal-setting framework:
OKRs were popularised by John Doerr and adopted by Google, Intel, and most major tech companies.
The two components:
• Objective — qualitative, inspiring, directional: "Grow into the enterprise market"
• Key Results — quantitative, measurable, time-bound: specific numbers that prove the objective was achieved
The "moon shot" philosophy: Google's OKR guideline: if you consistently achieve 100% of your key results, your goals aren't ambitious enough. Target 70% completion as "healthy ambitious." 100% completion often means the bar was set too low.
Reading the results in this exercise:
• KR1: 3/5 logos = 60% — below goal but meaningful progress; blockers worth understanding
• KR2: 8→6 weeks — a 2-week reduction against the targeted 4-week reduction (8→4), i.e. 50% of the goal; directionally strong
• KR3: 88% vs 90% CSAT — very close; 97.8% of target, essentially achieved
Overall: the direction is right. The quarter generated real enterprise signal. Retrospect on blockers, not blame.
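The per-KR readings above follow a common Google-style convention: score each KR as the fraction of the planned movement actually achieved, on a 0.0-1.0 scale. A sketch with the quarter's numbers (treating 0 as the baseline for KR1 and KR3 is an assumption):

```python
def kr_score(start, target, actual):
    """Fraction of the planned start->target movement actually achieved."""
    return (actual - start) / (target - start)

kr1 = kr_score(0, 5, 3)      # enterprise logos: target 5, signed 3
kr2 = kr_score(8, 4, 6)      # onboarding weeks: 8 -> 4 targeted, reached 6
kr3 = kr_score(0, 90, 88)    # enterprise CSAT %: target 90, got 88

avg = (kr1 + kr2 + kr3) / 3
print(f"KR1={kr1:.2f} KR2={kr2:.2f} KR3={kr3:.2f} avg={avg:.2f}")
```

The average lands right around 0.69 — close to the 70% that the "moon shot" philosophy treats as healthy ambitious, which supports the "direction is right" reading.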
OKR vocabulary:
• Objective — qualitative direction-setting statement
• Key Result (KR) — measurable milestone proving the objective
• OKR cycle — typically quarterly (Q1-Q4) with annual objectives
• stretch goal — an ambitious target designed to push the team beyond comfortable delivery
• company-level OKR / team OKR / individual OKR — OKRs cascade through the organisation
• OKR check-in — a regular (weekly or biweekly) review of OKR progress
• KPI (Key Performance Indicator) — ongoing operational metrics (different from OKRs, which are cyclical goals)