5 exercises — the vocabulary that bridges engineering and product: churn and SaaS metrics, feature flags, OKRs, A/B testing, and freemium pricing. Essential for developers working in product-driven teams.
1 / 5
A product manager announces in a team meeting: "Our monthly churn rate increased to 8% last quarter — we need to investigate why customers are leaving." Which definition of churn rate is correct?
Churn rate (also called customer attrition rate) is the percentage of customers who cancel or stop using a SaaS product in a given time period.

Formula: Churn Rate = (Customers lost during period / Customers at start of period) × 100

High churn is the enemy of SaaS growth — even modest churn compounds aggressively: 8% monthly churn means ~63% of the customer base is lost over a year.

The SaaS metrics family:
Churn rate — customers leaving.
Retention rate — the inverse of churn: 100% − churn rate. "We improved retention from 91% to 94% by improving onboarding."
MRR (Monthly Recurring Revenue) — total predictable monthly revenue from subscriptions.
ARR (Annual Recurring Revenue) — MRR × 12.
LTV / CLV (Customer Lifetime Value) — average revenue from one customer over their entire relationship.
CAC (Customer Acquisition Cost) — cost to acquire one new customer.

The key ratio: LTV/CAC > 3 is generally considered healthy.

In conversation: "We can tolerate higher CAC if our LTV projection is strong and churn stays below 3%."
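The formulas above can be sketched in a few lines of Python; the LTV and CAC figures at the end are hypothetical numbers for illustration only:

```python
def churn_rate(lost: int, at_start: int) -> float:
    """Churn Rate = (customers lost during period / customers at start) x 100."""
    return lost / at_start * 100

def annual_loss_from_monthly_churn(monthly_churn_pct: float) -> float:
    """Churn compounds: percent of the customer base lost over 12 months."""
    retained = (1 - monthly_churn_pct / 100) ** 12
    return (1 - retained) * 100

# 8% monthly churn compounds to ~63% of customers lost in a year.
print(round(annual_loss_from_monthly_churn(8), 1))  # 63.2

# LTV/CAC health check (illustrative numbers): > 3 is generally healthy.
ltv, cac = 1800, 450
print(ltv / cac > 3)  # True (ratio = 4.0)
```

Note how the compounding makes small churn improvements valuable: at 3% monthly churn the same formula gives ~31% annual loss, less than half the damage of 8%.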
2 / 5
A developer asks a product manager: "Should we roll out the dark mode feature to all users at once, or can we use a feature flag to release it gradually?" What is a feature flag (also called a feature toggle)?
A feature flag (feature toggle, feature switch) is a technique for turning features on or off in production via a configuration change — without redeploying code.

Feature flags enable:
Gradual rollout — enable for 1% → 10% → 50% → 100% of users, monitoring each step.
Canary releases — test on a small set of users before full rollout.
A/B testing — show version A to group 1, version B to group 2.
Kill switch — instantly disable a broken feature without reverting code.
Beta access — enable only for internal users or paying beta testers.

Feature flag tools: LaunchDarkly, Flagsmith, Unleash, GrowthBook, or a simple database config table.

Related vocabulary:
Rollout — progressively deploying a feature to more users.
Rollback — reverting to the previous version.
Dark launch — code is deployed but the feature is hidden from users (the flag is off).

In conversation: "We shipped the new checkout flow behind a flag two weeks ago — it's currently enabled for 20% of users and conversion is up 3%."
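The gradual-rollout mechanic can be sketched with stable percentage bucketing: hash each user into a bucket from 0 to 99, so the same user always gets the same answer as the rollout percentage grows. This is a minimal illustration of the idea, not how any particular tool implements it; the flag and user names are made up:

```python
import hashlib

def is_enabled(flag_name: str, user_id: str, rollout_pct: int) -> bool:
    """Stable percentage rollout: same (flag, user) always lands in the
    same bucket, so raising rollout_pct only ever adds users."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return bucket < rollout_pct

# Gradual rollout: raise rollout_pct from 1 -> 10 -> 50 -> 100 over time.
# rollout_pct=0 doubles as a kill switch; 100 enables everyone.
print(is_enabled("dark_mode", "user-1", 100))  # True
print(is_enabled("dark_mode", "user-1", 0))    # False
```

Hashing the flag name together with the user ID means different flags slice the user base differently, so the same 10% of users aren't always the guinea pigs.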
3 / 5
The CEO reads from a board report: "Our North Star metric this quarter is to increase Weekly Active Users by 25%. To achieve this, each team has defined their OKRs accordingly." What does OKR stand for, and what is it?
OKR = Objectives and Key Results, a goal-setting framework popularised by Intel and Google.

Structure:
Objective — what you want to achieve (qualitative, inspirational, time-bound). Example: "Make our onboarding experience best-in-class."
Key Results (KRs) — how you measure progress toward the objective (quantitative, specific, verifiable). Example:
KR1: Reduce time-to-first-value from 14 days to 5 days.
KR2: Increase 30-day activation rate from 45% to 65%.
KR3: Reduce onboarding-related support tickets by 40%.

OKR vs. KPI:
OKRs — set for a quarter or year, aspirational (60–70% achievement is good), focused on outcomes and change.
KPIs (Key Performance Indicators) — ongoing metrics that track the health of the business (revenue, uptime, NPS), not necessarily tied to a specific goal.

OKR vocabulary: "set OKRs", "OKR cycle", "check-in on OKRs", "confidence score" (how likely you are to hit the KR), "stretch goal" (an ambitious target intentionally set above what's easy).

In practice: "Our team's OKR this quarter is to increase API reliability — KR1 is reducing error rate below 0.1%."
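During check-ins, KR progress is commonly scored as how far the metric has moved from its starting value toward the target. A minimal sketch of that calculation, using KR1 from the example above (the "currently 8 days" reading is hypothetical):

```python
def kr_progress(start: float, target: float, current: float) -> float:
    """Fraction of the way from start to target, clamped to 0.0-1.0.
    Works for both increasing and decreasing targets."""
    if target == start:
        return 1.0
    return max(0.0, min(1.0, (current - start) / (target - start)))

# KR1: reduce time-to-first-value from 14 days to 5 days; currently 8 days.
print(round(kr_progress(start=14, target=5, current=8), 2))  # 0.67

# KR2: increase 30-day activation from 45% to 65%; currently 52%.
print(round(kr_progress(start=45, target=65, current=52), 2))  # 0.35
```

Because the formula divides by (target − start), it handles "reduce X" and "increase X" KRs with the same code, which is why it clamps rather than assuming the metric moves in one direction.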
4 / 5
A growth team is running an experiment described as: "We split users randomly into two groups: Group A sees the original sign-up page, Group B sees the new version with a video walkthrough. We measure 14-day retention for both." What is this experiment called?
An A/B test (split test) is a controlled experiment that compares two versions (A = control, B = variant) by randomly splitting users into groups and measuring a target metric.

Key concepts:
Control group (A) — sees the original/current version.
Variant group (B) — sees the new version being tested.
Target metric — the outcome you're measuring (retention, conversion, click-through rate).
Statistical significance — the observed difference is unlikely to be due to random chance (typically p < 0.05).
Sample size — must be large enough to detect the expected effect.

Easily confused terms:
Multivariate test — tests multiple variables simultaneously (e.g., different headlines and different CTAs at the same time); requires much larger samples.
Canary deployment — an infrastructure technique for gradual rollout, not designed to compare metrics between groups.

A/B testing in SaaS: "We ran an A/B test on the onboarding email — version B (personalised subject line) improved open rate by 18% with 95% confidence at n = 50,000."

Tools: Optimizely, VWO, Google Optimize, Statsig, GrowthBook.
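One common way to check significance for a conversion-style metric is a two-proportion z-test with a normal approximation; a minimal sketch, with illustrative numbers rather than real experiment data:

```python
from math import erf, sqrt

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rate
    between control (A) and variant (B), via a pooled z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Illustrative: A converts 1000/10000 (10%), B converts 1150/10000 (11.5%).
p = ab_test_p_value(conv_a=1000, n_a=10000, conv_b=1150, n_b=10000)
print(p < 0.05)  # True — significant at the usual threshold
```

The same inputs also show why sample size matters: shrink both groups to 1,000 users with the same rates and the p-value rises above 0.05, so the identical lift is no longer distinguishable from noise. In practice the tools listed above handle this arithmetic for you.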
5 / 5
A SaaS startup's pricing page shows three tiers: Free (limited features), Pro ($29/month), Enterprise (custom pricing). What pricing model does the Free tier represent?
Freemium (free + premium) is a SaaS business model where a permanently free tier provides enough value to attract users, while advanced features are gated behind paid plans. The goal: acquire users at minimal acquisition cost, then convert a percentage to paying customers.

Classic examples: Slack (free for small teams, paid for large orgs), Dropbox (free storage, paid for more), GitHub (free for public repos, paid for private/enterprise), Figma (free for small teams, paid for larger ones).

Key freemium metrics:
Free-to-paid conversion rate — the industry average is roughly 2–5%.
PQL (Product Qualified Lead) — a free user whose behaviour signals they're ready to pay (hit usage limits, used a premium feature, invited teammates).

Freemium vs. free trial:
Freemium — free forever with feature limits; no time limit.
Free trial — full product access, but time-limited (14 days, 30 days).

In product conversation: "Our freemium conversion rate is 4% — above the industry average — but we want to improve it by surfacing premium features earlier in the user journey."
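The two metrics above can be sketched as code. The PQL rule here is a made-up heuristic over the behavioural signals listed (real products tune their own signals and thresholds), and the conversion numbers are illustrative:

```python
def is_pql(hit_usage_limit: bool, used_premium_feature: bool,
           teammates_invited: int) -> bool:
    """Flag a free user as a Product Qualified Lead when two or more
    buying signals are present. Signals and threshold are illustrative."""
    signals = [hit_usage_limit, used_premium_feature, teammates_invited >= 2]
    return sum(signals) >= 2

def free_to_paid_conversion(paid_customers: int, free_users: int) -> float:
    """Freemium conversion rate in percent; industry average is ~2-5%."""
    return paid_customers / free_users * 100

# A user who hit their limit and used a premium feature is a PQL.
print(is_pql(True, True, 0))                    # True
# 40 paying customers out of 1,000 free users -> 4% conversion.
print(free_to_paid_conversion(40, 1000))        # 4.0
```

Routing PQLs to sales (or to an in-app upgrade prompt) is how teams act on the "surface premium features earlier" idea from the conversation above.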