4 exercises — narrating charts, translating uptime to human time, presenting negative metrics with context, and using hedge language for uncertain data.
Data presentation language
Chart narration: "What this chart shows is [topic]. The [element] represents [baseline]…"
Uptime for non-tech: % → hours/year → hours/month → vs. SLA
Negative metrics: Exact change → cause → recovery timeline → reference to plan
Uncertain cause: "The most likely contributing factor is… however we're testing to confirm"
Always give both percentages AND human-scale equivalents for executives
1 / 4
You're presenting a bar chart showing that feature adoption increased from 12% to 34% after a UX redesign. Which narration is most effective?
Option C demonstrates data narration — turning a visual into a story:
1. Name what the chart shows: "feature adoption before and after the UX redesign", so the audience doesn't have to figure out the axes
2. Explain the baseline: "12% of active users" gives the starting context
3. State the change with the multiplier: "34%" and "2.8× increase in 8 weeks"; give percentages AND ratios because each audience member processes differently
4. Provide the causal explanation: "driven primarily by making the export button visible"; not just what changed, but WHY it changed
Why A and B fail: "Shows our data" and "the bar went up" force the audience to do the interpretive work you should have already done for them
Why D fails: "This is a good result" is a value judgment without explanation — what does "good" mean in the context of your target KPI?
Chart narration formula: "What this [chart type] shows is [topic]. The [reference element] represents [baseline with context]. After [change], that became [new value] — a [multiplier or %] [increase/decrease] over [timeframe], driven by [cause]."
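As a minimal sketch, here is that formula filled in programmatically with this exercise's numbers; the chart element ("left bar") and the exact wording are hypothetical illustrations, not a prescribed phrasing:

```python
# Sketch: fill the chart narration template with this exercise's numbers.
# The "left bar" element and all values are hypothetical illustrations.

def narrate_change(chart_type: str, topic: str, baseline: float,
                   new_value: float, timeframe: str, cause: str) -> str:
    multiplier = new_value / baseline  # 34 / 12 ≈ 2.8
    direction = "increase" if new_value >= baseline else "decrease"
    return (
        f"What this {chart_type} shows is {topic}. "
        f"The left bar represents our baseline: {baseline:.0f}% of active users. "
        f"After the redesign, that became {new_value:.0f}%, a {multiplier:.1f}× "
        f"{direction} over {timeframe}, driven by {cause}."
    )

print(narrate_change("bar chart",
                     "feature adoption before and after the UX redesign",
                     12, 34, "8 weeks", "making the export button visible"))
```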
2 / 4
You need to present 99.94% uptime to a non-technical VP. Which presentation is most effective?
Option C uses the percentage → human time → benchmark translation technique for non-technical executives:
1. Starts with the percentage: "99.94%", the official metric your audience may know
2. Converts to human time: "5.3 hours across the full year" and "26 minutes per month"; a VP can understand "26 minutes a month" (see the conversion sketch after this list)
3. Connects to the SLA target: "exceeded our target by 0.04pp" shows you know the commitment and are tracking against it
4. Translates the delta into business terms: "3.5 fewer downtime hours per year than our contractual minimum"; now the VP can say "we're performing 3.5 hours/year better than we promised clients"
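A minimal sketch of the percentage → human time → benchmark arithmetic; the 99.94% uptime and 99.90% SLA figures are this exercise's hypothetical values:

```python
# Sketch: translate an uptime percentage into human-scale downtime.
# 99.94% uptime and a 99.90% SLA are this exercise's hypothetical figures.

HOURS_PER_YEAR = 24 * 365  # 8760; ignores leap years for simplicity

def downtime_summary(uptime_pct: float, sla_pct: float) -> str:
    downtime_h = (1 - uptime_pct / 100) * HOURS_PER_YEAR   # ≈ 5.3 h/year
    downtime_min_month = downtime_h * 60 / 12              # ≈ 26 min/month
    sla_downtime_h = (1 - sla_pct / 100) * HOURS_PER_YEAR  # ≈ 8.8 h/year
    margin_h = sla_downtime_h - downtime_h                 # ≈ 3.5 h/year
    return (
        f"{uptime_pct}% uptime = {downtime_h:.1f} hours of downtime per year, "
        f"about {downtime_min_month:.0f} minutes per month. That beats our "
        f"{sla_pct}% SLA by {uptime_pct - sla_pct:.2f}pp, i.e. {margin_h:.1f} "
        f"fewer downtime hours per year than our contractual minimum."
    )

print(downtime_summary(99.94, 99.90))
```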
Why A fails: States the number without context — what does 99.94% mean at human scale?
Why B fails: "Very high" is a vague evaluation without a benchmark
Why D fails: "0.06% of the year" is even less intuitive than 99.94%; swapping one abstract percentage for another still leaves the audience to convert it to hours themselves, which is exactly the work the presenter should have done
3 / 4
Your team's deployment frequency dropped from 14/week to 6/week last month due to a refactoring initiative. You need to present this to stakeholders. Which framing is most professional?
Option C uses the context → plan → future promise → tracking formula for presenting negative metrics:
1. States the change factually: "14 to 6 per week" with no hedging, no softening
2. Provides the causal context: "we front-loaded refactoring of test infrastructure" explains WHY without deflecting
3. Gives a specific recovery timeline: "return to 14+ by end of April" and "20+/week by Q3"; stakeholders can hold you to a date
4. References the original plan: "tracking against our original project plan — currently on track" shows this dip was anticipated, not a surprise
Why A fails: "Dropped significantly" without context invites questions you should have answered preemptively
Why B fails: "Not really our fault" is defensive and undermines trust — even if it's technically accurate
Why D fails: "Quality rather than quantity" is a vague justification that sounds like an excuse; it doesn't explain the metric drop or provide a recovery plan
Negative metric formula: [Exact change] → [Why it happened] → [When it recovers + to what level] → [Reference to plan to show it was expected]
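A minimal sketch of that formula as a fill-in template; the metric values, dates, and plan reference below are the exercise's hypothetical details:

```python
# Sketch: assemble a negative-metric statement from the four formula parts.
# The metric values, dates, and plan reference are hypothetical.

def present_negative_metric(exact_change: str, cause: str,
                            recovery: str, plan_reference: str) -> str:
    return f"{exact_change}. {cause}. {recovery}. {plan_reference}."

print(present_negative_metric(
    exact_change="Deployment frequency went from 14 to 6 per week last month",
    cause="We front-loaded refactoring of our test infrastructure",
    recovery="We expect to return to 14+ per week by end of April and reach "
             "20+ per week by Q3",
    plan_reference="This dip was scheduled in the original project plan, and "
                   "we are currently on track against it",
))
```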
4 / 4
You're presenting a chart showing a 15% improvement in p99 API latency. But you're not sure if the improvement is from your new caching layer or from a traffic pattern change. Which language handles the uncertainty most professionally?
Option D demonstrates professional uncertainty hedging — the language engineers use when data is promising but not yet causal:
1. States the observed fact: "15% improvement in p99 latency since caching deployment" presents the measurement without overclaiming
2. States the hypothesis with appropriate confidence: "most likely contributing factor" doesn't say "definitely" or "probably"
3. Names the confounding variable: "shift in traffic patterns during the same period" shows analytical rigour, not weakness
4. States what you're doing about it: "controlled experiment in staging to separate the signals" turns an uncertainty into a plan
5. Commits to a resolution date: "cleaner attribution by next week's review" means stakeholders aren't left with an open question
Why A fails: Claiming causality ("caching layer improved latency") without stated evidence — if you're wrong, credibility is lost
Why B fails: "We don't know why" without a follow-up plan sounds negligent
Why C fails: "Not confident in these numbers" is too broad — you can be confident in the measurement and uncertain about attribution simultaneously
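Taken together, the five parts can be laid out as a minimal sketch; every specific below (metric, hypothesis, experiment, date) is this exercise's hypothetical detail:

```python
# Sketch: assemble the five-part uncertainty statement from this exercise.
# All specifics (metric, hypothesis, experiment, date) are hypothetical.

def uncertain_result_statement(observation: str, hypothesis: str,
                               confounder: str, next_step: str,
                               resolution: str) -> str:
    return (
        f"We measured {observation}. The most likely contributing factor is "
        f"{hypothesis}; however, {confounder} could also explain part of it. "
        f"We are running {next_step} to separate the signals and expect "
        f"cleaner attribution by {resolution}."
    )

print(uncertain_result_statement(
    observation="a 15% improvement in p99 latency since the caching deployment",
    hypothesis="the new caching layer",
    confounder="a shift in traffic patterns during the same period",
    next_step="a controlled experiment in staging",
    resolution="next week's review",
))
```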
Uncertainty hedging levels:
• High confidence: "The data shows…" / "We can confirm…"
• Medium confidence: "Based on [evidence], the most likely explanation is…"
• Low confidence: "Preliminary data suggests… though this is early"
• Unknown cause but known effect: "We observed X. We have a hypothesis and are testing it."
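As a closing sketch, the hedge can be chosen mechanically from the confidence level; the level names and templates mirror the list above, and the sample inputs are hypothetical:

```python
# Sketch: pick a hedge template by confidence level.
# Level names mirror the list above; the sample inputs are hypothetical.

HEDGES = {
    "high":    "The data shows {claim}.",
    "medium":  "Based on {evidence}, the most likely explanation is {claim}.",
    "low":     "Preliminary data suggests {claim}, though this is early.",
    "unknown": "We observed {observation}. We have a hypothesis and are testing it.",
}

def hedge(level: str, **parts: str) -> str:
    return HEDGES[level].format(**parts)

print(hedge("medium",
            evidence="the deployment timing",
            claim="the caching layer drove the p99 improvement"))
print(hedge("unknown",
            observation="a 15% improvement in p99 latency"))
```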