Discussing AI Risks
2 exercises — articulate AI limitations and risks clearly to executives, product managers, and customers.
0 / 2 completed
AI risk communication framework
- Name the risk precisely — "hallucination" (not just "it can be wrong")
- Explain the mechanism — why it happens, not just that it happens
- Quantify when possible — "accurate 90% of the time on X, lower on Y"
- Propose the mitigation — every risk needs a design response
- Match audience — executives: business impact; engineers: technical mitigation
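As an illustration only, the framework can be read as a required-fields checklist. The sketch below encodes it as a hypothetical Python dataclass (RiskBrief, to_summary, and all field names are made-up for this example, not any library's API); the point is that every element is mandatory, so a risk write-up that omits the mechanism or the mitigation simply cannot be constructed.

```python
from dataclasses import dataclass

@dataclass
class RiskBrief:
    """One risk, expressed per the five-part framework. Hypothetical structure."""
    name: str            # precise risk name, e.g. "hallucination"
    mechanism: str       # why it happens, not just that it happens
    quantification: str  # measured rate, or "unquantified" stated openly
    mitigation: str      # the design response this risk demands
    audience_framing: dict[str, str]  # audience -> tailored message

    def to_summary(self, audience: str) -> str:
        """Render the brief for one audience, falling back to the mechanism."""
        framing = self.audience_framing.get(audience, self.mechanism)
        return f"{self.name}: {framing} Mitigation: {self.mitigation}"

hallucination = RiskBrief(
    name="hallucination",
    mechanism="token prediction optimises for plausibility, not truth",
    quantification="accurate ~90% on task X, lower on Y (illustrative numbers)",
    mitigation="add a human verification step before publishing",
    audience_framing={
        "executive": "We risk acting on confident-sounding but wrong output.",
        "engineer": "Gate generation behind a review queue with spot checks.",
    },
)

print(hallucination.to_summary("executive"))
```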
Exercise 1 / 2
A colleague asks: "Why can't we just trust the AI — it's been trained on everything?" Which response best articulates the hallucination risk to a non-technical stakeholder?
Option B is the professional explanation. It:
• Explains the mechanism — not just "it makes mistakes" but why: token prediction optimises for plausibility, not truth
• Names the risk precisely — "high confidence presentation of incorrect information" (hallucination)
• Proposes a mitigation — "define a verification step" (a sketch follows this list)
• Contextualises by domain — code, legal, and financial contexts, where errors are especially costly
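To make that mitigation concrete, here is a minimal sketch of a verification gate, assuming hypothetical helpers generate_draft and needs_human_review (neither is a real API). The structure is the point: no model output in a high-stakes domain reaches its destination without passing an explicit review gate.

```python
HIGH_STAKES_DOMAINS = {"code", "legal", "financial"}

def generate_draft(prompt: str) -> str:
    """Placeholder for a model call; returns an unverified draft."""
    return f"[draft answer to: {prompt}]"

def needs_human_review(domain: str, confidence: float) -> bool:
    # High-stakes domains always get review; elsewhere, review only when
    # the model's (self-reported, unreliable) confidence is low.
    return domain in HIGH_STAKES_DOMAINS or confidence < 0.8

def answer(prompt: str, domain: str, confidence: float) -> str:
    draft = generate_draft(prompt)
    if needs_human_review(domain, confidence):
        return f"PENDING REVIEW: {draft}"  # queue for a human; do not publish
    return draft

print(answer("Summarise this contract clause", domain="legal", confidence=0.95))
```

Note the design choice: the gate defaults to review whenever the domain is high-stakes, regardless of how confident the model claims to be, because confident presentation is exactly the failure mode being mitigated.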
How to explain AI bias as a second risk:
"LLMs reflect patterns in their training data. If that data over-represents certain groups, languages, or perspectives, the model's outputs will too. This matters for: hiring tools (may encode historical biases), user-facing content (may work better in English than other languages), and recommendation systems."
The key communication skill: translate "the model hallucinates" into business risk language: "Without a human verification step, we risk publishing or acting on incorrect information presented with false confidence."