5 exercises — the vocabulary every developer needs to give and receive code reviews professionally in English: approval phrases, design principles, code structure terms, and review etiquette.
1 / 5
A reviewer writes in a pull request comment: "LGTM — minor nit on the variable naming but nothing blocking. Ship it." What does LGTM mean, and what is a nit?
LGTM = "Looks Good To Me" — the most common informal code review approval phrase. It signals the reviewer is satisfied and the change can be merged. It was popularized by Google's engineering culture and is now used across the industry. A nit (short for "nitpick") is a very minor comment — often style, naming, or formatting — that the reviewer explicitly labels as non-blocking. The label matters: without "nit:", the author doesn't know whether the comment must be addressed before merging.

Code review comment vocabulary:
- Blocking comment — must be addressed before merge.
- Non-blocking / nit — optional improvement.
- Question — the reviewer is asking for clarification, not requesting a change.
- Suggestion — the reviewer proposes an alternative.
- FYI / for context — informational, no action required.
- TODO — record it for future work; don't fix it now.

Best practice: label every comment with its intent. An unlabelled comment creates ambiguity and leads to unnecessary back-and-forth.
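To make the labels concrete, here is a hedged sketch of what labeled comments might look like on a PR (the code details are hypothetical, not from the exercise):

nit: user_id → userId would match the naming in the rest of this file. Non-blocking.
question: why do we retry here but not in the call above? Just trying to understand the intent.
suggestion: a lookup table might read more clearly than this if/else chain. Take it or leave it.
blocking: this logs the raw API token; please redact it before merge.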
2 / 5
A senior engineer comments on a PR: "This logic is duplicated in three places — could you extract it into a shared utility function? If not now, please add a TODO with a ticket reference." What principle is the reviewer applying?
DRY (Don't Repeat Yourself) is a software design principle from "The Pragmatic Programmer" (Hunt & Thomas, 1999): "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." When logic is duplicated, a bug fix or requirement change must be applied in multiple places — one missed location introduces inconsistency. (A sketch of such an extraction follows at the end of this answer.)

Related principles often cited in code reviews:
- SOLID — Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, Dependency Inversion.
- KISS — Keep It Simple; don't over-engineer a solution.
- YAGNI (You Aren't Gonna Need It) — don't add functionality before it's actually needed.
- Separation of Concerns (SoC) — different responsibilities should live in different modules.
- Law of Demeter — objects should only call methods on their direct collaborators.

Code review culture note: when asking for a refactor that would delay the PR, it's good practice to offer the TODO alternative — it acknowledges the improvement is real while allowing the current work to ship.
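A minimal sketch of the kind of extraction the reviewer is asking for (the function and module names are hypothetical, not taken from the PR):

// shared/formatPrice.js (hypothetical shared utility)
// Before: this formatting rule was pasted into three modules.
// After: one authoritative implementation that every caller imports.
function formatPrice(amountInCents, currency) {
  return `${currency} ${(amountInCents / 100).toFixed(2)}`;
}

// callers use the shared helper instead of re-implementing the rule
console.log(formatPrice(1999, 'USD')); // "USD 19.99"

If the extraction can't ship now, the TODO alternative would be a comment such as // TODO(<ticket reference>): consolidate with shared/formatPrice.js, left at each duplicated site.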
3 / 5
A reviewer leaves this comment: "This is harder to follow than it needs to be — there are three levels of nesting here. Consider early returns (guard clauses) to flatten the structure." What does early return / guard clause mean?
A guard clause (also called an early return) is a pattern where you check preconditions at the top of a function and return (or throw) immediately if they're not met — so the main logic of the function runs "flat", without deep if-else nesting. In a hypothetical handleUser function, without guard clauses:
function handleUser(user) {
  if (user) {
    if (user.isActive) {
      if (user.hasPermission) {
        // actual logic buried 3 levels deep
      }
    }
  }
}
With guard clauses:
function handleUser(user) {
  if (!user) return;
  if (!user.isActive) return;
  if (!user.hasPermission) return;
  // actual logic at top level
}
Benefits: easier to read (the main path is clear), easier to test (edge cases are explicit), and lower cyclomatic complexity.

Related code review terms:
- Cyclomatic complexity — a measure of the number of independent paths through a function; high complexity means harder to test.
- Cognitive complexity — how hard the flow is for a human to understand, not just a count of branches.
- Happy path — the main successful execution path.
- Edge case — an input or condition that deviates from the typical scenario.
4 / 5
In a team code review discussion, someone says: "We should treat this as a drive-by comment — the existing code was already like this before this PR. We don't want to block the author for pre-existing issues." What is a drive-by comment in code review context?
A drive-by comment is a code review comment about an issue that exists in the codebase but was not introduced by the current PR. The author is not responsible for the pre-existing code, and blocking their PR for someone else's old issue is unfair and slows delivery.

Best practices:
- Don't block the current PR for drive-by issues.
- If the issue is significant, open a separate issue/ticket and reference it, or note it as a non-blocking suggestion (see the example after this answer).
- The Boy Scout Rule (leave the code better than you found it) is often used to justify fixing drive-by issues — but it's a suggestion, not a license to block PRs.

Related review etiquette vocabulary:
- Bike-shedding (Parkinson's Law of Triviality) — spending disproportionate time discussing minor, trivial details (e.g., variable naming) while ignoring significant issues.
- Rubber stamping — approving a PR without genuine review.
- Reviewer fatigue — reduced attention quality when reviewing too many large PRs.
- PR size — smaller PRs get better reviews; large PRs tend to get rubber-stamped.
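A hedged illustration of how a drive-by issue might be phrased (the code detail is hypothetical, not from the exercise):

non-blocking, pre-existing: this retry loop duplicates the one in the HTTP client, but it predates your change. I've opened a ticket to consolidate them; nothing to do in this PR.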
5 / 5
A reviewer comments: "Could you add a unit test to cover the error path here? The happy path is tested but there's no coverage for when the third-party API returns a 503." What is the error path / sad path, and what contrast does the reviewer make?
In testing and code review, the happy path is the primary, successful execution flow — the scenario where everything works as expected. The sad path (also called the error path or unhappy path) is the execution flow when something goes wrong: an external service fails, the input is invalid, the database is unavailable, and so on.

Testing vocabulary:
- Unit test — tests a single function or unit in isolation, with dependencies mocked.
- Integration test — tests how components interact (e.g., service + database).
- Test coverage — the percentage of code lines, branches, or paths exercised by tests.
- Edge case — a boundary condition or unusual input that may expose bugs.
- Test-Driven Development (TDD) — write the test first, then implement the code to make it pass.
- Mocking / stubbing — replacing dependencies with controlled fake implementations.

Code review best practice: always check whether tests cover both the happy and sad paths. A codebase with 90% test coverage can still have critical bugs if the coverage is concentrated on the happy path. A sketch of a sad-path test for the 503 scenario follows below.
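A minimal sketch of a sad-path unit test (the function under test, the stub, and the Jest-style test API are assumptions for illustration, not from the exercise):

// hypothetical function under test: wraps a third-party API call
async function getUserProfile(userId, httpClient) {
  const response = await httpClient.get(`/users/${userId}`);
  if (response.status === 503) {
    throw new Error('profile service unavailable');
  }
  return response.body;
}

// sad-path unit test: the dependency is stubbed to simulate the 503
test('throws when the third-party API returns a 503', async () => {
  const stubClient = { get: async () => ({ status: 503 }) }; // stub always fails
  await expect(getUserProfile('42', stubClient)).rejects.toThrow('unavailable');
});

Passing httpClient in as a parameter is what makes the stub possible; this is the dependency-injection idea behind the "mocking / stubbing" entry above.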