5 exercises — choose the best-structured answer to common SOC Analyst and Threat Hunter interview questions. Focus on triage methodology, hunting hypothesis structure, IOC/IOA precision, and professional handoff communication.
Structure for SOC analyst questions
Enrich before acting: IP reputation, user identity, asset, correlation
Scale response to evidence: contain proportionally, escalate when justified
Hypothesis format: specific scenario → query → three correlation steps → document result
Avoid sweeping blocks: explain the false negative risk before any action
1 / 5
The hiring manager asks: "Walk me through how you would triage a high-priority alert: 500 failed login attempts against our admin portal in the last 10 minutes." Which answer best demonstrates a structured triage process?
Option B is the strongest: it follows a structured enrichment → correlation → escalation decision tree, distinguishes between attack types (brute force vs. credential stuffing) with concrete criteria, identifies the key decision point (any successful login = immediate escalation), considers network context (internet-facing vs. internal), and scales the response to the evidence. Option A is potentially harmful — blocking a legitimate pentest IP or a NAT gateway could cause a service outage. Option C relies solely on reputation data and has a false negative risk (a new attacker IP not in any blacklist would be missed). Option D abdicates triage decision-making — a Tier 1 analyst should make the dismiss/monitor/escalate decision, not pass everything up. Key structure: enrich (IP, distribution) → classify attack type → check for success → assess context → contain proportionally.
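The enrich → classify → check for success → contain proportionally flow can be sketched as a decision function. This is an illustrative sketch only; the field names, thresholds, and disposition labels are assumptions, not a production detection rule.

```python
from dataclasses import dataclass

@dataclass
class LoginBurst:
    """Enriched view of a failed-login alert (field names are illustrative)."""
    failed_count: int
    distinct_source_ips: int
    distinct_usernames: int
    any_success_after_failures: bool
    source_on_known_pentest_list: bool
    portal_internet_facing: bool

def triage(burst: LoginBurst) -> str:
    """Return a triage disposition following enrich -> classify -> escalate."""
    # Pre-authorised activity (e.g. a scheduled pentest) can be dismissed only
    # after confirming the engagement window -- never on the IP alone.
    if burst.source_on_known_pentest_list:
        return "verify-engagement-then-dismiss"
    # Any success following the failure burst is the key decision point.
    if burst.any_success_after_failures:
        return "escalate-immediately"
    # Classify the attack type from the distribution of sources and targets.
    if burst.distinct_usernames > 50 and burst.distinct_source_ips > 50:
        attack = "credential-stuffing"   # many accounts, many source IPs
    elif burst.distinct_usernames <= 3:
        attack = "brute-force"           # a few accounts hammered repeatedly
    else:
        attack = "unclassified"
    # Scale the response to the evidence: internet-facing portals warrant
    # proportional containment; internal sources warrant deeper investigation.
    if burst.portal_internet_facing:
        return f"contain-proportionally:{attack}"
    return f"investigate-internal-source:{attack}"
```

Note that the pentest check verifies the engagement rather than dismissing outright, mirroring the point that blocking (or ignoring) on IP alone is the harmful shortcut in Option A.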
2 / 5
The interviewer asks: "Describe a threat hunting hypothesis you would run to detect a potential insider threat exfiltrating data." Choose the most technically rigorous answer.
Option B is the strongest: it follows the hypothesis-based hunting format precisely — a specific, testable scenario with four concrete query strategies, each targeting a different phase of the kill chain (access → staging → exfiltration). It uses correct vocabulary (3 standard deviations from baseline, 90-day baseline window, DLP correlation, HR data join), and importantly acknowledges that a null result is a valid and documented outcome — a mark of professional threat hunting. Option C is passive — UEBA is a detection tool, not a substitute for targeted hypothesis-based hunting; UEBA baselines can be subverted by a patient insider who changes behaviour gradually. Option A shows the correct instinct but lacks the hunt query structure and kill-chain staging vocabulary. Option D describes an investigation step, not a hunt. Key structure: formulate testable hypothesis → baseline deviation → volume anomaly → staging indicators → exfiltration correlation → document result (including null).
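The volume-anomaly step ("3 standard deviations from a 90-day baseline") can be sketched as follows. The per-user data shapes are assumptions for illustration; in practice this logic would run as a SIEM or data-lake query against outbound transfer logs.

```python
import statistics

def volume_anomalies(daily_bytes_by_user, recent_bytes_by_user, sigma=3.0):
    """
    Flag users whose recent outbound volume exceeds mean + sigma * stdev of
    their OWN baseline (per-user, not global -- a global baseline hides the
    insider whose normal volume is already high or low).

    daily_bytes_by_user:  {user: [bytes_per_day, ...]}  -- 90-day baseline window
    recent_bytes_by_user: {user: bytes}                 -- period under test
    """
    flagged = {}
    for user, history in daily_bytes_by_user.items():
        if len(history) < 2:
            continue  # not enough history to baseline; document as a null result
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        threshold = mean + sigma * stdev
        recent = recent_bytes_by_user.get(user, 0)
        if recent > threshold:
            flagged[user] = {"recent": recent, "threshold": round(threshold, 1)}
    return flagged
```

An empty result from this query is still a documented outcome of the hunt, per the answer's point about null results; the flagged set would then be correlated against staging indicators and DLP/HR data rather than escalated directly.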
3 / 5
The interviewer asks: "What is the difference between an IOC and an IOA, and which is more valuable for detection? Give an example of each." Choose the most precise answer.
Option B is the strongest: it defines each term precisely (forensic artefact vs. behavioural pattern; retrospective vs. prospective), gives a concrete technical example for each (SHA256 hash vs. process parent-child chain with network connection), explains the specific limitations of each (IOC evasion via recompilation, IOA analysis cost), and describes how a mature SOC uses both in layers. Option D is accurate but uses abstract framing ("what tools vs. how attackers behave") without the concrete examples or the limitation analysis. Option C correctly identifies blocking vs. hunting use cases but doesn't define the terms precisely and misses the retrospective/prospective dimension. Option A is correct in substance but lacks examples and the layered defense framing. Key tip: IOC = retrospective artefact (proves compromise) → hash/IP/domain → easy to evade; IOA = prospective behaviour (attack in progress) → process chain/network pattern → harder to evade; layer both.
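The contrast can be made concrete with two matching functions: one on a retrospective artefact, one on a behavioural pattern. The hash, process names, and event fields below are made-up illustrations, not real indicators.

```python
# Hypothetical IOC feed entry -- NOT a real malware hash.
KNOWN_BAD_HASHES = {"e3b0c44298fc1c149afbf4c8996fb924"}

def ioc_match(event):
    """IOC: retrospective artefact match. Trivially evaded -- recompiling the
    malware changes the hash and the indicator goes stale."""
    return event.get("sha256") in KNOWN_BAD_HASHES

def ioa_match(event):
    """IOA: prospective behavioural pattern -- an Office application spawning
    a shell that then makes an outbound connection. Survives hash changes,
    but costs more to evaluate and to tune."""
    office_parents = {"winword.exe", "excel.exe", "outlook.exe"}
    shells = {"powershell.exe", "cmd.exe"}
    return (
        event.get("parent_process", "").lower() in office_parents
        and event.get("process", "").lower() in shells
        and event.get("network_connection", False)
    )
```

A layered SOC runs both: the IOC check blocks the known-bad cheaply, while the IOA check catches the recompiled variant the hash list misses.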
4 / 5
The hiring manager asks: "How do you tune a SIEM to reduce false positives without creating false negatives?" Which answer best demonstrates operational discipline?
Option B is strongest: it frames tuning as a diagnostic process (categorise the cause before choosing the remedy), introduces a critical best practice (suppression specificity — narrow context to avoid blind spots), describes measurement methodology (track before and after), and includes validation (red team/purple team after tuning). The narrow suppression principle is particularly important — broad suppressions are a real-world source of false negative incidents. Option C (ML scoring) is a genuine complement to rule tuning but doesn't eliminate the need for deliberate rule management. Option D is correct but describes only the baseline step — it doesn't address a methodology for ongoing tuning decisions. Option A is the most dangerous answer: arbitrarily raising thresholds is how real attacks get missed — a patient attacker simply stays below the new threshold. Key structure: categorise cause → suppression specificity (narrow) → measure before/after → validate with purple team.
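The suppression-specificity principle and the before/after measurement can be sketched as below. The rule ID, host name, and subnet are hypothetical, and a real SIEM would express suppressions in its own rule language rather than Python; the point is the shape of the logic.

```python
def narrow_suppression(alert):
    """Suppress ONLY the known-benign context: this rule, from this scanner
    host, to this subnet. Anything outside that context still fires."""
    return (
        alert["rule_id"] == "R-1042"
        and alert["src_host"] == "vuln-scanner-01"
        and alert["dst_subnet"] == "10.20.0.0/16"
    )

def broad_suppression(alert):
    """Anti-pattern: suppressing the whole rule creates a blind spot -- a real
    attacker who trips R-1042 from any other host is now invisible."""
    return alert["rule_id"] == "R-1042"

def false_positive_rate(alerts):
    """Measure before/after tuning: fraction of surfaced alerts closed benign."""
    surfaced = [a for a in alerts if not narrow_suppression(a)]
    if not surfaced:
        return 0.0
    benign = sum(1 for a in surfaced if a["disposition"] == "benign")
    return benign / len(surfaced)
```

Comparing `false_positive_rate` over the same alert sample before and after a tuning change is the measurement step; validating with a purple-team exercise then confirms the suppression did not also hide the true positives.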
5 / 5
The interviewer asks: "Describe how you would perform an effective shift handoff to the oncoming analyst. What information must be handed off?" Choose the most professionally complete answer.
Option B is the strongest: it identifies five distinct categories of information that must transfer between shifts (queue status, open investigations, watchlists, environmental changes, pending escalations), calls out the often-missed items (watchlists and new suppressions are invisible to the incoming analyst without explicit briefing), and recommends the verbal + written format for different types of context. The observation that the incoming analyst won't see watchlists unless they're told is a real operational gap that causes incidents to be missed. Option C relies entirely on the ticket system, which doesn't capture soft context (analyst intuition, monitoring watchlists, recently added suppressions). Option D is a summary email approach — better than nothing, but lacks the live verbal handoff component and the watchlist/suppression transparency. Option A is the minimal acceptable handoff. Key structure: queue status → open investigations (with soft context) → watchlists → environmental changes (especially new suppressions) → pending escalations → verbal + written.
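The five handoff categories can be captured as a structured record with a completeness check, so an empty section is confirmed as "none" rather than silently skipped. The field names and contents are illustrative; this is a sketch of the checklist, not a ticketing-system integration.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ShiftHandoff:
    """The five categories from the answer above (contents illustrative)."""
    queue_status: str                                          # open/acked counts
    open_investigations: list = field(default_factory=list)    # incl. soft context
    watchlists: list = field(default_factory=list)             # invisible unless briefed
    environmental_changes: list = field(default_factory=list)  # esp. new suppressions
    pending_escalations: list = field(default_factory=list)

    def missing_sections(self):
        """Return empty sections so the outgoing analyst must explicitly
        confirm 'nothing to hand off' for each, verbally and in writing."""
        return [name for name, value in asdict(self).items() if not value]
```

The `missing_sections` check targets exactly the gap the answer describes: a watchlist or a new suppression that exists only in the outgoing analyst's head never reaches the incoming shift unless the checklist forces the question.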