Key terms: base case, call stack, collision, hash function, invariant, sentinel value
1 / 5
A tech lead reviews a candidate's solution during a whiteboard interview and says: "Your algorithm works correctly, but its time complexity is O(n²) — that won't scale for large inputs." What does time complexity describe?
Time complexity describes how an algorithm's running time scales as the input size (n) grows: not the actual seconds, but the rate of growth. It's expressed in Big O notation:
- O(1): constant time (doesn't depend on input size)
- O(log n): logarithmic (binary search)
- O(n): linear (a single pass through an array)
- O(n log n): linearithmic (efficient sorting: merge sort, quicksort on average)
- O(n²): quadratic (naive nested loops: bubble sort, selection sort)
- O(2ⁿ): exponential (brute-force recursive problems)
The key insight: O(n²) is fine for n = 100 but catastrophic for n = 1,000,000. Space complexity (option B) describes how memory usage grows with input size; the same Big O notation applies. In interviews, candidates are routinely asked to "state the time and space complexity" of their solution. Example: "Sorting this list with merge sort gives us O(n log n) time and O(n) space."
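A minimal sketch of how the growth rate shows up in code: the same question ("does this list contain a duplicate?") answered two ways. The function names are illustrative, not from the source.

```python
def has_duplicate_quadratic(items):
    """O(n^2): compare every pair with nested loops."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicate_linear(items):
    """O(n) average: one pass, remembering seen values in a hash set."""
    seen = set()
    for item in items:
        if item in seen:      # O(1) average membership check
            return True
        seen.add(item)
    return False
```

Both return the same answers; only the growth rate differs. For a million elements the quadratic version does on the order of 10¹² comparisons, the linear one about 10⁶ set operations.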
2 / 5
A senior developer explains a coding technique: "This function calls itself with a smaller version of the problem until it reaches the base case." What programming concept is being described?
Recursion is a technique where a function solves a problem by calling itself with a simpler or smaller version of the problem, until a base case is reached that stops the chain. Classic recursive structures:
- factorial(n) = n × factorial(n-1), with base case factorial(0) = 1
- Tree traversal (each node calls traversal on its children)
- Merge sort (split the array, recursively sort the halves, merge)
Any recursive solution can also be written iteratively (using a loop). Key vocabulary:
- Base case: the terminating condition that stops recursion
- Recursive call: the part where the function calls itself
- Call stack: each recursive call adds a frame; too many calls without reaching the base case cause a stack overflow
- Tail recursion: the recursive call is the last operation; some compilers optimise this to avoid stack growth
Iteration (option A) is the loop-based equivalent, generally more memory-efficient. Memoization (option C) is caching recursive sub-results: related, but not the same concept. Polymorphism (option D) is an OOP concept (same method, different behaviour in subclasses) and is unrelated.
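The factorial example above can be sketched directly, with both halves of the definition labelled:

```python
def factorial(n):
    """n! computed recursively."""
    if n == 0:
        return 1                      # base case: stops the chain
    return n * factorial(n - 1)       # recursive call on a smaller problem
```

Each call waits on the one below it on the call stack until `factorial(0)` returns, after which the multiplications unwind: factorial(4) = 4 × 3 × 2 × 1 = 24.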
3 / 5
During a code review, a comment reads: "We should use a hash map here instead of a list — lookups will be O(1) instead of O(n)." What is a hash map (also called a hash table or dictionary)?
A hash map (hash table, dictionary, associative array) stores data as key → value pairs and provides average O(1) lookup, insertion, and deletion by computing a hash of the key to find the right bucket directly. Implementations by language: Python dict, JavaScript Object or Map, Java HashMap, C++ unordered_map, Go map. Why O(1)? Rather than scanning all elements (as a list does), the hash function maps the key straight to a bucket index. Key vocabulary:
- Hash function: converts a key to an integer index
- Collision: two keys hash to the same index (handled by chaining or open addressing)
- Load factor: ratio of entries to buckets; a high load factor means more collisions, degrading to O(n) in the worst case
Trade-offs vs. a sorted array (option A): hash maps have O(1) lookup but no ordering; sorted arrays have O(log n) search but maintain order. In interviews: "We can use a hash map to store seen values for O(n) total time instead of O(n²) for the nested-loop approach."
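The "store seen values" pattern quoted above is the classic two-sum interview problem. A minimal sketch in Python (the function name is illustrative):

```python
def two_sum(nums, target):
    """Return indices of two numbers that sum to target, or None.

    One pass with a dict: O(n) time instead of O(n^2) nested loops.
    """
    seen = {}  # value -> index of where we saw it
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:       # O(1) average dict lookup
            return seen[complement], i
        seen[value] = i
    return None
```

Each element is visited once, and each dict lookup and insert is O(1) on average, so the whole scan is O(n).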
4 / 5
An interviewer asks: "What is the difference between a stack and a queue?" Which answer is correct?
Stack: LIFO (Last In, First Out). The last element added is the first removed. Think of a stack of plates: you add and remove from the top.
- Operations: push (add to top), pop (remove from top), peek (view the top without removing)
- Use cases: the function call stack, undo/redo, expression parsing, DFS (depth-first search)
Queue: FIFO (First In, First Out). The first element added is the first removed. Think of a queue at a coffee shop: first come, first served.
- Operations: enqueue (add to back), dequeue (remove from front)
- Use cases: task queues, BFS (breadth-first search), message queues (Kafka, RabbitMQ), printer spooling
Both can be implemented using arrays or linked lists. A priority queue is a variant where elements have priorities (typically implemented with a heap); it is not strictly FIFO. In code: "The background job worker processes tasks using a queue, so jobs are handled in the order they were submitted."
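Both behaviours can be sketched in a few lines of Python: a list works as a stack, and `collections.deque` gives O(1) operations at both ends for a queue.

```python
from collections import deque

# Stack: LIFO, using a plain list
stack = []
stack.append("a")        # push
stack.append("b")        # push
top = stack.pop()        # pop: "b" comes off first (last in, first out)

# Queue: FIFO, using deque (list.pop(0) would be O(n))
queue = deque()
queue.append("a")        # enqueue
queue.append("b")        # enqueue
front = queue.popleft()  # dequeue: "a" comes off first (first in, first out)
```

The choice of `deque` over a list for the queue matters: `list.pop(0)` shifts every remaining element, while `deque.popleft()` is constant time.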
5 / 5
A developer says: "I added memoization to the recursive Fibonacci function — it went from exponential to linear time." What does memoization mean?
Memoization is an optimization technique that caches the return values of function calls, so subsequent calls with the same arguments return the cached result instead of recomputing it. It is the top-down approach to dynamic programming. Classic example, naive Fibonacci: fib(5) calls fib(4) + fib(3); fib(4) calls fib(3) + fib(2). So fib(3) is computed twice, and smaller subproblems exponentially more often, giving O(2ⁿ). With memoization: "Have I computed fib(3) before? Yes, return the cached value." Each subproblem is computed once, giving O(n). Key vocabulary:
- Cache / memo table: the storage (usually a hash map) for computed results
- Top-down DP: recursion with memoization
- Bottom-up DP (tabulation): iterative, fills a table from the smallest subproblem to the largest
Option A (an iterative rewrite) is bottom-up DP or iterative conversion: related but distinct. Option D describes divide and conquer, e.g. merge sort splitting into independent halves. In practice: "The API response memoization layer prevents duplicate DB queries within the same request lifecycle."
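The Fibonacci example above, sketched with an explicit memo table passed as an argument:

```python
def fib(n, memo=None):
    """Fibonacci with memoization: O(n) instead of O(2^n)."""
    if memo is None:
        memo = {}                 # memo table: n -> fib(n)
    if n in memo:
        return memo[n]            # cache hit: skip the recomputation
    if n < 2:
        return n                  # base cases: fib(0) = 0, fib(1) = 1
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

In Python, the standard library offers the same effect as a decorator (`functools.lru_cache`), so you rarely hand-roll the memo table outside an interview setting.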