Common Backend Engineering Interview Questions


I've put together questions you'll likely run into when interviewing for backend engineering roles, organized by topic. Every company has its own flavor, but the major branches — Java, Spring, databases, and operations — show up almost everywhere. Here are the items worth reviewing before you apply.

Data Structures & Algorithms#

Almost every interview touches on the fundamentals. Hash collisions, search algorithms, and page replacement are especially common.

Handling Hash Collisions#

No matter how good your hash function is, collisions are inevitable thanks to the pigeonhole principle. How you handle them determines the data structure's performance.

Open Addressing

Resolves collisions by using empty slots within the table itself. It needs no extra memory and is cache-friendly because data sits in contiguous memory.

  • Linear Probing: Scans the next slot sequentially on collision. Simplest to implement, but suffers from primary clustering — once data starts piling up in one area, lookups slow down dramatically.
  • Quadratic Probing: Probes slots 1², 2², 3², … away from the original index. Reduces primary clustering, but keys that hash to the same slot still trace the same probe sequence — that's secondary clustering.
  • Double Hashing: Uses one hash for the position and a second hash for the step size. Produces the most uniform distribution but pays the cost of two computations.

Open addressing performance drops sharply once the load factor exceeds about 0.7, so rehashing at the right time is essential.
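
A minimal linear-probing table makes the mechanics concrete. This is an illustrative sketch (class and method names are ours): no resizing or deletion, so it assumes the table never fills.

```java
// Open addressing with linear probing: on collision, scan forward to the
// next free slot. Sketch only — no resizing/deletion; assumes the table
// never becomes completely full (otherwise the probe loop would not end).
class LinearProbingTable {
    private final int[] keys;
    private final int[] values;
    private final boolean[] used;

    LinearProbingTable(int capacity) {
        keys = new int[capacity];
        values = new int[capacity];
        used = new boolean[capacity];
    }

    void put(int key, int value) {
        int i = Math.floorMod(key, keys.length);
        while (used[i] && keys[i] != key) {
            i = (i + 1) % keys.length;   // probe the next slot on collision
        }
        keys[i] = key;
        values[i] = value;
        used[i] = true;
    }

    Integer get(int key) {
        int i = Math.floorMod(key, keys.length);
        while (used[i]) {
            if (keys[i] == key) return values[i];
            i = (i + 1) % keys.length;   // follow the same probe sequence
        }
        return null;                     // hit an empty slot: key is absent
    }
}
```

Note how a cluster forms: keys 1 and 9 both map to slot 1 in an 8-slot table, so 9 spills into slot 2, which then displaces key 2 as well — that's primary clustering in miniature.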

Separate Chaining

Hangs data off each bucket using a linked list or tree. Java's HashMap is the canonical example, with a few interesting details.

  • Since Java 8, a bucket that grows to 8 nodes is converted from a linked list to a red-black tree (treeify), provided the table has at least 64 buckets; below that threshold, HashMap resizes instead
  • If a tree shrinks to 6 or fewer nodes, it converts back to a linked list (untreeify)
  • The point is to bring worst-case lookup from O(n) down to O(log n)

Interview tip: Expect a follow-up: "Which would you prefer?" There's no single answer — say it depends on data distribution and memory constraints. For example, open addressing wins for small, cache-sensitive datasets, while separate chaining handles frequent collisions and dynamic sizing better.

DFS vs BFS#

The two pillars of graph traversal. Don't just define them — know when to use which.

| Aspect | DFS | BFS |
| --- | --- | --- |
| Data structure | Stack (recursion) | Queue (FIFO) |
| Memory | Proportional to depth | Proportional to width |
| Shortest path | Not guaranteed | Guaranteed (unweighted graphs) |
| Use cases | Backtracking, topological sort, cycle detection | Shortest distance, level-order traversal |

When to pick which?

  • Need to explore every path → DFS
  • Need shortest path (by edge count) → BFS
  • Very deep graph → DFS risks stack overflow
  • Very wide graph → BFS risks memory blowup

For weighted shortest paths, you'll want Dijkstra or Bellman-Ford. Mentioning that BFS isn't enough on its own shows depth.
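
The BFS shortest-path guarantee is easy to see in code: the first time a node is reached is necessarily via a minimum number of edges. A sketch (names are ours) over an adjacency list:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

// BFS on an unweighted graph: returns the edge-count distance from start
// to every node, or -1 if unreachable. FIFO order means nodes are visited
// level by level, so the first visit is always along a shortest path.
class ShortestPathBfs {
    static int[] distances(List<List<Integer>> adj, int start) {
        int[] dist = new int[adj.size()];
        Arrays.fill(dist, -1);
        Deque<Integer> queue = new ArrayDeque<>();
        dist[start] = 0;
        queue.add(start);
        while (!queue.isEmpty()) {
            int cur = queue.poll();
            for (int next : adj.get(cur)) {
                if (dist[next] == -1) {          // first visit = shortest
                    dist[next] = dist[cur] + 1;
                    queue.add(next);
                }
            }
        }
        return dist;
    }
}
```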

Page Replacement Algorithms#

In the OS section, LRU is almost guaranteed to come up. The same concept applies to caching policies, so it's worth knowing well.

  • FIFO: Evicts the oldest page first. Simple, but susceptible to Belady's Anomaly — adding more frames can actually increase page faults.
  • Optimal (OPT): Evicts the page that will go unused the longest. Theoretically optimal but requires knowing the future, so it's only used as a benchmark.
  • LRU (Least Recently Used): Evicts the page that hasn't been touched in the longest. Approaches optimal performance by exploiting temporal locality. Typically implemented with a HashMap + Doubly Linked List for O(1).
  • LFU (Least Frequently Used): Evicts the page with the fewest accesses. Downside: heavily-accessed pages stick around forever, blocking new data (cache pollution).
  • MFU (Most Frequently Used): The opposite of LFU — evicts the most-accessed page, on the theory that frequently-used pages have already served their purpose.

Implementing an LRU cache from scratch is a classic LeetCode problem and shows up in interviews. Know both approaches: using LinkedHashMap, and building it by hand with a doubly linked list plus a hash map.
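
The LinkedHashMap version is short enough to memorize. A sketch of the idea:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache in a few lines: constructing LinkedHashMap with accessOrder=true
// moves each accessed entry to the tail, and overriding removeEldestEntry
// evicts from the head (the least recently used entry) once over capacity.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);   // true = iterate in access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the LRU entry when over capacity
    }
}
```

For the hand-rolled variant, the same behavior comes from a HashMap pointing into a doubly linked list: get() unlinks the node and moves it to the head, and eviction pops the tail, both O(1).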

Java & JVM#

The core of any Java backend interview. GC, threads, and collections are non-negotiable.

JVM Memory Layout#

Before talking about GC, you should know the memory regions.

  • Method Area (Metaspace): Class metadata, static variables, constant pool
  • Heap: Where every object and array lives. The GC target.
  • Stack: Frame per method call. Holds local variables and parameters.
  • PC Register: Address of the currently executing instruction
  • Native Method Stack: For JNI native method calls

Java 8 replaced PermGen with Metaspace, which lives in native memory — that changed the shape of OOM errors significantly.

Garbage Collection#

The heap splits into young generation and old generation. The design rests on the weak generational hypothesis — most objects die young.

Young generation layout:

  • Eden: Where new objects land first
  • Survivor 0, 1: Where surviving objects move from Eden. The two spaces alternate.

Types of GC:

  • Minor GC: Runs in the young generation. Fast and frequent.
  • Major GC (Full GC): Includes the old generation. Slow, with long stop-the-world pauses.

GC algorithms:

  • Serial Collector: Single-threaded. Suited to apps under ~100MB. -XX:+UseSerialGC
  • Parallel Collector: Multi-threaded, throughput-oriented. Default in JDK 8. -XX:+UseParallelGC
  • CMS (Concurrent Mark Sweep): Concurrent marking minimizes STW. Deprecated in JDK 9, removed in JDK 14.
  • G1 (Garbage First): Manages the heap in regions. Default since JDK 9. Best for large heaps (4GB+).
  • ZGC: Cuts STW under 10ms. Experimental in JDK 11+, official in JDK 15+. Targets massive heaps (TB scale).
  • Shenandoah: A low-latency GC similar to ZGC, led by Red Hat.

Interview tip: When asked which GC you've used, don't just name it — explain why you chose it. Example: "We used G1 because response time mattered. It reduced full-GC frequency compared to CMS."

OOM and Heap Dumps#

Different OOM types have different causes and fixes.

| OOM type | Cause |
| --- | --- |
| Java heap space | Heap exhausted — leak or undersized heap |
| GC overhead limit exceeded | GC consumes too much time but reclaims too little |
| Metaspace | Class metadata region full |
| Direct buffer memory | NIO direct buffer leak |
| Unable to create new native thread | Hit the OS thread creation limit |

Heap dump analysis flow:

  1. Auto-dump on OOM: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dump
  2. Manual dump: jmap -dump:format=b,file=heap.hprof <pid>
  3. Tools: Eclipse MAT (Memory Analyzer Tool), VisualVM, JProfiler
  4. What to look for: Use the dominator tree to find biggest retainers, then trace GC roots

Common leak patterns:

  • Objects piling up in static collections
  • ThreadLocal used without remove()
  • Listeners or callbacks registered but never unregistered
  • DB connections and streams left unclosed
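
The ThreadLocal pattern deserves a concrete example, since the fix is a one-liner that's easy to forget. A sketch (class and names are ours) of the clean-up-in-finally idiom:

```java
// A ThreadLocal value lives as long as its thread. On a pooled thread
// (Tomcat, executor), that thread outlives the request that set the value,
// so a forgotten remove() leaks one entry per worker thread.
class RequestContext {
    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    static void handleRequest(String user, Runnable work) {
        CURRENT_USER.set(user);
        try {
            work.run();                  // downstream code reads the context
        } finally {
            CURRENT_USER.remove();       // without this, a pooled thread keeps the value
        }
    }

    static String currentUser() {
        return CURRENT_USER.get();
    }
}
```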

Thread Safety and Synchronization#

There's a layered approach to thread safety, from safest to most performant.

Tier 1: Eliminate shared state (safest)

  • Use immutable objects
  • Use ThreadLocal for per-thread instances
  • Functional programming (no side effects)

Tier 2: Synchronization mechanisms

  • synchronized: The basic option. Uses a monitor lock; other threads wait on entry.
  • volatile: Guarantees visibility, not atomicity.
  • ReentrantLock: More flexible than synchronized. Supports tryLock and fair mode.
  • ReadWriteLock: Separate read and write locks. Wins when reads dominate.
  • StampedLock: Java 8+. Adds optimistic reads for better throughput.

Tier 3: Atomic operations (lock-free)

  • AtomicInteger, AtomicLong, AtomicReference, etc.
  • Built on CAS (Compare-And-Swap)
  • No locks means no deadlocks, and it's fast
  • Downside: under heavy contention, spin attempts grow and performance can drop (also watch for the ABA problem)
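
The CAS retry loop is simple to write out by hand. A sketch (class name is ours) of the read-compute-swap cycle that AtomicInteger's own incrementAndGet performs internally:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free increment: compareAndSet succeeds only if nobody changed the
// value since we read it; on failure we retry with the fresh value instead
// of blocking. Under heavy contention these retries are the cost mentioned above.
class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    int increment() {
        while (true) {
            int current = value.get();              // 1. read
            int next = current + 1;                 // 2. compute
            if (value.compareAndSet(current, next)) // 3. swap if unchanged
                return next;
            // CAS failed: another thread won the race; loop and retry
        }
    }

    int get() {
        return value.get();
    }
}
```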

Tier 4: Concurrent collections

  • ConcurrentHashMap: Concurrent version of HashMap. From Java 8, mixes CAS with synchronized.
  • CopyOnWriteArrayList: Copies on write. Great when reads massively outnumber writes.
  • BlockingQueue family: Producer-consumer pattern

Drawbacks of synchronized#

A common follow-up question.

  1. Performance overhead: Monitor lock acquisition and release cost
  2. Deadlock risk: When lock acquisition order goes wrong
  3. Throughput collapse under contention: Heavy contention serializes execution
  4. No fairness: No guarantee about which thread gets the lock
  5. Not interruptible: A thread waiting on the lock can't be woken via interrupt (ReentrantLock can)
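
ReentrantLock addresses several of these points directly. An illustrative sketch (the Account class is our invention) showing a fair lock with a bounded, interruptible acquisition:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Things synchronized cannot express: fairness, a bounded wait, and an
// interruptible acquisition. tryLock lets the caller back off instead of
// blocking forever, which also helps avoid deadlock.
class Account {
    private final ReentrantLock lock = new ReentrantLock(true); // fair mode: FIFO handoff
    private long balance;

    boolean deposit(long amount) throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) { // interruptible, bounded wait
            try {
                balance += amount;
                return true;
            } finally {
                lock.unlock();                           // always release in finally
            }
        }
        return false; // couldn't get the lock in time: caller can retry or fail gracefully
    }

    long balance() {
        lock.lock();
        try {
            return balance;
        } finally {
            lock.unlock();
        }
    }
}
```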

Functional Interfaces#

Frequently asked alongside lambdas. A functional interface has exactly one abstract method (SAM).

@FunctionalInterface
interface MyFunction {
    int apply(int x);
}
 
// Lambda implementation
MyFunction square = x -> x * x;

Common standard functional interfaces:

| Interface | Signature | Purpose |
| --- | --- | --- |
| Supplier&lt;T&gt; | () -> T | Produces a value |
| Consumer&lt;T&gt; | T -> void | Consumes a value |
| Function&lt;T,R&gt; | T -> R | Transforms |
| Predicate&lt;T&gt; | T -> boolean | Tests a condition |
| BiFunction&lt;T,U,R&gt; | (T,U) -> R | Transforms two values |
| UnaryOperator&lt;T&gt; | T -> T | Same-type transform |
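
The rows map one-to-one onto lambdas. A small illustrative snippet (the demo class is ours):

```java
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import java.util.function.UnaryOperator;

// Each standard functional interface is just a single-method shape a lambda
// or method reference can fill.
class FunctionalDemo {
    static String demo() {
        Supplier<String> supplier = () -> "hello";          // () -> T
        Function<String, Integer> length = String::length;  // T -> R
        Predicate<Integer> isLong = n -> n > 3;             // T -> boolean
        UnaryOperator<String> shout = s -> s.toUpperCase(); // T -> T
        BiFunction<String, String, String> join = (a, b) -> a + " " + b;

        String value = supplier.get();                      // "hello"
        return join.apply(shout.apply(value),
                String.valueOf(isLong.test(length.apply(value))));
    }
}
```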

Collections Framework#

Array vs ArrayList:

| Aspect | Array | ArrayList |
| --- | --- | --- |
| Size | Fixed | Dynamic (internal resize) |
| Type | Supports primitives | Objects only (generics) |
| Performance | Faster | Slight overhead |
| Memory | Efficient | Adds metadata |

HashMap internals:

  • Default capacity: 16, load factor: 0.75
  • Resizes (doubles) at 75% full
  • Index computed as (n - 1) & hash (works because capacity is a power of 2)
  • Java 8+ converts heavily-collided buckets to trees
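
The index computation is worth demonstrating, since it's a favorite follow-up. A sketch (helper names are ours):

```java
// Why (n - 1) & hash works: when n is a power of two, n - 1 is a mask of
// low bits, so the AND is equivalent to hash % n without a division.
// Java 8's HashMap additionally spreads the raw hashCode first so that
// high bits also influence the bucket index.
class BucketIndex {
    static int indexFor(int hash, int capacity) {
        return (capacity - 1) & hash;  // capacity must be a power of 2
    }

    static int spread(int h) {
        return h ^ (h >>> 16);         // HashMap's hash-spreading step (Java 8+)
    }
}
```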

HashSet performance pitfalls:

  • Bad equals() / hashCode() causing many collisions
  • Mutating fields used in hashCode after adding the object
  • Fix: write a well-distributed hashCode, prefer immutable keys

Stream and Parallel Stream#

// Stream
List<Integer> result = list.stream()
    .filter(x -> x > 10)
    .map(x -> x * 2)
    .collect(Collectors.toList());
 
// Parallel Stream
list.parallelStream()
    .filter(x -> x > 10)
    .map(x -> x * 2)
    .collect(Collectors.toList());

When does parallel stream pay off?

Good fit:

  • Dataset is large (typically 10,000+ elements)
  • CPU-bound work
  • Independent units of work
  • Result order doesn't matter

Bad fit:

  • Small datasets (overhead exceeds gains)
  • I/O-bound work (threads sit idle)
  • Mutating shared state (synchronization cost)
  • Inside a transaction (different thread, no transaction propagation)

Parallel streams use a shared common ForkJoinPool by default. One greedy task can starve unrelated work, so production systems should pin streams to a dedicated pool.
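
The usual workaround is to submit the stream from inside your own ForkJoinPool, so that its worker threads (not the shared commonPool) execute the parallel stages. Note this is a widely used idiom rather than an official stream API knob; the method and names below are ours:

```java
import java.util.List;
import java.util.concurrent.ForkJoinPool;

// Running a parallel stream inside a dedicated ForkJoinPool isolates it
// from the JVM-wide commonPool, so one greedy pipeline can't starve
// unrelated parallel work elsewhere in the process.
class DedicatedPoolStream {
    static long sumOfDoubles(List<Integer> input) throws Exception {
        ForkJoinPool pool = new ForkJoinPool(4);   // our own workers
        try {
            return pool.submit(() ->
                input.parallelStream()
                     .filter(x -> x > 10)
                     .mapToLong(x -> (long) x * 2)
                     .sum()
            ).get();                                // wait for the result
        } finally {
            pool.shutdown();                        // don't leak the pool
        }
    }
}
```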

Spring#

Effectively required for Java backend roles, so interviewers go deep.

DI and IoC#

  • IoC (Inversion of Control): The container — not your code — manages object creation and lifecycle. Control flow is inverted.
  • DI (Dependency Injection): One way to implement IoC. Dependencies are injected from outside.

Injection styles:

// 1. Constructor injection (recommended)
@Service
public class UserService {
    private final UserRepository userRepository;
 
    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }
}
 
// 2. Setter injection
@Service
public class UserService {
    private UserRepository userRepository;
 
    @Autowired
    public void setUserRepository(UserRepository userRepository) {
        this.userRepository = userRepository;
    }
}
 
// 3. Field injection (avoid)
@Service
public class UserService {
    @Autowired
    private UserRepository userRepository;
}

Why Constructor Injection?#

A staple interview question. The more reasons you can give, the better.

  1. Catches circular references at startup — Field/setter injection only fails when the method is invoked at runtime; constructor injection fails immediately during container initialization. Spring Boot 2.6+ blocks circular references by default.
  2. Enables immutability — Mark dependencies final so they can't be reassigned.
  3. Easy to test — You can call new UserService(mockRepo) directly without the container. Field injection is untestable without reflection.
  4. Detects missing dependencies — The constructor signature makes required dependencies explicit, so a missing one fails fast at startup (or won't even compile in a test) instead of surfacing later as a NullPointerException.
  5. Enforces single responsibility — Too many constructor parameters is a smell that the class is doing too much.

Bean Scopes#

| Scope | Lifecycle |
| --- | --- |
| Singleton (default) | One per container |
| Prototype | New instance per request |
| Request | Per HTTP request (web) |
| Session | Per HTTP session (web) |
| Application | Per ServletContext (web) |
| WebSocket | Per WebSocket (web) |

Inject a prototype bean into a singleton and you keep getting the original instance. To get a fresh one each time, use ObjectProvider or @Lookup.

AOP#

A paradigm for separating cross-cutting concerns from core logic. Logging, transactions, and authn/authz are natural fits.

Key terms:

  • Aspect: A modularized cross-cutting concern
  • Join Point: A point where an aspect can apply (typically a method execution)
  • Pointcut: An expression defining where to apply
  • Advice: What to do (@Before, @After, @Around, @AfterReturning, @AfterThrowing)
  • Weaving: The process of applying aspects to actual code

@Slf4j  // Lombok; any Logger works
@Aspect
@Component
public class LoggingAspect {
    @Around("execution(* com.example.service.*.*(..))")
    public Object logExecutionTime(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.currentTimeMillis();
        Object result = joinPoint.proceed();
        long elapsed = System.currentTimeMillis() - start;
        log.info("{} took {}ms", joinPoint.getSignature(), elapsed);
        return result;
    }
}

Limits of Spring AOP:

  • Proxy-based, so self-invocation (calling another method on the same instance) bypasses the aspect
  • Only works at the method level (not field access)
  • Only public methods (unless using CGLIB)

Transactions (@Transactional)#

Interviewers love to dig into this one.

Propagation:

  • REQUIRED (default): Joins existing transaction, creates one if absent
  • REQUIRES_NEW: Always starts a new transaction; suspends the existing one
  • NESTED: Nested transaction using a savepoint
  • SUPPORTS: Joins if present, runs without otherwise
  • MANDATORY: Must run inside an existing transaction
  • NEVER: Throws if a transaction exists

Isolation levels:

  • READ_UNCOMMITTED: Allows dirty reads
  • READ_COMMITTED: Allows non-repeatable reads (Oracle default)
  • REPEATABLE_READ: Prevents non-repeatable reads but, per the SQL standard, still allows phantom reads (MySQL default; InnoDB largely prevents phantoms in practice via next-key locking)
  • SERIALIZABLE: Most strict, with the biggest performance hit

Common pitfalls:

  • @Transactional only applies to public methods
  • Self-invocation bypasses the proxy, so the annotation does nothing
  • By default only RuntimeException and Error trigger rollback. For checked exceptions, set rollbackFor explicitly.

Databases & JPA#

This area maps directly to performance, so expect detailed questions.

Indexes#

Why are they fast?

  • Built on B+ trees. All leaves sit at the same depth, so the tree stays balanced.
  • A million rows usually means just 3–4 levels of traversal.
  • Leaf nodes link to each other, so range scans are also fast.

B+ tree vs B-tree:

  • B-tree stores data in every node
  • B+ tree stores data only at the leaves; internal nodes only index
  • B+ tree is better for range queries and is more disk-I/O friendly

Indexes aren't always free:

  • Writes get more expensive (every INSERT/UPDATE/DELETE updates the index too)
  • Extra disk space
  • Inefficient on low-cardinality columns (e.g., gender)
  • Full scans can be faster on small tables

The composite index trap:

Given an index on (A, B, C):

  • WHERE A = ? → uses the index
  • WHERE A = ? AND B = ? → uses the index
  • WHERE A = ? AND C = ? → uses only the A part of the index
  • WHERE B = ? → does not use it (matching starts from the leftmost column)

Covering indexes:

If every column the query needs lives in the index, the database can answer the query without touching the table — that's a covering index, and it's blazingly fast.

The N+1 Problem#

If you use JPA, this one finds you sooner or later.

The problem:

// Fetch members, then access each member's team
List<Member> members = memberRepository.findAll();  // 1 query
for (Member m : members) {
    System.out.println(m.getTeam().getName());  // N queries
}

That's N+1 queries total. A hundred members means 101 database calls.

Solutions:

  1. JOIN FETCH:
    @Query("SELECT m FROM Member m JOIN FETCH m.team")
    List<Member> findAllWithTeam();
  2. @EntityGraph:
    @EntityGraph(attributePaths = {"team"})
    List<Member> findAll();
  3. Batch size:
    spring.jpa.properties.hibernate.default_batch_fetch_size: 100
    Bundles N items into IN clauses, reducing N+1 to N/batch+1
  4. QueryDSL: Flexible for dynamic queries
  5. Projections (DTO direct fetch): Pull only the columns you need

JOIN FETCH doesn't paginate when fetching collections — it loads everything into memory and paginates there, which is dangerous. For paginated results, use the batch-size approach.

Caching Strategies#

  • When to refresh? — Invalidate or refresh whenever the underlying data changes
  • When does consistency break? — Any moment cache and DB diverge

Cache patterns:

  1. Cache-Aside (Look-Aside): Most common. App checks cache → falls back to DB → fills cache.
  2. Write-Through: Writes update both DB and cache simultaneously.
  3. Write-Behind (Write-Back): Writes go to cache first, then asynchronously to DB.
  4. Read-Through: On a miss, the cache library handles the DB read itself.
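
Cache-aside is simple enough to sketch, with a map standing in for Redis and a function standing in for the DAO (all names here are ours):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Cache-aside in miniature: the application owns the lookup order —
// check the cache, fall back to the "database" on a miss, then fill the
// cache for next time. Writes evict (or refresh) the stale entry.
class CacheAside<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> dbLoader;
    int dbHits = 0;  // counter exposed for this example only

    CacheAside(Function<K, V> dbLoader) {
        this.dbLoader = dbLoader;
    }

    V get(K key) {
        V cached = cache.get(key);
        if (cached != null) return cached;  // 1. cache hit
        V loaded = dbLoader.apply(key);     // 2. miss: read the DB
        dbHits++;
        cache.put(key, loaded);             // 3. populate for next time
        return loaded;
    }

    void invalidate(K key) {
        cache.remove(key);                  // called when the DB row changes
    }
}
```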

Invalidation strategies:

  • TTL (time-based expiration)
  • Event-based: delete the cache when DB changes (@CacheEvict)
  • Versioning: bake a version into the key and bypass via a new key

JPA caching:

  • First-level cache: Per EntityManager. Same entity is fetched only once within a transaction.
  • Second-level cache: Per SessionFactory. Shared across the application. Backed by EhCache, Hazelcast, etc.

Handling Redis Failure#

You may be asked what your plan is if Redis goes down.

Persistence options:

  • RDB (Snapshot): Dumps memory to disk at intervals. Fast recovery but possible data loss.
  • AOF (Append Only File): Logs every write. Safer but produces big files and slower recovery.
  • Mixed mode: RDB + AOF together (Redis 4.0+)

High availability:

  • Replication: Master-replica replication
  • Sentinel: Automatic failover and master monitoring
  • Cluster: Sharding plus replication, horizontal scaling

Surviving cache outages:

  1. Apply a circuit breaker (Resilience4j, Hystrix)
  2. Fall back to direct DB reads
  3. Add a local cache as a second tier (e.g., Caffeine)
  4. Prevent cache stampedes with locks or algorithms like PER (Probabilistic Early Recomputation)

Networking & HTTP#

REST API Design Principles#

  • Resource-centric URLs: /users/123/orders (resources, not actions)
  • Verbs go in HTTP methods: GET, POST, PUT, PATCH, DELETE
  • Use status codes meaningfully: 200, 201, 204, 400, 401, 403, 404, 409, 500
  • HATEOAS (optional): Include links to next actions in the response

PUT vs PATCH:

  • PUT: Replaces the whole resource. Idempotent.
  • PATCH: Partial update. Idempotency depends on the implementation.

HTTP method properties:

| Method | Safe | Idempotent | Cacheable |
| --- | --- | --- | --- |
| GET | Yes | Yes | Yes |
| HEAD | Yes | Yes | Yes |
| OPTIONS | Yes | Yes | No |
| POST | No | No | Conditional |
| PUT | No | Yes | No |
| PATCH | No | No | No |
| DELETE | No | Yes | No |

  • Safe: Doesn't change server state
  • Idempotent: Multiple calls produce the same result

HTTPS and SSL/TLS#

  • TLS handshake: Client and server negotiate the cipher suite and keys
  • Certificates: Issued by a CA, vouch for the server's identity
  • Symmetric vs asymmetric: The handshake uses asymmetric crypto (RSA, ECDH); the actual data transfer uses symmetric (AES)
  • HTTP/2: Multiplexing, header compression, server push. HTTPS is essentially required.
  • HTTP/3: Built on QUIC (UDP). Faster connection setup.

Cookies vs Sessions vs JWT#

| Aspect | Cookie | Session | JWT |
| --- | --- | --- | --- |
| Storage | Client | Server | Client |
| Security | Low | High | Medium |
| Scalability | High (no server state) | Needs server sync | Stateless, easy to scale |
| Invalidation | Immediate | Immediate | Hard (valid until expiry) |

Cookie security flags:

  • HttpOnly: Blocks JavaScript access (XSS protection)
  • Secure: Only sent over HTTPS
  • SameSite: CSRF protection (Strict, Lax, None)

Defending Against SQL Injection#

Core rule: never concatenate user input directly into a SQL string.

Defenses:

  1. PreparedStatement (parameter binding):
    PreparedStatement ps = conn.prepareStatement(
        "SELECT * FROM users WHERE id = ?");
    ps.setString(1, userInput);
  2. Use an ORM: JPA and MyBatis's #{} bind parameters automatically
  3. Watch out for MyBatis ${}: That's string substitution and is injectable. Limit it to things like ORDER BY columns and validate against a whitelist.
  4. Validate input: Whitelist-based validation
  5. Principle of least privilege: Give DB users only the permissions they need

Proxies vs Gateways#

  • Forward proxy: Sits in front of the client, forwarding requests outward (corporate intranet → internet)
  • Reverse proxy: Sits in front of the server, accepting external requests and routing them inward (Nginx, HAProxy)
  • API gateway: A reverse proxy plus authentication, routing, transformation, logging, and more (Spring Cloud Gateway, Kong)

Operations & Infrastructure#

Monitoring (the Three Pillars of Observability)#

  1. Metrics: Numeric values like CPU, memory, response time, throughput
    • Prometheus + Grafana
    • Datadog, New Relic
  2. Logs: Recorded events
    • ELK Stack (Elasticsearch + Logstash + Kibana)
    • Fluentd, Loki
  3. Traces: A request's journey across services
    • Jaeger, Zipkin
    • OpenTelemetry (the standard)

APM (Application Performance Monitoring):

  • Pinpoint, Scouter (open source, popular in Korea)
  • New Relic, Datadog APM (commercial)

Deployment Strategies#

I've been asked how I'd deploy to a fleet of 20+ servers.

Zero-downtime deployment options:

  • Rolling Update: Updates servers in batches. Gradual, but two versions coexist for a while.
  • Blue-Green: Maintains two identical environments, switching traffic in one shot. Doubles resources but rolls back instantly.
  • Canary: Routes a slice of traffic to the new version, expanding gradually if all is well. Similar to A/B testing.

CI/CD pipeline:

  1. Code push triggers CI
  2. Test and build
  3. Build a Docker image, push to a registry
  4. Deploy to staging and verify
  5. Deploy to production (manual approval or automatic)
  6. Health checks and monitoring
  7. Auto-rollback on issues

Incident Response#

"What do you check first when something breaks?" comes up often.

Response order:

  1. Scope the impact: All users? Some? Which features?
  2. Check the monitoring dashboard: CPU, memory, response time, error rate spikes
  3. Check the logs: Look for ERROR, Exception, trace transaction IDs
  4. Look at recent changes: Deployments, config changes, DB migrations
  5. Decide on rollback: If you can't pinpoint the cause and time is dragging, roll back
  6. Contain the bleeding: Feature flags, traffic limits to stop the spread
  7. Run an RCA: Postmortem and prevention

Strong answer pattern: Walk through what you check, in what order, and explain why.

Kubernetes Resource Management#

resources:
  requests:
    memory: '64Mi'
    cpu: '250m'
  limits:
    memory: '128Mi'
    cpu: '500m'

  • requests: Minimum guaranteed resources for the container
  • limits: Maximum the container can consume
  • Setting requests = limits gives you the Guaranteed QoS class — the most stable
  • Exceeding the memory limit triggers OOMKilled; exceeding the CPU limit causes throttling

Architecture (MSA)#

Pros and Cons of Microservices#

Pros:

  • Independent deployment: Each service ships on its own
  • Tech freedom: Different services can use different languages/frameworks
  • Failure isolation: A single service's failure doesn't take everything down
  • Team autonomy: Each team owns its service

Cons:

  • Network latency: Inter-service calls all go over the network
  • Distributed transactions: Spanning multiple services is hard (you'll need patterns like Saga)
  • Complex integration testing: You have to spin up multiple services together
  • Operational complexity: Requires monitoring, logging, and tracing infrastructure
  • Data consistency: Per-service databases make global consistency tricky

Patterns Common in MSA#

  • API Gateway: Single entry point for routing and authentication
  • Service Discovery: Eureka or Consul for dynamic service location
  • Circuit Breaker: Prevents cascading failures (Resilience4j)
  • Saga: Distributed transactions, either choreography (event-driven) or orchestration (central coordinator)
  • Event Sourcing: Records events instead of state
  • CQRS: Separates commands (writes) from queries (reads)

Testing#

Why Adopt TDD?#

  • Prevents regressions — gives you confidence to refactor
  • Improves design — code that's hard to test is usually badly designed
  • Acts as documentation — tests double as usage examples
  • Fast feedback — small steps surface problems early

The TDD cycle (Red-Green-Refactor):

  1. Red: Write a failing test
  2. Green: Write the minimum code to make it pass
  3. Refactor: Remove duplication and improve clarity

F.I.R.S.T Principles#

  • Fast: They have to be fast to run often
  • Independent: Tests can't depend on each other
  • Repeatable: Same result, regardless of environment
  • Self-validating: Pass/fail must be explicit
  • Timely: Written at the right time (TDD: before the code)

Using Mocks#

@ExtendWith(MockitoExtension.class)
class UserServiceTest {
    @Mock
    private UserRepository userRepository;
 
    @InjectMocks
    private UserService userService;
 
    @Test
    void findUser() {
        // given
        given(userRepository.findById(1L))
            .willReturn(Optional.of(new User("kim")));
 
        // when
        User user = userService.findById(1L);
 
        // then
        assertThat(user.getName()).isEqualTo("kim");
        verify(userRepository).findById(1L);
    }
}

Service vs controller tests:

  • Service layer: Mock dependencies with Mockito and verify the business logic
  • Controller layer: Use MockMvc to verify the HTTP request/response flow
    mockMvc.perform(get("/users/1"))
        .andExpect(status().isOk())
        .andExpect(jsonPath("$.name").value("kim"));
  • Keep business logic out of controllers. Verifying delegation is enough at the controller layer.

Test Categories#

  • Unit tests: Single class or method. Fast and isolated.
  • Integration tests: Multiple components together. May involve DB or external APIs. @SpringBootTest
  • E2E tests: Full user scenarios. Slowest and most expensive, but the most trustworthy.
  • Test pyramid: The ideal shape — lots of unit tests, few E2E tests

Wrapping Up#

Interview questions tend to follow a similar shape across topics. If you prepare answers around three threads — "Why do we use it? What are the downsides? What are the alternatives?" — you'll handle whatever any company throws at you.

For technologies you've used in production, write down a one-line answer to "Why did we choose this?" Interviewers don't want the textbook answer; they want to see your thought process.

One last thing: when you don't know an answer, saying "I don't know" is itself a skill. Follow it with "but I'd approach it like this," and the interviewer often comes away more impressed than if you'd guessed. Nobody knows everything.



