Core Backend Concepts for Interview Preparation


When preparing for backend interviews, it's easy to memorize concepts one by one in isolation. In real interviews, though, topics like data structures, operating systems, the JVM, Spring, databases, and deployment are often connected in a single line of questioning.

So instead of turning a question list into a Q&A document, this post reorganizes the material as a study guide for core backend concepts. The goal is to review fundamentals, operational thinking, Spring, data access, and deployment as one connected body of knowledge.

Start with data structures and traversal#

Backend work depends heavily on thinking clearly about how to look up data quickly, handle collisions, and reduce traversal cost. These are basic concepts, but they still show up frequently in interviews.

Two common strategies for resolving hash collisions are open addressing and separate chaining.

  • Open Addressing stores a value in another empty slot inside the same hash table array when a collision occurs.
  • Separate Chaining keeps another structure such as a linked list or tree inside each bucket and stores collided values there.

Open addressing can be cache-friendly because it stays array-based, but performance drops sharply as the load factor rises. Separate chaining uses more memory, but it degrades more gracefully under collisions. Java's HashMap uses separate chaining, and since Java 8 it converts a long bucket chain into a red-black tree once it passes a threshold.
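To make the chaining idea concrete, here is a minimal sketch (not HashMap's actual implementation): a fixed bucket array where each bucket is a linked list, so collided keys simply share a bucket.

```java
import java.util.LinkedList;

// Minimal separate-chaining hash table: each bucket holds a linked list,
// and keys that hash to the same index share that bucket's list.
public class ChainedMap {
    private static class Entry {
        final String key;
        String value;
        Entry(String key, String value) { this.key = key; this.value = value; }
    }

    private final LinkedList<Entry>[] buckets;

    @SuppressWarnings("unchecked")
    public ChainedMap(int capacity) {
        buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) buckets[i] = new LinkedList<>();
    }

    private int indexFor(String key) {
        // floorMod keeps the index non-negative even for negative hash codes
        return Math.floorMod(key.hashCode(), buckets.length);
    }

    public void put(String key, String value) {
        for (Entry e : buckets[indexFor(key)]) {
            if (e.key.equals(key)) { e.value = value; return; } // update existing key
        }
        buckets[indexFor(key)].add(new Entry(key, value));      // new key or collision
    }

    public String get(String key) {
        for (Entry e : buckets[indexFor(key)]) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }
}
```

Note how lookups degrade to a linear scan of one bucket; with a tiny capacity and many keys, every bucket becomes a long list, which is exactly the degradation the tree conversion in HashMap guards against.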

You should also clearly distinguish DFS and BFS.

  • DFS (Depth-First Search) goes deep along one path first, then backtracks when it can no longer continue.
  • BFS (Breadth-First Search) expands outward level by level from the starting point.

DFS is usually implemented with recursion or a stack. BFS uses a queue. In practice, DFS is often used for path exploration and backtracking, while BFS is strong for shortest-path problems in unweighted graphs.
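A small BFS sketch illustrates the shortest-path claim: because nodes are expanded level by level, the first time a node is reached its distance from the start is already minimal. The adjacency-map representation here is just one common choice.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// BFS on an unweighted directed graph: expanding level by level guarantees
// that the first recorded distance to any node is the shortest one.
public class Bfs {
    public static int shortestPath(Map<Integer, List<Integer>> graph, int start, int goal) {
        Queue<Integer> queue = new ArrayDeque<>();
        Map<Integer, Integer> dist = new HashMap<>(); // doubles as the visited set
        queue.add(start);
        dist.put(start, 0);
        while (!queue.isEmpty()) {
            int node = queue.poll();
            if (node == goal) return dist.get(node);
            for (int next : graph.getOrDefault(node, List.of())) {
                if (!dist.containsKey(next)) {        // visit each node at most once
                    dist.put(next, dist.get(node) + 1);
                    queue.add(next);
                }
            }
        }
        return -1; // goal unreachable from start
    }

    // Small sample graph for quick checks: 1 -> {2,3}, 2 -> {4}, 3 -> {4}
    public static Map<Integer, List<Integer>> sampleGraph() {
        return Map.of(1, List.of(2, 3), 2, List.of(4), 3, List.of(4), 4, List.of());
    }
}
```

Swapping the queue for a stack (or recursion) and dropping the distance map turns this into DFS, which is why the two are usually taught as a pair.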

Understand memory and operating system perspectives#

Once you move past basic algorithms and data structures, you need to understand how programs behave on top of memory. That becomes directly relevant to troubleshooting and performance analysis.

A representative concept here is LRU. LRU is a page replacement algorithm that removes the least recently used page. It is based on locality: data used recently is likely to be used again soon.

Compared with FIFO, the difference is clear. FIFO evicts the oldest page first, while LRU makes a more practical decision based on recent access history. Strict LRU can be expensive to implement, so real systems often use approximations.
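In Java specifically, a serviceable LRU cache falls out of LinkedHashMap's access-order mode, which is a common interview follow-up. This is a sketch of that trick, not a production cache:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache built on LinkedHashMap's access-order mode: every get or put
// moves the entry to the tail, so the head is always the least recently used.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true enables LRU iteration order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the head entry once over capacity
    }
}
```

This mirrors the page-replacement idea above: a get counts as a "use," so a recently read entry survives the next eviction while the least recently used one is dropped.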

Another important topic is the heap dump. A heap dump captures the JVM heap state into a file so you can inspect which objects are occupying memory and why.

It is especially useful in cases like these:

  • when OutOfMemoryError occurs
  • when objects appear to keep accumulating
  • when GC runs frequently but memory is not reclaimed effectively

In practice, you generate dumps with tools like jmap or jcmd, then analyze them with tools such as Eclipse MAT. The important part is not just saying you have used heap dumps, but being able to explain which objects retained memory and why the leak happened.

To understand JVM stability, study GC and concurrency together#

A backend server keeps handling requests continuously, so memory reclamation and concurrency control need to be understood together if you want a stable system.

First, GC (Garbage Collection) reclaims objects that are no longer referenced and manages JVM memory. The usual explanation starts with the Young and Old generations, then moves into concepts such as Minor GC and Major GC or Full GC.

Representative collectors include:

  • Serial GC: single-threaded and simple, but pauses can be long
  • Parallel GC: optimized for throughput
  • CMS: designed to reduce pause times; deprecated in Java 9 and removed in Java 14
  • G1 GC: region-based and widely used in production
  • ZGC and Shenandoah: focused on very short pause times

Another concept that often comes up with GC is thread safety. The goal is to maintain correctness even when multiple threads access the same data concurrently.

Typical approaches include:

  • reducing shared state and favoring immutable objects
  • protecting only necessary critical sections with synchronized, Lock, or ReentrantLock
  • using concurrent collections such as ConcurrentHashMap
  • using atomic operations through AtomicInteger or AtomicLong
  • using ThreadLocal when state should remain isolated per thread

The key point is that thread safety does not mean "add locks everywhere." A better design usually starts by reducing shared mutable state itself. Locks can introduce bottlenecks and deadlocks.
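Two of the lock-free approaches from the list above can be shown together in a small sketch (the class and key names are illustrative): an atomic total plus a concurrent per-key map, both correct under concurrent increments without any synchronized block.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Thread-safe counting without explicit locks: AtomicInteger for the total,
// ConcurrentHashMap + computeIfAbsent for per-key counters.
public class SafeCounter {
    private final AtomicInteger total = new AtomicInteger();
    private final ConcurrentHashMap<String, AtomicInteger> perKey = new ConcurrentHashMap<>();

    public void increment(String key) {
        total.incrementAndGet(); // atomic compare-and-swap, no synchronized
        perKey.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
    }

    public int total() { return total.get(); }

    public int count(String key) {
        AtomicInteger c = perKey.get(key);
        return c == null ? 0 : c.get();
    }

    // Hammer the counter from several threads to show no updates are lost.
    public static SafeCounter run(int threads, int iterations) {
        SafeCounter counter = new SafeCounter();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < iterations; i++) counter.increment("orders");
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter;
    }
}
```

With a plain int and HashMap instead, the same test would lose updates nondeterministically, which is the classic way to demonstrate why this matters.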

Tests and annotations lead into framework understanding#

When interviewers ask about testing, they are usually not checking whether you remember the syntax. They want to know whether you can structure code safely and verify behavior with confidence.

It helps to think about tests in layers:

  • unit tests verify service or utility logic quickly
  • integration tests validate Spring context, database, and external integrations
  • API tests verify controller requests and responses

In practice, combinations such as JUnit, Mockito, MockMvc, and Testcontainers are common. The higher the cost of failure in domains like payments, orders, inventory, or coupons, the more valuable good tests become.

Annotations fit naturally into this topic. In JUnit 4, for example:

  • @BeforeClass runs once before any test in the class
  • @Before runs before each test method
  • @Test marks the actual test method
  • @After runs after each test method
  • @AfterClass runs once after all tests in the class

So the typical sequence is @BeforeClass -> (@Before -> @Test -> @After) repeated -> @AfterClass.

At a deeper level, Java annotations are metadata. If an annotation is retained at runtime, it can be read through reflection, and frameworks or test runners can interpret that metadata to invoke methods at the right time. In that sense, annotations are not just syntax shortcuts. They are a way to express framework rules declaratively.
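That mechanism can be demonstrated without any framework at all. Below is a toy runtime-retained annotation plus a reflective "runner" that invokes every annotated method; the annotation and class names are made up, but the mechanism is the same one JUnit and Spring build on.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// A runtime-retained annotation is pure metadata until some runner
// reads it via reflection and decides what to invoke, and when.
public class MiniRunner {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Step {}

    public static class Pipeline {
        final List<String> log = new ArrayList<>();
        @Step public void load() { log.add("load"); }
        @Step public void save() { log.add("save"); }
        public void helper()     { log.add("helper"); } // not annotated, so skipped
    }

    public static List<String> run(Pipeline target) {
        for (Method m : target.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(Step.class)) { // the metadata drives the call
                try {
                    m.invoke(target);
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        return target.log;
    }
}
```

Being able to walk through a tiny example like this is usually worth more in an interview than reciting the annotation list itself.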

Java fluency shows up in functional interfaces and design patterns#

A backend developer working in Java should be comfortable with both object-oriented and functional styles.

A functional interface is an interface with exactly one abstract method. It is what allows lambdas and method references to work. The core types in java.util.function are worth knowing well.

  • Supplier<T>: returns a value with no input
  • Consumer<T>: consumes a value and returns nothing
  • Function<T, R>: transforms an input into a result
  • Predicate<T>: returns a boolean result

There are also useful variants such as UnaryOperator, BinaryOperator, BiFunction, BiConsumer, and BiPredicate.
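The four core types compose naturally. Here is a small sketch using all of them in one pipeline: supply the input, filter it, transform it, then consume the results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;

// One tiny pipeline touching each core java.util.function type.
public class FunctionalDemo {
    public static List<String> pipeline() {
        Supplier<List<String>> source = () -> List.of("spring", "", "jvm"); // Supplier<T>: no input, returns a value
        Predicate<String> nonBlank = s -> !s.isBlank();                     // Predicate<T>: T -> boolean
        Function<String, String> upper = String::toUpperCase;               // Function<T, R>: T -> R
        List<String> out = new ArrayList<>();
        Consumer<String> collect = out::add;                                // Consumer<T>: consumes, returns nothing

        for (String s : source.get()) {
            if (nonBlank.test(s)) collect.accept(upper.apply(s));
        }
        return out;
    }
}
```

The same four roles are exactly what the Stream API expects in `generate`, `filter`, `map`, and `forEach`, which is a useful connection to make out loud.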

Design patterns are another common interview topic. The important part is not listing names, but explaining what problem each pattern solves.

Patterns that come up frequently include:

  • Singleton: keep only one instance
  • Strategy: swap algorithms cleanly
  • Template Method: keep the common flow in a base class and delegate details
  • Proxy: control access, lazy loading, or add cross-cutting behavior
  • Decorator: extend behavior dynamically
  • Adapter: bridge incompatible interfaces
  • Builder: simplify complex object construction
  • Observer: notify subscribers of state changes
  • Factory: separate object creation from usage

These patterns also appear throughout Spring. AOP maps naturally to the Proxy pattern, builder-style APIs use Builder, and event listeners resemble Observer.
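As one concrete example, here is a hedged Strategy sketch (the class names are illustrative, not from any particular codebase): the discount algorithm varies while the checkout flow that uses it stays fixed.

```java
import java.math.BigDecimal;

// Strategy in miniature: swap the discount algorithm without
// touching the checkout flow that applies it.
public class Checkout {
    public interface DiscountStrategy {
        BigDecimal apply(BigDecimal price);
    }

    public static final DiscountStrategy NONE = price -> price;
    public static final DiscountStrategy TEN_PERCENT =
            price -> price.multiply(new BigDecimal("0.90"));

    private final DiscountStrategy strategy;

    public Checkout(DiscountStrategy strategy) { this.strategy = strategy; }

    public BigDecimal total(BigDecimal price) {
        return strategy.apply(price); // the flow stays fixed, the algorithm varies
    }
}
```

In an interview, pairing the sketch with the problem it solves ("no if-else chain over discount types; new strategies are added without modifying Checkout") is the part that actually scores.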

In Spring, bean scopes and AOP are foundational#

Spring is a container that creates and manages objects, so bean scope is a basic concept. Scope defines how long a bean lives and how widely it is shared.

The main scopes are:

  • singleton: the default, one instance per container
  • prototype: a new instance each time it is requested
  • request: one instance per HTTP request
  • session: one instance per HTTP session
  • application: one instance per servlet context
  • websocket: one instance per WebSocket session

In production systems, singleton is the default most of the time, and stateful request or session scoped beans need to be handled carefully.
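Declaring scopes looks roughly like the following configuration fragment (it requires Spring on the classpath and the bean types are illustrative, so treat it as a sketch rather than runnable code):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

// Illustrative scope declarations; PriceCalculator and ReportBuilder
// are placeholder types, not from the post.
@Configuration
public class ScopeConfig {
    public static class PriceCalculator {}
    public static class ReportBuilder {}

    @Bean // singleton by default: one shared instance per container
    public PriceCalculator priceCalculator() {
        return new PriceCalculator();
    }

    @Bean
    @Scope("prototype") // a fresh instance on every injection or getBean() call
    public ReportBuilder reportBuilder() {
        return new ReportBuilder();
    }
}
```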

Another concept you cannot really skip in Spring is AOP (Aspect-Oriented Programming). AOP separates cross-cutting concerns such as logging, transactions, authorization checks, and execution time measurement from core business logic.

The key concepts are:

  • Aspect: the module containing the cross-cutting concern
  • Advice: what runs and when it runs
  • Pointcut: where the advice applies
  • Join Point: a point in execution where advice can be applied

Spring AOP is usually proxy-based. In practice, it is often used for execution-time logging, tracing specific packages, or applying common transactional behavior. Overusing it can hide control flow and make debugging harder.
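An execution-time logging aspect, the example mentioned above, might look like this sketch (it requires spring-aop/AspectJ, and the package in the pointcut is hypothetical):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

// Aspect = this class; Advice = the @Around method; Pointcut = the
// execution(...) expression; Join Point = each matched method call.
@Aspect
@Component
public class TimingAspect {

    @Around("execution(* com.example.service..*(..))") // hypothetical package
    public Object measure(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.nanoTime();
        try {
            return joinPoint.proceed(); // run the actual business method
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(joinPoint.getSignature() + " took " + elapsedMs + " ms");
        }
    }
}
```

Because Spring AOP is proxy-based, this advice only fires on calls that go through the proxy; self-invocation inside the same bean bypasses it, which is a classic follow-up question.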

APIs and networking should be studied together#

When designing RESTful APIs, you need a clear understanding of HTTP method semantics.

The most common comparison is PUT versus PATCH.

  • PUT is generally used to replace an entire resource
  • PATCH is better suited for partial updates

Replacing all user information is a natural fit for PUT, while updating only a nickname is a better fit for PATCH. In real services, PUT is sometimes used for partial updates as well, but semantically PATCH is often more precise.
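The contrast shows up directly in the request bodies. A sketch with java.net.http (the URL and JSON fields are illustrative): PUT carries the full representation, PATCH carries only the changed field.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// PUT replaces the whole resource; PATCH sends only the delta.
public class UserRequests {
    public static HttpRequest replaceUser() {
        return HttpRequest.newBuilder(URI.create("https://api.example.com/users/42"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(
                        "{\"name\":\"Kim\",\"email\":\"kim@example.com\",\"nickname\":\"kimdev\"}"))
                .build();
    }

    public static HttpRequest updateNickname() {
        return HttpRequest.newBuilder(URI.create("https://api.example.com/users/42"))
                .header("Content-Type", "application/json")
                // the builder has no PATCH shortcut, so use the generic method()
                .method("PATCH", HttpRequest.BodyPublishers.ofString(
                        "{\"nickname\":\"kimdev\"}"))
                .build();
    }
}
```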

On the networking side, it is also useful to understand HTTP/2. It was introduced to address inefficiencies in HTTP/1.1.

Its main improvements include:

  • multiplexing, which handles multiple requests over one connection
  • header compression, which reduces repeated header cost
  • server push, which allows the server to send resources proactively (it saw little adoption in practice, and major browsers have since dropped support)
  • binary framing, which is more efficient than text-based framing

In practice, developers do not usually implement HTTP/2 directly. It is typically enabled through Nginx, CDNs, load balancers, or web server configuration. So this topic is mostly about understanding the concept and connecting it to infrastructure.

Data access requires both query awareness and modeling clarity#

One of the most common persistence problems in backend interviews is the N+1 problem. It happens when you load a set of entities once, then trigger additional queries for each related entity.

For example, if you load 100 orders and then lazily fetch member information for each order, you may end up issuing 101 queries. As data grows, that becomes a real performance problem.

Common solutions include:

  • using fetch join
  • using EntityGraph
  • applying batch fetching such as @BatchSize or default_batch_fetch_size
  • querying DTOs directly
  • revisiting the loading strategy itself

The important point is not to switch everything to eager loading. The better approach is to choose a loading strategy based on the actual access pattern.
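The fetch-join fix, for example, looks roughly like this Spring Data JPA fragment (the Order and member names are illustrative, and it needs spring-data-jpa to compile):

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

// One query loads orders together with their members, instead of
// 1 query for the orders plus N lazy queries for each member.
public interface OrderRepository extends JpaRepository<Order, Long> {

    @Query("select o from Order o join fetch o.member")
    List<Order> findAllWithMember();
}
```

The point of mentioning it in an interview is less the JPQL syntax than showing you know which query pattern each solution fixes and what it costs.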

Another frequent topic is how you explain your database structure. There is no single correct answer here. What matters is whether you can describe the domain model of a service clearly and structurally.

For an e-commerce example, you might describe:

  • members
  • products
  • orders
  • order items
  • payments
  • coupons
  • reviews

Then explain the relationships: member to order as 1:N, order to order item as 1:N, order item to product as N:1, and so on. If you can connect that explanation to indexes, normalization versus denormalization, large-table strategies, and transaction boundaries, your answer becomes much stronger.

Operational maturity shows up in monitoring and deployment#

When interviewers ask about operations, naming tools is less important than explaining which signals you watched and how you narrowed down the cause of a problem.

Monitoring tools are often grouped like this:

  • Prometheus + Grafana for metrics collection and dashboards
  • ELK / OpenSearch for logs and search
  • Pinpoint / Datadog / New Relic / SkyWalking for APM and tracing
  • Spring Boot Actuator for exposing application health and metrics

The real point is to combine metrics, logs, and traces to detect incidents quickly and identify root causes.

Deployment strategy matters for the same reason. Once you have multiple servers, the key concern is no longer just "how to deploy," but how to deploy without downtime and with a safe rollback path.

Representative strategies include:

  • rolling deployment
  • blue-green deployment
  • canary deployment

If you have 20 servers, a reasonable explanation is to build and test through CI, publish artifacts or images, deploy to a subset first, verify health checks and production metrics, then expand gradually. If something goes wrong, rollback should be immediate and predictable.

In a Kubernetes environment, you can connect this to rolling updates, readiness probes, and liveness probes. In a VM-based environment, you might explain it through Jenkins, Ansible, or SSH-based automation.

Finally, be able to explain the stack you have actually used#

Questions about which versions of Spring, Spring Boot, or MySQL you have used are not really about memorization. They are closer to checking which ecosystem you have real experience with.

A straightforward answer might include:

  • Spring Framework 5.x and 6.x
  • Spring Boot 2.x and 3.x
  • MySQL 5.7 and 8.0

It is even better if you can mention differences. For example, Spring Boot 3 requires Java 17+ and shifted from javax to jakarta. MySQL 8.0 introduced stronger support for features such as CTEs and window functions.

It also helps to prepare a few books you have learned from, such as Effective Java, Object, Clean Code, Unit Testing, or Designing Data-Intensive Applications, and explain what you applied from them in practice.

Closing#

Preparing for backend interviews is more effective when you understand how core concepts connect to building and operating real services, rather than memorizing isolated answers.

Data structures and traversal form the starting point for problem-solving. Operating systems and JVM concepts connect to reliability. Spring and testing show implementation discipline. Databases and deployment reveal practical engineering judgment.

In the end, most interview topics become much easier to handle if you organize them around four questions:

  1. What is it?
  2. Why is it needed?
  3. When is it used?
  4. What are its tradeoffs?

That structure makes it much easier to answer follow-up questions in a stable and convincing way.

