The Thin Layer Between Using and Knowing

 ・ 9 min

photo by Jeremy Bishop (https://unsplash.com/@jeremybishop) on Unsplash

"I've used Spring." "I've used React." "I've operated services on Kubernetes." One line on a resume is enough. But the depth that one line points to varies wildly from person to person.

Anyone can use a tool. What's hard is looking one layer beneath the abstraction — into the place where the tool stops doing its job for you.

There's a thin layer between "I've used it" and "I know it." What makes the difference is whether you can open that box.

This post is about opening the boxes of the tools we use every day, one layer at a time. We'll look one step below "basic usage" across five areas. It's not a model answer — treat it as a checklist for revisiting your own stack.

Using Spring vs. Knowing Spring

"Using Spring" means learning annotations like @Service, @Repository, and @Autowired to get code that compiles. Knowing Spring starts when you can explain what happens when those annotations disappear.

It all comes back to one term: DI (Dependency Injection). Instead of an object creating its own dependencies, it receives them from outside — that's what loosens coupling and makes testing possible.

There are three injection styles, and they're not equivalent.

| Style | Characteristics | Recommendation |
| --- | --- | --- |
| Field injection | Short, but can't be constructed without a container | Not recommended |
| Setter injection | Useful for expressing optional dependencies | Specific cases |
| Constructor injection | Immutable, surfaces circular references early, testable | Default |

Spring recommends constructor injection for more than one reason.

  1. Catches circular references at startup — Field/setter injection builds the object first and fills it later, so circular refs only surface at runtime
  2. Immutability — Can be declared final, multi-thread safe
  3. Testability — new OrderService(mockRepo) works without a container

The third point is decisive for TDD. Writing tests first presupposes a testable design. When dependencies are visible in the constructor signature, you know exactly where to slot in fakes. If dependencies grow unwieldy, you naturally split the class. SRP follows for free.

@Service
@RequiredArgsConstructor // Lombok generates constructor for final fields
public class OrderService {
    private final PaymentClient paymentClient;
    private final OrderRepository orderRepository;
}
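
The testability point can be demonstrated without a container. A minimal sketch, with PaymentClient and OrderRepository reduced to hypothetical one-method interfaces so the whole thing runs standalone:

```java
// Hypothetical collaborators, standing in for real Spring beans.
interface PaymentClient { boolean pay(long orderId); }
interface OrderRepository { String findName(long orderId); }

class OrderService {
    private final PaymentClient paymentClient;
    private final OrderRepository orderRepository;

    // The constructor is the whole wiring contract: no container needed.
    OrderService(PaymentClient paymentClient, OrderRepository orderRepository) {
        this.paymentClient = paymentClient;
        this.orderRepository = orderRepository;
    }

    String order(long id) {
        return paymentClient.pay(id) ? "paid:" + orderRepository.findName(id) : "failed";
    }

    public static void main(String[] args) {
        // In a test, hand-written fakes slot straight into the constructor.
        OrderService service = new OrderService(id -> true, id -> "order-" + id);
        System.out.println(service.order(42)); // paid:order-42
    }
}
```

Swapping in a failing PaymentClient is one lambda; the dependencies visible in the signature are exactly the ones the test must supply.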

Using Java vs. Knowing Java

Anyone can write a lambda or stream().map(). Opening the box means knowing what happens underneath.

Anonymous Classes and Lambdas — Similar, Yet Different

Runnable a = new Runnable() { public void run() { ... } }; // anonymous class
Runnable b = () -> { ... };                                // lambda

They look alike on the surface, but the internals differ.

  • Anonymous classes generate a separate .class file (Outer$1.class). Captured outer variables become fields of the anonymous class. That's why the effectively-final constraint exists
  • Lambdas don't generate a .class. They use the invokedynamic instruction so LambdaMetafactory builds the instance at runtime. Lower memory cost, more JIT-friendly

This clears up the common misconception that "a lambda is just syntactic sugar for an anonymous class."
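
The capture behavior is visible from plain code. A small sketch, using IntSupplier as the functional interface (the +1 logic is arbitrary):

```java
import java.util.function.IntSupplier;

class CaptureDemo {
    // The captured local becomes a field of the generated class here.
    static IntSupplier viaAnonymousClass(int base) {
        return new IntSupplier() {               // emits a separate CaptureDemo$1.class
            @Override public int getAsInt() { return base + 1; }
        };
    }

    // Here no class file is emitted; invokedynamic plus LambdaMetafactory
    // build the instance at runtime, with `base` passed in as capture state.
    static IntSupplier viaLambda(int base) {
        return () -> base + 1;
    }

    public static void main(String[] args) {
        System.out.println(viaAnonymousClass(10).getAsInt()); // 11
        System.out.println(viaLambda(10).getAsInt());         // 11
        // In both forms, `base` must be effectively final: each closure
        // holds a copy, so reassignment would let the copy go stale.
    }
}
```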

Stream — A Lazy-Evaluation Pipeline

The essence of Stream comes down to three things.

  • Lazy evaluation — filter and map don't execute immediately. The pipeline only starts when a terminal operation like toList is reached
  • Immutability — The source collection isn't touched
  • Composability — Small operations combine into big transformations
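
Laziness is easy to verify with a call counter. A minimal sketch (the squaring pipeline and names are just for illustration):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

// Counts how many times map() actually runs.
class LazyDemo {
    static Stream<Integer> pipeline(List<Integer> source, AtomicInteger mapCalls) {
        return source.stream()
                .map(n -> { mapCalls.incrementAndGet(); return n * n; }) // not executed yet
                .filter(n -> n > 4);                                     // still not executed
    }

    public static void main(String[] args) {
        AtomicInteger mapCalls = new AtomicInteger();
        Stream<Integer> squares = pipeline(List.of(1, 2, 3, 4, 5), mapCalls);
        System.out.println(mapCalls.get());   // 0: nothing has run yet
        System.out.println(squares.toList()); // [9, 16, 25]: the terminal op pulls elements through
        System.out.println(mapCalls.get());   // 5
    }
}
```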

Stream Gatherers (previewed as JEP 461 in Java 22 and finalized as JEP 485 in Java 24) let you define your own intermediate operations. Sliding windows, batched processing, and other patterns are now first-class.

Virtual Threads — Java's New Concurrency

The biggest change in Java 21 LTS is Virtual Threads (JEP 444). You can run tens of thousands of blocking tasks concurrently without consuming OS threads.

try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    urls.forEach(url -> executor.submit(() -> fetch(url)));
} // close() shuts down the executor and waits for submitted tasks

The point is you can now write async code in synchronous style. It signals the end of the CompletableFuture callback-chaining era.

Exceptions — Checked vs. Unchecked

  • Checked (extends Exception, excluding RuntimeException) — Compiler forces handling
  • Unchecked (RuntimeException family) — No compiler enforcement

In practice, almost everyone goes Unchecked plus a domain-exception hierarchy. Checked exceptions don't play well with lambdas or streams and put a burden on callers. Spring's own exception hierarchy (DataAccessException, etc.) is entirely Unchecked.

public abstract class BusinessException extends RuntimeException { }
public class OrderNotFoundException extends BusinessException { }

The standard is to convert them uniformly through @RestControllerAdvice for consistent responses.
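
That conversion step can be sketched without the framework. This restates the hierarchy above with message and status plumbing; ErrorResponse and the status codes are illustrative, not Spring's API:

```java
// A domain hierarchy plus a container-free sketch of "convert uniformly
// at the edge". ErrorResponse is a hypothetical payload type.
abstract class BusinessException extends RuntimeException {
    private final int status;

    protected BusinessException(String message, int status) {
        super(message);
        this.status = status;
    }

    int status() { return status; }
}

class OrderNotFoundException extends BusinessException {
    OrderNotFoundException(long orderId) {
        super("Order not found: " + orderId, 404);
    }
}

record ErrorResponse(int status, String message) {
    // What an @RestControllerAdvice handler would do, minus the framework:
    // one exception type in, one consistent response shape out.
    static ErrorResponse from(BusinessException e) {
        return new ErrorResponse(e.status(), e.getMessage());
    }
}
```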


Using React vs. Knowing React

For React, "write JSX and the screen appears" is the entry point. There are a few more layers beyond it.

What JSX and the Virtual DOM Actually Do

JSX compiles down to React.createElement(...) calls. The object tree those calls produce is the virtual DOM. React compares it against the previous tree (diffing) and applies the minimum necessary changes to the real DOM.

Knowing this answers questions like "why are keys important?" and "why does the component at the same position not unmount?"

The Core of Hooks

| Hook | Role |
| --- | --- |
| useState | Component-local state |
| useEffect | Sync with external systems (subscribe, fetch) |
| useMemo | Cache results of expensive computations |
| useCallback | Stable function references |
| useRef | Mutable value unrelated to rendering |
| useContext | Subscribe to the nearest Provider's value |

React 19 added hooks like useActionState, useOptimistic, and use, and the React Compiler automates memoization — reducing the need to hand-roll useMemo/useCallback.

Component Composition — The Skeleton of Data Flow

React's data flow is one-way: props flow down, callback props flow up.

<Child title={title} onSave={(data) => handleSave(data)} />

The problem is when state needs to reach a grandparent component. Walking it up the tree one prop at a time (prop drilling) gets ugly fast as depth grows.

You have three options.

  1. Context — Access the same value from any depth. Bad for frequently-changing state (re-render cost)
  2. State management libraries — Zustand, Jotai, Redux Toolkit. Subscribe to a global store from anywhere
  3. Lifting state up — Move it to a shared parent. The most idiomatic React approach, but the parent gets bloated past a point

Styling and Isolation

CSS's global nature clashes with the component model. So tools like CSS Modules (Button.module.css), Tailwind (class composition), and CSS-in-JS take care of isolation.

These days, Tailwind plus shadcn/ui is the de facto standard, and in Next.js App Router, runtime CSS-in-JS has lost ground due to compatibility issues with Server Components.


Using Caches and Distribution vs. Knowing Them

The data layer is where depth shows up fastest.

Cache Stampede — The Trap Right After Expiration

If "I added @Cacheable" is using it, then explaining the phenomenon where requests pour into the DB simultaneously the moment the cache expires is knowing it. This is called a Cache Stampede (or thundering herd).

How to deal with it

  • @Cacheable(sync = true) — Only one of the concurrent calls for the same key hits the source
  • Distributed locks — Use Redisson's RLock to grant regeneration rights to one request
  • Stale-While-Revalidate — Return the expired value and refresh in the background
  • TTL jitter — Add random offsets so keys don't expire simultaneously
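
The single-flight idea behind sync caching can be sketched without Spring or Redis: on a miss, one caller runs the loader while concurrent callers wait on the same result. The class and method names here are made up for illustration:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;

// Minimal single-flight cache: per key, at most one in-flight regeneration.
class SingleFlightCache<K, V> {
    private final ConcurrentHashMap<K, V> store = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<K, CompletableFuture<V>> inFlight = new ConcurrentHashMap<>();

    V get(K key, Supplier<V> loader) {
        V cached = store.get(key);
        if (cached != null) return cached;
        CompletableFuture<V> flight = inFlight.computeIfAbsent(key, k ->
                CompletableFuture.supplyAsync(() -> {
                    try {
                        V value = loader.get(); // the one expensive regeneration
                        store.put(k, value);
                        return value;
                    } finally {
                        inFlight.remove(k);     // later misses start a fresh flight
                    }
                }));
        return flight.join();                   // everyone else waits here
    }

    // TTL jitter: spread expirations so hot keys don't all die at once.
    static long jitteredTtlMillis(long baseTtlMillis, long maxJitterMillis) {
        return baseTtlMillis + ThreadLocalRandom.current().nextLong(maxJitterMillis);
    }
}
```

Production code would add expiry and error handling; the point is only that the "one loader run per key" guarantee is a small amount of concurrency logic, not magic.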

Redis Persistence and Beyond

When Redis goes down, it's not just the cache. Sessions, distributed locks, and queues wobble too.

  • RDB — Point-in-time snapshot. Fast recovery, potential data loss
  • AOF — Log of every write command. Less loss, larger file

The practical standard is the hybrid mode with both enabled (aof-use-rdb-preamble yes). Add Sentinel (HA) or Cluster (sharding) on top, and open a fallback path on the application side with a Circuit Breaker (Resilience4j).

After Redis's license change in 2024, much of the OSS world moved to Valkey (the Linux Foundation fork), and AWS ElastiCache and MemoryDB now offer Valkey engines as well.

MSA — What You Gain and What You Lose

What you gain when you slice services small

  • Independent deployment, free choice of stack, fault isolation, team autonomy

What you lose

  • Network latency/traffic, distributed transactions, integration test complexity, operational burden

Distributed transactions get solved with the Saga or Outbox patterns. Integration testing gets handled with Consumer-Driven Contract Testing (Pact) or Testcontainers.

Wide-Column Storage — Key Design Is Everything

In wide-column stores like HBase, Row Key design is paramount. Keys are stored in lexicographic order and Regions split along that order. Putting the timestamp first crowds the latest data into a single Region (hotspot).

Solutions

  • Salt — Prepend part of a hash to the key
  • Reverse timestamp — Long.MAX_VALUE - ts
  • Composite key — userId#reverseTimestamp

DynamoDB and Bigtable face essentially the same constraint. Partition key and sort key design is system performance.
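
All three techniques are a few lines each. A sketch with made-up helper names, showing why the reverse timestamp puts the newest rows first in lexicographic order:

```java
// Illustrative row-key builders for a wide-column store.
class RowKeys {
    // Reverse timestamp: newer events produce lexicographically smaller keys.
    // Zero-padding to 19 digits keeps string order equal to numeric order.
    static String reverseTs(long epochMillis) {
        return String.format("%019d", Long.MAX_VALUE - epochMillis);
    }

    // Composite key: one user's rows cluster together, newest first.
    static String userKey(String userId, long epochMillis) {
        return userId + "#" + reverseTs(epochMillis);
    }

    // Salt: a small hash prefix spreads sequential writes across regions.
    static String salted(String key, int buckets) {
        int bucket = Math.floorMod(key.hashCode(), buckets);
        return String.format("%02d", bucket) + "|" + key;
    }
}
```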

Distributed Coordination — Master Election in ZooKeeper

When one of many nodes must become master, ZooKeeper solves it with ephemeral sequential nodes.

  1. Each node creates a sequential node in ZK
  2. The smallest number wins
  3. When the master's session drops, its ephemeral node is auto-deleted
  4. The next candidate, watching the prior node, gets notified

Sessions are maintained by periodic pings; if no ping arrives within the timeout, ZK expires the session.
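
The smallest-number-wins rule is simple enough to model in plain Java. This sketches only steps 1, 2, and 4 over a (nodeId, sequence number) map as ZooKeeper would report it; sessions and watches are abstracted away, and the names are illustrative:

```java
import java.util.Map;
import java.util.Optional;

class Election {
    // Step 2: the candidate holding the smallest sequence number is master.
    static String master(Map<String, Integer> nodes) {
        return nodes.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow();
    }

    // Step 4: each candidate watches the node just below its own number,
    // so a failure notifies exactly one successor, not the whole herd.
    static Optional<String> watchTarget(Map<String, Integer> nodes, String self) {
        int mySeq = nodes.get(self);
        return nodes.entrySet().stream()
                .filter(e -> e.getValue() < mySeq)
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey);
    }
}
```

Watching only the immediately preceding node is what prevents a herd effect: if everyone watched the master directly, every failure would wake every candidate at once.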

Kafka cut its ZK dependency with KRaft mode, and for new projects, etcd (also used by Kubernetes) is the more common choice. The pattern is the same.


Running on Containers vs. Operating on Containers

A single docker run, one kubectl apply — that's just the start.

Resource Management — requests and limits

resources:
  requests:
    memory: '256Mi'
    cpu: '250m'
  limits:
    memory: '512Mi'
    cpu: '500m'

  • requests — Scheduling baseline. The node guarantees at least this much
  • limits — Usage ceiling. CPU gets throttled; memory triggers OOMKilled

In JVM containers, memory awareness matters. Since Java 10+, the JVM reads cgroups for sizing the heap, but it's safer to pin it explicitly with something like -XX:MaxRAMPercentage=75.0. If you use the entire limit as heap, Metaspace and thread stacks run out and you OOM.

Autoscaling standards are HPA (horizontal) and VPA (vertical).

From Release to Deployment — Step by Step

Here's the typical flow from a developer's commit to production.

  1. Commit & push — Git
  2. CI — Build, test, static analysis (GitHub Actions, GitLab CI)
  3. Image build — Dockerfile with multi-stage builds to minimize size
  4. Registry push — Docker Hub, ECR, GHCR. Meaningful tags (v1.2.3)
  5. CD — GitOps tools like ArgoCD/Flux detect manifest changes, or use Helm/Kustomize
  6. Rolling update — When the Deployment's image changes, a new ReplicaSet spins up and rolls in
  7. Health check and traffic shift — Once the readiness probe passes, the Service routes to the new Pod
  8. Observability and rollback — Prometheus/Grafana plus OpenTelemetry. On failure, kubectl rollout undo

Rolling is the default deployment strategy; go Blue-Green when zero-downtime matters most, Canary when you need gradual validation.

Incident Response — What to Look At First

Order matters.

  1. Impact scope — Which users/services are affected
  2. Recent changes — Last deploy, config/secret changes, dependency updates
  3. Metrics — CPU, memory, error rate, latency (Grafana)
  4. Logs — ERROR, Exception, Caused by, 5xx, timeout, connection refused
  5. Distributed tracing — Follow inter-service flow with OpenTelemetry
  6. Rollback decision — Cutting impact often beats finding root cause first

"I can't roll back because I haven't found the cause" loses to "let's roll back first and analyze later." That's operational basics.


The Habit of Opening One More Box

We covered five areas, and it all comes back to the same point.

Real understanding begins where the framework's magic stops.

How does an object get built when @Autowired isn't there? How would the screen update without useState? How would you prevent a cache-miss flood without @Cacheable? Behind that one line of kubectl apply, what does the scheduler look at to decide where the Pod goes?

Look one layer deeper at each and your view of the tools changes. Even writing the same code, you can explain why you chose it and what happens when it breaks.

Today, pick one annotation, one hook, or one line in a manifest from your own code and ask yourself: "Could I implement the same behavior by hand if I removed this?" The point where the answer breaks down is the next box to open.


As we risk ourselves, we grow. Each new experience is a risk.

— Fran Watson

