Will AR Glasses Be the Next Smartphone?


Photo by Tomasz Filipek (https://unsplash.com/@tomasz_filipek) on Unsplash

To succeed in the future, you have to spot trends that are undervalued today but will reshape lives tomorrow.

Some people talk about crypto. Some talk about quantum computers. Some talk about AI. I think those technologies matter too. But they're already crowded with money and attention. So I want to ask a different question.

Where will people see information, and how will they input it, going forward?

Right now we look at monitors. We look at smartphones. We use keyboards and mice, tap touchscreens, and occasionally give voice commands. But will this last forever?

Is sitting in one place, eyes darting across a screen, fingers tapping a keyboard, hand on a mouse, really the final form of how humans meet computers?

I don't think so.

If you zoom out on the history of computing, humans have steadily moved toward higher information accessibility. And information has been getting closer to the human body and senses.

Mainframe computers lived in rooms.
People had to go to the computer.

Then computers came up onto our desks.
We sat in front of monitors and viewed information.

Then smartphones came into our palms.
We were connected anytime, anywhere.

What's next?

I think one of the intermediate stages is AR glasses.
And beyond that lie neural interfaces: brain-computer interface (BCI) technologies.

Monitors output information out there.
Smartphones output information in your hand.
AR glasses output information on top of reality.
BCIs try to exchange information directly with the nervous system.

The essence of this trend isn't simply "devices get smaller."

The core is this.

The distance between humans and information is shrinking.


What Is the Human Information I/O Interface?

In technical terms, it's HCI — human-computer interaction. But here's a simpler way to put it.

A translation layer that connects human thoughts, intentions, senses, and actions to digital systems.

Computers don't think like humans.
And humans don't issue commands in 0s and 1s.

So there's always a translation layer between them.

Keyboards translate finger movement into characters.
Mice translate hand motion into pointer motion.
Microphones translate voice into digital signals.
Cameras translate the world into pixel data.
Monitors translate computer output into light.
Speakers translate data into sound.
Vibration motors translate data into touch.

So an interface isn't just a screen or a button.

It's a mutual translation system between human and machine.

This system is always two-way.

The first direction is output.
The computer shows, plays, or makes the human feel something.

The second is input.
The human conveys intent, commands, state, and context to the computer.

So every interface has this circular structure.

Human intent
→ Input device
→ Computer interpretation
→ Information processing
→ Output device
→ Human senses
→ Human understanding
→ A new intent

We feel like we touch the computer directly, but there's always a translation layer in between.

A good interface makes this translation feel natural.
A bad interface keeps demanding the human do the translating.


What Makes an Interface Good?

When people see a new device, they often ask:

"Would anyone actually use this?"
"Is it easier than what we have now?"
"Do we even need this?"

Those are fair questions. But there's a more fundamental one.

How much does this reduce the cognitive cost of getting the information you want?

A good interface isn't just pretty.
A good interface bothers your brain less.

In other words, it reduces cognitive friction.

Take wayfinding as an example.

With a paper map, I have to find my position, orient myself, and interpret the route.
A smartphone map figures out my location and shows the path.
AR navigation can overlay arrows on the actual road I'm looking at.

The amount of interpretation I have to do keeps shrinking.

The point isn't "I stop thinking entirely."

A good interface lets me think less about the trivial and more about what matters.

Calculation, search, memory, organization, comparison, repetitive input, formatting — machines can handle these well. It's good for that low-level burden to fade.

Instead, people should focus on higher-level thinking.

What is my goal?
Does this choice align with my values?
What's the risk?
What are the alternatives?
Is this information trustworthy?
What context have I missed?

This is the direction good interfaces should head.

The question isn't whether machines remove thinking — it's which thinking they remove and which thinking they let us do more of.


Where Are Interfaces Evolving?

The human information I/O interface is evolving in several directions.

1. Information Gets Closer to the Body

Computers used to be far away.
Machines that filled an entire room.

Then they came to the desk.
Then into the hand.
Then onto the wrist and the ear.
Now they're approaching the eyes.
In the further future, they may enter the nervous system.

This isn't a story about "computers getting smaller."

It's a story about information getting closer to the human sense organs.

Monitors sit on desks.
Smartphones sit in hands.
Watches sit on wrists.
Earbuds sit in ears.
AR glasses sit in the field of view.
BCIs connect to the nervous system.

The physical distance between information and humans keeps shrinking.


2. Input Moves from Explicit to Implicit

In the past, humans had to issue explicit commands.

You had to press a key.
You had to click a mouse.
You had to tap.
You had to type a search term.

That's explicit input.

But future input becomes increasingly implicit.

Where you're looking,
what you're trying to do,
what expression you're making,
what your heart rate is,
how focused you are,
what context you're in —
machines start to read these.

So the interface shifts.

From a tool waiting for commands to an assistant reading context.

This is where AI becomes critical.

AI isn't just an executor of commands — it becomes something that infers your situation and intent. The direction is for machines to understand more without you having to spell everything out.
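
As a thought experiment, implicit input might look like the sketch below. The signal names, thresholds, and rules are all assumptions for illustration; a real system would rely on learned models rather than hand-written rules.

```typescript
interface ContextSignals {
  gazeTarget: string;   // what the user is looking at
  heartRateBpm: number; // from a wearable
  nextEvent: string;    // from the calendar
  location: string;
}

function inferAssist(s: ContextSignals): string {
  // No key was pressed, no command spoken: the system reads context instead.
  if (s.gazeTarget === "meeting-room door" && s.nextEvent.includes("meeting")) {
    return "show participants and agenda";
  }
  if (s.heartRateBpm > 150 && s.location === "gym") {
    return "show workout stats";
  }
  return "stay quiet"; // default when no intent can be inferred
}
```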


3. From Single Sense to Multi-Sense

Early computers were text-centric.
Then graphics arrived.
Then sound, touch, and vibration.
Now cameras, microphones, location sensors, and biometric sensors are all part of the mix.

Humans are inherently multi-sensory beings.

We see with our eyes,
hear with our ears,
touch with our hands,
and understand space through our sense of orientation.

So future interfaces are likely to integrate multiple senses, not just rely on a single screen.

Sight, sound, touch, location, biometric state, surrounding context — all become part of the interface.

That's why AR glasses matter.

AR glasses aren't just an output device.
They see reality through cameras,
hear sound through microphones,
track gaze and head direction,
understand space,
and overlay information on top of it.

So AR glasses are input device, output device, and reality-perception device, all at once.
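
One way to make that concrete is to sketch the device as a type. This hypothetical TypeScript interface (not any real SDK) makes the three roles explicit:

```typescript
interface ARGlasses {
  // Input: sensing the user.
  readGaze(): { x: number; y: number };
  readHeadPose(): { yaw: number; pitch: number; roll: number };

  // Perception: sensing reality.
  captureFrame(): Uint8Array;       // camera pixels
  captureAudio(): Float32Array;     // microphone samples
  mapSpace(): { surfaces: number }; // spatial understanding

  // Output: layering information on top of reality.
  renderOverlay(text: string, anchor: { x: number; y: number }): void;
}
```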


4. From Tool to Environment, From Environment to Self

Early computers were tools.

Things you turned on when you needed them.

Smartphones became companions.

Always carried, always connected.

AR glasses can become an environmental layer.

Information sits on top of the reality I see, and reality itself becomes the interface.

Going further, BCIs may operate as an extension of the self.

Because the lines between thought and input, memory and search, judgment and assistance, and sensation and output may blur.

If you simplify the trend, it looks like this.

Computer as tool
→ Computer attached to the body
→ Computer overlaid on reality
→ Computer connected to the nervous system

The important question here is this.

How far will humans accept digital information into their bodies, and even into their minds?


Why AR Glasses Matter

Skeptics of AR glasses say things like:

"Would anyone wear glasses every day?"
"Aren't camera-equipped glasses uncomfortable?"
"The battery's short and they're heavy."
"Isn't a smartphone enough?"

These are valid points.
Today's AR glasses still fall short in many ways.

But these objections are about the product form — not necessarily about the direction.

The core question isn't "Will people wear today's glasses every day?"

A more important question is this.

How deeply will humans integrate digital information into their perception of reality?

AR glasses matter because they can layer information on top of reality.

You see a door, and reservation info appears.
You see a product, and prices and reviews show up.
You see a person, and their name and your last conversation come back to you.
You see a foreign sign, and it's translated.
You see a machine, and repair instructions are overlaid.
In a meeting, the key points of someone's remarks are summarized beside them.

This isn't just a display.

It's a device that reinterprets the meaning of reality in real time.

On PCs, files and windows were the interface.
On smartphones, apps and screens were the interface.
In AR, the objects, people, and spaces of reality become the interface.
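
As an illustration, you could model that shift as recognized entities carrying the available actions, rather than apps carrying them. The entity kinds and actions below are assumptions, not a real API:

```typescript
type Entity =
  | { kind: "product"; name: string }
  | { kind: "sign"; text: string; language: string }
  | { kind: "person"; name: string };

// The recognized thing itself determines what the interface offers.
function affordances(entity: Entity): string[] {
  switch (entity.kind) {
    case "product":
      return [`show price of ${entity.name}`, "show reviews"];
    case "sign":
      return [`translate "${entity.text}" from ${entity.language}`];
    case "person":
      return [`recall last conversation with ${entity.name}`];
  }
}
```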

So AR glasses don't just need display tech.

They need cameras, lenses, displays, sensors, batteries, AI, spatial recognition, privacy, and social acceptance — all of it.

That's why the technical bar is so high.

And AR glasses don't have to take today's form.

They could be lighter glasses,
contact-lens-style displays,
combinations of earbuds and projection,
car HUDs,
spatial displays,
or, further out, neural interfaces.

So we should reframe the question.

The wrong question is this.

Will people wear AR glasses every day?

A better question is this.

How deeply will digital information enter human perception of reality?


What Changes When AI and AR Combine?

AR glasses alone aren't enough.

If they just keep flashing information in front of you, they become noise.
With too many notifications, too much explanation, too cluttered a view, human attention burns out faster.

So the heart of the future interface isn't "show more information."

The important thing is this.

How precisely, with how little disruption, and through which sensory channel can it deliver what you actually need to know right now?

This is where AI is needed.

Without AI, AR is little more than a notification panel hovering in front of your eyes.
With AI, AR becomes a cognitive aid that understands context.

Before a meeting, it summarizes the participants and the issues.
During the meeting, it captures the key points of what's being said.
It surfaces context you missed.
It translates a foreign language in real time.
You look at a machine you're working on, and it tells you the next step.
When you need to focus, it hides notifications.

A good assistant doesn't talk constantly.
They speak only when it matters.

A good AR will be the same.

Future output isn't just information display.
Future output is context-aware cognitive assistance.
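
In sketch form, context-aware assistance is a mapping from the current moment to a small assist action. The moment labels and actions here are illustrative assumptions:

```typescript
type Moment = "before-meeting" | "in-meeting" | "focused-work" | "idle";

function assist(moment: Moment): string {
  switch (moment) {
    case "before-meeting": return "summarize participants and open issues";
    case "in-meeting":     return "capture key points of what is said";
    case "focused-work":   return "hide notifications";
    case "idle":           return "stay quiet"; // a good assistant mostly says nothing
  }
}
```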


The Most Important Resource Isn't Screen Space, It's Attention

Modern people aren't suffering from a lack of information.
They're suffering from too much of it.

Too many notifications,
too many apps,
too many choices,
too much content,
too many recommendations.

So the most important resource in the future interface isn't screen real estate.

It's human attention.

A good future interface isn't a device that shows more.
It's a device that hides what's unnecessary and surfaces only what's needed, only when it's needed.

From this angle, for AR glasses to succeed, they can't always be showing something.

Most of the time, they should be quiet.
Only at the truly necessary moment should they show a small amount of information.
And that information has to match your goal and context exactly.
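
As a minimal sketch, that gating logic might look like the following, where the relevance score, thresholds, and focus flag are all assumptions for illustration:

```typescript
interface Candidate { text: string; relevance: number } // relevance in [0, 1]
interface UserState { focused: boolean }

function shouldSurface(c: Candidate, u: UserState): boolean {
  // Interrupt focus only for near-certain relevance; otherwise still
  // stay mostly quiet and show only what clears a high bar.
  return u.focused ? c.relevance > 0.9 : c.relevance > 0.6;
}
```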

The real competition for future interfaces probably won't be about screen size — it'll be about attention management.


Will a Good Interface Make People Dumber?

Some people voice worries like these:

"If machines get too convenient, won't people stop thinking?"
"If AI does everything, won't human capability weaken?"
"Won't a good interface actually make people passive?"

Some of that is true.

If you blindly trust whatever a machine recommends,
take whatever an algorithm shows you at face value,
never review what an AI says,
follow only the route navigation gives you,
and only read summaries instead of originals,
you'll gradually surrender judgment.

That's not "good less-thinking."

That's abdication of judgment.

But the opposite case exists too.

The calculator does the math.
The navigation handles the routing.
The calendar handles the remembering.
AI handles the document organizing.
Automation handles the repetitive work.

Then people can ask more important questions.

Why am I doing this?
What's the goal of this decision?
Does this choice match my values?
What's the risk?
What's the alternative?
Is this information trustworthy?
What human context did the machine miss?

This isn't "not thinking."

It's raising the level of thinking.

If machines reduce the low-level burden of repetitive calculation, search, memory, organization, and comparison, then people can focus on higher-level judgment.

So a good interface's job isn't to make humans dumb.

A good interface should make people think less about what's trivial and more about what matters.


How Should User Attitudes Change Now and in the Future?

Today's computers and smartphones are still mostly command-driven tools.

I open an app,
search,
type,
copy,
organize,
and choose.

Today's user has to behave like this:

Use it actively, but review the results.

Using computers, smartphones, and AI a lot is fine.
You just can't trust the output as-is.

You should review what AI writes.
You should be skeptical of recommendation algorithms.
You should compare search results.
You should know that autocomplete can flatten your voice.
You should know that notifications and feeds can steal your attention.

A good user today is both an operator and an editor of their tools.

In the near future, machines will increasingly suggest things first.

"Here's how you should reply to this email."
"You need to leave now."
"Here's the conversation you had with this person last time."
"This product is cheaper online."
"You should ask this question in the meeting."

The user's attitude has to change.

Receive the suggestion, but approve it on your own terms.

The more machines suggest first, the more important your questioning ability and your standards become.

Does this recommendation match my goal?
Who benefits from this recommendation?
Are advertising or platform interests baked in?
What are the alternatives?
What part should I judge for myself?

In the further future, machines may operate like an environment.

As AR, wearables, spatial computing, and neural interfaces mature, the computer becomes part of the environment you live in, not a tool inside a screen.

At that point, users should ask:

How is my surrounding environment nudging me?

Who chose the information I'm seeing?
What's been emphasized, and what's been hidden?
Does this interface work for my benefit, or for the platform's?
Can I turn off this filter?
Does my judgment still belong to me?

Future users won't just be people who use machines well.
They'll be people who can take the easy choices machines offer and reconstruct them on their own terms.


The Attitude of Someone Who Uses Future Devices Well

In the future, the strong won't be those who keep machines at a distance.
But neither will those who hand everything over to machines.

What matters is active coexistence.

Use machines aggressively, but keep your judgment muscle.

People who use future devices well share these traits:

First, delegate repetition, but set the direction yourself.
You can hand organization, comparison, calculation, scheduling, drafting, and repetitive work to machines. But where you want to go and what you consider important — that's on you.

Second, look at reasoning, not just results.
If AI recommends A, ask "why A?" Why not B or C, what info is missing, what conditions would change the conclusion — verify all of that.

Third, add friction to irreversible decisions.
Small decisions can be quick. But money, health, relationships, career, law, ethics — pause once more.

Fourth, use machines as perspective generators, not answer machines.
Don't ask AI for one correct answer. Make it produce arguments for, arguments against, risks, alternatives, and other people's perspectives.

Fifth, use external memory, but keep the core structure in your head.
Details can go to notes, calendar, or AI. But the big picture and structure should live in your mind.

Sixth, distinguish convenience from being manipulated.
Not all convenience is on your side. Recommendation feeds are convenient, but they can steal your time. Autocomplete is convenient, but it can average out your voice. AR guidance is convenient, but it can swap your view of reality for someone else's filter.

Seventh, see technology as an amplifier of capability.
Tech amplifies the user's tendencies. For someone with clear purpose, AI becomes a productivity tool. For someone with vague purpose, AI can become a bigger distraction.

In the end, what matters is this:

Use machines well, but keep your judgment muscle.


The Line Between Human and Computer Keeps Blurring

Computers outsourced calculation.
The internet connected information.
Smartphones gave us computer and internet access anywhere, anytime.
AR is trying to fuse reality with information.
BCI is trying to fuse thought with information.

At the end of this trend lies the question:

How far will humans accept the digital? Into the body? Into the mind?

The moment thoughts can be read, new possibilities open up.

Memory assistance,
thought correction,
sensory prosthetics,
focus aids,
intent-based commands,
physical-capability augmentation — all become possible.

But the risks also grow.

Targeted advertising,
emotional manipulation,
attention control,
analysis of thinking patterns,
invasions of mental privacy — also become possible.

As technology gets closer to the body and mind, the interface is no longer just a tool.

It becomes part of human cognition.

So the heart of future interfaces isn't only technical performance.

Bandwidth, latency, cognitive load, context understanding, social acceptance, privacy, and control all matter.

For future devices, what matters isn't how much information they show — it's how appropriately they hide and reveal.


Conclusion: AR Glasses Aren't the End — They're an Intermediate Stage

It's not yet certain whether AR glasses will be the next smartphone.

Whether they'll get light enough for daily wear,
whether the battery problem will be solved,
whether social discomfort with cameras will fade,
whether prices will drop enough,
whether truly compelling use cases will emerge — we'll have to see.

But the questions AR glasses raise are critical.

Where will information be output, going forward?
How will humans input, going forward?
How deeply will computers come to understand human context?

I believe we're moving from an era where humans operate machines to an era where machines assist human context and cognition.

Further out, we may even reach an era where machines perform some tasks without humans, moving like software agents without physical form.

In that journey, AR glasses can be more than just another device — they can be an intermediate stage in the expansion of human cognition.

What matters isn't the glasses themselves.

What matters is how deeply humans accept digital information into their perception of reality.

And the good user of the future isn't someone who refuses machines.
Nor is it someone who hands everything over to machines.

The good user of the future is someone who takes the easy choices machines offer and reconstructs them on their own terms.

Finally, this single line sums it up:

Don't let machines take your thinking. Let machines take your chores.

You can hand off the calculation.
You can hand off the search.
You can hand off the organizing.
You can hand off the drafts.
You can hand off the repetition.

But purpose, standards, values, responsibility, doubt, taste, and final judgment — those should stay with you.

Using technology well isn't about being dragged along by the machine.

Using technology well is about reaching a state where the machine helps you do more important thinking.


Never regret. If it's good, it's wonderful. If it's bad, it's experience.

— Victoria Holt

