There’s a precise moment, in front of every AI interface, when enthusiasm pauses: the cursor blinks, the screen is blank, and everything depends on the perfect prompt we manage—or fail—to write. It’s the symbol of an asymmetric relationship between humans and artificial intelligence: the AI waits, and we must know what to ask.
Adrian Levy’s article, published in AI Advances, shows how Google’s Notebook LM radically changes this dynamic and represents a new model for the User Experience of artificial intelligence. Levy describes Notebook LM as an AI-native tool, designed not to imitate conversation but to rethink how we interact with knowledge itself. It no longer puts us before an oracle to interrogate but within an environment built around our own context—our documents, notes, and sources. It’s the difference between asking and collaborating; between a generic chat and a cognitive assistant that grows inside our workflow.
The Anxiety of the Empty Chat
Every designer knows that feeling: an interface that’s too empty doesn’t free you—it freezes you. Emptiness demands initiative, confidence, competence. In traditional AI chats, this pressure is clear: if you don’t know how to ask, the AI doesn’t know how to answer.
Notebook LM removes that barrier. Instead of a blank page, it offers an informational environment anchored in the user’s sources. The system doesn’t start from a prompt but from what the user already knows and owns. It’s the AI that adapts to the material, not the user to the machine. This shift—from a prompt-centric to a context-centric experience—is the first true paradigm leap in AI interaction design.
The Source-Grounded Approach
At the heart of Levy’s model is the idea of a source-grounded AI: every answer stems from a set of documents and materials provided by the user. It’s no longer a language model drawing from universal, abstract knowledge, but a personal assistant rooted in the user’s real context.
The result is twofold:
Cognitive relevance: responses are focused and coherent with the user’s language and materials.
Trust: every piece of information is traceable, with clickable citations linking back to the original source.
Levy shows how this approach shifts AI’s role from generative engine to cognitive collaborator, capable of reasoning within our own data and processes.
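The mechanics Levy describes can be sketched in miniature. The snippet below is a toy, hypothetical illustration of the source-grounded pattern (NotebookLM's internals are not public): retrieve passages from the user's own documents, answer only from what was retrieved, and attach a citation back to each source. The naive keyword-overlap retrieval stands in for whatever ranking a real system would use.

```python
# Toy sketch of a source-grounded answer flow. All names here are
# illustrative assumptions, not a real product API. The key property:
# every answer carries citations back to the user's own sources.

from dataclasses import dataclass

@dataclass
class Passage:
    source: str  # the document this passage came from
    text: str

def retrieve(passages, question, k=2):
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(p.text.lower().split())), p) for p in passages]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:k] if score > 0]

def grounded_answer(passages, question):
    """Answer only from retrieved passages, citing each source used."""
    hits = retrieve(passages, question)
    if not hits:
        return "No grounded answer: the provided sources don't cover this."
    body = " ".join(p.text for p in hits)
    cites = ", ".join(sorted({p.source for p in hits}))
    return f"{body} [sources: {cites}]"

docs = [
    Passage("notes.md", "Audio Overview turns sources into a podcast-style summary."),
    Passage("spec.pdf", "The three-panel layout keeps sources, workspace and chat visible."),
]
print(grounded_answer(docs, "What does the three-panel layout do?"))
```

The refusal branch matters as much as the happy path: a source-grounded system that finds nothing relevant should say so rather than fall back on general model knowledge.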
An AI-Native Interface
Another revolutionary aspect Levy highlights is the interface: a three-panel structure that integrates sources, workspace, and AI responses. This design keeps context visible, reducing fragmentation and supporting cognitive flow. The AI’s answers not only cite sources but connect to them interactively, transforming prompt-and-response into a continuous dialogue with knowledge.
The experience becomes even more accessible through the Audio Overview feature, which generates interactive audio summaries of one’s materials—almost like personalized mini-podcasts. This is true multimodality: text, audio, and synthesis coexist in one coherent experience, adapting to the user’s context and preferences.
For Levy, this shows that the future of AI UX lies not in making chatbots more talkative but in creating cognitive environments that naturally augment human capability.
Intelligent Skeuomorphism
To make a new technology approachable, Notebook LM adopts a form of functional skeuomorphism: the notebook, the podcast, and the conversation become familiar metaphors that lower cognitive load. It’s not nostalgic aesthetics—it’s inclusive design, using what we know to help us interact with the unknown. A bridge between human language and computational ability.

From Contextual to Empathic Intelligence
But understanding context isn’t enough. As Debmalya Biswas notes in “Adding Empathy to Agentic AI,” intelligent agents are becoming increasingly capable of acting—booking, planning, organizing—but remain emotionally blind.
Biswas proposes introducing an Empathy Quotient (EQ) to measure and train the ability to adapt tone, language, and priorities based on the user’s emotional state and personality. Through fine-tuning and behavioral observation, an agent can learn to respond in a more human, sensitive, and personalized way.
If Notebook LM builds cognitive relevance—understanding what we’re doing—EQ adds relational resonance, understanding how we feel while doing it. The first creates trust through transparency of sources; the second strengthens it through linguistic sensitivity.
For designers, the lesson is clear:
Empathy isn’t an optional ethical value—it’s a design function.
An agent that recognizes frustration, haste, or excitement can adapt its mode of response, choose when to intervene, or suggest pauses and summaries. Empathy thus becomes the new frontier of personalization.
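As a thought experiment, the adaptation loop above can be sketched in a few lines. Everything here is a hypothetical illustration, not Biswas's actual Empathy Quotient method: the cue lists, state labels, and response modes are invented for the example. The point is only the shape of the pattern: detect a coarse emotional signal, then let it steer tone and length.

```python
# Hypothetical sketch of empathic adaptation: choose a response mode
# from a crude signal of the user's emotional state. Cue words, labels
# and rules are illustrative assumptions, not a real EQ implementation.

FRUSTRATION_CUES = {"again", "broken", "stuck", "ugh", "still"}
HASTE_CUES = {"quick", "asap", "now", "hurry"}

def detect_state(message: str) -> str:
    """Return a coarse emotional label from keyword cues."""
    words = set(message.lower().split())
    if words & FRUSTRATION_CUES:
        return "frustrated"
    if words & HASTE_CUES:
        return "hurried"
    return "neutral"

def adapt_response(message: str, full_answer: str, summary: str) -> str:
    """Pick tone and length based on the detected state."""
    state = detect_state(message)
    if state == "frustrated":
        return "Let's take this step by step. " + summary
    if state == "hurried":
        return summary  # shortest useful form
    return full_answer

print(adapt_response(
    "This export is broken again",
    full_answer="Exports go through the share menu; pick PDF, then confirm.",
    summary="Share menu, then PDF.",
))
```

A real system would infer state from far richer signals (phrasing over time, behavior, context), but even this caricature shows why empathy is a design function: the same content can be delivered in ways that either match or ignore the user's moment.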
Three Patterns for the New AI UX
From the dialogue between Levy and Biswas, three patterns emerge that define the next generation of AI-native experiences:
Content-First Workflow – Interaction begins with the user’s material, not a blank prompt. The AI amplifies existing knowledge.
Adaptive Interfaces for Intent and Emotion – The environment shifts based on what the user is doing and how they feel, blending cognition and empathy.
Multimodal Synthesis – Information flows coherently across text, audio, and visuals, following the most natural mode for each context.
My Reflections as a Designer
As a designer, I see in this transition something deeper than a technological upgrade. Notebook LM and the idea of empathic AI force us to rethink the role of design in the age of agentic systems.
Designing an AI-native interface no longer means designing around technology but with it—as if intelligence itself were a design material, like color, typography, or time. The focus shifts from what AI can do to how it can accompany the user’s way of thinking.
I believe the real goal isn’t to build ever-smarter AIs but to create experiences that amplify focus, curiosity, and understanding. A well-designed system shouldn’t appear intelligent—it should make people feel more intelligent when using it.
That’s why the future of AI-native UX won’t be defined by models but by how well we weave transparency, empathy, and context into experiences that feel natural, useful, and—above all—human.
🔹 In Closing
The true evolution won’t come from chatbots that can talk about everything, but from making AI an invisible yet active part of our work, learning, and creative environments. Not an oracle answering in the void, but a cognitive and empathic companion that thinks with us—inside our materials and within our moods.
As Adrian Levy writes, the future of AI interfaces lies in creating creative workspaces that naturally augment human capabilities. I would add: it will be design, more than algorithms, that determines whether this intelligence will truly be human-centered.