If there’s one thing I’ve learned from designing interfaces in recent years, it’s this: without a deep understanding of the real capabilities of generative AI and the interaction patterns that make those capabilities usable, we risk two opposite errors. On one side, imagining exciting but technically fragile experiences. On the other, adding “AI features” that solve no real problems. Good UX for the AI era doesn’t start from a text box for typing prompts — it comes from next-generation interfaces that orchestrate capabilities, constraints, feedback, and trust.
In recent months, I’ve been following with great interest the work from Carnegie Mellon’s HCII, led by Minjung Park, which mapped what generative AI truly does well today and the most effective interaction patterns emerging from experimental and academic systems. It’s a valuable starting point for designing with clarity — and, above all, for unlocking real value in people’s daily lives.
What generative AI actually does well (today)
The research analyzed 85 artifacts (studies, prototypes, systems) and extracted 294 specific capabilities, organized into 33 clusters and 13 actions — later grouped into three main themes: Generate, Transform, and Understand content.
The numbers overturn a common perception: the dominant part isn’t “creating from nothing,” but rather analyzing and synthesizing large amounts of data. Within “understanding,” the most represented abilities include summarizing (58 capabilities), answering corpus-based questions, finding similarities, identifying entities, and refining existing content.
Why it matters for design
The data says it clearly: today, generative AI creates more value in understanding than in creating. Its strength lies in helping people manage, connect, and synthesize large volumes of information — not just produce new content. It’s a less glamorous truth, but more mature, robust, and measurable — and precisely for that reason, critical for product design.
For designers, this shifts the center of gravity: the interface isn’t (just) a place to “ask something” of AI, but an environment for sense-making. We need to design tools that enable:
Orientation – highlighting what truly matters in the task
Reduction – turning noise into signal
Connection – revealing relationships, duplicates, contradictions
Decision – helping users reach confident, reasoned outcomes
But here’s the crucial point: capabilities alone are unrealized potential. They become value only when embedded into clear, repeatable, understandable interaction patterns.
It’s the same leap our field made with Nielsen’s heuristics — a shared language that made interfaces more predictable and less frustrating. Today, we need an equivalent vocabulary for AI, so that different teams can design coherently and users can recognize and trust the underlying mechanisms (guided interview, options showcase, iterative refinement, etc.).
In short: when we center design around understanding, patterns, and trust, AI stops being a special effect and becomes an experience infrastructure — the real unlock for AI’s everyday potential.
From map to patterns: 7 concrete ways to interact with AI
The study identified seven recurring interaction patterns — “building blocks” for experiences beyond the simple prompt. Here’s how I apply them in practice, through small UI/UX decisions:
Chatbot Interview
When precise data is needed, the chatbot asks one question at a time and guides the user. It can even auto-fill some fields (e.g., by reading an uploaded PDF).
Design tips: progress bar, context reminders, quick-reply buttons, save & resume options.

Reveal Dimensions
When users don’t know what to consider, the interface exposes key levers (criteria, limits, pros/cons) with examples and popular choices.
Design tips: sliders with instant preview, short explanatory pills, fast filters based on real data.

Something Like This
Instead of describing with words, users upload examples (files, links, moodboards). The AI captures style and proposes variants.
Design tips: clear upload area with visible metadata, “we detected: tone X, palette Y,” controls to weigh each example’s influence.

Dessert Cart
When the idea is vague, show ready-made options (snippets, layouts, prototypes) to pick and refine.
Design tips: gallery with comparable previews, useful tags, short “why we suggest this” notes, multi-select for mix-and-match.

Refine This
Start from a draft and improve it step by step: view → edit → evaluate (tone, length, structure).
Design tips: highlighted differences between versions, browsable history, safe “undo,” micro-explanations like “reduced repetition by 23%.”

Complete This
When users get stuck, AI proposes the next step or fills in obvious parts.
Design tips: contextual suggestions (paragraph or function autocompletion), short “why” rationale, and multiple alternative paths (“3 possible next steps”).

Blank Page Paralysis
To start, AI offers drafts or templates tuned to the user’s role and goal.
Design tips: initial button with thematic presets, goal checklist, and real-time tips to improve the draft while working.
Note: these patterns don’t replace ideation; they’re tools to compose flows that respect model strengths and limits, with proper affordances and feedback.
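As one concrete illustration, the “Chatbot Interview” flow above (one question at a time, prefilled fields, a progress bar) can be sketched as a small piece of UI state logic. This is a minimal TypeScript sketch; the question list, field ids, and the prefill source (e.g., values parsed from an uploaded PDF) are illustrative assumptions, not part of the research.

```typescript
// Minimal sketch of a guided-interview state: one question at a time,
// optional quick replies, prefill from data extracted elsewhere
// (e.g., an uploaded PDF), and a progress fraction for the UI.
// The questions and field ids below are illustrative assumptions.

interface Question {
  id: string;
  prompt: string;
  quickReplies?: string[]; // rendered as quick-reply buttons
}

interface InterviewState {
  answers: Record<string, string>;
  cursor: number; // index of the next unanswered question
}

const questions: Question[] = [
  { id: "goal", prompt: "What do you want to achieve?" },
  { id: "deadline", prompt: "By when?", quickReplies: ["This week", "This month"] },
  { id: "budget", prompt: "What budget do you have in mind?" },
];

function nextUnanswered(answers: Record<string, string>): number {
  const i = questions.findIndex((q) => !(q.id in answers));
  return i === -1 ? questions.length : i; // length = interview complete
}

function startInterview(prefill: Record<string, string>): InterviewState {
  return { answers: { ...prefill }, cursor: nextUnanswered(prefill) };
}

function answerCurrent(state: InterviewState, value: string): InterviewState {
  const q = questions[state.cursor];
  const answers = { ...state.answers, [q.id]: value };
  return { answers, cursor: nextUnanswered(answers) };
}

function progress(state: InterviewState): number {
  // Fraction of questions answered, suitable for a progress bar.
  const answered = questions.filter((q) => q.id in state.answers).length;
  return answered / questions.length;
}
```

The point of the sketch is the skipping behavior: prefilled fields are never asked again, which is exactly what makes the interview feel respectful of the user’s time.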
Design principles for “AI-native” interfaces
In my view, designing for AI means designing conversational + structural systems. Here are the principles I apply:
Progressive scaffolding
Guidance scales with user confidence: start with “Dessert Cart” or “Blank Page,” then move to “Refine This.” Reduces cognitive load without restricting exploration.

Surfacing model limits
Expose confidence and coverage: “high for summaries, low for unsupported predictions.” Justify outputs with sources or highlighted context. Builds operational trust and aligns with risk-aware AI practices.

Human-AI pairing
Integrate phases where humans define quality criteria (taste) and AI explores the solution space. Provide side-by-side comparison UIs and concise “why A differs from B” explanations. True co-creation, not replacement.

Intent over prompt
Capture intent through multimodality (examples, files, structures) — not only text. Reduces ambiguity, increases reproducibility, and allows reuse of intent as a design asset.

Local, not global controls
Sliders and knobs should be near the output they affect (e.g., tone for a paragraph, style for a chart). Prevents “black-box” effects and supports shared accountability.

Integrated verification
Every “understanding” output (the most common class) should offer quick checks: cross-ref with sources, “find similar to validate,” “ask for counterexamples.”
Design interactions for testing results — not just consuming them.
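To make “testing results” tangible: a micro-explanation like “reduced repetition by 23%” should be backed by a metric the user could, in principle, inspect. A hedged TypeScript sketch, assuming repetition is approximated by the share of repeated word trigrams (the n-gram size is an arbitrary illustrative choice, not a standard):

```typescript
// Transparent repetition metric behind a "reduced repetition by N%"
// micro-explanation. Counts the share of word trigrams that have
// already appeared earlier in the text; the choice of trigrams is
// an illustrative assumption.

function repetitionScore(text: string): number {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  if (words.length < 3) return 0;
  const seen = new Map<string, number>();
  const total = words.length - 2; // number of trigrams
  let repeated = 0;
  for (let i = 0; i < total; i++) {
    const tri = words.slice(i, i + 3).join(" ");
    const count = seen.get(tri) ?? 0;
    if (count > 0) repeated++; // this trigram occurred before
    seen.set(tri, count + 1);
  }
  return repeated / total;
}

// Percent reduction between a draft and its refined version,
// rounded for display ("reduced repetition by 23%").
function repetitionDelta(before: string, after: string): number {
  const b = repetitionScore(before);
  if (b === 0) return 0;
  return Math.round(((b - repetitionScore(after)) / b) * 100);
}
```

Because the metric is simple and deterministic, the interface can show its working on request, which is the whole point of integrated verification.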
From theory to practice: high-impact use cases
Three scenarios where I see immediate value:
Post-meeting intelligence
Upload recordings and materials; AI summarizes by role (PM, designer, engineer), highlights decisions and open points, drafts tickets.
Pattern: “Understand → Refine → Complete.” (Summarization and Q&A capabilities are already mature.)

Knowledge ops for hybrid teams
Connect wiki, drives, and issue trackers; AI finds duplicates, links related topics, flags inconsistencies, and generates comparative briefs.
Pattern: “Reveal Dimensions” + “Something Like This.”

Onboarding complex tools
Conversational interview captures user goals and assembles feature presets; proposes dynamic tours and guided tasks.
Pattern: “Chatbot Interview” + “Complete This.”
Avoiding traps: guidance without rails
The fear that “Reveal Dimensions” or “Dessert Cart” might limit serendipity is legitimate. My design response: build in counterbalances.
Controlled exploration: always keep a “freestyle” lane alongside guided options (e.g., free text + examples).
Randomize & Surprise: a “show me something unexpected” button introducing out-of-cluster ideas.
Hyper-transparency: indicate when a suggestion comes from popularity vs. coverage logic.
Diversity metrics: in recommendation systems, show how much options differ (semantic distance) to encourage non-obvious choices.
This preserves agency and creative openness while retaining the positive friction of guidance.
Conclusion
As an interface designer, I believe the goal isn’t to “put AI everywhere,” but to design contexts where AI is genuinely useful.
Carnegie Mellon’s map reminds us that today, generative AI delivers more value in understanding than in creating. The seven patterns offer a concrete grammar for turning capabilities into reliable experiences. The rest is design responsibility — knowing when to guide and when to leave space.
That’s where next-generation interfaces will make the real difference — unlocking practical value in our daily work, not just spectacular demos.
Key references
Minjung Park (HCII, CMU), What Generative AI Really Does: A Map of Its Capabilities and Interaction Design Patterns, Sept 17, 2025 – public summary and dataset of 85 artifacts, 294 capabilities.
Park et al., Exploring the Innovation Opportunities for Pre-trained Models (ACM DL), July 4, 2025 — includes full list of artifacts, capabilities, and seven patterns.
HCII, CMU – Human-AI Collaboration Can Unlock New Frontiers in Creativity, May 30, 2025 — on co-creation as a driver of quality and diversity.