AI Chatbot Interface Design: What Works and What Doesn't

A practical guide to AI chatbot interface design: when chat is the right UI, conversation starters, threading, history, mobile UX, and the mistakes to avoid.

Anant Jain · Creative Director, Designpixil · Last updated: March 2026

Chat has become the default answer to "how do we design this AI feature," and it's the wrong answer about half the time. Chat interfaces are familiar, quick to prototype, and easy to explain. They're also cognitively demanding, hard to navigate, and poorly suited to many tasks that don't actually require conversation.

Before you design a chat interface, you need to settle whether chat is actually the right UI for what you're building. If it is, this guide covers the patterns that work, the ones that don't, and the specific decisions that determine whether your chat UI helps users get things done or just adds friction with a conversational wrapper.

When Chat Is the Right Interface — and When It Isn't

Chat works well when:

  • The task is inherently conversational: The user needs to iterate, clarify, and refine. Research, writing assistance, and complex decision-making benefit from back-and-forth. The output at step 5 depends on what happened at steps 1 through 4.
  • The task space is too broad to menu-ify: If the number of things a user can do is genuinely unbounded, a freeform input is more appropriate than a finite set of buttons or form fields.
  • The user's query is naturally language-shaped: "Summarize this document and flag any clauses that aren't standard" is a natural language query. Forcing it into a form with dropdowns and checkboxes would be worse.

Chat works poorly when:

  • The task is structured and predictable: If users will always do the same sequence of things — configure a report, set up an integration, create a new record — a form or wizard is faster and less error-prone than a conversation.
  • The output is visual or spatial: Designing a dashboard layout through chat is frustrating. Configuring a data visualization through natural language is awkward. Some things need direct manipulation.
  • The user doesn't know what to ask: An empty chat box with no guidance is a poor onboarding experience for features where users don't have a clear mental model of what the AI can do.
  • Speed matters more than flexibility: A chat interaction to accomplish a task that could be done with two button clicks is pure friction. Don't add conversation where a button suffices.

The trap is that chat feels innovative and modern, so it gets chosen for reasons of product positioning rather than user experience. Your users don't care how the interface looks in a deck — they care whether they can accomplish their goal in fewer steps.

Conversation Starters and Suggested Prompts

If you decide chat is right, the most important design decision is what the user sees before they've typed anything.

An empty input box with a blinking cursor is not an onboarding experience. Users who haven't used an AI feature before don't know what to ask. They make vague, generic queries. They get generic outputs. They conclude the feature isn't very useful. You've lost them in the first 60 seconds.

Conversation starters — pre-written example queries that users can click to send — solve this. Good ones do three things:

  1. Show the range of what the feature can do (breadth signal)
  2. Show a specific, realistic query (not "Ask me anything!" — something a real user would actually ask)
  3. Give users a working example of how to structure a good query (prompt education)

For a contract analysis AI: "What is the notice period for termination?" is better than "Tell me about this contract." It's specific enough to teach users what level of specificity works well.

How many to show: 3-5 is the right range. Fewer and users don't get a sense of the feature's breadth. More and users experience choice paralysis or don't bother reading them.

When to show them: Show conversation starters in the empty state, before the first message. After the conversation has started, they're no longer useful — the user is now in a specific context.

Rotating vs static: Some products rotate conversation starters to show different examples on each session. This is useful if your feature has a wide capability surface. If your feature is focused on a narrow domain, static starters that always show the same examples are fine.

Message Threading and Conversation Structure

In most simple chat interfaces, messages are linear: user says something, AI responds, user replies, AI responds, and so on down the page. This works for short conversations. It breaks down quickly for long ones.

The problem with linear chat for complex tasks: If a user is using your AI to analyze a 50-page document and asks 20 questions over 45 minutes, they have a 40-message conversation with no navigable structure. Finding a specific exchange from earlier requires scrolling. Revisiting a branched path (what if I had asked this differently?) requires starting a new conversation.

Threading: Some interfaces let users branch from a specific message — a "Reply to this" or "Explore this further" action that creates a nested thread. This works well for research-style use cases where users want to pursue multiple directions from the same starting point. It's overkill for most simpler features.

Conversation sections: For long conversations, collapsing older exchanges (like how some email clients group older messages) helps users focus on current context without losing access to history.

The practical answer for most SaaS products: Keep the interface linear but invest in conversation history and search so users can find past exchanges. Threading is the right design for a research tool; it's more complexity than most chat implementations need.
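If you do build threading, the underlying model is simpler than it looks: store a parent pointer on each message, and treat the on-screen transcript as the path from the root to whichever leaf the user is viewing. A minimal sketch, with all type and function names illustrative rather than taken from any specific product:

```typescript
// Branch-capable chat storage: each message points at its parent.
interface ChatMessage {
  id: string;
  parentId: string | null; // null for the first message in a conversation
  role: "user" | "assistant";
  text: string;
}

// The linear transcript shown on screen is the path from the root
// down to whichever leaf message the user is currently viewing.
function pathToMessage(
  messages: Map<string, ChatMessage>,
  leafId: string
): ChatMessage[] {
  const path: ChatMessage[] = [];
  let current: ChatMessage | undefined = messages.get(leafId);
  while (current) {
    path.unshift(current);
    current = current.parentId ? messages.get(current.parentId) : undefined;
  }
  return path;
}

// Branching means adding a message whose parent is any earlier
// message, not just the latest one in the thread.
function branchFrom(
  messages: Map<string, ChatMessage>,
  parentId: string,
  role: ChatMessage["role"],
  text: string,
  id: string
): ChatMessage {
  const msg: ChatMessage = { id, parentId, role, text };
  messages.set(id, msg);
  return msg;
}
```

A purely linear interface is the special case where every new message's parent is the most recent message, which is one reason starting linear and adding branching later is a reasonable path.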

Chat History Management

Chat history is underdesigned in most AI products. Users return to past conversations. They want to share them, reference them, continue them. And yet most chat interfaces treat each session as essentially ephemeral.

What good history management looks like:

  • Conversations are automatically saved and named (ideally with an AI-generated summary of the conversation topic, not "Chat on March 21" timestamps)
  • A sidebar or history panel is accessible from the chat UI without requiring navigation away
  • History is searchable by content, not just by date
  • Users can delete conversations and rename them
  • Conversations can be shared (by link or export) with teammates or outside collaborators

Naming conversations: The automatic naming problem is worth spending time on. "New chat" and timestamps are useless at scale. AI-generated names ("Contract review - Acme MSA" based on the first user query) are much better. Let users rename manually when the auto-generated name misses the mark.
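The naming logic is usually a fallback chain rather than a single call. A minimal sketch: prefer a model-generated title, fall back to a word-boundary truncation of the first user query, and use a timestamp only as a last resort. The `aiTitle` parameter stands in for whatever summary your model produces; nothing here assumes a specific API.

```typescript
// Illustrative fallback chain for auto-naming a conversation.
function nameConversation(
  firstUserQuery: string,
  aiTitle?: string,
  maxLength = 40
): string {
  const candidate = (aiTitle ?? firstUserQuery).trim();
  if (candidate.length === 0) {
    // Last resort only: a timestamp beats "New chat", but barely.
    return `Chat on ${new Date().toISOString().slice(0, 10)}`;
  }
  if (candidate.length <= maxLength) return candidate;
  // Cut on a word boundary so truncated names stay readable.
  const cut = candidate.slice(0, maxLength);
  const lastSpace = cut.lastIndexOf(" ");
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + "…";
}
```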

Session persistence vs fresh starts: Some AI features should default to continuing the previous conversation (where context accumulation is valuable), others should default to a fresh start (where each session is independent). Make this match the mental model of the task. A research assistant might default to continuing a project. A customer support tool should probably start fresh.

Multi-Modal Inputs: Text, Files, and Voice

The basic chat input — a text box — is the minimum. Most production AI products need to handle more than plain text input.

File uploads: When users need to share documents, images, or data files with the AI, the upload experience needs careful design. Drag-and-drop is table stakes. You need to show a clear preview of what's been uploaded (file name, type, thumbnail for images). You need to communicate that the AI has processed the file before the user starts asking questions about it. And you need to handle oversized or unsupported files with clear error messages.

Multiple files in a single conversation: When users can upload multiple files, they need to know which files are "in context" for any given message. A visible file list in the input area, or indicators on messages that reference specific files, prevents the confusion of "which document did the AI just answer about?"

Voice input: Voice is increasingly available on mobile AI interfaces. It works best for quick, conversational queries and worst for anything requiring precise formatting or technical terms. If you add voice input, handle it as a transcription that appears in the text input (so users can review and edit before sending), not as a direct query submission.

Images: If your model is multi-modal and can analyze images, the input design needs to accommodate images alongside text. A combined input that accepts drops or paste of both text and images, with clear visual distinction between the two in the message thread, is the standard pattern.

Typing Indicators and Response States

Small interaction states make a significant difference in the feel of a chat interface.

Typing indicators: Show a typing indicator (the three-dot animation or a contextual variant) while the AI is generating a response. This is especially important if you're not streaming the output — without an indicator, a 3-second blank response looks identical to a broken interface.

Streaming vs loading: Streaming the output (showing text as it's generated) is almost always better for response times above 1 second. See the UX patterns for LLM features guide for full detail on streaming design. The core point here: in a chat context, streaming output preserves the feeling of a real conversation better than batch-loading responses.

The stop button: A clearly visible stop or cancel button during AI response generation is non-optional. Users who realize mid-response that the AI is going in the wrong direction need to stop it and redirect. This should be visually prominent — not tucked in a menu.
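Wiring the stop button into streaming is mostly cancellation plumbing. A sketch, assuming the model output arrives as an async iterable of text chunks (the exact transport, whether SSE, fetch streaming, or websockets, doesn't change the cancellation logic):

```typescript
// Render a streamed response with a working stop button. When the
// user aborts, we keep whatever has already streamed rather than
// discarding the partial response.
async function renderStream(
  chunks: AsyncIterable<string>,
  onUpdate: (partial: string) => void,
  signal: AbortSignal
): Promise<string> {
  let text = "";
  for await (const chunk of chunks) {
    if (signal.aborted) break; // stop button was pressed
    text += chunk;
    onUpdate(text);
  }
  return text;
}
```

In the UI, the stop button simply calls `controller.abort()` on the `AbortController` whose signal was passed in, and the partial text stays in the thread so the user can redirect from it.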

Error states in chat: If a message fails to send or a response fails to generate, the error should appear inline in the chat thread, adjacent to the affected message. A banner error at the top of the chat interface, disconnected from the message that caused it, forces the user to re-establish context.

Rate Limiting UX

If your AI features have rate limits — a cap on the number of messages per day, per hour, or per plan — how you communicate and enforce that limit matters significantly.

Proactive communication: Don't surprise users with a rate limit wall when they're mid-task. Show remaining usage before they hit it. A simple "15 messages remaining today" indicator in the chat header is enough. Some products only show this when the user is getting close to the limit (say, below 20%) to avoid creating anxiety for users who never get close.
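The "only when close to the limit" behavior reduces to a small piece of threshold logic. A sketch with illustrative copy and a 20% default threshold (both are assumptions to tune, not fixed rules):

```typescript
// Decide whether and what to show in the usage indicator. Returns
// null while the user has plenty of headroom, so the counter never
// creates anxiety for users who don't need it.
function usageIndicator(
  remaining: number,
  limit: number,
  threshold = 0.2
): string | null {
  if (limit <= 0) return null;
  if (remaining / limit > threshold) return null; // plenty left: stay quiet
  if (remaining <= 0) return "Daily limit reached";
  return `${remaining} message${remaining === 1 ? "" : "s"} remaining today`;
}
```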

When the limit is hit: The error state when a user hits a rate limit should:

  • Explain clearly that this is a usage limit, not a technical error
  • Tell the user when the limit resets (tomorrow at midnight, next billing cycle, etc.)
  • Offer a clear upgrade path if the limit is plan-based
  • Not feel punishing — the user did something legitimate and hit a ceiling

Grace vs hard cutoff: Some products let users continue with a soft limit (slower responses, or responses with reduced capability) rather than a hard cutoff. This is a better experience if technically feasible, but it needs to be clearly communicated — users should know they're in a "reduced" mode, not confused about why things feel different.

Mobile Chatbot UX

Chat interfaces are harder to design well on mobile than they appear. The desktop chat pattern — input at the bottom, messages above, history in a sidebar — doesn't translate directly.

The keyboard problem: On mobile, when the user taps the input and the keyboard appears, it should push the messages up so the input remains visible and the most recent message is still in view. This sounds obvious but is frequently broken. The mobile keyboard layout is one of the most common chat UI failures in production.

Input area design: The text input on mobile should expand vertically as the user types a long message, rather than requiring horizontal scrolling within a single-line input. Multi-line input with a visible send button (not just Enter-to-send) is more appropriate for mobile where Enter behavior is inconsistent.

Sidebar history on mobile: Don't try to replicate the desktop sidebar pattern on mobile. History should be accessible via a dedicated screen or a bottom sheet, not a panel that slides in from the side (which competes with native gestures on iOS and Android).

Tap target sizes: Message action buttons (copy, thumbs down, retry) need to be large enough to tap accurately — minimum 44x44 points. These are often designed at desktop scale and become frustrating on mobile.

Scroll behavior: Auto-scrolling to new messages as they stream should stop when the user manually scrolls up to review older messages. This is a straightforward behavior that's easy to overlook and deeply frustrating when it's wrong — imagine trying to read a previous message while new content keeps yanking you back to the bottom.
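The decision at the heart of this behavior is a single check: follow new tokens only while the user is already at (or near) the bottom. A sketch of that pure logic, with an illustrative slack value:

```typescript
// "Sticky" auto-scroll decision: auto-scroll on new content only if
// the user is within slackPx of the bottom. Scrolling up grows the
// distance past the slack, which pauses auto-scroll until the user
// returns to the bottom.
function shouldAutoScroll(
  scrollTop: number,     // current scroll offset of the message list
  clientHeight: number,  // visible viewport height
  scrollHeight: number,  // total content height
  slackPx = 40           // how close to the bottom counts as "at the bottom"
): boolean {
  const distanceFromBottom = scrollHeight - (scrollTop + clientHeight);
  return distanceFromBottom <= slackPx;
}
```

Call this on each streamed chunk before scrolling the list element; reading the three values off the container element gives you the inputs.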

What Separates a Good Chat Interface From a Bad One

After working on AI product interfaces across multiple B2B products, we've found the differentiator usually isn't the visual design. It's the quality of the interaction design decisions in exactly the places described above.

Good chat interfaces are forgiving: they make it easy to retry, refine, and revisit. They communicate clearly when something isn't working and offer a path forward. They don't make users feel like they're fighting the interface to accomplish their goal.

Bad chat interfaces are technically chat but behaviorally dead ends: no conversation starters, no history, hard-to-find error states, no stop button, empty inputs on mobile that trigger the keyboard in the wrong place.

Chat is a high-friction UI by nature. You're asking users to compose queries in natural language, evaluate variable outputs, and iterate on something that may or may not be going in the right direction. The design's job is to reduce the friction in every place it can.


Frequently Asked Questions

How do I decide whether to use a chat interface or a more traditional UI for an AI feature?

Ask whether the task is genuinely conversational — whether the output at each step depends on the previous exchange. If yes, chat is appropriate. If the task is structured, predictable, or visual, a form, wizard, or direct-manipulation interface will be faster and less frustrating. Chat is often chosen because it's easy to prototype, not because it's the best experience for the task.

What should be in the empty state of an AI chat interface?

Show 3-5 specific, realistic example prompts that demonstrate the range of the feature. Include a brief description of what the AI can help with in this context. Avoid generic starters like "Ask me anything" — they're useless for users who don't know what to ask. The empty state is your primary onboarding moment for the AI feature.

How should I communicate rate limits to users without being annoying?

Show remaining usage only when it becomes relevant — typically when a user is within 20-30% of the limit. When the limit is hit, explain clearly that it's a usage cap (not a technical error), tell them exactly when it resets, and show a clear upgrade path if the limit is plan-based. The experience should feel like a known system boundary, not a surprise punishment.

What are the most important things to get right in a mobile chat interface?

The keyboard pushing the input into view (not covering it), multi-line input that expands as the user types, large enough tap targets on action buttons, scroll behavior that pauses auto-scroll when the user manually scrolls up, and a mobile-appropriate pattern for history access (bottom sheet rather than a sidebar).

How should I name conversations in AI chat history automatically?

Use the first user query (or an AI-generated summary of the first exchange) as the default conversation name. A specific name like "Contract review — Acme MSA" is far more useful than a timestamp. Let users rename conversations manually and delete ones they don't need. Add search so history is navigable at scale.

Work with us

Senior product design for your SaaS or AI startup.

30-minute call. We look at your product and tell you exactly what needs fixing.
