ServiceTitan was launching an AI-powered chatbot to help customers find answers from support articles before opening a case. The goal was to ease pressure on the support team by resolving more issues through self-service while ensuring customers still had clear paths to help if they needed it.
When I joined the project, the MVP experience existed but needed a stronger content strategy. Initial flows lacked clarity, failed to manage expectations, and didn’t align with our brand’s voice. There was also inconsistency in how different contact methods (chat, email, phone) were introduced and handled.
My goal was to clarify the chatbot’s purpose, improve tone and structure, and create scalable conversational patterns that help users find answers or transition smoothly to live support when needed.
Many users misunderstood the bot's capabilities. I rewrote the opening message to set clear expectations about what the chatbot could and couldn't do. Instead of vague intros, I led with phrases like "I can help you find answers in our support articles."
The intro sentence is a little long. Let's introduce the bot as an AI tool to help set the customer’s expectations, and use more familiar phrasing for the final prompt for input.
I reviewed and rewrote all first-touch and fallback messages to align with ServiceTitan's support voice: helpful, confident, and human. I replaced robotic or overly formal phrases with clear, conversational language. I also restructured longer messages to make them more scannable on mobile.
We should trim word count and reduce cognitive load by keeping questions and responses simple. Also, if the bot only points users to KB articles, we should be upfront about it, e.g., “I’m here to help you find the information you’re looking for in the ServiceTitan Knowledge Base.”
Hi there, I’m [Jimmy], and I’m here to help you get answers to your questions. Do you need help with Accounting, Payments, and Inventory?
Yes / No, I need help with something else
We should avoid the term ‘categories’ because we don’t know how a user would think of it in this context. In a support call, you don’t ask the caller, “What category is your question in?” It’s more natural to ask, “What can I help you with?” and let the caller tell you.
This response should position the chatbot’s output as a helpful contribution toward the customer’s goal. The current version feels too much like we’re putting the work back on the customer.
I mapped out fallback logic for when the bot didn’t know how to help, with options for rephrasing or connecting to a human. This included triage prompts that routed users to email, live chat, or phone support. Each path included confirmation language and cues for what to expect next.
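To make the structure concrete, here is a minimal sketch of how that fallback-and-triage flow could be modeled. The channel keys, copy strings, and function names are hypothetical stand-ins for illustration, not the production implementation.

```python
# Illustrative sketch only: a simplified model of the fallback and triage flow
# described above. Channel names, copy, and function names are hypothetical.

from dataclasses import dataclass


@dataclass
class Handoff:
    label: str          # button label shown to the user
    confirmation: str   # reassurance and next steps after choosing this path


# Each escalation path pairs a clear label with confirmation copy that tells
# the user what to expect next.
HANDOFFS = {
    "live_chat": Handoff(
        label="Chat with a live agent",
        confirmation="Connecting you to a live agent now. I'll share our conversation so you don't have to repeat yourself.",
    ),
    "email": Handoff(
        label="Email support",
        confirmation="Got it. We'll open a case by email and reply within one business day.",
    ),
    "phone": Handoff(
        label="Call support",
        confirmation="You can reach our support team by phone. Have your account ID handy.",
    ),
}


def fallback_prompt(attempts: int) -> dict:
    """Return the next fallback step when the bot can't find an answer.

    First miss: invite the user to rephrase. Second miss: offer the
    escalation paths instead of looping.
    """
    if attempts < 2:
        return {
            "message": "I couldn't find an article that answers that. Could you try rephrasing your question?",
            "options": ["Rephrase my question", "Talk to a person"],
        }
    return {
        "message": "I wasn't able to find what you need. Here's how you can reach our support team:",
        "options": [h.label for h in HANDOFFS.values()],
    }


if __name__ == "__main__":
    # Example: a user misses twice, then chooses live chat.
    print(fallback_prompt(attempts=2)["message"])
    print(HANDOFFS["live_chat"].confirmation)
```

The point of the sketch is the pattern, not the code: every dead end offers a recovery option, and every escalation path carries its own confirmation language so the user always knows what happens next.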
Focus on phrasing this as the bot asking whether it successfully answered the question instead of asking the user if they found an answer. The current version reads like a question about search results rather than an interaction with an AI tool.
This response should prompt for another question, because that’s what the bot can do. Current version is too open-ended.
Great! Do you have another question?
CTA: Yes, I have another question / I’m done
I created UI copy and confirmation messages for every support channel handoff: live chat, email, and phone. Each message provided reassurance, next steps, and continuity from the chatbot conversation. This reduced confusion and reinforced trust.
Better button labels and supporting descriptive text should provide more detail about the support options.
The “For fastest answer” and “you can also get” headers should be removed as well; the order of the options and the response-time tags already put live chat first. The space would be better used for a short description of each support type.
Live Chat (Recommended)
Phone Support
Email Support
Though this work was product-specific, I shared these patterns and recommendations with the broader content team to inform future chatbot and onboarding work. This established reusable intro patterns, escalation prompts, and confirmation logic.
The revised content helped clarify the chatbot’s purpose, reduce bounce-outs, and make handoffs clearer when users needed human help. Support and design teams reported fewer misunderstandings and smoother transitions between bot and agent.
The work reinforced a principle I bring to all conversational UX: clear is kind. Even in automation, users respond best when we tell them exactly what to expect and follow through.