Presentation: Design Guidelines for Conversational Interfaces


4:10pm - 5:00pm

Key Takeaways

  • Learn patterns and practices of conversational interfaces.
  • Understand some of the design considerations of developing systems based on conversation.
  • Hear advice on where the industry is and where it’s heading relative to conversational interfaces.


At Big Nerd Ranch, we literally wrote the course on developing third-party apps for the Amazon Echo, a voice-only interface. With Amazon, Apple, Facebook, Google, and others creating SDKs for voice and messaging interfaces, it is critical to carefully design good user experiences for these conversational interfaces.

One of the most frustrating human-to-machine systems is the customer service IVR. Very few people enjoy navigating complicated voice menus while hoping the system understands their commands. These systems often leave people feeling annoyed, frustrated, and helpless. It is critical that we learn how to avoid IVR problems and design graceful, smart, and satisfying conversational experiences.

In this session, Angie Terrell will walk through the current state of conversational interfaces and human-centered design principles to guide the design of your conversational apps. From messaging apps, which can take advantage of the screen interface, to voice-only interfaces which require natural language processing, you will learn fundamental user-centered design principles to build experiences that avoid frustration and annoyance and provide value.

If you are a designer or developer creating apps for Alexa, Siri, Messages, Facebook Messenger, or other chatbots you will leave this session with practical methods for designing the best user experience for the conversational interface.


QCon: Can you tell us a bit about your role and what you do for Big Nerd Ranch?

Angie: I am the Director of Design at Big Nerd Ranch, and we do three things: we create and build apps for our clients, we teach people how to design and build apps, and we write books about how to build apps (mainly software development books, which is what a lot of people know us for). 

As the Director of Design, my role is to facilitate a good design practice in conjunction with our really good software development practices. My team is a small group (six designers), and we design iOS, Android, and web applications. Every designer is a full stack designer. They do user experience all the way through to the actual visual design of the application itself. We are all experts in these platforms.

QCon: Your presentation is called Design Guidelines for Conversational Interfaces. Can you tell me a bit about the talk?

Angie: There is this move towards messaging platforms as a means to interact with your computer. Whether it’s Apple, Google, or Facebook using messaging apps to allow their users to do more functional things within the messaging app (or whether it’s companies like Amazon and, to some extent, Google and Apple, doing voice interfaces that allow the user to speak commands to a machine), these interfaces are different in nature than the traditional graphical user interfaces that people are accustomed to. We have to design them differently because the mode and the expectations of the user are different. Even what the user can discover about the system is very different than in a graphical user interface.

QCon: In some ways is it a step backwards? I mean, we all hate IVR, right? With voice, we don’t get things like autocomplete. How do you address challenges like autocomplete with conversational interfaces?

Angie: You are right. There are many significant challenges that you face with a voice interface compared to anything visual. The designers and the developers of any kind of voice interface application need to think about the most compelling cases in which a user would want to interact with your application using their voice. 

Most of the time, users won’t want to do that because we are very far from the science fiction version of talking to computers. If it doesn’t understand you and you have to repeat yourself, people are going to revert back to what they know works (which is typing into a search box on their phone or tablet). When natural language processing and artificial intelligence get better, it will become easier for computers to understand humans and their intent. But there are so many complications.

It’s a very complex problem, and many smart people are trying to solve it at the moment. Until it gets to be very natural, people will try, but if it fails, they are going to revert back to something that they can trust. Trust is a huge component of all of these things.

QCon: Where are we today? If I am going to design an interface to interact with Siri or Alexa or Cortana, what do I need to consider?

Angie: We will get into a lot more detail in the talk, but it depends on whether it’s going to be a messaging chatbot that is visual, meaning the user is typing and the bot is replying through some messaging platform. Those considerations are going to be very specific to the platform guidelines. When I say platform, I mean something like Facebook Messenger. How you actually prompt the user to continue through the discussion, so that the users get what they need, is very different than a voice-activated or voice-only experience.

Siri and even Cortana are a little bit of a hybrid because Siri can always dump you out into a visual interface if you need it. If she can’t provide what you are looking for through speech, she will throw you back into the graphical user interface on the iPhone, and you can then proceed from there. Alexa doesn’t do that. It’s meant to be strictly a voice experience. The best practices are going to change depending on whether there is text involved, a visual interface, or whether it’s strictly voice.

In order to elevate the experience beyond the IVR experience, which is probably what most people are familiar with, there needs to be a very clear understanding for the user of what the system can and cannot support. That is one of the biggest challenges. You can group it in with discoverability, which is always a challenge when you are designing any kind of system: helping users discover the parameters of what the system can and cannot support. In a visual system, like a graphical user interface, a website, or even an app, the user can understand pretty quickly what that screen or app is meant to do based on what is on it.

With a voice-only system, it is very difficult for the user to know what that system can and cannot support unless they have done previous research. Guiding the user in a very gentle but clear manner is really important. It requires the designers and the developers to handle all of the cases they cannot support in a very elegant way, and then to inform the user at every stage, very quickly and succinctly, what they can do when the system can’t respond to a particular request.
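As a rough illustration of that guidance pattern, here is a minimal sketch (all names and replies are hypothetical, not from any specific SDK) of routing recognized intents and falling back to a short, capability-listing reprompt when a request is out of scope:

```python
from typing import Optional

# Hypothetical set of intents this voice app supports, with canned replies.
SUPPORTED_INTENTS = {
    "order_status": "Your order shipped yesterday.",
    "store_hours": "We're open 9am to 6pm, Monday through Saturday.",
}

def handle_utterance(intent: Optional[str]) -> str:
    """Return a spoken response; unrecognized intents get a brief
    reprompt that states what the system CAN do, not a bare error."""
    if intent in SUPPORTED_INTENTS:
        return SUPPORTED_INTENTS[intent]
    # Fallback: succinctly restate the system's capabilities.
    capabilities = " or ".join(k.replace("_", " ") for k in SUPPORTED_INTENTS)
    return f"Sorry, I can't help with that yet. You can ask about {capabilities}."
```

The design point is in the fallback branch: instead of "I didn't understand," the reply teaches the user the boundaries of the system in one short sentence.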

Speaker: Angie Terrell

Director of Design @BigNerdRanch, focused on Mobility & UX

Angie Terrell is the Director of Design at Big Nerd Ranch where she leads a team of user experience and interface designers. She designs mobile and web products for clients and is also an instructor for Big Nerd Ranch, teaching others to design for mobile and emerging technology. Angie has a degree in Cultural Anthropology and over 15 years designing a wide array of user experiences. She is devoted to design solutions that meet the unique needs of the user while never neglecting core design principles. Angie is passionate about sharing her knowledge and experience with the design community, which has led to speaking engagements at SXSW, IxDA, amUX, and Ladies that UX.

