You are viewing content from a past QCon.

Track: Future of Human Computer Interaction

Location: Seacliff ABC

Day of week: Wednesday

The fantastical future of diverse and prolific devices that influence our interactions with computers and one another is approaching even faster than experts predicted. We are entering a world where multimodal interaction across modern user interfaces is the new normal: combinations of voice, visual, facial recognition, AR/VR, and IoT, often used alongside mobile devices at the same time. This track goes deep into coding for the digital-physical convergence over the next few years.

Track Host: Courtney Hemphill

Partner & Tech Lead @CarbonFive

Courtney Hemphill is a Partner and Technical Lead at Carbon Five, a strategic digital product development firm. She has been developing software since 1999, first at an early-stage e-commerce startup and eventually moving into consulting. In her role at Carbon Five, she has built ground-up HIPAA-compliant, cloud-based platforms for health care companies, worked on a large server cluster analysis and forecasting platform, and supported enterprise executives transitioning from third-party solutions to skilled in-house continuous delivery teams. She is currently managing the Carbon Five NYC team and helping companies in insurance and finance develop cloud-native, test-driven, continuous delivery software for data management, APIs, and new product creation.

Courtney mentors for TechStars, is an advisor to several startups, and organizes coding workshops for women. She also sits on the Golden Gate Board of NatureBridge, and is constantly finding random new corners of the world to rock climb.

CASE STUDY TALK (50 MIN)

10:35am - 11:25am

Multi-Modal Input Design for Magic Leap

As wearable spatial computing devices become more of a reality, new opportunities for interaction between humans and computers arise. Magic Leap embraces a wide range of inputs, including hand and eye tracking, speech, a wireless 6DoF controller, and support for external peripherals. Learn, through stories and examples from Magic Leap's Interaction Lab, what new input modalities are coming online and how they can be used and combined to surpass existing approaches in throughput, discoverability, accessibility, and prediction.

Colman Bryant, Mixed Reality Game and Product Designer @MagicLeap
CASE STUDY TALK (50 MIN)

11:50am - 12:40pm

Rethinking HCI with Neural Interfaces @CTRLlabsCo

Brain-computer interfaces, neuromuscular interfaces, and other biosensing techniques can eliminate the need for physical controllers. In the context of interaction design, “control” is the process of transforming intention in the mind into action taken in the world (or machine).  When freed from the familiar bonds of the keyboard, mouse, game controller, and touchscreen, we’re faced with a clean slate and an epic design challenge. What happens when we decouple the user interface from hand-held hardware? We'll discuss this and the emerging field of neural interaction design.

Adam Berenzweig, Director of R&D @CTRLlabsCo
CASE STUDY TALK (50 MIN)

1:40pm - 2:30pm

Engineering Dumb: Modern Mobile Thin Clients

As mobile developers, we often hardcode everything from layouts to machine learning right onto the device. While this approach works for many situations, there are times when you want your app to be a bit more flexible. This talk is a top-level walk-through of how I built a complex feature at OkCupid, demonstrating along the way a few design patterns you can employ in your own app to create remotely configurable layouts and behavior on the fly.
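The "thin client" idea behind this talk is often called server-driven UI: the app ships generic components, and the server sends a layout description that the client renders at runtime. A minimal sketch of that pattern in Python (the JSON schema, component registry, and field names here are hypothetical illustrations, not OkCupid's actual implementation):

```python
import json

# Hypothetical server response: the layout is data fetched at runtime,
# not code baked into the shipped app binary.
LAYOUT_JSON = """
{
  "screen": "profile_card",
  "children": [
    {"type": "image",  "source": "avatar_url"},
    {"type": "text",   "binding": "display_name"},
    {"type": "button", "label": "Like", "action": "send_like"}
  ]
}
"""

# Registry pattern: the client knows a fixed set of generic component
# types; the server is free to compose them into any screen.
RENDERERS = {
    "image":  lambda node, data: f"<img {data[node['source']]}>",
    "text":   lambda node, data: data[node["binding"]],
    "button": lambda node, data: f"[{node['label']} -> {node['action']}]",
}

def render(layout: dict, data: dict) -> list:
    """Walk the server-provided layout tree, silently skipping node
    types this client version does not know (forward compatibility)."""
    out = []
    for node in layout["children"]:
        renderer = RENDERERS.get(node["type"])
        if renderer is not None:
            out.append(renderer(node, data))
    return out

widgets = render(json.loads(LAYOUT_JSON),
                 {"avatar_url": "https://cdn.example/a.png",
                  "display_name": "Sam"})
```

Because unknown node types are skipped rather than crashing, old app versions degrade gracefully when the server starts sending new components.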

Brandon John-Freso, Senior Android Engineer @WeWork
CASE STUDY TALK (50 MIN)

2:55pm - 3:45pm

Open Source Robotics: Hands on with Gazebo and ROS 2

In large part, the recent advancements in robotics have been made possible by open source tools. Open Robotics, a nonprofit organization dedicated to the development, distribution, and adoption of open source software in robotics, supports two main projects — ROS (Robot Operating System) and Gazebo, a multirobot simulator — both of which are widely used by the global robotics community, including industry, academia, and hobbyists. 

ROS is a framework that lets you quickly set up the various parts of a robot and get them all to work together as a meaningful application. ROS does this by providing a common transport layer for all the software inside the robot, from sensors and actuators to decision making. Around the common transport layer, there are several tools built to help developers introspect and diagnose their robots with ease. Gazebo is a simulator that calculates rigid-body dynamics, generates all kinds of sensor data, and allows user interaction through both a programming API and a powerful graphical interface. Some of the uses for Gazebo include robotics competitions, continuous integration, prototyping, machine learning, and education.
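The "common transport layer" idea can be illustrated with a toy publish/subscribe bus in plain Python. This is a conceptual sketch only, not the actual ROS API (real ROS 2 nodes use client libraries such as rclpy, with typed messages and a DDS transport); the topic names and nodes below are invented for illustration:

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """Toy message bus: every component communicates through named
    topics, the way ROS nodes exchange messages over a shared transport
    instead of calling each other directly."""
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, msg) -> None:
        for cb in self._subs[topic]:
            cb(msg)

bus = Bus()
log = []

# "Decision making" node: reads range sensor data, commands velocity.
bus.subscribe("/scan", lambda dist: bus.publish(
    "/cmd_vel", 0.0 if dist < 0.5 else 1.0))
# "Actuator" node: records the velocity commands it receives.
bus.subscribe("/cmd_vel", log.append)

# "Sensor" node: publishes range readings.
bus.publish("/scan", 2.0)   # clear path -> drive forward
bus.publish("/scan", 0.2)   # obstacle   -> stop
```

Because the sensor, planner, and actuator only know topic names, any of them can be swapped out (or replaced by a Gazebo simulation) without the others changing, which is exactly the decoupling the shared transport buys you.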

In this talk, Louise will give an overview of ROS and Gazebo, the problems they've been solving so far, and what's on the roadmap for the future. In the second half of the talk, a hands-on demo will walk through creating a robot in simulation, then controlling and inspecting it using ROS 2, the next generation of ROS.

Louise Poubel, Software Engineer @OpenRoboticsOrg
CASE STUDY TALK (50 MIN)

4:10pm - 5:00pm

Patterns and Practices in Voice Computing

Remember the days when companies had a siloed mobile department that was regarded as an afterthought? The early days of mobile were met with mixed results. But playbooks eventually emerged for how companies could leverage mobile to amplify their business. At Amazon Alexa, we are seeing similar trends emerge with multi-modal voice computing. Drawing from lessons learned with Alexa Skills across Entertainment and IoT, we will work backwards from the customer and share technical and organizational patterns throughout the end-to-end journey. These stories will hopefully inspire you to work voice computing into the fabric of your company.

Yow-Hann Lee, Principal Solutions Architect Alexa @Amazon
