
Presentation: ML in the Browser: Interactive Experiences with Tensorflow.js

Track: Machine Learning for Developers

Location: Ballroom A

Time: 2:55pm - 3:45pm

Day of week: Wednesday




What You’ll Learn

  1. Hear about machine learning in the browser: when you should use it, example use cases, and potential workflows.
  2. Hear about the Tensorflow.js library and its API, with examples.
  3. Find out how to develop and train a model offline (GPUs, TPUs), then export it to run in the browser using Tensorflow.js.


Machine learning (ML) holds opportunities to build better experiences right in the browser! Using libraries such as Tensorflow.js, we can better anticipate user actions, reliably identify sentiment or topics in text, or even enable gesture-based interaction - all without sending the user's data to any backend server. However, the process of building an ML model and converting it to a format that can be easily imported into a front-end web application can be unclear and challenging.

In this talk, I provide a friendly introduction to machine learning and cover concrete steps on how front-end developers can create their own ML models and deploy them as part of web applications. To further illustrate this process, I will discuss my experience building Handtrack.js - a library for prototyping real-time hand-tracking interactions in the browser. Handtrack.js is powered by an object detection neural network (MobileNetV2, SSD) and allows users to predict the location (bounding box) of human hands in an image, video, or HTML canvas element.
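As a concrete sketch of what that looks like in code, a Handtrack.js call is roughly the following. This is illustrative, not an exact copy of the library's README: the element id and threshold value are made up, and prediction field names may differ between versions.

```javascript
// Assumes handtrack.js has been loaded via a <script> tag or a bundler,
// exposing a global `handTrack` object, and that the page contains
// a <video id="webcam"> element streaming the user's camera.
const video = document.getElementById("webcam");

handTrack.load({ scoreThreshold: 0.6 }).then((model) => {
  model.detect(video).then((predictions) => {
    // Each prediction carries a pixel bounding box: [x, y, width, height].
    predictions.forEach((p) => console.log(p.bbox, p.score));
  });
});
```

Because `detect` accepts an image, video, or canvas element, the same call can drive a one-off snapshot or a per-frame tracking loop.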


Who this talk is for

  • Front-end engineers interested in using ML within their web applications
  • Software engineers interested in training ML models
  • Data scientists interested in deploying ML models

What you can expect

  • A friendly introduction to ML in the browser using Tensorflow.js
  • When to use ML in the browser
  • How to create a machine learning model with an example (data collection, model training, model evaluation, conversion to Tensorflow.js).
  • Practical tips and pitfalls associated with ML projects (what model to use, data validation checks, what framework to use etc.)
  • A live demo of hand gesture interaction in the browser, using a neural network model.

What is a research engineer doing at Cloudera Fast Forward Labs?


At Fast Forward Labs, we like to see ourselves as the link between academia and industry. Research engineers have two main tasks. In the first part of our work, we research tools and technologies from academia, focusing on tech that makes sense for industry within the next six months to two years. For each selected topic, we conduct in-depth research and produce an accessible report for our clients. In addition, we build prototypes that communicate the ideas behind these technologies and provide insights on their use in practice. The second half of our job entails working with clients to build and prototype machine learning solutions to their specific problems.


Why would you want to run a machine learning model in a browser?


My research interests are at the intersection of human-computer interaction and applied machine learning, and one of the sub-topics in this area focuses on how machine learning can make user interactions more interactive and engaging. ML in the browser provides opportunities to craft rich experiences such as predicting the user's next action, message auto-complete, or even gesture-based interaction. Beyond interaction improvements, browser deployment enables other benefits such as privacy, ease of distribution, and improved latency.

By performing predictions on user data locally in the browser, you can lay claim to strong privacy, because user data is never sent to remote servers. Another really exciting benefit is that distribution of the model becomes a lot easier. For most ML developers, packaging and distributing an ML application can be a challenging process involving the installation of drivers, libraries, and other system-specific dependencies. If you do all of this in the browser, that installation and distribution hassle goes away. With Tensorflow.js, it is as easy as including a link to the Tensorflow.js library and loading your ML model files where appropriate.

In addition, there are cases where deploying a model in the browser can improve the latency of your application. In these situations, it can be faster to perform computations in the browser than to make the round trip of sending user data to a remote server and waiting for results. Of course, the caveat here is that your model needs to be relatively small for this to work well in practice.
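The "just include a link" deployment described above can be as small as the page below. This is a minimal sketch: the CDN URL points at the published @tensorflow/tfjs package, while the model path is a placeholder for wherever your converted model files are served.

```html
<!-- Load Tensorflow.js from a CDN; no drivers or native dependencies needed. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script>
  // Fetch the converted model files served alongside the page (placeholder
  // path), then run all inference locally on the user's device.
  tf.loadLayersModel("model/model.json")
    .then((model) => console.log("model loaded", model));
</script>
```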


What's the goal of the talk?


It's an intermediate-level talk, and the goal is to first get attendees excited about the possibilities of ML in the browser, and then to show an end-to-end use case of how they can develop a model and get it deployed in a web application. This includes steps such as data collection, training a model, converting it into a format we can deploy in the browser, and looking at runtime statistics. The talk will also address questions such as: Should you train a model in the browser? Do you train it offline? And if you train it offline, what are the challenges associated with converting it to a format you can run in the browser?
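The conversion step mentioned above is typically done with the `tensorflowjs_converter` command-line tool that ships with the `tensorflowjs` pip package. The input and output paths below are placeholders:

```shell
# Install the converter (Python side).
pip install tensorflowjs

# Convert an offline-trained SavedModel into the model.json + sharded
# binary weights format that Tensorflow.js loads in the browser.
tensorflowjs_converter \
    --input_format=tf_saved_model \
    ./my_saved_model \
    ./web_model
```

The output directory can then be served as static files next to your web application.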


In this example that you're going through, will you be training it within the browser or offline?


This is one of the things I will cover in this talk. With Tensorflow.js there are a couple of different flows that a developer can follow. In flow number one, you can collect user data and train your model from scratch in the browser. The user can even specify hyperparameters of the model such as the architecture, the number of layers, building blocks, etc. In practice, this approach is suitable for small models which are trained on relatively small datasets. In the second flow, the model is trained offline and loaded for inference in the browser. This offline training approach allows you to take advantage of fast compute resources (GPUs, TPUs, etc.), train on large datasets, use complex models, and then convert the trained model into a format that can be loaded by a JavaScript application. My talk and use case example will focus on this approach. The third flow follows a similar pattern where the model is trained offline and loaded in the browser, but can now be retrained or fine-tuned in the browser using additional user data.


Two questions. Who is the core audience and what do you hope that persona to walk away with?


I am talking to the ML engineer, preferably one who is comfortable deploying machine learning models in backend applications in Python but has limited experience with browser use cases. The second target is the software engineer, perhaps even the front-end software engineer, who wants to integrate ML models into front-end applications built with tools like React.

I am hoping audience members will walk away with an understanding of use cases for ML in the browser, how to implement these use cases with the Tensorflow.js library, and best practices around deploying these models.

Speaker: Victor Dibia

Research Engineer in Machine Learning @cloudera

Victor Dibia is a Research Engineer with Cloudera's Fast Forward Labs. Prior to this, he was a Research Staff Member at the IBM TJ Watson Research Center, New York. His research interests are at the intersection of human-computer interaction, computational social science, and applied AI. A senior member of IEEE, Victor has published work at venues like the AAAI Conference on Artificial Intelligence and the ACM Conference on Human Factors in Computing Systems. His work has been featured in outlets such as the Wall Street Journal and VentureBeat. He holds an M.S. from Carnegie Mellon University and a Ph.D. from City University of Hong Kong.

