Unconference: Modern ML

What is an unconference?

An unconference is a participant-driven meeting. Attendees come together, bringing their challenges and relying on the experience and know-how of their peers for solutions. A professional facilitator is also there to help keep the discussion moving forward, but where it goes is up to the participants.

It's a facilitated peer group that avoids the hierarchical aspects of a conventional conference, such as top-down organization. Only the broad themes are predetermined. Everything else is space for attendees to bounce ideas off one another, compare shared challenges and rewards, and identify new ideas and goals.

Our unconference sessions have been based on the Open Space Technology and Lean Coffee formats since 2006.

Why are we doing unconference sessions?

We have designed QCon for senior software practitioners. That role comes with demanding challenges and complex problems. 

Connecting with your peers in a structured environment allows you to:

  • Broaden your perspective with the benefit of the experience of others.
  • Challenge how you've been doing things by breaking out of your bubble.
  • Learn from peers who have already overcome the challenges you're facing now.
  • Benchmark your solutions against other teams and organizations.
  • Get real-world perspectives on challenges that may be too novel or specific for books or presentations to address.
  • Validate your technical roadmap with real-world research.
  • Connect with others like you and build relationships that go beyond the event.

Date

Tuesday Oct 3 / 03:55PM PDT (50 minutes)

Location

Seacliff D

From the same track

Session AI/ML

Chronon - Airbnb’s End-to-End Feature Platform

Tuesday Oct 3 / 10:35AM PDT

ML models typically use upwards of 100 features to generate a single prediction. As a result, the number of data pipelines explodes and request fanout at prediction time is high.

Nikhil Simha

Author of "Chronon Feature Platform", Previously Built Stream Processing Infra @Meta and NLP Systems @Amazon & @Walmartlabs

Session AI/ML

Defensible Moats: Unlocking Enterprise Value with Large Language Models

Tuesday Oct 3 / 11:45AM PDT

Building LLM-powered applications using APIs alone poses significant challenges for enterprises. These challenges include data fragmentation, the absence of a shared business vocabulary, privacy concerns regarding data, and diverse objectives among data and ML users.

Nischal HP

Vice President of Data Science @Scoutbee, Decade of Experience Building Enterprise AI

Session Distributed Computing

Modern Compute Stack for Scaling Large AI/ML/LLM Workloads

Tuesday Oct 3 / 01:35PM PDT

Advanced machine learning (ML) models, particularly large language models (LLMs), require scaling beyond a single machine.

Jules Damji

Lead Developer Advocate @Anyscale, MLflow Contributor, and Co-Author of "Learning Spark"

Session AI/ML

Generative Search: Practical Advice for Retrieval Augmented Generation (RAG)

Tuesday Oct 3 / 02:45PM PDT

In this presentation, we will delve into the world of Retrieval Augmented Generation (RAG) and its significance for Large Language Models (LLMs) like OpenAI's GPT-4. With the rapid evolution of data, LLMs face the challenge of staying up-to-date and contextually relevant.

Sam Partee

Principal Engineer @Redis

Session AI/ML

Building Guardrails for Enterprise AI Applications W/ LLMs

Tuesday Oct 3 / 05:05PM PDT

Large Language Models (LLMs) such as ChatGPT have revolutionized AI applications, offering unprecedented potential for complex real-world scenarios. However, fully harnessing this potential comes with unique challenges such as model brittleness and the need for consistent, accurate outputs.

Shreya Rajpal

Founder @Guardrails AI, Experienced ML Practitioner with a Decade of Experience in ML Research, Applications and Infrastructure