Engineering at AI Speed: Lessons from the First Agentically Accelerated Software Project

Summary

Disclaimer: This summary has been generated by AI. It is experimental, and feedback is welcomed. Please reach out to info@qconsf.com with any comments or concerns.

The presentation, titled Engineering at AI Speed: Lessons from the First Agentically Accelerated Software Project, was given by Adam Wolff, an engineer at Anthropic. The talk focused on the development and architectural decisions behind Claude Code, a developer tool designed to maximize AI development velocity.

Main Takeaways:

  • Simple by Design: Deliberately simple patterns in high-level languages that support async generators and typed interfaces outperform complex AI systems when speed is the primary competitive advantage.
  • Process-Product Convergence: Describes how rapid iteration cycles blur lines between development workflows and product features, with examples from Claude Code's evolution.
  • Engineering for Speed: Explores technical patterns for rapid release, updating, and feedback gathering that enable acceleration at scale.

Key Concepts Discussed:

  • The importance of choosing architectural designs that prioritize speed over complexity in software development.
  • Examples of how modules of Claude Code evolved through experimentation and adjustment.
  • Experiences showing how adopting AI tools like Claude Code enables rapid software delivery through iterative feedback and adjustment.
  • The significance of internal experimentation and failure in validating architectural choices and accelerating development.
  • Specific stories from the development of Claude Code, highlighting the decision-making processes and lessons learned.

Lessons from Case Studies:

  • Modular complexity is preferable to tangled complexity when making architectural decisions.
  • Continuous deployment and the ability to quickly reverse changes through mechanisms like feature flags are crucial to maintaining velocity (a minimal sketch follows this list).
  • Utilizing user feedback and development insights to guide product evolution and feature integration is essential.
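
As a minimal sketch of that second lesson (the flag name and helpers below are hypothetical, not Claude Code's actual configuration system), a feature flag lets a risky change ship dark and be reversed by flipping a value rather than rolling back a release:

```typescript
// Hypothetical in-memory flag store; in practice this would be backed by a
// remote configuration service so flags can be flipped without a new release.
const flags: Record<string, boolean> = { "new-shell-integration": false };

function isEnabled(flag: string): boolean {
  return flags[flag] ?? false;
}

// The risky new code path ships dark behind the flag; turning the flag off
// again is the quick reversal, with no rollback or redeploy required.
function runShellCommand(cmd: string): string {
  return isEnabled("new-shell-integration")
    ? `[new shell path] ${cmd}`
    : `[old shell path] ${cmd}`;
}

console.log(runShellCommand("ls"));
```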

Overall, the presentation stressed the importance of speed and adaptability in AI-driven software projects, outlining practical strategies and experiences that underpinned the successful use of Claude Code in accelerating development.

This is the end of the AI-generated content.


Abstract

Claude Code is the first developer tool built specifically to maximize AI development velocity. People routinely try to dissect it looking for its secret sauce or advanced architecture, but the key parts of the implementation are just some async generators and nicely typed interfaces composed with basic orchestration patterns. The real technical innovation wasn't building sophisticated systems—it was deliberately choosing simple solutions that maximize development velocity. In this talk, we'll explore the architectural decisions that prioritize speed over complexity, and how this velocity-first approach naturally leads to process-product convergence where internal workflows evolve into user-facing capabilities.
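
Claude Code's source is not reproduced here, but a minimal sketch can illustrate the pattern the abstract describes: an async generator that streams typed events, composed with a basic orchestration loop. All names below (AgentEvent, queryModel, runTool, agentTurn) and the stub implementations are hypothetical.

```typescript
// Hypothetical event types an agent turn can yield.
interface TextEvent { type: "text"; content: string; }
interface ToolUseEvent { type: "tool_use"; name: string; input: string; }
type AgentEvent = TextEvent | ToolUseEvent;

async function queryModel(messages: string[]): Promise<AgentEvent[]> {
  // Stub: a real implementation would call a model API and parse its response.
  return messages.length > 1
    ? [{ type: "text", content: "Done." }]
    : [{ type: "tool_use", name: "ls", input: "." }];
}

async function runTool(name: string, input: string): Promise<string> {
  // Stub: a real implementation would shell out, read files, and so on.
  return `tool ${name}(${input}) -> src/ package.json README.md`;
}

// The orchestration loop: an async generator that streams typed events and
// keeps prompting the model until it stops requesting tools.
async function* agentTurn(messages: string[]): AsyncGenerator<AgentEvent> {
  while (true) {
    let usedTool = false;
    for (const event of await queryModel(messages)) {
      yield event; // the caller (e.g. a terminal UI) renders each event as it arrives
      if (event.type === "tool_use") {
        usedTool = true;
        messages.push(await runTool(event.name, event.input));
      }
    }
    if (!usedTool) return; // a plain text reply ends the turn
  }
}

// Consuming the generator is a plain for-await-of loop.
(async () => {
  for await (const event of agentTurn(["List the files in this directory"])) {
    console.log(event);
  }
})();
```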

Main Takeaways:

  • Simple by Design: Why deliberately simple patterns in high-level languages that support async generators and typed interfaces outperform complex AI systems when speed is the primary competitive advantage
  • Process-Product Convergence: How rapid iteration cycles blur the lines between development workflows and product features, with concrete examples from Claude Code's evolution
  • Engineering for Speed: Technical patterns for rapid release, update, and feedback gathering that enable acceleration at scale

Interview:

What is the focus of your work these days?

The majority of my time is spent working on Claude Code:

  • Low-level implementation details like Claude’s shell and its integration with native tooling
  • Build and release process
  • Installer and auto-updater

What is the motivation behind your talk?

AI agents are reshaping software development faster than we can settle on shared best practices. As an engineering community, we will have to work together to figure out how to get the most from these incredible tools. So much of what makes Claude Code great is grounded in feedback from our users.

Who is your talk for?

Software Engineer, AI Engineer, Product Manager, Technical Program Manager, Designer


Speaker

Adam Wolff

Engineer and Individual Contributor to Claude Code @Anthropic, Previously @Robinhood, @Facebook

Adam Wolff currently works at Anthropic as an engineer and individual contributor to Claude Code, the first terminal-based agentic coding tool. Previously, he was the Head of Engineering at Robinhood, and before that, he was in charge of Facebook's product infrastructure group, which was responsible for popular open source technologies including React and GraphQL. Adam joined Facebook when the startup that he co-founded, Sharegrove Inc., was acquired in 2010. Before founding Sharegrove, Adam was the Chief Software Architect for Laszlo Systems, where he worked on rich client apps and open source software before it was cool.

Date

Tuesday Nov 18 / 10:35AM PST (50 minutes)

Location

Ballroom BC

Slides

Slides are not available

From the same track

Session

Dynamic Moments: Weaving LLMs into Deep Personalization at DoorDash

Tuesday Nov 18 / 03:55PM PST

In this talk, we’ll walk through how DoorDash is redefining personalization by tightly integrating cutting-edge large language models (LLMs) with deep learning architectures such as Two-Tower Embeddings (TTE) and Multi-Task Multi-Label (MTML) models.

Sudeep Das

Head of Machine Learning and Artificial Intelligence, New Business Verticals @DoorDash, Previously Machine Learning Lead @Netflix, 15+ Years in Machine Learning

Pradeep Muthukrishnan

Head of Growth for New Business Verticals @DoorDash, Previously Founder & CEO @TrustedFor, 15+ Years in Machine Learning

Session AI Agents

From Content to Agents: Scaling LLM Post-Training Through Real-World Applications and Simulation

Tuesday Nov 18 / 02:45PM PST

This talk presents a comprehensive journey through modern AI post-training techniques, from Pinterest's production-scale content discovery systems to enterprise agent training through Veris AI’s simulation.

Faye Zhang

Staff Software Engineer @Pinterest, Tech Lead on GenAI Search Traffic Projects, Speaker, Expert in AI/ML with a Strong Background in Large Distributed System

Andi Partovi

Co-Founder @Veris AI, Making AI Agents World-Ready

Session AI/ML

Automating the Web With MCP: Infra That Doesn’t Break

Tuesday Nov 18 / 05:05PM PST

AI agents are only as strong as the infrastructure beneath them. In this talk, we’ll walk through the architecture behind Browserbase’s model context protocol (MCP), built to support stateful browser automation at scale.

Paul Klein

Founder @Browserbase, previously Director of Self-Service & Engineering Manager @Mux, Co-Founder & CTO @Stream Club, Technical Lead @Twilio Inc.

Session

Engineering AI for Creativity and Curiosity on Mobile

Tuesday Nov 18 / 11:45AM PST

This talk shares practical lessons from building production-grade AI for creativity and curiosity on mobile devices.

Bhavuk Jain

Tech Lead @Google

Session AI/ML

Improving Meta Generative Ad Text using Reinforcement Learning

Tuesday Nov 18 / 01:35PM PST

Reinforcement Learning with Performance Feedback (RLPF) unlocks a new way of turning generic GenAI models into customized models fine-tuned for specific tasks. This approach is especially powerful when combined with in-house data and performance metrics.

Alex Nikulkov

Research Scientist (RL lead for Monetization GenAI) @Meta