With the Enzyme test framework no longer supporting React 18, migrating to React Testing Library (RTL) became imperative.
At Slack, our hybrid approach combined Abstract Syntax Tree (AST) transformations with a Large Language Model (LLM), Anthropic's Claude 2.1. Despite initial hurdles, we achieved an 80% conversion success rate.
Key innovations included AST conversions and annotations, DOM tree collection, stringent control mechanisms, and a cohesive pipeline that packages all of this information into LLM calls with feedback steps. This resulted in a 64% adoption rate and a 22% time saving in test case conversion.
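The pipeline described above can be sketched roughly as follows. This is a minimal illustration, not Slack's actual implementation: astConvert, collectDomTree, callLlm, and runTests are hypothetical stand-ins for the real codemod, DOM capture, Claude 2.1 API call, and test runner.

```javascript
// Hypothetical sketch of the hybrid AST + LLM conversion pipeline.

function astConvert(enzymeSource) {
  // Step 1: apply deterministic AST codemods where the pattern is known,
  // annotating anything that could not be converted mechanically.
  return enzymeSource.replace(
    /wrapper\.find\('button'\)/g,
    "screen.getByRole('button') /* ast-converted */"
  );
}

function collectDomTree(testName) {
  // Step 2: capture the rendered DOM so the LLM has concrete elements
  // to target with RTL queries (placeholder markup here).
  return `<button>Save ${testName}</button>`;
}

function buildPrompt(partiallyConverted, domTree) {
  // Step 3: package the annotated code and DOM context into one prompt.
  return `Convert to RTL:\n${partiallyConverted}\nDOM:\n${domTree}`;
}

function callLlm(prompt) {
  // Step 4: placeholder for the actual LLM call.
  return prompt.includes('DOM') ? 'converted RTL test' : null;
}

function runTests(code) {
  // Step 5: control mechanism — run the generated test and report status.
  return code === 'converted RTL test';
}

function convertWithFeedback(enzymeSource, testName, maxAttempts = 3) {
  const partial = astConvert(enzymeSource);
  const dom = collectDomTree(testName);
  // Feedback loop: retry the LLM step until the generated test passes
  // or the attempt budget is exhausted.
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const candidate = callLlm(buildPrompt(partial, dom));
    if (candidate && runTests(candidate)) {
      return { status: 'converted', attempts: attempt, code: candidate };
    }
  }
  return { status: 'needs-human-review', attempts: maxAttempts };
}

const result = convertWithFeedback(
  "wrapper.find('button').simulate('click')",
  'save'
);
console.log(result.status); // 'converted'
```

The feedback loop is the key control mechanism: only tests that actually pass are accepted automatically, and anything else is routed to a human, which is what makes a partial (80%) success rate safe to adopt.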
This success underscores the value of AI in large-scale code migrations and establishes a robust, innovative approach for similar challenges.
Interview:
What is the focus of your work?
The focus of my work these days includes three main areas: code migrations, test code generation, and integrating LLMs into developer workflows. I am particularly interested in understanding where LLMs fit in the development life cycle and how to smoothly integrate them. Additionally, I am working on open-sourcing the Enzyme to RTL codemod, which is the topic of my presentation at QCon.
What’s the motivation for your talk?
The motivation for my talk is to share our innovative approach to complex code conversions that has proven successful at Slack. With Enzyme still seeing 1.5 million weekly downloads, there's a real need for an efficient method to automatically convert Enzyme tests to React Testing Library (RTL). By showcasing a real-life use case of LLMs for code conversions, I'd like to demonstrate how this technology can save significant engineering hours and provide a practical solution for developers facing similar challenges.
Who is your talk for?
The target audience for my talk is mid-level to senior software engineers who are actively building AI solutions. These individuals are likely to have a solid foundation in software development and an interest in leveraging AI to enhance their workflows and solve complex problems.
What do you want someone to walk away with from your presentation?
I want attendees to walk away with a clear understanding of how we used LLMs to tackle our specific conversion problem and how this approach can be relevant to their own tasks. I would like to showcase the practical benefits of AI, addressing skepticism and demonstrating its effectiveness in certain cases.
What do you think is the next big disruption in software?
I believe the next big disruption in software will be the simplification of writing software that will significantly reduce the need for hard skills. This will likely be achieved through an additional layer of abstraction on top of existing coding languages, enabling developers to accomplish the same tasks with less complexity. This advancement will democratize software development, making it more accessible to a broader range of people and accelerating the pace of innovation.
Speaker
Sergii Gorbachov
Staff Software Engineer @Slack, Specializing in AI-Driven Tools for Automating Code Migrations and Test Authorship
Sergii Gorbachov is a Staff Software Engineer at Slack, based in Vancouver, Canada. As part of the DevXp pillar, he focuses on developing AI-driven tools to automate and streamline development processes. His recent projects include leveraging large language models (LLMs) for code migrations and automating test authorship. Outside of work, Sergii enjoys hiking, running, and biking in the stunning landscapes of British Columbia.