Building Robust and Scalable Intelligent Writing Assistants: Challenges and Opportunities Leveraging GenAI

Text revision is a complex, iterative process. It is no surprise that human writers struggle to simultaneously manage the many demands and constraints of text revision when producing well-written texts: they must cover the content, follow linguistic norms, strike the right tone, adhere to discourse conventions, and more. This presents a massive challenge and opportunity for intelligent writing assistants, whose abilities have shifted enormously in the past few years and months through large language models and learning from human feedback. Beyond the improved quality of editing suggestions, writing assistance has also undergone a monumental shift from prescriptive systems to human-feedback-driven, collaborative systems. However, writing assistants still fall short in quality, personalization, and overall usability, limiting the value they provide to users. In this talk, I will present my research, challenges, and insights on building intelligent and interactive writing assistants for effective communication, and on navigating challenges pertaining to quality, personalization, and usability.


Vipul Raheja

Applied Research Scientist @Grammarly, Working on Robust and Scalable Intelligent Writing Assistants

Vipul Raheja is an Applied Research Scientist at Grammarly. He works on developing robust and scalable approaches centered around improving the quality of written communication, leveraging Natural Language Processing and Machine Learning. His research interests lie at the intersection of large language models and controllable text generation methods for text revision. He has authored multiple papers at top-tier NLP conferences such as ACL, NAACL, and EMNLP. He also co-organizes the Workshop on Intelligent and Interactive Writing Assistants (In2Writing).



Wednesday Oct 4 / 11:45AM PDT ( 50 minutes )


Seacliff ABC


AI/ML Natural Language Processing Research Large Language Models




From the same track

Session Edge

Rethinking Connectivity at the Edge: Scaling Fleets of Low-Powered Devices Using

Wednesday Oct 4 / 10:35AM PDT

Building distributed systems is hard. Today’s organizations demand their applications be as flexible and resilient as possible.

Jeremy Saenz

Senior Software Engineer @Synadia, Author of Martini, Negroni, Inject & CLI, Previously CPO @Kajabi

Session WebAssembly

Fast, Scalable, Secure: WebAssembly and the Future of Isolation

Wednesday Oct 4 / 03:55PM PDT

We have reached the limits of traditional hardware-based isolation technologies such as virtual machines, containers, and processes.

Tal Garfinkel

Research Scientist @UC San Diego

Session Hardware

Automating Bare Metal to Improve Your Quality of Life

Wednesday Oct 4 / 02:45PM PDT

Everything we build is built upon a substrate. Even the cloud computing we directly, or indirectly, use every day is built upon a substrate. In computing, we often call this substrate Bare Metal, and the closer you get to it, the more potential you can unlock... if you're willing.

Julia Kreger

Senior Principal Software Engineer @Red Hat

Session Networking

Building a Rack-Scale Computer with P4 at the Core: Challenges, Solutions, and Practices in Engineering Systems on Programmable Network Processors

Wednesday Oct 4 / 01:35PM PDT

This talk will present challenges, solutions, and engineering practices around building distributed systems on top of programmable network hardware through the lens of building the Oxide rack-scale computer.

Ryan Goodfellow

Engineer @Oxide, Working Group Member @P4Lang, Open Source Developer on @illumos