As deep learning continues to drive advancements across industries, efficiently navigating the landscape of specialized AI hardware has a major impact on the cost and speed of operations. In addition, unlocking the full potential of this hardware through the appropriate software stack can be a daunting task.
This talk explores these advancements, focusing on enhanced AI capabilities in processors, specialized cores in GPUs, and optimized architectures in accelerators. Additionally, we will discuss software advancements that unlock the full potential of this hardware, such as optimized instruction sets, high-speed interconnects, and scalable infrastructures. By examining how these technologies and software enhancements cater to tasks like pretraining, fine-tuning, and inference, attendees will gain insights into selecting the most suitable hardware and software combinations for their AI workloads.
Speaker
Bibek Bhattarai
AI Technical Lead @Intel, Computer Scientist Invested in Hardware-Software Optimization, Building Scalable Data Analytics, Mining, and Learning Systems
Bibek is an AI Technical Lead at Intel, where he collaborates with customers to optimize the performance of their AI workloads across various deployment platforms, including cloud, on-premises, and hybrid environments. These workloads involve pretraining, fine-tuning, and deployment of state-of-the-art deep learning models using cutting-edge AI-specialized hardware in the form of CPUs, GPUs, and AI accelerators.
Bibek holds a doctorate in Computer Science and Engineering from George Washington University, where his research focused on large-scale graph computing, mining, and learning technologies. He is keenly interested in HW/SW optimization of various workloads, including graph computing, deep learning, and parallel computing.