Parallelism and Acceleration for Large Language Models with Bryan Catanzaro

EPISODE 507

About this Episode

Today we're joined by Bryan Catanzaro, Vice President of Applied Deep Learning Research at NVIDIA.

Most folks know Bryan as one of the creators of cuDNN, NVIDIA's GPU-accelerated library of primitives for deep neural networks. In our conversation, we explore his interest in high-performance computing and its recent convergence with AI, as well as his current work on Megatron, a framework for training giant language models, and the basic approach to distributing a large language model across DGX infrastructure.

We also discuss the three kinds of parallelism Megatron provides when training models: tensor parallelism, pipeline parallelism, and data parallelism. We close with his work on the Deep Learning Super Sampling (DLSS) project and the role it's playing in the present and future of game development via ray tracing.
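For listeners who want a concrete picture of the first of these, here is a minimal NumPy sketch of the column-wise weight split behind tensor parallelism, simulated on a single machine. The names and shapes are illustrative assumptions, not Megatron's actual implementation, which shards layers across GPUs; pipeline parallelism instead splits consecutive layers across devices, and data parallelism replicates the whole model while splitting the batch.

```python
# Illustrative sketch of tensor (intra-layer) parallelism for one linear
# layer, simulated with NumPy on a single machine. All names and shapes
# here are hypothetical; Megatron itself shards weights across GPUs.
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out, n_devices = 4, 8, 8, 2

X = rng.standard_normal((batch, d_in))   # activations, replicated on every "device"
W = rng.standard_normal((d_in, d_out))   # full weight matrix of the layer

# Tensor parallelism: split W column-wise, one shard per device.
shards = np.split(W, n_devices, axis=1)

# Each "device" computes a partial output using only its own shard...
partials = [X @ w for w in shards]

# ...and a gather step concatenates the partial outputs into the full result.
Y_parallel = np.concatenate(partials, axis=1)

# The sharded computation matches the unsharded one.
assert np.allclose(Y_parallel, X @ W)
```

In a real multi-GPU setting the concatenation above corresponds to a collective communication step (an all-gather) rather than a local array operation, which is why communication bandwidth between GPUs matters so much for this style of parallelism.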
