Transformers On Large-Scale Graphs with Bayan Bruss

EPISODE 641


About this Episode

Today we’re joined by Bayan Bruss, Vice President of Applied ML Research at Capital One. In our conversation, Bayan covers a pair of papers his team presented at this year’s ICML conference. We begin with the paper Interpretable Subspaces in Image Representations, where Bayan gives us a deep dive into the interpretability framework, embedding dimensions, contrastive approaches, and how their work advances image representation learning. We also explore GOAT: A Global Transformer on Large-scale Graphs, a scalable global graph transformer. We talk through the computational challenges, homophily and heterophily, model sparsity, and the methodologies their research proposes to get around the computational barrier when scaling to large graphs.
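To give a flavor of the computational barrier discussed above: naive global attention compares every node with every other node, which is O(N²) and infeasible on large graphs. One common workaround, sketched below purely for illustration (this is not the GOAT authors' code, and the crude one-pass k-means step and all variable names are my own assumptions), is to attend to a small codebook of k cluster centroids instead of all N nodes, reducing the score matrix from (N, N) to (N, k).

```python
# Illustrative sketch only: approximate global attention over N graph nodes
# by attending to k << N cluster centroids. Not the GOAT implementation.
import numpy as np

rng = np.random.default_rng(0)
N, k, d = 1000, 16, 32            # nodes, centroids, feature dimension
X = rng.normal(size=(N, d))       # node features (random stand-in for a real graph)

# Crude k-means-style codebook: random init, one assignment/update pass.
centroids = X[rng.choice(N, size=k, replace=False)].copy()
assign = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
for c in range(k):
    members = X[assign == c]
    if len(members):
        centroids[c] = members.mean(axis=0)

# Global attention approximated over the codebook: the score matrix is
# (N, k) instead of (N, N), so cost scales linearly in N.
scores = X @ centroids.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ centroids         # each node aggregates global context via centroids

print(out.shape)  # (1000, 32)
```

The trade-off is fidelity versus cost: the centroids summarize the global node population, so each node still receives a graph-wide signal without the quadratic comparison.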


Thanks to our sponsor Capital One

Capital One's AI and machine learning capabilities are central to how it builds products and services — and they're now at the forefront of what’s possible in banking. Whether helping consumers shop more safely online, giving customers new insights into their finances via award-winning mobile apps, or advancing research into cutting-edge applications of AI and machine learning, Capital One is using technology to make banking better.
To learn more about Capital One's Machine Learning and AI efforts and research, visit twimlai.com/go/capitalone!

