AI Trends 2023: Reinforcement Learning – RLHF, Robotic Pre-Training, and Offline RL with Sergey Levine

EPISODE 612


About this Episode

Today we’re taking a deep dive into the latest and greatest in the world of reinforcement learning with our friend Sergey Levine, an associate professor at UC Berkeley. In our conversation with Sergey, we explore some game-changing developments in the field, including the release of ChatGPT and the rise of RLHF, as well as the broader intersection of RL and language models. We also discuss advancements in offline RL, pre-training for robotics models, inverse RL, and Q-learning, touching on a host of papers along the way. Finally, you don’t want to miss Sergey’s predictions for the top developments of 2023!


4 Responses

  1. Could I please confirm the name of the researcher Sergey references as providing negative results for offline RL? Shawn someone?
