Watermarking Large Language Models to Fight Plagiarism with Tom Goldstein

EPISODE 621
About this Episode

Today we’re joined by Tom Goldstein, an associate professor at the University of Maryland. Tom’s research sits at the intersection of ML and optimization, and he was previously featured in the New Yorker for his work on invisibility cloaks, clothing designed to evade object detectors. In our conversation, we focus on his more recent research on watermarking LLM output. We explore the motivations for adding these watermarks, how they work, and the different ways a watermark could be deployed, as well as the political and economic incentive structures around the adoption of watermarking and future directions for that line of work. We also discuss Tom’s research into data leakage, particularly in Stable Diffusion models, work that is analogous to recent guest Nicholas Carlini’s research into LLM data extraction.

