Session Topics
Perspective
Most AI/ML projects start shipping models into production, where they...
Panel Discussion
Operationalizing machine learning in an organization isn't only an issue of technology. People, organization, and culture play critical roles as well. Get any of these wrong and your organization will struggle. This panel will explore strategies for fine-tuning (or overhauling) your organization's softer side to increase its effectiveness.
Case Study
ML modeling teams at Twitter face a variety of uniquely...
Perspective
This talk will focus on methods that ensure the fair creation of training data for machine learning, whether the annotators are in-house, contracted, or crowdsourced workers online. It will show that, contrary to the widely held belief that training data creation is a race to the bottom in pricing, it is possible to maximize quality and fairness at the same time for almost any machine learning task.
Technology
As teams scale their AI platforms, they must decide which capabilities to build versus buy. Whether balancing standards and flexibility or differentiation and scale, there is a playbook that teams should run to make these decisions effectively. Join SigOpt Co-Founder & CEO Scott Clark’s session at TWIMLcon to learn how AI leaders weigh these tradeoffs.
Perspective
How do you set up machine learning projects to be successful, and what are the systems and processes you need to get the most out of your AI platform? This talk will describe the key patterns many organizations have followed to start shipping ML at scale. With the right conditions in place, you will see outsized impact across your company as new data products and ML models rapidly deploy at scale.
Technology
The state of artificial intelligence is continuing to advance rapidly,...
Keynote Interview
Deepak Agarwal, vice president of artificial intelligence at LinkedIn, will join Sam Charrington on the TWIMLcon stage for a live podcast interview to explore the challenges of scaling AI readiness across the organization, tailoring machine learning systems for developer, data scientist and member needs, and how his team is sharing its "data-first" mindset with the company as a whole.
Keynote Interview
What should we take away from how web giants and autonomous vehicles are redefining scale and impact for ML platforms? From founding the AI platforms team at Facebook and engineering at Google to now pioneering the development of AI/ML platforms in the uncharted waters of autonomous vehicles, Hussein Mehanna, Head of AI/ML at Cruise, is in a unique position to answer this question.
Keynote Interview
Andrew Ng, former Chief Scientist at Baidu and founding lead of Google Brain, will join Sam Charrington on the TWIMLcon stage for a live podcast interview. They'll discuss the state of AI in the enterprise, barriers to using deep learning in production and how to overcome them, developing a strong culture for AI, and other topics from Andrew's recently published AI transformation playbook.
Keynote Interview
Franziska Bell, PhD, leads a team of 100 data scientists building use-case-driven data science platforms at Uber on top of lower-level capabilities from Uber's Michelangelo. Fran will join Sam on the TWIMLcon stage for a live podcast interview exploring how both low-level and higher-level ML platforms can drive data scientist and developer productivity.
Case Study
The search feature is often the first step for online shoppers. Providing personalized search results is crucial to the e-commerce experience and poses a challenging, intellectually stimulating engineering problem. In this session, we will use Search as a case study to focus on the technology that allows us to serve these models in highly scalable production environments.
Case Study
Data is central to training and validating every aspect of autonomous vehicle software. In this talk, we’ll look at NVIDIA’s internal end-to-end AI platform, MagLev, which supports continuous data ingest from cars producing multiple terabytes of data per hour and enables AI designers to iterate on new neural network designs across thousands of GPU systems and validate their behavior over petabyte-scale datasets.
Case Study
Hosting and productionizing models is a pain point. Let’s fix that. Speakers Sumit Daryani and John Swift demonstrate a reference architecture implementation for building the required microservices and lay out, step by step, the critical aspects of a well-managed ML model deployment pipeline.
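As a rough illustration of the kind of microservice such a deployment pipeline produces (not the speakers' reference architecture; the framework, route, and model path below are assumptions), a model-scoring service can be as small as a single HTTP endpoint wrapping a serialized model:

```python
# Illustrative sketch only: a minimal model-serving microservice.
# FastAPI, the /predict route, and model.pkl are assumptions, not the
# reference architecture described in this session.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

with open("model.pkl", "rb") as f:  # hypothetical path to a trained model
    model = pickle.load(f)


class Features(BaseModel):
    values: list[float]  # flat feature vector expected by the model


@app.post("/predict")
def predict(features: Features) -> dict:
    # Score a single example and return the prediction as JSON.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```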
Technology
Models are the new code: while machine learning models are increasingly being used to make critical product and business decisions, the process of developing and deploying them remains ad hoc. In this talk, we draw on our experience with ModelDB and Verta to present best practices and tools for versioning models and ensuring the high quality of deployed ML models.
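To make the idea of model versioning concrete, here is a hand-rolled sketch of the kind of record a version registry typically captures (code, data, configuration, metrics). This is not the ModelDB or Verta API; the function names and fields are assumptions for illustration.

```python
# Illustrative sketch only: recording an immutable model version so a deployed
# model can be traced back to the exact code, data, and config that produced it.
import hashlib
import json
import time


def fingerprint(path: str) -> str:
    """Content hash of an artifact (training script, dataset snapshot, etc.)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def register_model_version(name, code_path, data_path, hyperparams, metrics,
                           registry_path="model_registry.jsonl"):
    """Append a version record to a simple append-only registry file."""
    record = {
        "model": name,
        "created_at": time.time(),
        "code_sha256": fingerprint(code_path),
        "data_sha256": fingerprint(data_path),
        "hyperparams": hyperparams,
        "metrics": metrics,
    }
    with open(registry_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```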
Case Study
In this talk, we will cover the journey we at Levi's undertook to go, in about two years, from a fully outsourced model to more than a dozen internally built machine learning models deployed in production that are ROI positive and solve real business problems. We'll also cover the challenges we've faced, key design methodologies, and more.
Panel Discussion
This panel explores how organizations can go beyond a general desire to be ethical in their use of ML/AI to building transparency, accountability, fairness, anti-bias, etc. into their ML pipelines and practices in a systematic and sustainable way.
Case Study
Productionizing machine learning models in an organization is difficult. The goal of this presentation is to discuss how Kubernetes can be leveraged to train, deploy, and monitor models in production settings, as well as lessons learned from using Kubernetes to productionize machine learning workloads at 2U.
Technology
Machine learning models are increasingly being used to make critical decisions that impact people’s lives. Learn how to measure bias in your datasets and models, and how to apply fairness algorithms to reduce bias across the machine learning pipeline.
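For a sense of what "measuring bias" looks like in practice, here is a minimal sketch of two common group-fairness metrics computed by hand; the column names, groups, and toy data are assumptions for the example, not material from the session.

```python
# Illustrative sketch only: statistical parity difference and disparate impact
# for a binary outcome, computed over a toy dataset.
import pandas as pd


def selection_rate(df: pd.DataFrame, group_value: str) -> float:
    """Fraction of a group that received the favorable outcome (label == 1)."""
    group = df[df["group"] == group_value]
    return (group["label"] == 1).mean()


df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "b"],
    "label": [1, 1, 0, 1, 0, 0, 0, 1],
})

rate_a = selection_rate(df, "a")  # assumed unprivileged group
rate_b = selection_rate(df, "b")  # assumed privileged group

statistical_parity_difference = rate_a - rate_b  # 0 means parity
disparate_impact = rate_a / rate_b               # 1 means parity

print(statistical_parity_difference, disparate_impact)
```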
Traditional enterprises are often burdened by factors that make scaling machine learning more difficult than at startups. Here, we discuss how traditional companies in a variety of industries can overcome these challenges and more successfully deliver data science and machine learning models into production.
Technology
Introducing Semblance, a machine learning feature generation system. Semblance features are platform-agnostic, which allows the same feature definition to be evaluated on different platforms and enables a lambda architecture system to evaluate features in real time within a few milliseconds.
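To illustrate the platform-agnostic idea (Semblance's actual API is not described in this abstract, so all names below are hypothetical), a feature can be defined once as a pure function and then evaluated by either a batch path or a real-time path, as in a lambda architecture:

```python
# Illustrative sketch only: one feature definition, two execution paths.
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass(frozen=True)
class FeatureDef:
    """A feature described once as a pure function over a raw event."""
    name: str
    transform: Callable[[dict], float]


def evaluate_batch(feature: FeatureDef, events: Iterable[dict]) -> list[float]:
    # Batch path: evaluate the feature over historical events.
    return [feature.transform(e) for e in events]


def evaluate_streaming(feature: FeatureDef, event: dict) -> float:
    # Real-time path: evaluate the same definition on a single live event.
    return feature.transform(event)


click_value = FeatureDef("click_value", lambda e: float(e.get("clicks", 0)) * 0.5)
history = [{"clicks": 2}, {"clicks": 4}]
print(evaluate_batch(click_value, history))            # batch evaluation
print(evaluate_streaming(click_value, {"clicks": 1}))  # real-time evaluation
```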
Case Study
In this talk, we will share how SurveyMonkey was able to extend its ML platform in a hybrid cloud. We will also share how we took stock of our strengths and existing investments and built a platform that made sense for us, along with what we learned along the way.
Panel Discussion
We typically hear conference presentations from the single perspective of an organization's data scientists, data engineers, platform engineers, or ML/AI leaders. "Team Teardown" turns this model on its head, speaking with several members of an organization's team. This panel will explore the evolution of machine learning at SurveyMonkey.
Panel Discussion
In this panel, we will explore how teams at Airbnb collaborate to deliver ML throughout the Airbnb product. We dive into how ML is organized within Airbnb, how teams interface with each other on large initiatives, building ML in a diverse multi-functional organization, and more.
Technology
Building AI applications comes with significant business risks, and nearly half of all companies report a lack of trust in AI. Explainable AI helps ML teams understand model behavior and predictions, filling a critical gap in operationalizing AI in verticals like FinTech, insurance, banking, logistics, and more.
Perspective
Production ML hurdles are often organization-wide hurdles. In this talk, we discuss challenges in production and industrial-grade ML, ranging from political issues and data product/market fit to limitations with software tools and technology platforms, as well as solutions aimed at tackling these challenges. 
Case Study
In this talk, the Xfinity X1 recommendations platform team at Comcast discusses their journey: migrating their data platform from serverless technologies like lambdas to streaming frameworks such as Flink, and engineering this highly personalized content discovery experience using their A/B testing platform.