September 18-19, 2024
San Francisco, California
Note: The schedule is subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for PyTorch Conference 2024 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is automatically displayed in Pacific Daylight Time (UTC-7). To see the schedule in your preferred timezone, please select from the drop-down located at the bottom of the menu to the right.

IMPORTANT NOTE: Timing of sessions and room locations are subject to change.

Breakout Sessions
Wednesday, September 18
 

11:10am PDT

Meta Llama 3 and the Future of Responsible AI Development - Spencer Whitman & Vincent Gonguet, Meta
Wednesday September 18, 2024 11:10am - 11:35am PDT
As AI models become increasingly powerful and pervasive, trust and safety have become top priorities. Join us for a timely talk on Llama 3, our latest foundation model, and the cutting-edge trust and safety models and tools we've developed to ensure responsible AI development. In this talk, we'll dive into:
• The advancements of Llama 3 and its applications
• Our innovative trust and safety approaches, including toxicity detection and mitigation
• The open-source tools and resources we're sharing to empower the community
Discover how Meta is pushing the boundaries of trust and safety and learn how you can integrate these solutions into your own projects. Let's build a safer, more responsible AI future together!
Speakers

Spencer Whitman
Product Manager (AI Security), Meta

Vincent Gonguet
Director, GenAI Trust & Safety, Meta
Gateway Pavilion - Cowell Theater
  Breakout Sessions
  • Audience Any
  • Slides Attached Yes

11:10am PDT

Sponsored Session: NeMo-Aligner: A Scalable Toolkit for Model Alignment - Gerald Shen & Jimmy Zhang, NVIDIA
Wednesday September 18, 2024 11:10am - 11:35am PDT
Aligning AI models with human values and preferences is essential for making them safe and helpful. However, building an efficient and scalable toolkit for alignment can be challenging, especially when applied to state-of-the-art foundation models with billions or trillions of parameters. NeMo-Aligner is an open-source, optimized and scalable toolkit that implements alignment algorithms such as Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), SteerLM and Self-Play Fine-Tuning (SPIN). This talk will introduce NeMo-Aligner and show the steps we took to design and optimize the toolkit around various alignment algorithms. In particular, we discuss the RLHF implementation, where we observe close to 7x speedup and excellent scaling performance by adding TRT-LLM integration, carefully orchestrating communication and utilizing fast training kernels. We’re able to align state-of-the-art open-source models with NeMo-Aligner and hope our framework can enable the community to performantly customize, fine-tune and align foundational models at any scale.
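For readers unfamiliar with the algorithms listed above, the DPO objective is compact enough to sketch in plain Python. This is an illustrative refresher only, not NeMo-Aligner code; the function name and arguments are our own:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair. Each argument is the summed
    log-probability of the chosen/rejected response under the trainable
    policy or the frozen reference model."""
    # Implicit rewards: scaled log-ratios of policy to reference
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Negative log-sigmoid of the reward margin
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the policy prefers the chosen response more
# strongly than the reference does; at zero margin it equals log(2).
```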
Speakers

Gerald Shen
Engineer, NVIDIA
Gerald Shen is a member of the NVIDIA NeMo NLP Team specializing in model alignment. He leads the development of the NeMo-Aligner toolkit, a scalable toolkit to align large language models. This toolkit has been used to align models at NVIDIA with algorithms such as reinforcement…

Jimmy Zhang
Machine Learning Engineer, NVIDIA
Jimmy Zhang is a Senior Deep Learning Architect at NVIDIA. His work focuses on researching and developing the performance of deep learning frameworks, including NeMo and Megatron-LM. He completed his M.S. at UIUC, where he was mentored by Professor Rakesh Kumar.
Festival Pavilion - Breakout Room B

11:40am PDT

Building Scientific Computing Infrastructure Software with the PyTorch Ecosystem - Bharath Ramsundar, Deep Forest Sciences
Wednesday September 18, 2024 11:40am - 12:05pm PDT
The DeepChem library is a scientific computing library that implements deep learning infrastructure for drug discovery, materials discovery, and biology. The DeepChem community is one of the largest scientific open source projects built in PyTorch, with over 5K stars on GitHub and thousands of citations. The DeepChem community has learned a number of useful lessons for building and maintaining high-quality scientific code built on top of PyTorch. In this talk, I will share our learnings with the PyTorch community and also highlight opportunities for improving scientific support in the ecosystem.
Speakers

Bharath Ramsundar
CEO, Deep Forest Sciences
Bharath received a BA and BS from UC Berkeley in EECS and Mathematics and was valedictorian of his class in mathematics. He received his PhD in computer science from Stanford where he founded the DeepChem project. Bharath is founder and CEO of Deep Forest Sciences, a startup building…
Gateway Pavilion - Cowell Theater

11:40am PDT

ExecuTorch Beta and on-Device Generative AI Support - Mergen Nachin & Mengtao (Martin) Yuan, Meta
Wednesday September 18, 2024 11:40am - 12:05pm PDT
During this session, we will discuss real-life case studies focusing on the productionization of PyTorch models onto edge devices and welcome the community to begin adopting ExecuTorch. Since announcing the ExecuTorch MVP at the previous PTC, we have made significant progress in terms of stability, model coverage, accelerator performance, and developer experience, reaching a milestone that marks the transition to beta status. In addition to the above improvements, we continue to support generative AI models. Since the alpha launch that initially enabled support for LLama2/3 models, we have now expanded our capabilities to include multimodal use cases and developed mobile demo apps showcasing these new features.
Speakers

Mengtao (Martin) Yuan
Tech Lead Manager, Meta
Mengtao (Martin) Yuan is a Tech Lead Manager in Meta’s PyTorch Edge team. With multiple years of experience in the AI industry, Mengtao is focused on building software systems to help AI researchers and engineers deploy their models on edge devices such as mobile phones, AR/VR…

Mergen Nachin
Software Engineer, Meta
Mergen Nachin is a Software Engineer specializing in creating rich AI experiences on low latency, high performance, and privacy-aware embedded systems. With a background in distributed systems, developer infrastructure, remote sensing, and localization, he brings a versatile skill…
Festival Pavilion - Breakout Room A

2:10pm PDT

Maximizing Training Throughput Using Torch.Compile and FSDP - Linsong Chu & Antoni Viros i Martin, IBM Research; Brian Vaughan, IBM
Wednesday September 18, 2024 2:10pm - 2:35pm PDT
torch.compile is a graph compilation technique that improves GPU utilization. A key challenge in getting torch.compile to perform well is minimizing (or eliminating) graph breaks. This isn't trivial: even the Llama implementation provided by Meta has many graph breaks, resulting in reduced training throughput. In this talk we discuss:
1. how we addressed these challenges in order to train a model using torch.compile
2. how we combined torch.compile with FSDP and selective activation checkpointing to achieve the maximum throughput for training
3. model quality comparison between models trained with compile and no-compile
4. the best setup we have for different model sizes in the Llama family that achieves the maximum throughput and MFU number (e.g. 68% MFU for the 7B model on A100 GPUs!)
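The MFU figure quoted above follows from the standard model-FLOPs-utilization formula. A back-of-envelope sketch, where the throughput and peak-FLOPs numbers below are our own illustrative assumptions, not the speakers':

```python
def model_flops_utilization(params, tokens_per_sec_per_gpu, peak_flops_per_sec):
    """MFU = achieved model FLOPs/s divided by the hardware's peak FLOPs/s,
    using the common ~6 FLOPs per parameter per token approximation for a
    dense transformer's forward + backward pass."""
    achieved = 6 * params * tokens_per_sec_per_gpu
    return achieved / peak_flops_per_sec

# Hypothetical numbers: a 7B model at 5,000 tokens/s per GPU on an A100
# with a 312 TFLOP/s BF16 peak
mfu = model_flops_utilization(7e9, 5_000, 312e12)
print(f"{mfu:.1%}")  # → 67.3%
```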
Speakers

Antoni Viros i Martin
Research Scientist, IBM Research
Antoni is currently a Research Scientist at IBM Research, investigating optimization approaches for ML inference and training, with a focus on open-source technologies such as PyTorch. He holds a PhD in Aerospace Engineering from Texas A&M University, and has previously worked at…

Linsong Chu
Senior Technical Staff Member, IBM Research
Linsong is an STSM at IBM Research, focusing on FSDP, torch.compile and FP8 in the area of pre-training.

Brian Vaughan
Senior Technical Staff Member, IBM
An STSM at IBM focusing on foundation models.
Festival Pavilion - Breakout Room B

2:10pm PDT

State of PyTorch - Ji Li & Damien Sereni, Meta
Wednesday September 18, 2024 2:10pm - 2:35pm PDT
This talk gives a run-through of who builds PyTorch, new and upcoming improvements to the framework, and how to get involved, all thanks to our awesome community of contributors, partners, and ecosystem tools.
Speakers

Damien Sereni
Engineering Director, Meta

Ji Li
Data Scientist, Meta
Festival Pavilion - Breakout Room A

2:10pm PDT

The Impact and Challenges of Open Source Generative Datasets and Models - Aaron Gokaslan, Cornell University
Wednesday September 18, 2024 2:10pm - 2:35pm PDT
Open source generative models like OpenGPT2, BLOOM, and others have been pivotal in advancing AI technology. These models leverage extensive text data to achieve advanced linguistic capabilities. However, the trend towards proprietary tools and closed large language models is growing, posing unique challenges in open-source AI development. This discussion will explore the intricacies of training such models, the hurdles in dataset management, and the regulation of open-source contributions. We'll explore how to effectively iterate on collected data, prepare for extensive training sessions, and coordinate research across large open-source organizations. We will discuss the challenges of generative models in three different modalities: text, image, and genomics. The talk will draw from the speaker’s personal experience working on OpenWebText, OpenGPT2, BLOOM, CommonCanvas, Caduceus, and other generative models. We will also cover the changing AI environment and how the future of open source is threatened by onerous regulation, ever-increasing compute costs, and the commoditization of previously open data.
Speakers

Aaron Gokaslan
PhD Student, Cornell University
Aaron Gokaslan has worked on many popular generative models and datasets such as OpenWebText, CommonCanvas, BLOOM, DBRX, and Caduceus, collectively downloaded millions of times. His work on open source has earned him a Community Contributor Award at PyTorch Con and recognition from…
Gateway Pavilion - Cowell Theater

2:40pm PDT

Running State-of-Art Gen AI Models on-Device with NPU Acceleration - Felix Baum, Qualcomm
Wednesday September 18, 2024 2:40pm - 3:05pm PDT
Since the boom of generative AI, the industry has been moving towards on-device AI inferencing: no longer just a trend but a necessity for saving costs and achieving the best inference performance and ultra-low latency at the lowest possible power. In this session we go over the new features added to the Qualcomm AI Stack and how it works with the public release of ExecuTorch 1.0. We will discuss how to run traditional workloads as well as GenAI use cases, including the latest version of Llama, on a mobile device using the Qualcomm Hexagon NPU.
Speakers

Felix Baum
Senior Director of Product Management, Qualcomm
Felix Baum has an extensive background of over two decades in the embedded industry, where he has excelled both as an embedded developer and a product manager. Currently he is responsible for AI Software Products at Qualcomm. Prior to that, he led efforts for various real-time operating…
Festival Pavilion - Breakout Room B
  Breakout Sessions

2:40pm PDT

Sponsored Session: Accelerating AI Innovation: High Performance PyTorch at AMD - Robert Suderman & Ian Nordeng, AMD
Wednesday September 18, 2024 2:40pm - 3:05pm PDT
Explore the powerful collaboration between AMD and PyTorch, driving advancements in AI and machine learning. Learn how AMD’s Day-0 PyTorch support delivers cutting-edge performance and seamless compatibility.

This session will highlight the technical synergies that make AMD hardware an ideal choice for PyTorch frameworks, with real-world examples of accelerated workflows and breakthrough AI applications. Attendees will gain insights into how this dynamic partnership is enabling researchers, developers, and data scientists to push the boundaries of innovation and achieve unprecedented results in AI projects.

Speakers

Robert Suderman
Engineering Manager, AMD
Rob Suderman manages front-end support with AMD’s SHARK AI group with a goal of pushing tier one support for as many ML compute languages as possible. This has included core work on Torch-MLIR, JAX, TOSA, and StableHLO, including being a founding team member on the IREE project…

Ian Nordeng
Manager Software Development, AMD
Ian Nordeng is a manager within AMD’s AIG-Sharks group where he spearheads machine learning model development for IREE’s compiler consumption to enable AI workloads to efficiently run across AMD’s hardware portfolio. He has been working in the AI compiler space for the past…
Festival Pavilion - Breakout Room A
  Breakout Sessions
  • Slides Attached Yes

3:10pm PDT

TorchInductor CPU Backend Advancements: New Features and Performance Improvements - Jiong Gong & Leslie Fang, Intel
Wednesday September 18, 2024 3:10pm - 3:35pm PDT
This presentation provides an update on the latest advancements in the TorchInductor CPU backend since the last conference to bring best-in-class CPU performance for broad DL workloads. We will discuss new features and performance enhancements, including:
• Max-autotune support with codegen for GEMMs, boosting performance for GEMM-related operations
• Enhanced vectorized codegen support, now covering all data types beyond floating points with flexible vector factors, and optimized loop scheduling
• Comprehensive quantization support, including weight-only quantization (WoQ), and optimizations for dynamic quantization and quantization-aware training
• Improved attention support, featuring attention masks and optimizing SoftMax via flash attention v2, etc.
• AOTInductor support, enabling high-performance inference with frozen weights
• Native Windows support, with improved vectorization capabilities
These advancements, combined with ongoing optimizations, have resulted in significant performance improvements since PyTorch 2.1, demonstrated through extensive benchmarks and large language models (LLMs).
Speakers

Leslie Fang
Software Engineer, Intel
Leslie is a software engineer from Intel who has worked on PyTorch performance optimization on x86 servers for the past 4 years. Currently, he is mainly focusing on the feature domains of quantization, autocast, and the Inductor CPP/OpenMP backend in stock PyTorch.

Jiong Gong
Principal Engineer, Intel
Jiong is a software architect from Intel who works on PyTorch framework optimizations. He is the PyTorch module maintainer for CPU and compiler.
Festival Pavilion - Breakout Room B
  Breakout Sessions

4:30pm PDT

A Distributed Stateful Dataloader for Large-Scale Pretraining - Davis Wertheimer, IBM & Linsong Chu, IBM Research
Wednesday September 18, 2024 4:30pm - 4:55pm PDT
Large-scale model pretraining crucially relies on specialized and dedicated dataloaders that can, for example, partition and stream data asynchronously across multiple processes and physical nodes. In this talk we discuss one of the torch-native dataloaders we built and use at IBM Research for addressing these needs. Intended for use in large-scale model pretraining, particularly in research settings where rapid iteration between datasets may be required, our dataloader is distributed, stateful, checkpointable, composable and rescalable – while remaining a simple extension of the existing PyTorch dataloading framework. It automatically and invisibly handles data sharding, shuffling, subdataset weighting, checkpoint saving and loading, and custom user-defined preprocessing functions, with minimal overhead and high throughput. We discuss these properties and how we achieved them, such as reducing overhead by implementing a custom LCG random number generator, and demonstrate proof of concept on production-scale training of a 7B parameter Llama model over 4 trillion tokens.
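The abstract mentions cutting checkpoint overhead with a custom LCG random number generator. The appeal is that an LCG's entire state is a single integer, so a shuffle order can be checkpointed and resumed exactly. A minimal sketch of the idea (our own illustration, not IBM's implementation; the constants are the standard Numerical Recipes ones):

```python
class LCG:
    """Minimal linear congruential generator whose entire state is one
    integer, making data shuffling cheap to checkpoint and restore."""

    def __init__(self, seed=0):
        self.state = seed

    def next(self, bound):
        # Advance the 32-bit LCG and map the state into [0, bound)
        self.state = (1664525 * self.state + 1013904223) % (2**32)
        return self.state % bound

    def state_dict(self):
        return {"state": self.state}

    def load_state_dict(self, sd):
        self.state = sd["state"]

# Resuming from a checkpoint reproduces the exact same shuffle order
rng = LCG(seed=42)
_ = [rng.next(10) for _ in range(3)]
ckpt = rng.state_dict()
later = [rng.next(10) for _ in range(3)]
rng2 = LCG()
rng2.load_state_dict(ckpt)
assert [rng2.next(10) for _ in range(3)] == later
```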
Speakers

Davis Wertheimer
Staff Research Scientist, IBM
Davis Wertheimer earned his Ph.D. in Computer Science at Cornell University in 2022, conducting research under Bharath Hariharan on few-shot learning and machine learning under constraints. He now researches and develops AI models for IBM, training and accelerating large language…

Linsong Chu
Senior Technical Staff Member, IBM Research
Linsong is an STSM at IBM Research, focusing on FSDP, torch.compile and FP8 in the area of pre-training.
Gateway Pavilion - Cowell Theater

5:00pm PDT

Pushing the Performance Envelope: An Optimization Study for 3D Generative Modelling with PyTorch - Suvaditya Mukherjee & Shireen Chand, University of Southern California
Wednesday September 18, 2024 5:00pm - 5:25pm PDT
This work explores performance optimization strategies for training 3D generative models using PyTorch. We focus on training Variational Autoencoders (VAEs) on the ShapeNet dataset, a popular benchmark for this task. Our objective is to achieve high-fidelity reconstructions while minimizing the computational footprint and training time. We focus on:
1. Large-scale 3D dataset loading strategies using PyTorch & Google Cloud Storage Buckets
2. Implementation details and insights for 3D VAEs using PyTorch 2.x
3. Training using Automatic Mixed-precision regimes
4. Optimized training using torch.compile and different quantization techniques (as supported): dynamic quantization, static quantization, and static quantization-aware training
5. Comparative benchmark over several experiments performed, with a focus on execution time and memory footprint
Through this comprehensive study, we present a comparative analysis of the performance gains achieved by our optimized models. Our findings present empirical insights into the trade-offs between model accuracy, computational complexity, and hardware resource utilization.
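The distinction between the quantization modes listed above comes down to when the scale is chosen. A plain-Python sketch of symmetric int8 dynamic quantization (our own illustration of the arithmetic, not the torch.ao quantization API):

```python
def quantize_dynamic(values):
    """Symmetric int8 *dynamic* quantization: the scale is derived from the
    runtime range of the tensor. Static quantization instead fixes the scale
    ahead of time from calibration data."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

vals = [0.4, -1.0, 0.2, 0.7]
q, scale = quantize_dynamic(vals)
restored = dequantize(q, scale)
# Round-trip error is bounded by half a quantization step
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(vals, restored))
```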
Speakers

Shireen Chand
Student, University of Southern California
Shireen is a Masters student at the University of Southern California. She is majoring in Artificial Intelligence. She is also a Machine Learning Developer, a Google Summer of Code Contributor, and a Technical Writer for Medium.

Suvaditya Mukherjee
MS AI @ USC | ML GDE, University of Southern California
Suvaditya is a Masters student at the University of Southern California, majoring in Artificial Intelligence. He is also a Google Developer Expert for Machine Learning, and an external author at PyImageSearch. He likes to work on problems related to Computer Vision, VLMs, 3D Reconstruction…
Gateway Pavilion - Cowell Theater
 
Thursday, September 19
 

10:50am PDT

Sponsored Session: Democratizing AI: Powering the Future with Arm’s Global Compute Ecosystem - Gian Marco Iodice, Arm
Thursday September 19, 2024 10:50am - 11:15am PDT
Arm is excited to be at the center of the world's largest compute ecosystem at the dawn of the AI era. A key tenet of our mission is to democratize AI capabilities, empowering millions of developers to put advanced AI features into the hands of billions of users.

In this presentation, we'll explore how Arm is enabling the world’s leading open-source AI frameworks to leverage power-efficient Arm-based computing platforms and Arm architecture features to run fast and secure AI workloads. The session focuses on how our strategic partnership with the PyTorch and ExecuTorch community is enabling a seamless and transparent developer experience, to run workloads everywhere from cloud to edge. This session will highlight some of our optimized libraries, upstreamed contributions and a wealth of AI-related developer material to build the future of AI on Arm.
Speakers

Gian-Marco Iodice
GenAI Engineering Lead, Arm
Gian Marco Iodice is an experienced edge and mobile computing specialist at Arm for machine learning (ML) and leads engineering development for on-device GenAI. He received the MSc with honors in electronic engineering from the University of Pisa (Italy), where he specialized in HW/SW…
Gateway Pavilion - Cowell Theater

10:50am PDT

The Rise of `Transformers` in the Growing PyTorch Ecosystem - Arthur Zucker, Hugging Face
Thursday September 19, 2024 10:50am - 11:15am PDT
Explore how the `transformers` library grows and adapts to the fast-paced and ever-changing AI field to bring the best to the AI community.
Speakers

Arthur Zucker
Core Maintainer, Hugging Face
Arthur is a Core Maintainer at Hugging Face, maintaining several critical libraries such as transformers and tokenizers. He is the owner of the text and LLM parts of Hugging Face's open-source toolkits, resulting in the implementations of LLaMa, Mistral, MoEs, etc. and torch.compile…
Festival Pavilion - Breakout Room B

11:20am PDT

Sponsored Session: Torchchat: A Showcase of PyTorch LLM Ubiquity - Jack Khuu & Jesse White, Meta
Thursday September 19, 2024 11:20am - 11:45am PDT
This talk explores the journey of enabling LLMs in the PyTorch ecosystem, as well as how the teams behind AOT Inductor, ExecuTorch, and torchao collaborated to create torchchat, a showcase of PyTorch’s ability to run LLM inference everywhere.

Torchchat demonstrates the ubiquity, simplicity, and quality of PyTorch’s LLM support through performant, reproducible implementations not only for Python environments, but also on desktop, server, and on-device.

All of our work is open source and available on GitHub.
Speakers

Jack Khuu
Software Engineer, Meta
Software Engineer @ Meta working on the PyTorch Edge team. TL for torchchat, which is PyTorch's showcase of LLM inference ubiquity (Python, desktops, mobile, etc.). More broadly, I focus on the "Experience" of PyTorch Edge, encompassing User, Developer, and Community Experience. Ex-Lecturer…

Jesse White
Software Engineering Manager, Meta
Jesse is an engineering manager at PyTorch @ Meta, where he supports the Edge Experience team in improving the experience for on-device inference and training, including mobile, laptops, and embedded devices. With nearly 20 years of experience in startups, Jesse is passionate about…
Festival Pavilion - Breakout Room A
  Breakout Sessions

11:20am PDT

Training MoEs at Scale with PyTorch - Mihir Patel & Brian Chu, Databricks
Thursday September 19, 2024 11:20am - 11:45am PDT
Mixture-of-Experts (MoE) models are becoming an increasingly popular architecture choice for large language models (LLMs). In this talk, we describe how to train MoE models with PyTorch. After discussing various performance tradeoffs, we use PyTorch distributed tools like DTensor to build custom parallelism approaches, including expert parallelism via MegaBlocks. We then show how to get near-linear scaling to thousands of GPUs, combining PyTorch FSDP and HSDP with our parallelism strategies. We discuss many of the challenges of training at scale, including communication bottlenecks, hardware failures, and networking challenges. We further improve training at scale setups using tools like PyTorch Distributed Checkpointing for rapid saving and loading. We then highlight further optimizations to minimize challenges only present at scale, such as object store failures for large checkpoints.
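At the heart of any MoE layer is a router that sends each token to a few experts. A plain-Python sketch of top-k routing (our own illustration of the concept, not MegaBlocks or Databricks code):

```python
import math

def topk_route(gate_logits, k=2):
    """Pick the top-k experts for one token from router logits and
    softmax-normalize the gate weights over just the selected experts."""
    ranked = sorted(range(len(gate_logits)),
                    key=lambda i: gate_logits[i], reverse=True)
    chosen = ranked[:k]
    exps = [math.exp(gate_logits[i]) for i in chosen]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(chosen, exps)]

# One token, four experts: experts 2 and 0 win; their weights sum to 1
routes = topk_route([1.0, -0.5, 2.0, 0.0], k=2)
assert [i for i, _ in routes] == [2, 0]
```

Expert parallelism then places different experts on different devices, so each token's activations are exchanged only with the devices holding its chosen experts.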
Speakers

Mihir Patel
Research Engineer, Databricks
Mihir Patel is a Research Engineer at MosaicML / Databricks, where he works on distributed training at scale and serves as the tech lead for Composer, an open-source deep learning training library. His primary focus is on large model training, and he has helped build several open…

Brian Chu
Research Engineer, Databricks
Brian is a Research Engineer at MosaicML / Databricks, where he contributes to Composer and Foundry, open-source libraries for training LLMs. He has been involved in the DBRX project and products like the Databricks finetuning and pretraining API. Prior to joining Databricks, Brian…
Festival Pavilion - Breakout Room B

2:15pm PDT

Building PyTorch Computer Vision Algorithms for 100 Skin Shades - Emmanuel Acheampong, roboMUA
Thursday September 19, 2024 2:15pm - 2:40pm PDT
At roboMUA we're leading the charge in building predictive AI models for diverse skin shades using Convolutional Neural Networks (CNNs), and harnessing the power of Generative Adversarial Networks (GANs) specifically for generating realistic images of black hairstyles. Our session showcases PyTorch's versatility in both predictive and generative tasks, offering a comprehensive approach to inclusive AI. For predictive AI models, we leverage PyTorch's flexible framework to develop CNNs. Through innovative techniques in feature engineering and model architecture design, we demonstrate how PyTorch enables accurate prediction across 100 skin shades. Simultaneously, we showcase the transformative potential of GANs in the realm of black hairstyles. By training GANs on a curated dataset of diverse hair textures and styles, we illustrate how PyTorch facilitates the generation of lifelike images that celebrate the beauty and diversity of black hair. Attendees will gain insights into the data preprocessing, model training, and evaluation processes, and learn how PyTorch empowers developers to build inclusive solutions.
Speakers

Emmanuel Acheampong
CEO / Head of AI, yShade.ai (formerly roboMUA)
Emmanuel Acheampong is a co-founder and CEO of roboMUA - an innovative AI solutions company with a visionary focus on catering to all skin shades and types. He graduated from Notre Dame’s ESTEEM program with a Masters thesis on the intersection of Artificial Intelligence and directed…
Gateway Pavilion - Cowell Theater

2:15pm PDT

Data-Dependent Shapes in PT2 - Edward Yang, Meta
Thursday September 19, 2024 2:15pm - 2:40pm PDT
Data-dependent shapes are ubiquitous whenever you want to take advantage of sparsity in your data representation, whether it is in recommendation systems, mixture of experts or other use cases. We have made a lot of improvements to torch.compile's support for capturing and compiling data dependent shapes, but they also require some user knowledge to work with effectively. This talk will give an overview of PT2's facilities for data dependent compute and how to use them effectively.
Speakers

Edward Z. Yang
Research Engineer, Meta
Edward Yang has worked on PyTorch at Meta since nearly the very beginning. Currently, he works on all aspects of PT2, but with a particular focus on dynamic shapes support across the stack.
Festival Pavilion - Breakout Room A

2:15pm PDT

vLLM: Easy, Fast, and Cheap LLM Serving for Everyone - Woosuk Kwon & Xiaoxuan Liu, UC Berkeley
Thursday September 19, 2024 2:15pm - 2:40pm PDT
We will present vLLM, an open-source high-performance LLM inference engine built on top of PyTorch. Starting as a research project at UC Berkeley, vLLM has been one of the fastest and most popular LLM inference solutions in industry, reaching 20K+ stars and 350+ contributors. In this talk, we will cover how vLLM adopts various LLM inference optimizations and how it supports various AI accelerators such as AMD GPUs, Google TPUs, and AWS Inferentia. Also, we will discuss how vLLM benefits from PyTorch 2 and its ecosystem.
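Among the optimizations the abstract alludes to, vLLM is best known for paged KV-cache management. A toy sketch of the bookkeeping idea (purely illustrative; vLLM's real allocator and attention kernels are far more involved and live in CUDA/C++):

```python
class PagedKVCache:
    """vLLM-style paged KV-cache bookkeeping: each sequence's cache lives in
    fixed-size blocks allocated on demand, so memory is not reserved up
    front for the maximum sequence length."""

    def __init__(self, num_blocks, block_size=16):
        self.block_size = block_size
        self.free_blocks = list(range(num_blocks))
        self.block_tables = {}   # sequence id -> list of physical block ids
        self.lengths = {}        # sequence id -> number of cached tokens

    def append_token(self, seq_id):
        n = self.lengths.get(seq_id, 0)
        table = self.block_tables.setdefault(seq_id, [])
        if n % self.block_size == 0:   # current block full (or first token)
            table.append(self.free_blocks.pop())
        self.lengths[seq_id] = n + 1

    def free(self, seq_id):
        # Return the sequence's blocks to the free pool when it finishes
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

# 40 cached tokens with 16-token blocks occupy ceil(40/16) = 3 blocks
cache = PagedKVCache(num_blocks=8)
for _ in range(40):
    cache.append_token("seq0")
assert len(cache.block_tables["seq0"]) == 3
```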
Speakers

Lily (Xiaoxuan) Liu
PhD Student, UC Berkeley
Lily (Xiaoxuan) Liu is a PhD student at UC Berkeley, working with Professors Ion Stoica and Alvin Cheung. Her research focuses on machine learning systems, particularly optimizing latency for LLM inference and addressing memory bottlenecks in LLM systems. Her recent work explores…

Woosuk Kwon
PhD Student, UC Berkeley
Woosuk Kwon is a Ph.D. student at UC Berkeley, advised by Prof. Ion Stoica. He is interested in building practical, flexible, and high-performance software systems for emerging applications such as large language models. Recently, he has been developing vLLM, a high-performance open-source…
Festival Pavilion - Breakout Room B

2:45pm PDT

Blobs to Clips: Efficient End-to-End Video Data Loading - Andrew Ho & Ahmad Sharif, Meta
Thursday September 19, 2024 2:45pm - 3:10pm PDT
The PyTorch team has improved training speed by an order of magnitude for teams at Meta working on small- to large-scale multimodal video models. In this talk we’ll share our learnings on reducing GPU starvation by overcoming data loading challenges such as dealing with large distributed datasets, worker imbalance, compute bottlenecks due to parallel video decoding and sampling, checkpointing, and debuggability. As part of our commitment to open source, we are releasing a new decoding library and updating existing PyTorch libraries on GitHub, and invite feedback and contributions from the community.
Speakers

Ahmad Sharif
Software Engineer, Meta
SWE in PyTorch Content Domains. Previously: SWE at Google in Search, Privacy, and ChromeOS.

Andrew Ho
Machine Learning Engineer, Meta Platforms
ML Engineer at Meta on PyTorch, working on multi-modal LLM dataloading.
Gateway Pavilion - Cowell Theater

2:45pm PDT

Torchtitan: Large-Scale LLM Training Using Native PyTorch 3D Parallelism - Wanchao Liang, Meta & Linsong Chu, IBM Research
Thursday September 19, 2024 2:45pm - 3:10pm PDT
torchtitan is a proof-of-concept for large-scale LLM training using native PyTorch. It is a repo that showcases PyTorch's latest distributed training features in a clean, minimal codebase. We showcase end-to-end enablement of large-scale training features:
1. 3D/4D parallelism
2. Efficient distributed checkpoint save/load/resharding
3. Many efficient training techniques, including Float8, torch.compile, and activation checkpointing
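"3D parallelism" means every GPU sits at one coordinate of a (data, tensor, pipeline) device mesh. A toy sketch of the rank-to-coordinate mapping that a device mesh maintains (our own illustration; the dimension order is an arbitrary choice, and PyTorch's DeviceMesh handles this bookkeeping for you):

```python
def rank_to_coords(rank, mesh=(2, 2, 2)):
    """Map a global rank to (data, tensor, pipeline)-parallel coordinates
    on a 3D device mesh, with the last dimension varying fastest."""
    dp, tp, pp = mesh
    assert 0 <= rank < dp * tp * pp
    return (rank // (tp * pp), (rank // pp) % tp, rank % pp)

# 8 GPUs on a 2x2x2 mesh: each rank gets a unique coordinate triple,
# and collectives run along one mesh dimension at a time.
coords = [rank_to_coords(r) for r in range(8)]
assert len(set(coords)) == 8
```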
Speakers
avatar for Wanchao Liang

Wanchao Liang

Software Engineer, Meta Platforms, Inc.
Software Engineer at Meta, PyTorch team Tech Lead in PyTorch Distributed training. Author of torchtitan, Tensor Parallel and DTensor, a fundamental distributed abstraction to perform distributed computation. Previously worked on the TorchScript compiler, ONNX.
avatar for LINSONG CHU

LINSONG CHU

Senior Technical Staff Member, IBM Research
Linsong is an STSM at IBM Research, focusing on FSDP, torch.compile, and FP8 in the area of pre-training.
Thursday September 19, 2024 2:45pm - 3:10pm PDT
Festival Pavilion - Breakout Room B

3:15pm PDT

Slaying OOMs - Mark Saroufim & Jane Xu, Meta
Thursday September 19, 2024 3:15pm - 3:40pm PDT
Have you ever hit an OOM (and wished you had more VRAM)? Who hasn't! Hop on the bus with us and feel the road become smoother as we talk about stacking together techniques like FSDP2 + QLoRA + CPU offloading + fused Adam (thanks, Intel) + more in native PyTorch. We will give an overview of these techniques as well as the hard edges we solved in their composition. Curious for more? Or...still OOMing? We also plan on discussing our more researchy work on offloading, pagedness, and low-precision optimizers.
Speakers
Jane Xu

SWE, Meta
I'm Jane and I work on the PyTorch core library! Tell me your favorite optimizer, complain to me about your latest OOM, teach me about what you’re excited about.
Mark Saroufim

Software Engineer, Meta
Mark Saroufim is a PyTorch Engineer at Meta working on inference, compilers and community.
Thursday September 19, 2024 3:15pm - 3:40pm PDT
Festival Pavilion - Breakout Room B

3:15pm PDT

Sponsored Session: PyTorch Support by Google Enabling Performance from Cloud to Edge - Mark Sherwood & Shauheen Zahirazami, Google
Thursday September 19, 2024 3:15pm - 3:40pm PDT
In this session we will cover various ways teams at Google are working to help the PyTorch community achieve performance and scale from cloud to edge. We will cover how Google Cloud customers can use PyTorch and OpenXLA to get competitive performance for their ML workloads. We'll also cover how Google AI Edge Torch works with PyTorch to help developers integrate LLMs, vision models, and more to easily create new edge applications that can run on a wide set of devices.
Speakers
Mark Sherwood

Senior Product Manager, Google AI Edge, Google
Mark is a Senior Product Manager on the Google AI Edge team, responsible for LiteRT (formerly known as TensorFlow Lite) and MediaPipe. He specializes in shipping ML powered features on Android, iOS, and Web using the very smallest to the very largest on-device models.
Shauheen Zahirazami

Senior Staff Engineering Manager, Cloud Machine Learning Compute Services, Google
Shauheen has a PhD in control engineering and a BSc in applied mathematics. He currently leads Cloud TPU Machine Learning teams at Google, which are responsible for ML frameworks and the 3P ecosystem, including the PyTorch teams that develop PyTorch/XLA.
Thursday September 19, 2024 3:15pm - 3:40pm PDT
Gateway Pavilion - Cowell Theater

3:15pm PDT

Torch.Compile for Autograd, DDP and FSDP - Will Feng , Chien-Chin Huang & Simon Fan, Meta
Thursday September 19, 2024 3:15pm - 3:40pm PDT
In this talk, we will present the latest advancements in torch.compile for distributed training via DDP and FSDP. We will first introduce Compiled Autograd, a torch.compile mode that fully captures the backpropagation step, including the communication collective operators used in distributed training. We will then cover the improvements this new approach brings to compiled DDP/FSDP, notably removing DDP/FSDP graph breaks, which opens up the potential for improved compute/communication overlap.
Speakers
CH

Chien-Chin Huang

Software Engineer, Meta
Software Engineer, PyTorch Distributed, Meta
Simon Fan

Software Engineer, Meta
I'm a software engineer on the PyTorch Compiler team. I focus on torch.compile for distributed training frameworks.
Will Feng

Software Engineer, Meta Platforms, Inc.
Will Feng is a Software Engineer on the PyTorch Compiler team at Meta. He has been working on PyTorch core and its ecosystem for the past 7 years. He is now working on, and most excited about, torch.compile for distributed training performance.
Thursday September 19, 2024 3:15pm - 3:40pm PDT
Festival Pavilion - Breakout Room A

4:05pm PDT

Understanding the LLM Inference Workload - Mark Moyou, NVIDIA
Thursday September 19, 2024 4:05pm - 4:30pm PDT
Understanding how to effectively size a production-grade LLM deployment requires understanding the model(s), the compute hardware, quantization and parallelization methods, KV Cache budgets, input and output token length predictions, model adapter management, and much more.
- Why LLM inference is different from standard deep learning inference
- Current and future NVIDIA GPU overview: which GPU(s) for which models, and why
- Understanding the importance of building inference engines
- Deep recap of the attention mechanism, along with different types of popular attention mechanisms used in production
- Deep dive on KV Cache and managing KV Cache budgets
- Parallelism (reducing latency): mainly tensor parallelism, but data, sequence, pipeline, and expert parallelism will be highlighted
- Quantization methods on weights, activations, and KV Cache to reduce engine sizes for more effective GPU utilization
- Increasing throughput with in-flight batching and other techniques
- Detailed performance analysis of LLM deployments, looking at time to first token, inter-token latencies, LLM deployment characterizations, and more that can help reduce deployment costs
Speakers
Mark Moyou

Sr. Data Scientist, NVIDIA
Dr. Mark Moyou is a Senior Data Scientist at NVIDIA working with enterprise clients on AI strategy and deploying machine learning applications to production. He is the host of the Caribbean Tech Pioneers Podcast and The AI Portfolio Podcast, and is the Director of the Optimized AI Confere...
Thursday September 19, 2024 4:05pm - 4:30pm PDT
Festival Pavilion - Breakout Room B

4:35pm PDT

Intel GPU in Upstream PyTorch: Expanding GPU Choices and Enhancing Backend Flexibility - Eikan Wang & Min Jean Cho, Intel
Thursday September 19, 2024 4:35pm - 5:00pm PDT
The integration of Intel GPU support into PyTorch marks a pivotal enhancement for the PyTorch device and runtime. We generalized the PyTorch device and runtime to accommodate streaming devices. This generalization not only facilitates the deployment of PyTorch on ubiquitous hardware but also makes the integration of different HW backends easier. In addition, PyTorch with Intel GPU supports various Intel GPUs, from the data center to the client, enriching and democratizing the PyTorch HW ecosystem. Particularly in AI PC scenarios, where Intel's integrated and discrete GPUs are prevalent, PyTorch with Intel GPU can deliver promising performance and an improved out-of-the-box experience, extending PyTorch's applicability significantly.
Speakers
Eikan Wang

AI Frameworks Engineer, Intel
Eikan is a staff engineer at Intel and a DL framework tech lead with full-stack experience in DL, from various AI applications to frameworks, libraries, and DL compilers. He is actively optimizing the torch.compile stack for Intel platforms, including Inductor C++/OpenMP...
MJ

Min Jean Cho

Deep Learning Software Engineer, Intel Corporation
Thursday September 19, 2024 4:35pm - 5:00pm PDT
Festival Pavilion - Breakout Room B

4:35pm PDT

Unlocking the Enigma: Crafting Unbiased, Transparent, and Explainable Large Language Models - Rashmi Nagpal, Patchstack
Thursday September 19, 2024 4:35pm - 5:00pm PDT
In an era where artificial intelligence reigns supreme, the statistics are both perplexing and thought-provoking: a mere 13% of large language models manage to transcend the realms of research and enter the practical world of production. Who bears the responsibility when these models err, spewing out biased or discriminatory outputs? It's time to demystify the complex landscape of machine learning ethics and carve a path towards a brighter, more accountable future! In this talk, we will first navigate the profound impacts of large language models across diverse domains, from lifesaving advances in medicine to safeguarding our nations through enhanced security protocols. Second, as we marvel at the data-driven decisions made by these models, we will confront the darker shadows they cast: the looming spectre of bias in the data. Finally, we will delve deep into the art of building interpretable models and navigating the maze of ethical considerations. Through a live demonstration in PyTorch, we will witness how to craft unbiased, transparent, and explainable models.
Speakers
Rashmi Nagpal

Machine Learning Engineer, Patchstack
Rashmi, a passionate researcher at MIT CSAIL and a machine learning engineer at Patchstack, is dedicated to crafting beautiful AI applications. With nearly 5 years of industry experience, she has brought ideas to life at pre-seed startups and contributed to impactful redesigns...
Thursday September 19, 2024 4:35pm - 5:00pm PDT
Festival Pavilion - Breakout Room A
  Breakout Sessions

5:05pm PDT

CANCELED: The Ethical Implications of AI and the Environment: A Focus on Water - Amber Hasan, Ethical Tech AI & Senegal Tuklor Williams, Broken Pencil Pictures llc
Thursday September 19, 2024 5:05pm - 5:30pm PDT
Artificial Intelligence (AI) has the potential to revolutionize various sectors, including environmental conservation and water management. However, the deployment of AI technologies raises ethical questions about their environmental impact, particularly on water resources. This presentation will discuss the ethical implications of AI concerning water, while also exploring how AI can both positively and negatively affect water resources and the broader ecosystem. My goal is to facilitate a critical conversation around how to balance technological advancement with environmental stewardship. Objectives:
- Understand ethical implications: provide an in-depth overview of how AI impacts water resources, focusing on ethical concerns related to AI's water footprint, including, but not limited to, energy consumption and water usage in data centers.
- Explore positive applications: discuss successful implementations of AI in water conservation, pollution monitoring, and efficient resource management, along with potential future applications where AI could contribute to sustainable water management and connect stakeholders to address ethical concerns and solutions.
Thursday September 19, 2024 5:05pm - 5:30pm PDT
Festival Pavilion - Breakout Room A

5:05pm PDT

Implementing a Custom Torch.Compile Backend - A Case Study - Maanav Dalal & Yulong Wang, Microsoft
Thursday September 19, 2024 5:05pm - 5:30pm PDT
This presentation will dive into the development of the ONNX Runtime (ORT) backend for torch.compile. We'll cover the implementation process, starting with a PyTorch 2.0 generated FX graph, highlighting the unique challenges encountered when serving ORT-specific scenarios and how we solved them. Attendees will gain insights into optimizing performance, overcoming integration hurdles, and achieving efficient execution. Whether you're a developer looking to extend PyTorch's capabilities for your own use cases, keen to learn about ONNX Runtime, or interested in backend performance optimization and the many steps we've taken to get to where we are now, this session promises valuable takeaways and practical knowledge.
Speakers
Maanav Dalal

Program Manager, Microsoft
PM @Microsoft, working on the ONNX Exporter team. I adore learning about consumer tech and experimenting with bleeding edge software. I'm passionate about creating delightful user experiences.
YW

Yulong Wang

Software Engineer, Microsoft
Thursday September 19, 2024 5:05pm - 5:30pm PDT
Festival Pavilion - Breakout Room B
 