September 18-19, 2024
San Francisco, California
Note: The schedule is subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for PyTorch Conference 2024 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is automatically displayed in Pacific Daylight Time (UTC-7). To see the schedule in your preferred timezone, please select from the drop-down located at the bottom of the menu to the right.

IMPORTANT NOTE: Timing of sessions and room locations are subject to change.

Lightning Talks
Thursday, September 19
 

10:50am PDT

Lightning Talk: d-Matrix LLM Compression Flow Based on Torch.Fx: Simplifying PTQ/QAT - Zifei Xu & Tristan Webb, d-Matrix Corporation
Thursday September 19, 2024 10:50am - 11:00am PDT
We introduce dmx-compressor, d-Matrix's open-source LLM compression toolkit that is modular, robust, efficient, and user-friendly. It utilizes symbolic tracing and fx.Transformer for network compression while keeping the model a first-class citizen in PyTorch for the user, despite the prevalent graph dynamism in LLMs. It achieves this by maintaining both the original nn.Module and a just-in-time (JIT) traced and transformed fx.GraphModule representation behind the scenes, in conjunction with an abstraction that cleanly decouples network compression from the original model graph definition. This design allows the FXIR to dynamically adapt to diverse forward call signatures and flow-control arguments throughout quantization-aware training and post-training quantization written in plain PyTorch, yielding a compressed FXIR fully compatible with application-level APIs like the Hugging Face pipeline. We also provide a graph visualizer based on fx.Interpreter for ease of debugging. We believe this project will empower the community to build efficient LLMs for deployment on custom hardware accelerators and contribute to the PyTorch ecosystem.
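For readers unfamiliar with the mechanism being described, here is a minimal sketch of the general torch.fx pattern the abstract refers to: symbolically trace a module, then rewrite it with an fx.Transformer pass while the result remains a callable nn.Module. The model and the fake-quantize rule are invented for illustration; this is not the dmx-compressor API.

```python
import torch
import torch.nn as nn
import torch.fx as fx

class TinyMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(16, 32)
        self.fc2 = nn.Linear(32, 4)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

class FakeQuantTransformer(fx.Transformer):
    """Insert a crude fake-quantize op after every call_module node (illustrative only)."""
    def call_module(self, target, args, kwargs):
        out = super().call_module(target, args, kwargs)
        return torch.round(out * 16) / 16   # stand-in for a real quantizer

model = TinyMLP()
traced = fx.symbolic_trace(model)              # FX IR of the original nn.Module
compressed = FakeQuantTransformer(traced).transform()

print(compressed.graph)                        # shows the inserted mul/round/div nodes
print(compressed(torch.randn(2, 16)).shape)    # still callable like any nn.Module
```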
Speakers

Zifei Xu

Senior Machine Learning Research Engineer, d-Matrix Corporation
Zifei is a Senior Machine Learning Research Engineer at d-Matrix. Her current work focuses on developing model quantization pipelines and efficient quantization algorithms. She graduated from Stanford University with a Master's degree in Computational & Mathematical Engineering and...

Tristan Webb

ML Engineer, d-Matrix
Tristan's background is primarily in computer science and mathematics, which led him to a PhD in Complexity Science at the University of Warwick, where he worked with large computational neuroscience models of spiking neural networks using simulators written in C...
Thursday September 19, 2024 10:50am - 11:00am PDT
Festival Pavilion - Breakout Room A
  Lightning Talks

11:05am PDT

Lightning Talk: LLMs on Edge with AI Accelerators - Chen Lai, Kimish Patel & Cemal Bilgin, Meta
Thursday September 19, 2024 11:05am - 11:15am PDT
LLMs are known to be compute heavy and to consume lots of resources (almost all resources on phones), including memory and power. A natural approach is to leverage AI hardware accelerators, for example the Apple Neural Engine (ANE) on Apple devices and the HTP on Qualcomm SoCs, to make them run fast and efficiently. Only by optimizing model latency, memory consumption, and power usage to a certain level will users be interested in installing the models on their devices. In this session, we'd like to introduce how we leverage these AI accelerators within the PyTorch ecosystem to achieve state-of-the-art performance for Llama 3 on device, via ExecuTorch and our partnerships with Apple and Qualcomm. Hardware companies usually have their own AI accelerators, and these often have different characteristics: one may support a different set of operators than another, and one may only support static shapes (like the HTP). However, transformer-based optimizations can be generic. We'll discuss in more detail how we apply both the generic optimizations and the backend-specific ones. The techniques applied here are not just for LLMs; they can be applied to other transformer-based models.
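As context for the session, this is a rough outline of the ExecuTorch export flow it builds on: capture with torch.export, lower to the Edge dialect, then emit a .pte program for the on-device runtime. Backend delegation to accelerators such as Apple's ANE or Qualcomm's HTP is done by passing a backend-specific partitioner to to_backend(...); exact partitioner imports vary by ExecuTorch release, so treat this as a sketch rather than a recipe.

```python
import torch
from torch.export import export
from executorch.exir import to_edge

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.silu(x @ x.T)

exported = export(TinyModel(), (torch.randn(4, 8),))   # capture a full graph
edge = to_edge(exported)                                # lower to the Edge dialect
# edge = edge.to_backend(SomeBackendPartitioner())      # delegate subgraphs (backend-specific)
program = edge.to_executorch()                          # final ExecuTorch program

with open("tiny_model.pte", "wb") as f:                 # consumed by the on-device runtime
    f.write(program.buffer)
```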
Speakers

Kimish Patel

Software Engineer, Meta Platforms
Kimish has worked on enabling PyTorch on Meta's family of apps, primarily focusing on performance optimizations. His past experiences include hardware/software co-design, CPU architecture, and CPU/GPU performance optimization.

Chen Lai

Software Engineer, Meta
Software engineer focusing on bringing up accelerators on devices.

Cemal Bilgin

Engineering Manager, Meta
Engineering Manager PyTorch Edge Acceleration
Thursday September 19, 2024 11:05am - 11:15am PDT
Festival Pavilion - Breakout Room A
  Lightning Talks

11:20am PDT

Lightning Talk: Building and Supporting the Chinese PyTorch Community: Resources, Tutorials, and Engagement - Zong Zesheng, Huawei
Thursday September 19, 2024 11:20am - 11:30am PDT
This talk provides a comprehensive introduction to the Chinese PyTorch community, with the goal of inspiring more users to join and contribute, fostering a vibrant and inclusive environment for PyTorch enthusiasts in China. Chinese PyTorch homepage: an introduction to the official Chinese version of the PyTorch website, highlighting its features and giving navigation tips for key sections such as documentation, tutorials, and community events, to better connect users in China with the PyTorch community. Localized tutorials and documentation: the 2.x releases had no translated documentation, which made it hard for beginners who are not comfortable with English to keep up with the latest PyTorch features; we have translated the official documents and tutorials, covering everything from basic PyTorch concepts to advanced applications. Interactive tutorials: previously there were no interactive tutorials (like Google Colab) for Chinese students and beginners, who had to set up an environment before getting started with PyTorch; an online notebook and tutorials are now available so beginners can practice and tune each step.
Speakers

Zong Zesheng

Software Engineer, Huawei
Currently working to give Chinese users easier access to PyTorch resources and to create a friendly user experience for beginners.
Thursday September 19, 2024 11:20am - 11:30am PDT
Gateway Pavilion - Cowell Theater
  Lightning Talks

11:35am PDT

Lightning Talk: Distributing a Million Open Models in the Wild: Lessons Learned from the Hugging Face Hub - Omar Sanseviero, Hugging Face
Thursday September 19, 2024 11:35am - 11:45am PDT
The Hugging Face Hub has over 300,000 PyTorch models. Distributing that many models poses challenges. In this talk, Omar will share how the community has tackled these challenges, including techniques to ensure torch model security and tooling for researchers to share their models. He'll also take attendees on a journey through the evolution of torch models distributed by the community, highlighting new trends and directions. Attending this talk will give attendees practical insights into the latest developments in model distribution and ecosystem trends.
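As one concrete example of the security-oriented tooling the talk touches on, the snippet below shows two common ways to load checkpoint weights without executing pickled code; the file names are placeholders, and this is not a description of the Hub's own scanning pipeline.

```python
import torch
from safetensors.torch import load_file

# Option 1: restrict torch.load to plain tensors/containers so it refuses to
# unpickle arbitrary Python objects embedded in a malicious checkpoint.
state_dict = torch.load("model.pth", map_location="cpu", weights_only=True)

# Option 2: safetensors files contain raw tensor data plus metadata only, no pickle.
state_dict = load_file("model.safetensors")
```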
Speakers

Omar Sanseviero

Chief Llama Officer - Head of Platform and Community, Hugging Face
Omar Sanseviero is the Chief Llama Officer and Head of Platform and Community at Hugging Face, where he works at the intersection of open source, community, and product. Omar leads multiple ML teams that work on topics such as Mobile ML, ML for art, and ML Partnerships. Previously...
Thursday September 19, 2024 11:35am - 11:45am PDT
Gateway Pavilion - Cowell Theater

11:50am PDT

Lightning Talk: Empowering Developers: Tools and Resources for Running Generative AI on Arm CPUs - Pareena Verma, Arm
Thursday September 19, 2024 11:50am - 12:00pm PDT
As the demand for accessible and scalable AI solutions grows, leveraging CPUs for generative AI offers significant advantages in cost, energy efficiency, and widespread availability. This session aims to equip developers with the ecosystem of tools, resources, and technical content needed to effectively run generative AI use cases on Arm CPUs. We have launched a range of easily digestible tutorials for developers, part of our Learning Paths on https://learn.arm.com/, which demonstrate how you can easily and efficiently run small and large language models on Arm-based devices. Learn about end-to-end workflows to accelerate PyTorch-based sentiment analysis models from Hugging Face on Arm servers with optimizations in Arm Compute Library kernels for fp32 and bfloat16. Use the new KleidiAI library to accelerate LLMs with AI frameworks, and build an Android chat app on your Arm mobile device with ExecuTorch and XNNPACK. Find out about our roadmap for learning content demonstrating the feasibility and successful deployment of generative AI on Arm-based devices, and help us shape the support that we offer developers.
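As a small illustration of the workload class covered by these Learning Paths, the sketch below runs a Hugging Face sentiment-analysis pipeline on CPU. The model name is just a common public checkpoint, and the Arm-specific bfloat16/Arm Compute Library tuning is configured separately (see the Graviton talk later in this schedule for one example).

```python
import torch
from transformers import pipeline

# Standard CPU inference path; Arm Compute Library kernels are picked up
# through the regular oneDNN backend when available.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device=-1,                   # run on CPU
    torch_dtype=torch.float32,
)
print(classifier("PyTorch on Arm CPUs is surprisingly fast."))
```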
Speakers

Pareena Verma

Principal Solutions Architect, Arm
Pareena is a Principal Solutions Architect at Arm. She has extensive experience working with software developers and SoC architects on numerous Arm based projects involving usage of modeling, ML frameworks, compilers, debuggers and virtual prototyping simulation tools. Pareena holds...
Thursday September 19, 2024 11:50am - 12:00pm PDT
Festival Pavilion - Breakout Room B

11:50am PDT

Lightning Talk: New Activation Checkpointing APIs in PyTorch - Jeffrey Wan & Horace He, Meta
Thursday September 19, 2024 11:50am - 12:00pm PDT
Activation checkpointing is a commonly used technique to reduce memory usage during model training by reducing the number of activations saved for backward. Instead of keeping the tensors needed for backward alive until they are used in gradient computation, those tensors are recomputed during the backward pass. This talk will introduce new activation checkpointing APIs that can help achieve a better trade-off between memory savings and the compute overhead that recomputation introduces.
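For context, the snippet below shows activation checkpointing with the long-standing torch.utils.checkpoint API (the non-reentrant path); the newer selective-checkpointing APIs are what the talk itself introduces, so they are not shown here.

```python
import torch
from torch.utils.checkpoint import checkpoint

class Block(torch.nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.ff = torch.nn.Sequential(
            torch.nn.Linear(dim, 4 * dim), torch.nn.GELU(), torch.nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        # Activations inside self.ff are not kept for backward; they are recomputed.
        return x + checkpoint(self.ff, x, use_reentrant=False)

x = torch.randn(8, 256, requires_grad=True)
Block()(x).sum().backward()
```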
Speakers

Horace He

Software Engineer, Meta
To be filled

Jeffrey Wan

Software Engineer, Meta
Software Engineer working on PyTorch
Thursday September 19, 2024 11:50am - 12:00pm PDT
Festival Pavilion - Breakout Room A

11:50am PDT

Lightning Talk: Understanding and Optimizing PyTorch Models with Thunder - Luca Antiga, Lightning AI
Thursday September 19, 2024 11:50am - 12:00pm PDT
A hallmark feature of PyTorch is the natural expression of computation, which lets practitioners implement AI models with ease. However, it raises the question of how to optimize the workload for a given hardware setup, because such optimizations clutter the code and are tricky to combine. Lightning Thunder is a Python-to-Python compiler for scaling and optimizing PyTorch programs that focuses on usability, understandability, and extensibility. A key tool in delivering on these goals is the composability of transformations: without changing the user code, we can stack quantization, distribution of the computation across multiple GPUs, dispatch to optimized kernels, offloading, and other pluggable optimizations. Lightning Thunder flourishes in the PyTorch ecosystem: it works with PyTorch eager and with executors like torch.compile and nvFuser, and it also dispatches to libraries like cuDNN, TransformerEngine, Apex, and OpenAI Triton. The ability to apply multiple optimizations just-in-time leads to significant compounded speed-ups over unoptimized code out of the box. Luca will discuss the design of Thunder and demonstrate applications on training and inference for large language and multimodal models.
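A minimal sketch of the entry point, assuming the lightning-thunder package; the stacked transforms described above (quantization, distribution, custom executors) are applied through additional options and plugins that vary by release.

```python
import torch
import thunder

model = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.GELU(), torch.nn.Linear(64, 8)
)
jitted = thunder.jit(model)             # Python-to-Python compiled version of the same module

x = torch.randn(16, 64)
print(torch.allclose(model(x), jitted(x), atol=1e-6))
print(thunder.last_traces(jitted)[-1])  # inspect the final optimized trace
```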
Speakers

Luca Antiga

CTO, Lightning AI
CTO @ Lightning AI, Founder (Orobix, Tensorwerk), early PyTorch core contributor, Manning Author (Deep Learning with PyTorch). PhD in Bioengineering.
Thursday September 19, 2024 11:50am - 12:00pm PDT
Gateway Pavilion - Cowell Theater

12:00pm PDT

Lightning Talk: Fast, Scalable Distributed Training with StreamingDataset - Saaketh Narayan, Databricks
Thursday September 19, 2024 12:00pm - 12:10pm PDT
StreamingDataset makes training on large datasets from cloud storage as fast, cheap, and scalable as possible. It's specially designed for multi-node, distributed training of large models, maximizing correctness guarantees, performance, and ease of use. Key features include elastically deterministic training, instant mid-epoch resumption, effective shuffling, high training throughput, and flexible data mixing. When training with StreamingDataset, the data shards are written to cloud storage in MDS, our file format that allows low-latency random access to samples. By being as efficient as possible with shard downloads and shuffling, StreamingDataset minimizes egress costs while ensuring that dataloading never bottlenecks model training. StreamingDataset powers training for everything from LLMs with over 100 billion parameters like DBRX, to advanced diffusion models, to two-tower recommendation models, scaling to training jobs on thousands of GPUs with ease. Join us to learn how StreamingDataset can elevate your distributed model training experience.
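A minimal sketch, assuming the mosaicml-streaming package (streaming on PyPI): write samples to MDS shards, then stream them back through a regular DataLoader. Paths and column names are placeholders; in practice the output and remote locations would point at cloud storage.

```python
from torch.utils.data import DataLoader
from streaming import MDSWriter, StreamingDataset

# 1) Write samples to MDS shards (locally here; typically to a cloud bucket).
columns = {"text": "str", "label": "int"}
with MDSWriter(out="./mds_out", columns=columns) as writer:
    for i in range(1000):
        writer.write({"text": f"sample {i}", "label": i % 10})

# 2) Stream them back with deterministic shuffling; `remote` would normally be
#    an s3:// or gs:// URI and `local` a node-local cache directory.
dataset = StreamingDataset(local="./mds_out", shuffle=True, batch_size=32)
loader = DataLoader(dataset, batch_size=32, num_workers=2)
for batch in loader:
    pass
```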
Speakers

Saaketh Narayan

Machine Learning Engineer, Databricks
Saaketh Narayan is a machine learning engineer at Databricks. As part of the Mosaic AI Runtime team, he works on the GenAI training stack, including dataloading, training frameworks, and performance across the Mosaic Streaming, Composer, and LLM Foundry libraries.
Thursday September 19, 2024 12:00pm - 12:10pm PDT
Gateway Pavilion - Cowell Theater

12:00pm PDT

Lightning Talk: FlexAttention - The Flexibility of PyTorch + The Performance of FlashAttention - Yanbo Liang & Horace He, Meta
Thursday September 19, 2024 12:00pm - 12:10pm PDT
Introducing a novel abstraction leveraging the PyTorch compiler stack to enable custom, user-defined attention mechanisms. This new API supports dynamic modifications to attention scores within SDPA, providing both runtime and memory efficiency through kernel fusion with the FlashAttention algorithm.
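A minimal sketch of the API being described, as it appears in recent PyTorch releases under torch.nn.attention.flex_attention; the score_mod below adds a simple relative-position bias and is only one example of what can be expressed.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

def rel_bias_score_mod(score, b, h, q_idx, kv_idx):
    # Modify each attention score before softmax, per (batch, head, q, kv) index.
    return score - 0.1 * (h + 1) * (q_idx - kv_idx).abs()

B, H, S, D = 2, 4, 128, 64
q, k, v = (torch.randn(B, H, S, D) for _ in range(3))
out = flex_attention(q, k, v, score_mod=rel_bias_score_mod)
print(out.shape)  # torch.Size([2, 4, 128, 64])
```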
Speakers

Yanbo Liang

software engineer, Meta
I'm a software engineer on the PyTorch team working on torch.compile and LLMs.

Horace He

Software Engineer, Meta
To be filled
Thursday September 19, 2024 12:00pm - 12:10pm PDT
Festival Pavilion - Breakout Room A

12:00pm PDT

Lightning Talk: Optimized PyTorch Inference on aarch64 Linux CPUs - Sunita Nadampalli, Amazon (AWS)
Thursday September 19, 2024 12:00pm - 12:10pm PDT
Over the last two years we have optimized PyTorch performance on Arm processors. The optimizations include changes to ATen, C10, MKLDNN operators, the GEMM backend, and TorchInductor. In many cases, instead of writing our own kernels, we integrated the Arm Compute Library, used fastmath kernels with formats like bfloat16, implemented operator caching, and selected the optimal backend based on the input context. Through these optimizations we improved performance by over 2x. In this presentation we will first walk through this process, describe the optimizations, present performance numbers on AWS Graviton3 processors for around 75 models, and cover CI/CD workflow details. Next, we will walk through a sample PyTorch application showing basic usage, how to tune the runtime, and the resulting speedup. Attendees will leave knowing which PyTorch performance optimizations exist on Arm processors, how to use them, and where they can collaborate to further improve PyTorch for aarch64 CPUs.
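As a small usage sketch of the kind of tuning discussed, the snippet below enables the bfloat16 fastmath GEMM path via oneDNN/Arm Compute Library and compiles the model with TorchInductor. The exact knobs are release-dependent, so treat this as an outline and consult the Graviton tuning guides.

```python
import os
# Must be set before the first model run so oneDNN picks the bf16 fastmath kernels.
os.environ["DNNL_DEFAULT_FPMATH_MODE"] = "BF16"

import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
).eval()
compiled = torch.compile(model)   # TorchInductor codegen, one of the optimized paths discussed

x = torch.randn(32, 1024)
with torch.inference_mode():
    out = compiled(x)
print(out.shape)
```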
Speakers

Sunita Nadampalli

Software Development Manager, Amazon/AWS
Sunita Nadampalli is a Software Development Manager at AWS. She leads Graviton software performance optimizations for AI/ML and HPC workloads. She is passionate about open source software development and delivering high-performance and sustainable software solutions with Arm SoCs...
Thursday September 19, 2024 12:00pm - 12:10pm PDT
Festival Pavilion - Breakout Room B
  Lightning Talks
  • Audience Any
  • Slides Attached Yes

12:10pm PDT

Lightning Talk: AOTriton: Ahead of Time Triton Kernel Libraries on ROCm - Jeff Daily, AMD
Thursday September 19, 2024 12:10pm - 12:20pm PDT
Scaled dot product attention provides significant acceleration of the transformer layer through fusion of the multi-head attention layer. There are several different algorithms to achieve this, but tiled attention via Flash Attention is a very popular approach. In PyTorch on the ROCm platform, this is currently achieved through ahead-of-time compiled (AOT) Triton kernels in a linkable archive. AMD's work to enable and package these kernels is done through AOTriton, which aims to use Triton's compiler and GPU kernels for faster development. AOTriton maintains an optimized set of tiling sizes and other parameters to provide optimized, pre-compiled Triton kernels. The differences between JIT and AOT are few but very important. Despite this, prototyping kernels in Triton is much faster than with template-based C++ libraries. In this presentation we will go into detail on the interaction layer between PyTorch and AOTriton, the structure of AOTriton, and how to add new Triton kernels to AOTriton.
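From the PyTorch user's side, the kernels described here sit behind the standard scaled_dot_product_attention call; the sketch below simply makes the Flash Attention dispatch explicit. On ROCm the device is still addressed as "cuda".

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q, k, v = (
    torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16) for _ in range(3)
)
# Restrict dispatch to the Flash Attention backend (AOTriton-built on ROCm).
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)
```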
Speakers

Jeff Daily

Principal Member of Technical Staff, Advanced Micro Devices
Jeff Daily is the chief architect of the Machine Learning Software Engineering group supporting ML frameworks such as PyTorch and onnxruntime on AMD GPUs. He enjoys delivering open source software to answer the challenges of the rapidly-changing ML landscape. For over five years...
Thursday September 19, 2024 12:10pm - 12:20pm PDT
Festival Pavilion - Breakout Room B

12:10pm PDT

Lightning Talk: Implementing and Using Iterable Datasets: What Could Go Wrong? - Nicolas Hug, Meta
Thursday September 19, 2024 12:10pm - 12:20pm PDT
PyTorch supports two kinds of datasets: Iterable datasets and indexable "map-style" datasets. Iterable datasets can be more flexible and potentially faster than their indexable cousins. They are also much harder to use correctly, and can easily lead to silently wrong results. This talk is a quick and fun intro to some of the traps that Iterable datasets lay out for you, with some tips to help you avoid them.
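As a taste of the kind of trap the talk covers, the example below shows the usual fix for multi-worker duplication: without the get_worker_info sharding, every DataLoader worker would yield the full stream and each sample would appear num_workers times.

```python
from torch.utils.data import IterableDataset, DataLoader, get_worker_info

class RangeStream(IterableDataset):
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        info = get_worker_info()
        if info is None:                  # single-process loading
            start, step = 0, 1
        else:                             # shard the stream across workers
            start, step = info.id, info.num_workers
        return iter(range(start, self.n, step))

loader = DataLoader(RangeStream(10), num_workers=2, batch_size=None)
print(sorted(int(x) for x in loader))     # [0..9] exactly once, thanks to sharding
```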
Speakers

Nicolas Hug

Research Engineer, Meta
Nicolas is a software engineer in the PyTorch team at Meta, where he mainly contributes to the torchvision library. Prior to that, Nicolas was a research scientist at Columbia University, where he became part of the scikit-learn core development team. Nicolas holds a PhD in machine...
Thursday September 19, 2024 12:10pm - 12:20pm PDT
Gateway Pavilion - Cowell Theater
  Lightning Talks

12:10pm PDT

Lightning Talk: Making the Most of Heterogeneous Memory Capacity Using PyTorch - Syed Ahmed, NVIDIA Corporation
Thursday September 19, 2024 12:10pm - 12:20pm PDT
Memory intensive deep learning workloads require efficient use of all kinds of memories that are available in a system. In this session, we will discuss how we can utilize such heterogeneous memory through memory pools in PyTorch. We will show how to mix-and-match different CUDA system allocators in the same PyTorch program using memory pools. Consequently, this API unlocks new use cases such as Extended GPU Memory (EGM) based all-gathers, Unified Virtual Memory (UVM), and NVLink Sharp (NVLS) reductions. New NVIDIA architectures accelerate such use cases with high-bandwidth and low-latency interconnects in the hardware, driven by extended functionality of CUDA system allocators in the software. Learn how to use these techniques on memory-intensive deep learning models like LLMs, and discover new CUDA features powered by PyTorch.
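As background, the snippet below shows the existing pluggable-allocator hook that the pool-based mix-and-match described above builds on. It assumes a user-compiled alloc.so (hypothetical name) exporting malloc/free wrappers, for example around cudaMallocManaged for UVM; signatures are approximate, so check the CUDAPluggableAllocator documentation for your release.

```python
import torch

# Expected exports in alloc.so (approximate signatures):
#   void* uvm_malloc(size_t size, int device, cudaStream_t stream);
#   void  uvm_free(void* ptr, size_t size, int device, cudaStream_t stream);
allocator = torch.cuda.memory.CUDAPluggableAllocator("alloc.so", "uvm_malloc", "uvm_free")
torch.cuda.memory.change_current_allocator(allocator)

x = torch.empty(1 << 20, device="cuda")   # now served by uvm_malloc
```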
Speakers

Syed Ahmed

Senior Software Engineer, NVIDIA
Syed Ahmed is a Senior Software Engineer on the PyTorch Core team at NVIDIA, focused on keeping PyTorch fast and numerically stable on current NVIDIA platforms, and making PyTorch more expressive on future NVIDIA platforms. He holds a Master's degree in Electrical Engineering from...
Thursday September 19, 2024 12:10pm - 12:20pm PDT
Festival Pavilion - Breakout Room A

2:45pm PDT

Lightning Talk: What's New for PyTorch Developer Infrastructure - Sahan Paliskara & Catherine Lee, Meta
Thursday September 19, 2024 2:45pm - 2:55pm PDT
A chat about all of the work being done to continue supporting PyTorch's developer infrastructure needs, including updates on target determination, releases, and OSS tooling.
Speakers

Catherine Lee

Software Engineer, Meta
Software engineer on the PyTorch Dev Infra team primarily working on reducing time to signal, testing infrastructure, and CI related developer tooling.

Sahan Paliskara

Software Engineer, Meta
After spending a lot of time using PyTorch to train computer vision models, Sahan joined the PyTorch team three years ago. He started off working on inference and packaging, and now he's part of the dev infra team. These days, he's involved in everything from managing releases to...
Thursday September 19, 2024 2:45pm - 2:55pm PDT
Festival Pavilion - Breakout Room A

3:00pm PDT

Lightning Talk: PyTorch Release Process - Andrey Talman, Meta
Thursday September 19, 2024 3:00pm - 3:10pm PDT
I would like to present and quickly discuss the PyTorch release process: how it happens, what the milestones are, what our cherry-picking criteria are, and how we validate the release.
Speakers

Andrey Talman

Software Engineer, Meta Inc.
Software Engineer, Meta Inc. (2021-present): part of the PyTorch Dev Infra team, working on PyTorch OSS releases. Lead Software Engineer, Dow Jones & Company (2019-2021): part of the team developing software and the API services used by the Dow Jones Factiva website and WSJ. Software Engineer...
Thursday September 19, 2024 3:00pm - 3:10pm PDT
Festival Pavilion - Breakout Room A

4:05pm PDT

Lightning Talk: Debiasing the Data Lifecycle - Shailvi Wakhlu, Shailvi Ventures LLC
Thursday September 19, 2024 4:05pm - 4:15pm PDT
Biased data results in biased decision-making. Making sure that we make conscious attempts to debias the data at every step of the data lifecycle is an important responsibility for all data scientists. In this talk, I highlight the typical data lifecycle and how to prevent biases at every step. The key takeaways from my talk include: 1) understanding the data lifecycle, 2) the typical ways biases creep in, and 3) how we can proactively prevent and fix biases in data.
Speakers

Shailvi Wakhlu

Founder, Shailvi Ventures LLC
Shailvi is a seasoned Data Leader and Self-Advocacy Expert with over sixteen years of experience building technology products. She has spoken at nearly 100 global conferences and Fortune 500 events, coached close to 500 individuals, and authored the best-selling book "Self-Advocacy...
Thursday September 19, 2024 4:05pm - 4:15pm PDT
Festival Pavilion - Breakout Room A

4:20pm PDT

CANCELED: Lightning Talk: PyTorch-Wildlife: A Collaborative Deep Learning Framework for Conservation - Zhongqi Miao, Microsoft
Thursday September 19, 2024 4:20pm - 4:30pm PDT
The alarming decline in global biodiversity, driven by various factors, underscores the urgent need for large-scale wildlife monitoring. To address these challenges, we introduce PyTorch-Wildlife, an open-source deep learning platform built on PyTorch. It is designed for creating, modifying, and sharing powerful AI models. The platform emphasizes usability and accessibility, so that individuals with limited or no technical background can use it, and offers a modular codebase to simplify feature expansion and further development. PyTorch-Wildlife provides an intuitive, user-friendly interface, accessible through local installation or Hugging Face, for animal detection and classification in images and videos. As two real-world applications, PyTorch-Wildlife has been used to train animal classification models for species recognition in the Amazon Rainforest and for invasive opossum recognition in the Galapagos Islands. The opossum model achieves 98% accuracy, and the Amazon model has 92% recognition accuracy for 36 animals in 90% of the data. As PyTorch-Wildlife evolves, we aim to integrate more conservation tasks, addressing various environmental challenges.
Speakers

Zhongqi Miao

Research Scientist, Microsoft
My research focus is AI (especially modern computer vision) applications in environmental science and ecology. I am currently in the AI for Good Lab, working on large-scale wildlife recognition through ground-based cameras (i.e., camera traps), bioacoustics, and overhead imagery...
Thursday September 19, 2024 4:20pm - 4:30pm PDT
Festival Pavilion - Breakout Room A
 