September 18-19, 2024
San Francisco, California
View More Details & Registration
Note: The schedule is subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for PyTorch Conference 2024 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is automatically displayed in Pacific Daylight Time (UTC-7). To see the schedule in your preferred timezone, please select from the drop-down located at the bottom of the menu to the right.

IMPORTANT NOTE: Timing of sessions and room locations are subject to change.

Wednesday, September 18
 

11:25am PDT

Lightning Talk: Low Precision Dtypes in PyTorch - Vasiliy Kuznetsov, Meta
Wednesday September 18, 2024 11:25am - 11:35am PDT
This talk takes a deep dive into the new native PyTorch float8 training library and previews PyTorch's strategy for supporting upcoming low precision dtypes such as float6, float4 and MX for efficient training and inference.
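The core idea behind scaled float8 training can be sketched in plain Python. This is an illustrative toy, not the PyTorch float8 library itself: the function names are hypothetical, and the coarse rounding stands in for real float8 casting. The e4m3 format's largest finite value (448.0) is a real constant of that format.

```python
E4M3_MAX = 448.0  # largest finite value representable in float8 e4m3

def quantize_e4m3_style(values):
    """Scale values so the largest magnitude fits the float8 e4m3 range.

    Returns (quantized, scale); keeping the scale alongside the low-precision
    tensor is what lets training recover the original dynamic range.
    """
    amax = max(abs(v) for v in values)
    scale = amax / E4M3_MAX if amax > 0 else 1.0
    # Real float8 casting rounds to a 3-bit mantissa; we approximate coarsely.
    quantized = [round(v / scale, 2) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Map scaled float8-range values back to the original range."""
    return [q * scale for q in quantized]

vals = [0.5, -200.0, 896.0]
q, s = quantize_e4m3_style(vals)      # s == 2.0: 896 maps onto 448
restored = dequantize(q, s)           # recovers [0.5, -200.0, 896.0]
```

The key design point the talk's library addresses is choosing and propagating these scales per tensor (or finer) so that the narrow float8 range loses as little signal as possible.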
Speakers

Vasiliy Kuznetsov

Software Engineer, PyTorch Core, Meta
Festival Pavilion - Breakout Room A
  Lightning Talks

4:15pm PDT

Lightning Talk: In-Transit Machine Learning Using PyTorch on Frontier Exascale System - Vineeth Gutta, University of Delaware
Wednesday September 18, 2024 4:15pm - 4:25pm PDT
Traditional ML workflows use offline training, where data is stored on disk and subsequently loaded into accelerator (CPU, GPU, etc.) memory during training or inference. We recently devised a novel, scalable in-transit ML workflow for a plasma-physics application (chosen as one of eight compelling codes in the country for Frontier, the world's fastest supercomputer) with the aim of building a high-energy laser particle accelerator. Simulations on distributed HPC systems like Frontier generate data at volumes and rates that are infeasible to store on HPC file systems, creating a mismatch with modern memory hierarchies. Our workflow instead uses continuous learning: data is consumed in batches as the simulation produces it, and each batch is discarded after it has been trained on. This in-transit workflow couples particle-in-cell simulations with distributed PyTorch training using DDP, enabling the model to learn correlations between emitted radiation and particle dynamics within the simulation in an unsupervised manner. The workflow is demonstrated at scale on Frontier using 400 AMD MI250X GPUs.
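The in-transit pattern described above can be sketched as a streaming training loop. This is a minimal pure-Python illustration, not the authors' code: the "simulation" generator, the single-weight model, and all names are hypothetical stand-ins for the particle-in-cell simulation and the DDP-trained PyTorch model.

```python
import itertools

def simulation_stream(n_batches, batch_size):
    """Stand-in for a simulation emitting (input, target) pairs in batches."""
    xs = itertools.cycle(i * 0.1 for i in range(10))  # inputs in [0.0, 0.9]
    for _ in range(n_batches):
        # Each batch is produced on the fly, never written to disk.
        yield [(x, 2.0 * x) for x in itertools.islice(xs, batch_size)]

def train_in_transit(stream, lr=0.05):
    """One SGD pass per batch on a toy linear model y = w * x."""
    w = 0.0
    for batch in stream:
        for x, y in batch:
            grad = 2.0 * (w * x - y) * x  # d/dw of squared error (w*x - y)^2
            w -= lr * grad
        # The batch goes out of scope here: the 'discard after training' step.
    return w

w = train_in_transit(simulation_stream(n_batches=200, batch_size=8))
# w converges toward 2.0 as simulated batches stream through
```

The point of the sketch is the data lifecycle, not the model: each batch exists only between production and its single training step, so total storage stays bounded regardless of how much data the simulation generates.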
Speakers

Vineeth Gutta

PhD Student, University of Delaware
Vineeth is a fifth-year PhD student in the Department of Computer and Information Sciences at the University of Delaware. He is a member of the Computational Research and Programming Lab (CRPL) and is advised by Dr. Sunita Chandrasekaran. His research interests lie at the intersection of High Performance Computing and Machine Learning. He works with the National Cancer Institute (NCI/NIH) on improving drug response and drug discovery models, and is currently working on improving the AMPL model that predicts binding free energy of ligand-protein dock…
Gateway Pavilion - Cowell Theater
 
Thursday, September 19
 

2:15pm PDT

Data-Dependent Shapes in PT2 - Edward Yang, Meta
Thursday September 19, 2024 2:15pm - 2:40pm PDT
Data-dependent shapes are ubiquitous whenever you want to take advantage of sparsity in your data representation, whether in recommendation systems, mixture of experts, or other use cases. We have made a lot of improvements to torch.compile's support for capturing and compiling data-dependent shapes, but they also require some user knowledge to work with effectively. This talk will give an overview of PT2's facilities for data-dependent compute and how to use them effectively.
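Why data-dependent shapes are hard for a compiler can be illustrated with a toy shape-specializing cache. This is not torch.compile itself; all names here are hypothetical. The sketch shows a compiler that specializes on input length, and an operation (value-based filtering, analogous to torch.nonzero) whose output length depends on data rather than input shape, which is what forces guards and recompiles.

```python
compiled_cache = {}
recompile_count = 0

def compile_for_length(n):
    """Stand-in for codegen specialized to a fixed input length n."""
    global recompile_count
    recompile_count += 1
    return lambda xs: [x * 2 for x in xs]

def run_compiled(xs):
    n = len(xs)  # the "shape guard": the cache is keyed on input length
    if n not in compiled_cache:
        compiled_cache[n] = compile_for_length(n)  # guard miss -> recompile
    return compiled_cache[n](xs)

def data_dependent(xs):
    # Output length depends on the *values* in xs, not just len(xs),
    # so the downstream guard cannot be predicted from input shape alone.
    positives = [x for x in xs if x > 0]
    return run_compiled(positives)

data_dependent([1, -2, 3])    # positives has length 2 -> compile
data_dependent([5, 6])        # length 2 again -> cache hit
data_dependent([-1, -2, -3])  # length 0 -> another compile
```

PT2's actual machinery (symbolic shapes, unbacked symbolic integers, and user-supplied hints) exists precisely to avoid recompiling for every new data-dependent size the way this naive cache does.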
Speakers

Edward Z. Yang

Research Engineer, Meta
Edward Yang has worked on PyTorch at Meta since nearly its beginning. He currently works on all aspects of PT2, with a particular focus on dynamic shapes support across the stack.
Festival Pavilion - Breakout Room A
 