

Tutorial: Deep Learning at Scale

Event Type: Tutorial
Tags: AI/Machine Learning/Deep Learning; HPC workflows; Parallel Applications
Time: Sunday, June 16th, 9am - 6pm
Location: Matterhorn 2
Description: Deep learning is rapidly and fundamentally transforming the way science and industry use data to solve challenging problems. Deep neural network models have been shown to be powerful tools for extracting insights from data across a large number of domains. As these models grow in complexity to solve increasingly challenging problems with larger and larger datasets, the need for scalable methods and software to train them grows accordingly.

This tutorial continues and expands upon our well-attended tutorial at Supercomputing 18 and aims to provide attendees with a working knowledge of deep learning on HPC-class systems, including core concepts, scientific applications, and techniques for scaling. We will provide training accounts, example Jupyter notebook-based exercises, and datasets so that attendees can experiment hands-on with training, inference, and scaling of deep neural network models.
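The listing does not name the framework or scaling library used in the hands-on material; as a hedged illustration of the kind of scaling technique typically covered, the sketch below shows data-parallel training with PyTorch and Horovod (both are assumptions, not confirmed tutorial content), including per-rank data sharding, learning-rate scaling, and gradient averaging across workers.

```python
# Minimal sketch of data-parallel training with Horovod + PyTorch (assumed stack,
# not the tutorial's official material). Launch with one process per GPU/rank,
# e.g.: horovodrun -np 4 python train_hvd.py  (or via srun on an HPC system).
import torch
import torch.nn as nn
import horovod.torch as hvd

hvd.init()  # initialize Horovod; each rank becomes one worker
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())

# Toy model and synthetic data stand in for the tutorial's real notebooks.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
if torch.cuda.is_available():
    model.cuda()

dataset = torch.utils.data.TensorDataset(torch.randn(1024, 32),
                                         torch.randint(0, 10, (1024,)))
# Each rank trains on a distinct shard of the dataset.
sampler = torch.utils.data.distributed.DistributedSampler(
    dataset, num_replicas=hvd.size(), rank=hvd.rank())
loader = torch.utils.data.DataLoader(dataset, batch_size=64, sampler=sampler)

# Scale the learning rate with the number of workers (a common heuristic).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
# Wrap the optimizer so gradients are averaged across all ranks each step.
optimizer = hvd.DistributedOptimizer(optimizer,
                                     named_parameters=model.named_parameters())
# Start every rank from identical initial weights.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

loss_fn = nn.CrossEntropyLoss()
for epoch in range(2):
    sampler.set_epoch(epoch)  # reshuffle shards each epoch
    for x, y in loader:
        if torch.cuda.is_available():
            x, y = x.cuda(), y.cuda()
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    if hvd.rank() == 0:
        print(f"epoch {epoch} loss {loss.item():.4f}")
```

The same pattern generalizes from this toy model to larger networks; the key scaling ingredients are the per-rank data sampler, the wrapped optimizer that averages gradients, and the initial weight broadcast.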
Content Level: 40% beginner, 40% intermediate, and 20% advanced.
Target Audience: Attendees should be familiar with Python and Jupyter notebooks. Previous experience with machine learning and distributed computing is beneficial but not necessary for participation.
Prerequisites: Hardware requirements for the hands-on sessions are a laptop with a working wireless connection and a web browser for the Jupyter service. An installed SSH client (e.g. a Linux shell or PuTTY) is desirable as a backup. Training accounts for the NERSC Cori system will be handed out at the event once the NERSC Appropriate Use Policy (AUP) has been signed and verified for each participant.
Authors
Application Performance Specialist
Principal Engineer and Lead Architect, Artificial Intelligence