Large Model Support – Overcoming GPU Memory Limitations

As data scientists attempt to solve increasingly complex problems with deep learning, and train on ever-growing datasets, they place a huge strain on the GPU. Increasingly, developers and engineers are looking for ways to overcome the bounds of GPU memory, which limit the size of models and the scale to which datasets can grow.

In this session we'll discuss the GPU memory limitation and infrastructure-level approaches to solving it. We'll cover various techniques for overcoming the challenge to improve model accuracy and performance with large datasets, looking specifically at the TensorFlow implementation of LMS (Large Model Support).
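
As a rough illustration of what enabling LMS looks like in practice, the sketch below uses the Keras-callback style of integration from a TensorFlow Large Model Support package. This is a minimal sketch, not the speaker's code: the import path tensorflow_large_model_support and the LMSKerasCallback name follow IBM's v1-era TFLMS documentation, which may or may not be the exact implementation covered in the talk, and the API has varied between releases, so treat the specifics as assumptions. The key idea is that LMS rewrites the computation graph to swap tensors between GPU and host memory, so the model definition itself is untouched:

    # Minimal sketch: enabling Large Model Support (LMS) for a Keras model.
    # Assumes TF 1.x and the tensorflow_large_model_support package
    # (IBM's TFLMS, v1-style API); names may differ by release.
    import numpy as np
    import tensorflow as tf
    from tensorflow_large_model_support import LMSKerasCallback  # assumed import path

    # An ordinary Keras model -- nothing LMS-specific here.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(4096, activation="relu", input_shape=(1024,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # LMS is enabled by attaching a callback that rewrites the graph to
    # swap tensors to host memory; the model code does not change.
    lms_callback = LMSKerasCallback()

    # Dummy data, just to make the sketch self-contained.
    x = np.random.rand(32, 1024).astype("float32")
    y = np.random.randint(10, size=32)
    model.fit(x, y, epochs=1, callbacks=[lms_callback])

The appeal of this design is that a model too large for GPU memory can become trainable with what amounts to a one-line change, at the cost of extra GPU-to-host transfer traffic for the swapped tensors.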

Objective of the talk

This talk aims to illustrate the impact of the GPU memory limitation on machine learning and deep learning workloads, including some worked examples from the medical imaging domain.

Required audience experience

Knowledge of TensorFlow and machine learning fundamentals

Track 3
Location: Burton and Redgrave
Date: September 30, 2019
Time: 5:05 pm – 5:50 pm
Speaker: Chris Parsons, Nvidia