This workshop teaches deep learning techniques for a range of computer vision tasks. After an introduction to deep learning, you’ll advance to building and deploying deep learning applications for image classification and object detection, modifying your neural networks to improve their accuracy and performance, and implementing the techniques you’ve learned on a final project. At the end of the workshop, you’ll have access to additional resources to create new deep learning applications on your own.

At the conclusion of the workshop, you’ll have an understanding of the fundamentals of deep learning and be able to:
  • Train deep neural networks to perform image classification and object detection
  • Deploy trained neural networks from their training environment into real applications
  • Measure and improve network accuracy and performance
  • Apply the deep learning application development workflow (load dataset, train, and deploy a model) to new problems

It is important that you meet the workshop prerequisites below, as theoretical aspects of deep learning will NOT be covered in this hands-on training session.

Upon successful completion of the assessment, you will receive an NVIDIA DLI certificate to recognize your subject matter competency and support your professional career growth.
0900 – 0915 Introduction
> Meet the instructor.
> Create an account at courses.nvidia.com/join
0915 – 1030 Unlocking New Capabilities
> Learn the biological inspiration behind deep neural networks (DNNs).
> Explore training DNNs with big data.
> Train neural networks to perform image classification by harnessing the three main ingredients of deep learning: deep neural networks, big data, and the GPU.
1030 – 1100 Morning Tea Break
1100 – 1145 Unlocking New Capabilities (cont.)
1145 – 1230 Unlocking New Capabilities and Measuring and Improving Performance
> Deploy trained neural networks from their training environment into real applications.
> Optimize DNN performance.
> Incorporate object detection into your DNNs.
1230 – 1330 Lunch
1330 – 1445 Unlocking New Capabilities and Measuring and Improving Performance (cont.)
1445 – 1530 Final Project
> Validate learnings by applying the deep learning application development workflow (load dataset, train, and deploy a model) to a new problem; a minimal sketch of this workflow follows the agenda below.
> Learn how to set up your GPU-enabled environment to begin work on your own projects.
> Explore additional project ideas and resources to get started with NVIDIA AMI in the cloud, nvidia-docker, and the NVIDIA DIGITS container.
1530 – 1600 Afternoon Tea Break
1600 – 1715 Final Project (cont.)
1715 – 1730 Final Review
> Review key learnings and wrap up questions.
> Complete the assessment to earn a certificate.
> Take the workshop survey.
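To give a flavour of the load–train–deploy workflow exercised in the Final Project, here is a minimal sketch. It assumes PyTorch and torchvision purely for illustration; the workshop itself uses NVIDIA's DLI environment and tools such as the DIGITS container mentioned above, not this code.

```python
# Minimal load-train-deploy sketch (illustrative only; PyTorch/torchvision
# are assumed here, not the workshop's DLI/DIGITS toolchain).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 1. Load a dataset
train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

# 2. Train a small image classifier for one epoch
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# 3. "Deploy": save the trained weights, reload them, and run inference
torch.save(model.state_dict(), "classifier.pt")
model.load_state_dict(torch.load("classifier.pt"))
model.eval()
with torch.no_grad():
    image, label = train_set[0]
    prediction = model(image.unsqueeze(0).to(device)).argmax(dim=1).item()
print(f"predicted {prediction}, ground truth {label}")
```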

Dr Gabriel Noaje, Senior Solutions Architect, NVIDIA
Gabriel Noaje has more than 10 years of experience in accelerator technologies and parallel computing. Having worked in both enterprise and public sector roles, Gabriel has a deep understanding of users’ requirements for manycore architectures. Prior to joining NVIDIA, he was a Senior Solutions Architect with SGI and HPE, where he developed solutions for HPC and deep learning customers in APAC.

To maximize your time during your DLI training, please follow the instructions below before attending your first session:

  1. You must bring your own laptop, power supply and adaptor (if required) in order to run the training.
  2. A current browser is needed. For optimal performance, Chrome, Firefox, or Safari (on Mac) are recommended. Internet Explorer works but does not provide the best performance.
  3. Create an account at http://courses.nvidia.com/join using your university email address.
  4. Ensure the training will run smoothly on your laptop by testing it at http://websocketstest.com/.

For enquiries, please contact Ms Audri Tan at [email protected]

No longer an experimental topic, containers are here to stay in HPC. They offer software portability, improved collaboration, and data reproducibility. A variety of tools (e.g. Docker, Shifter, Singularity, Podman) exist for incorporating containers into research workflows, but users often do not know where to start.

This tutorial will cover the basics of creating and using containers in an HPC environment. We will make use of hands-on demonstrations from a range of disciplines to highlight how containers can be used in scientific workflows. These examples will draw from Bioinformatics, Machine Learning, Computational Fluid Dynamics and other areas.

Through this discussion, attendees will learn how to run GPU- and MPI-enabled applications with containers. We will also show how containers can be used to improve performance in Python workflows and I/O-intensive jobs.
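As a flavour of the kind of MPI-enabled Python workload one might launch from a container in such a setting, here is a hypothetical sketch using mpi4py and NumPy; it is not taken from the tutorial materials.

```python
# toy_pi.py - estimate pi with a Monte Carlo sum distributed across MPI ranks.
# Hypothetical illustration of an MPI-enabled Python workflow; in a container
# workflow it would typically be launched via mpirun together with the
# container runtime of choice.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

samples_per_rank = 1_000_000
rng = np.random.default_rng(seed=rank)          # independent stream per rank
x, y = rng.random(samples_per_rank), rng.random(samples_per_rank)
hits = int(np.count_nonzero(x * x + y * y <= 1.0))

total_hits = comm.reduce(hits, op=MPI.SUM, root=0)
if rank == 0:
    print("pi ~=", 4.0 * total_hits / (samples_per_rank * size))
```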

Lastly, we will discuss best practices for container management and administration, including how to incorporate good software engineering principles such as the use of revision control and continuous integration tools.

Video preview of the previous tutorial in SC19: https://youtu.be/VgWtZdCF9S4

Reference: https://sc19.supercomputing.org/presentation/?id=tut168&sess=sess179 

1330 – 1530
  • Overview of containers
  • Basics of Singularity
  • Mounting external filesystems
  • Writable containers
  • MPI-enabled workflows
  • Using GPUs
1530 – 1600
  • Afternoon Tea Break
1600 – 1800
  • Building container images
  • GUI applications (RStudio)
  • Python workflows
  • Other tools
  • Best practices and tips

Dr Marco De La Pierre, Supercomputing Applications Specialist, Pawsey Supercomputing Centre

Marco completed a PhD in Materials Science, specialising in theoretical and computational chemistry. Joining Pawsey in 2018, Marco engages with researchers in the fields of computational materials science, computational chemistry and bioinformatics. Marco’s area of expertise is quantum mechanical calculations and classical molecular dynamics (including free energy calculations). He is also highly experienced in software development (mostly Fortran, plus a bit of C++) and has accumulated an extensive skill set of bash/Python scripting to automate workflows for pre-processing, post-processing and visualisation of simulation data. Marco was also employed for a number of years as a university researcher, developing and applying methods to model, simulate and analyse materials properties and processes.

Mr Mark Gray, Head of Scientific Platforms, Pawsey Supercomputing Centre

The research cloud service at Pawsey Supercomputing Centre is called Nimbus. It is utilised by researchers around the globe. Mr Gray brings a strong research and data management background to his role of managing all aspects of this service. This includes procurement, deployment, training, user expectation management – and leadership of the team who administer and operate the service. Mr Gray represents Pawsey Supercomputing Centre both nationally and internationally, with respect to the use of Nimbus. He also provides project management skills for projects of key strategic importance to the Centre.

For enquiries, please contact Ms Aditi Subramanya at [email protected]

Manycore platforms are a simple yet efficient way to build ever more powerful supercomputers in the post-Dennard-scaling era.

In this lecture, the various challenges scientists face when using modern manycore platforms will be discussed, and several techniques (such as vectorization, mixed precision and parallelization) will be introduced to address them and help attendees make the best use of manycore supercomputers at scale.
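As a toy flavour of the data-layout point (AoS vs. SoA) listed in the outline below, the following NumPy sketch, added here for illustration and not part of the lecture material, shows why keeping each field contiguous in memory favours vectorised, unit-stride access.

```python
# Toy AoS-vs-SoA comparison (illustrative only, not lecture material).
import time
import numpy as np

n = 2_000_000

# Array of Structures: one record per particle. Reading the velocity fields
# drags the unused position fields through the cache too (each cache line
# holds parts of neighbouring records), and the field views are strided.
aos = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8"),
                         ("vx", "f8"), ("vy", "f8"), ("vz", "f8")])

# Structure of Arrays: one contiguous, unit-stride array per field.
soa = {name: np.ascontiguousarray(aos[name]) for name in ("vx", "vy", "vz")}

def kinetic(fields):
    # Same arithmetic for both layouts; only the memory layout differs.
    return 0.5 * np.sum(fields["vx"]**2 + fields["vy"]**2 + fields["vz"]**2)

for label, data in (("AoS", aos), ("SoA", soa)):
    t0 = time.perf_counter()
    kinetic(data)
    print(f"{label}: {time.perf_counter() - t0:.4f} s")
```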

1330 – 1500

1) Introduction

Why do we need manycore platforms?

2) Single-thread optimization
a) Vectorization
Topics: SIMD, Use of compiler report, Hierarchical memory, Data layout (AoS vs. SoA), Indirect access
b) Performance analysis (Roofline analysis)
Topics: Arithmetic intensity, Roofline analysis via Intel Advisor (see the toy example after this outline)
c) Mixed precision
3) Parallelization
a) Performance scaling
Topics: Strong scaling, Weak scaling
b) Parallel programming model: MPI+X
c) OpenMP
Topics: Basic usage, Data locality (on cc-NUMA arch.), MPI+OpenMP binding
4) Fugaku architecture
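Referring to the roofline item above, here is a toy arithmetic-intensity calculation of the kind roofline analysis formalises. All machine numbers below are illustrative assumptions, not measurements from Intel Advisor or specifications of any particular system.

```python
# Toy roofline reasoning for a daxpy-like kernel: y[i] = a * x[i] + y[i]
# (all machine numbers are illustrative assumptions)
flops_per_element = 2                    # one multiply + one add
bytes_per_element = 3 * 8                # load x, load y, store y (float64)
arithmetic_intensity = flops_per_element / bytes_per_element   # flop/byte

peak_flops = 3.0e12                      # 3 TFLOP/s  (assumed)
peak_bandwidth = 200.0e9                 # 200 GB/s   (assumed)
machine_balance = peak_flops / peak_bandwidth                  # flop/byte

# Roofline: attainable performance is min(peak, AI * bandwidth)
attainable = min(peak_flops, arithmetic_intensity * peak_bandwidth)
bound = ("memory-bound" if arithmetic_intensity < machine_balance
         else "compute-bound")
print(f"AI = {arithmetic_intensity:.3f} flop/byte -> {bound}, "
      f"attainable ~ {attainable / 1e9:.1f} GFLOP/s")
```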

Mr Gilles Gouaillardet, HPC Consultant, Department of HPC Support, RIST

Gilles brings more than 20 years of HPC experience and has held various positions spanning software development, pre-sales, large supercomputer deployments and end-user support. Gilles is currently based in Kobe, Japan, and, as a member of the Advanced User Support group at RIST, assists users in porting and tuning their applications on HPCI systems and the upcoming flagship Fugaku supercomputer.

For enquiries, please contact Sophie Chong at https://www.sc-asia.org/

DO YOU HAVE A QUESTION ABOUT SCA20?

You can contact us at [email protected]
