NVIDIA – Fundamentals of Deep Learning for Computer Vision
- Date: 24 February 2020 (Monday)
- Time: 0900 – 1800
- Venue: Room 301, Level 3, Suntec Singapore Convention & Exhibition Centre
- Registration: Click here
* Get special rates for the SCA20 conference registration when you sign up now for an SCA20 Conference Pass!
To do this, register via the Standard Pass Registration link, include the Tutorial or User Group, and use the following promo code to enjoy a discounted rate on your conference pass!
Promo code: SCATUT
<Only applicable for Standard Pass Registration>
This workshop teaches deep learning techniques for a range of computer vision tasks. After an introduction to deep learning, you’ll advance to building and deploying deep learning applications for image classification and object detection, modifying your neural networks to improve their accuracy and performance, and implementing the techniques you’ve learned on a final project. At the end of the workshop, you’ll have access to additional resources to create new deep learning applications on your own.
At the conclusion of the workshop, you’ll have an understanding of the fundamentals of deep learning and be able to:
- Implement common deep learning workflows, such as image classification and object detection
- Experiment with data, training parameters, network structure, and other strategies to increase performance and capability of neural networks
- Integrate and deploy neural networks in your own applications to start solving sophisticated real-world problems
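The workflow above (load a dataset, train a model, deploy it for inference) can be sketched in miniature without any deep learning framework. This is an illustrative, framework-free toy only — the workshop itself uses NVIDIA tooling and real deep neural networks; the synthetic data and the simple softmax classifier below are stand-ins.

```python
# Minimal sketch of the load -> train -> deploy workflow, using a softmax
# classifier on synthetic 2-D data (illustrative only; not the workshop code).
import numpy as np

rng = np.random.default_rng(0)

# "Load dataset": two synthetic classes of 2-D points.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# "Train": gradient descent on the cross-entropy loss.
W = np.zeros((2, 2))
b = np.zeros(2)
onehot = np.eye(2)[y]
for _ in range(200):
    p = softmax(X @ W + b)
    grad = p - onehot
    W -= 0.1 * (X.T @ grad) / len(X)
    b -= 0.1 * grad.mean(axis=0)

# "Deploy": run inference on new inputs with the trained parameters.
def predict(points):
    return softmax(points @ W + b).argmax(axis=1)

print(predict(np.array([[-1.0, -1.0], [1.0, 1.0]])))  # prints [0 1]
```

The same three stages — data in, optimisation loop, inference endpoint — scale up to the image classification and object detection exercises in the session.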
It is important that you meet the workshop prerequisites below, as theoretical aspects of deep learning will NOT be covered in this hands-on training session.
- Familiarity with programming fundamentals such as functions and variables
|0900 – 0915||Introduction|
> Meet the instructor.
> Create an account at courses.nvidia.com/join
|0915 – 1030||Unlocking New Capabilities|
> Learn the biological inspiration behind deep neural networks (DNNs).
> Explore training DNNs with big data.
> Train neural networks to perform image classification by harnessing the three main ingredients of deep learning: deep neural networks, big data, and the GPU.
|1030 – 1100||Morning Tea Break|
|1100 – 1145||Unlocking New Capabilities (cont.)|
|1145 – 1230||Unlocking New Capabilities and Measuring and Improving Performance|
> Deploy trained neural networks from their training environment into real applications.
> Optimize DNN performance.
> Incorporate object detection into your DNNs.
|1230 – 1330||Lunch|
|1330 – 1445||Unlocking New Capabilities and Measuring and Improving Performance (cont.)|
|1445 – 1530||Final Project|
> Validate learnings by applying the deep learning application development workflow (load dataset, train, and deploy model) to a new problem.
> Learn how to set up your GPU-enabled environment to begin work on your own projects.
> Explore additional project ideas and resources to get started with NVIDIA AMI in the cloud, nvidia-docker, and the NVIDIA DIGITS container.
|1530 – 1600||Afternoon Tea Break|
|1600 – 1715||Final Project (cont.)|
|1715 – 1730||Final Review|
> Review key learnings and wrap up questions.
> Complete the assessment to earn a certificate.
> Take the workshop survey.
Dr Gabriel Noaje, Senior Solutions Architect, NVIDIA
Gabriel Noaje has more than 10 years of experience in accelerator technologies and parallel computing. Having worked in both enterprise and public-sector roles, he has a deep understanding of users’ requirements for manycore architectures. Prior to joining NVIDIA, he was a Senior Solutions Architect with SGI and HPE, where he developed solutions for HPC and deep learning customers in APAC.
To make the most of your DLI training time, please follow the instructions below before attending your first training session:
- You must bring your own laptop, power supply and adaptor (if required) in order to run the training.
- A current browser is needed. For optimal performance, Chrome, Firefox, or Safari (on Mac) is recommended; Internet Explorer works but does not provide the best performance.
- Create an account at http://courses.nvidia.com/join using your university email address.
- Ensure your laptop will run smoothly by testing it at http://websocketstest.com/.
- Under ENVIRONMENT, confirm that “WebSockets” is checked yes.
- Under WEBSOCKETS (PORT 80), confirm that “Data Receive,” “Send,” and “Echo Test” are checked yes.
- If there are issues with WebSockets, try updating your browser. We recommend Chrome, Firefox, or Safari for optimal performance.
Containers in HPC
- Date: 24 February 2020 (Monday)
- Time: 1330 – 1730
- Venue: Room 311, Level 3, Suntec Singapore Convention & Exhibition Centre
No longer an experimental topic, containers are here to stay in HPC. They offer software portability, improved collaboration, and data reproducibility. A variety of tools (e.g. Docker, Shifter, Singularity, Podman) exist for users who want to incorporate containers into their workflows, but oftentimes they may not know where to start.
This tutorial will cover the basics of creating and using containers in an HPC environment. We will make use of hands-on demonstrations from a range of disciplines to highlight how containers can be used in scientific workflows. These examples will draw from Bioinformatics, Machine Learning, Computational Fluid Dynamics and other areas.
Through this discussion, attendees will learn how to run GPU- and MPI-enabled applications with containers. We will also show how containers can be used to improve performance in Python workflows and I/O-intensive jobs.
Lastly, we will discuss the best practices for container management and administration. These practices include how to incorporate good software engineering principles, such as the use of revision control and continuous integration tools.
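The GPU- and MPI-enabled usage patterns described above can be sketched with Singularity, one of the tools listed. This is a hedged illustration only: it assumes Singularity and an MPI launcher are installed on the host, and the image and application names (`myapp.sif`, `train.py`, `my_mpi_app`) are hypothetical placeholders.

```shell
# Pull a container image from Docker Hub into Singularity's SIF format.
singularity pull docker://ubuntu:18.04

# Run a GPU-enabled application: --nv exposes the host NVIDIA driver and GPUs
# inside the container.
singularity exec --nv myapp.sif python train.py

# Run an MPI-enabled application: the host MPI launcher starts one container
# instance per rank (the hybrid MPI model).
mpirun -n 4 singularity exec myapp.sif ./my_mpi_app
```

The tutorial's hands-on demonstrations walk through patterns like these with real scientific applications.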
Video preview of the previous tutorial in SC19: https://youtu.be/VgWtZdCF9S4
|1330 – 1530||
|1530 – 1600||Afternoon Tea Break|
|1600 – 1800||
Dr Marco De La Pierre, Supercomputing Applications Specialist, Pawsey Supercomputing Centre
Marco completed a PhD in Materials Science, specialising in theoretical and computational chemistry. Joining Pawsey in 2018, Marco engages with researchers in the fields of computational materials science, computational chemistry and bioinformatics. Marco’s area of expertise is quantum mechanical calculations and classical molecular dynamics (including free energy calculations). He is also highly experienced in software development (mostly Fortran, plus a bit of C++) and has accumulated an extensive skill set of bash/Python scripting to automate workflows for pre-processing, post-processing and visualisation of simulation data. Marco was also employed for a number of years as a university researcher, developing and applying methods to model, simulate and analyse materials properties and processes.
Mr Mark Gray, Head of Scientific Platforms, Pawsey Supercomputing Centre
The research cloud service at Pawsey Supercomputing Centre is called Nimbus. It is utilised by researchers around the globe. Mr Gray brings a strong research and data management background to his role of managing all aspects of this service. This includes procurement, deployment, training, user expectation management – and leadership of the team who administer and operate the service. Mr Gray represents Pawsey Supercomputing Centre both nationally and internationally, with respect to the use of Nimbus. He also provides project management skills for projects of key strategic importance to the Centre.
RIST – Manycore Architecture Tutorial
- Date: 24 February 2020 (Monday)
- Time: 1330 – 1500
- Venue: Room 304, Level 3, Suntec Singapore Convention & Exhibition Centre
- Registration: Click here
* To register, please proceed to the Tutorial Pass Registration link and use the following promo code after selecting the tutorial from the drop-down list.
Promo code: SCARIST
Manycore platforms are a simple yet efficient way to build ever more powerful supercomputers in the post-Dennard-scaling era.
In this lecture, we will discuss the various challenges scientists face when using modern manycore platforms, and introduce several techniques (such as vectorization, mixed precision, and parallelization) to address them and help attendees make the best use of manycore supercomputers at scale.
|1330 – 1500||
|1) Introduction|
> Why do we need manycore platforms?
|2) Single-thread optimization|
> Topics: SIMD, Use of compiler reports, Hierarchical memory, Data layout (AoS vs. SoA), Indirect access
> b) Performance analysis (Roofline analysis)
> Topics: Arithmetic intensity, Roofline analysis via Intel Advisor
> c) Mixed precision
|3) Parallelization|
> a) Performance scaling
> Topics: Strong scaling, Weak scaling
> b) Parallel programming model: MPI+X
> Topics: Basic usage, Data locality (on cc-NUMA arch.), MPI+OpenMP binding
|4) Fugaku architecture|
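The data-layout topic in the outline (AoS vs. SoA) can be illustrated briefly. In an array of structures (AoS) each record's fields are interleaved in memory, so streaming over one field is strided; in a structure of arrays (SoA) each field is contiguous, which is what SIMD units and vectorizing compilers prefer. A small NumPy sketch (illustrative only, not the tutorial's material):

```python
# AoS vs. SoA data layouts: same computation, different memory strides.
import numpy as np

n = 1000

# Array of structures: x, y, z interleaved per record (itemsize = 24 bytes).
aos = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8")])
aos["x"] = np.arange(n)

# Structure of arrays: one contiguous array per field.
soa_x = np.arange(n, dtype="f8")
soa_y = np.zeros(n)
soa_z = np.zeros(n)

# Identical results either way; the SoA arrays are unit-stride (8 bytes),
# while each AoS field is strided across the interleaved 24-byte records.
norm_aos = np.sqrt(aos["x"]**2 + aos["y"]**2 + aos["z"]**2)
norm_soa = np.sqrt(soa_x**2 + soa_y**2 + soa_z**2)

print(aos["x"].strides, soa_x.strides)  # prints (24,) (8,)
```

On real hardware the contiguous SoA layout typically vectorizes better for field-wise sweeps, which is why the distinction matters for single-thread optimization.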
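The strong- versus weak-scaling distinction in the outline is commonly modelled with Amdahl's and Gustafson's laws: with a fixed problem size (strong scaling), the serial fraction bounds the achievable speedup, whereas growing the problem with the core count (weak scaling) keeps scaling. A small sketch of both formulas (illustrative only):

```python
# Amdahl's law (strong scaling) vs. Gustafson's law (weak scaling),
# where p is the parallel fraction of the code and n the number of cores.

def amdahl_speedup(p, n):
    """Strong scaling: fixed problem size on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Weak scaling: problem size grows with n cores."""
    return (1.0 - p) + p * n

# With 95% parallel code, strong-scaling speedup saturates near
# 1/(1-p) = 20x, while scaled (weak) speedup keeps growing with n.
for n in (1, 16, 256, 4096):
    print(n, round(amdahl_speedup(0.95, n), 2),
             round(gustafson_speedup(0.95, n), 2))
```

This is why large manycore systems are typically exercised in a weak-scaling regime, and why reducing the serial fraction is a recurring theme of the tutorial.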
Mr Gilles Gouaillardet, HPC Consultant, Department of HPC Support, RIST
Gilles brings 20+ years of HPC experience and has held various positions spanning software development, pre-sales, large supercomputer deployments, and end-user support. Gilles is currently based in Kobe, Japan, and, as a member of the Advanced User Support group at RIST, assists users in porting and tuning their applications on HPCI systems and the upcoming flagship Fugaku supercomputer.