+44 40 8873432 [email protected]

Tutorials

NVIDIA DLI – Fundamentals of Deep Learning for Multi-GPUs
Date: 11 March 2019 (Monday)
Time: 8.30am – 6.00pm
Venue: Room 300, Level 3, Suntec Singapore Convention & Exhibition Centre
Registration: Click here
Instructor Name & Bio

TBC

Important Notes to Participants

Attendee Set Up Requirements
To maximize your training time during your DLI training, please follow the instructions below before attending your first training session:
1. You must bring your own laptop in order to run the training. Please bring your laptop and its charger.
2. A current browser is needed. For optimal performance, Chrome, Firefox, or Safari for Macs are recommended. IE is operational but does not provide the best performance.
3. Create an account at http://courses.nvidia.com/join. Click the “Create account” link to create a new account. If you are told your account already exists, please try logging in instead. If you are asked to link your “NVIDIA Account” with your “Developer Account”, just follow the on-screen directions.
4. Ensure your laptop will run smoothly by going to http://websocketstest.com/. Make sure that WebSockets work by ensuring Websockets is supported under “Environment”. Additionally, make sure that “Data Receive”, “Data Send” and “Echo Test” all check Yes under “WebSockets”. If there are issues with WebSockets, try updating your browser.

If you have any questions, please contact [email protected].

Abstract

Learn how to use multiple GPUs to train neural networks and effectively parallelize training of deep neural networks using TensorFlow.

The computational requirements of deep neural networks used to enable AI applications like self-driving cars are enormous. A single training cycle can take weeks on a single GPU, or even years for larger datasets like those used in self-driving car research. Using multiple GPUs for deep learning can significantly shorten the time required to train on large datasets, making it feasible to solve complex problems with deep learning.

This course will teach you how to use multiple GPUs to train neural networks. You’ll learn:
• Approaches to multi-GPU training
• Algorithmic and engineering challenges to large-scale training
• Key techniques used to overcome the challenges mentioned above

Upon completion, you’ll be able to effectively parallelize training of deep neural networks using TensorFlow.

Agenda
  • 08:30 Registration
  • 09:00 Theory of Data Parallelism
  • 09:45 Introduction to Multi GPU Training
  • 10:30 Morning Break
  • 11:00 Introduction to Multi GPU Training (Continued)
  • 12:30 Lunch
  • 13:30 Algorithmic Challenges to Multi GPU Training
  • 15:30 Afternoon Break
  • 15:45 Engineering Challenges to Multi GPU Training
  • 17:45 Closing Comments and Questions (15 mins)

*Agenda is subject to change
Content Level: Beginner; Introduction to Deep Learning

Training Syllabus

Lab 1: Introduction to Multi-GPU Training

Define a simple neural network and a cost function, then iteratively calculate the gradient of the cost function and update the model parameters using the SGD optimization algorithm.
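
The iterative procedure Lab 1 covers can be sketched in a few lines of plain Python (an illustrative toy, not the lab's actual code): fit a one-parameter model by repeatedly stepping against the gradient of a squared-error cost, one sample at a time.

```python
# Minimal SGD sketch: fit y = w * x to data by minimizing
# the squared-error cost with per-sample gradient updates.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs; true w = 2

w = 0.0    # model parameter, initialized at zero
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y in data:                # one (x, y) sample per step = *stochastic* GD
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad               # SGD update: step against the gradient

print(round(w, 3))  # → 2.0
```

The same loop structure (forward pass, gradient, parameter update) is what the lab scales out across GPUs.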

Lab 2: Algorithmic Challenges to Multi GPU Training

Learn to transform a single-GPU implementation into a Horovod multi-GPU implementation to reduce the complexity of writing efficient distributed software. Understand the data loading, augmentation, and training logic using the AlexNet model.
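
The data parallelism underlying Horovod can be illustrated conceptually in plain Python (this stands in for the real multi-GPU machinery and is not Horovod's API): each worker computes a gradient on its own data shard, an allreduce-style average combines them, and every replica applies the same update, keeping all model copies in sync.

```python
# Conceptual data-parallel SGD: N "workers" (stand-ins for GPUs) each
# compute a gradient on their shard; averaging the gradients mimics an
# allreduce, so all replicas make the identical update.
shards = [
    [(1.0, 2.0), (2.0, 4.0)],  # shard assigned to worker 0
    [(3.0, 6.0), (4.0, 8.0)],  # shard assigned to worker 1
]

w = 0.0
lr = 0.01

def shard_gradient(w, shard):
    # Mean gradient of (w*x - y)^2 over this worker's shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

for step in range(500):
    grads = [shard_gradient(w, s) for s in shards]  # computed in parallel on real GPUs
    avg_grad = sum(grads) / len(grads)              # "allreduce": average across workers
    w -= lr * avg_grad                              # every replica applies the same update

print(round(w, 3))  # → 2.0
```

In Horovod the averaging is done by an optimized allreduce over NCCL/MPI rather than a Python loop, but the algorithmic idea is the same.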

Lab 3: Engineering Challenges to Multi GPU Training

Understand aspects of the data input pipeline, communication, and reference architectures, and take a deeper dive into the concepts of job scheduling.

Pre-requisites

Experience with Stochastic Gradient Descent, Network Architecture, and Parallel Computing.

NVIDIA DLI – Deep Learning for Healthcare Genomics
Date: 11 March 2019 (Monday)
Time: 8.30am – 5.30pm
Venue: Room 301, Level 3, Suntec Singapore Convention & Exhibition Centre
Registration: Click here
Instructor Name & Bio

TBC

Instructions to Participants

IMPORTANT: Please follow these pre-workshop instructions.

Attendee Set Up Requirements
To maximize your training time during your DLI training, please follow the instructions below before attending your first training session:

1. You must bring your own laptop in order to run the training. Please bring your laptop and its charger.
2. A current browser is needed. For optimal performance, Chrome, Firefox, or Safari for Macs are recommended. IE is operational but does not provide the best performance.
3. Create an account at http://courses.nvidia.com/join. Click the “Create account” link to create a new account. If you are told your account already exists, please try logging in instead. If you are asked to link your “NVIDIA Account” with your “Developer Account”, just follow the on-screen directions.
4. Ensure your laptop will run smoothly by going to http://websocketstest.com/. Make sure that WebSockets work by ensuring Websockets is supported under “Environment”. Additionally, make sure that “Data Receive”, “Data Send” and “Echo Test” all check Yes under “WebSockets”. If there are issues with WebSockets, try updating your browser.

If you have any questions, please contact [email protected].

Abstract

Learn how to apply convolutional neural networks (CNNs) to detect chromosome co-deletion and search for motifs in genomic sequences.

This course teaches you how to apply deep learning to detect chromosome co-deletion and search for motifs in genomic sequences. You’ll learn how to:

• Understand the basics of convolutional neural networks (CNNs) and how they work
• Apply CNNs to MRI scans of low-grade gliomas (LGGs) to determine 1p/19q chromosome co-deletion status
• Use the DragoNN toolkit to simulate genomic data and to search for motifs

Upon completion, you’ll be able to understand how CNNs work, evaluate MRI images using CNNs, and use real regulatory genomic data to research new motifs.
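
The motif search in the DragoNN labs rests on the idea that a learned convolutional filter acts like a position-specific scorer slid along the sequence, lighting up where a pattern occurs. A toy sketch in plain Python (illustrative only, not the toolkit's code; the hand-set match scores stand in for learned filter weights):

```python
# Toy 1D "convolution" over a DNA sequence: score every window against
# a filter that rewards the motif TATA, the way a conv filter in a CNN
# learns to activate on a sequence pattern.
MOTIF = "TATA"

def window_score(window, motif=MOTIF):
    # +1 per matching base: a hand-set stand-in for learned filter weights.
    return sum(1 for a, b in zip(window, motif) if a == b)

def scan(seq, k=len(MOTIF)):
    # Slide the filter along the sequence (stride 1, no padding).
    return [window_score(seq[i:i + k]) for i in range(len(seq) - k + 1)]

seq = "GGCTATAAGC"
scores = scan(seq)
best = max(range(len(scores)), key=scores.__getitem__)
print(best, scores[best])  # → 3 4: strongest activation at the TATA occurrence
```

DragoNN learns such filters from data instead of hand-setting them, then interprets the trained filters to recover predictive sequence patterns.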

Agenda
  • 08:30 Registration
  • 09:00 Welcome
  • 09:15 Image Classification with DIGITS
  • 10:30 Morning Break
  • 11:00 Image Classification with DIGITS (Continued)
  • 11.45 Deep Learning for Genomics using DragoNN with Keras and Theano
  • 12:30 Lunch
  • 13:30 Deep Learning for Genomics using DragoNN with Keras and Theano (Continued)
  • 14:45 Radiomics 1p19q Chromosome Image Classification with TensorFlow
  • 15:30 Afternoon Break
  • 16:00 Radiomics 1p19q Chromosome Image Classification with TensorFlow (Continued)
  • 17:15 Closing Comments and Questions
Training Syllabus 

Lab #1: Image Classification with DIGITS (120 mins)

Learn the basics of convolutional neural networks and how they work, and train a model for image classification using NVIDIA DIGITS.

Lab #2: Deep Learning for Genomics using DragoNN with Keras and Theano (120 mins)

Learn to interpret deep learning models to discover predictive genome sequence patterns using the DragoNN toolkit on simulated and real regulatory genomic data.

Lab #3: Radiomics 1p19q Chromosome Image Classification with TensorFlow (120 mins)

Learn how to apply deep learning techniques to detect the 1p19q co-deletion biomarker from MRI imaging.

Pre-requisites
  • Basic familiarity with deep neural networks and basic coding experience in Python or a similar language.
Mellanox – Pave the Road to Exascale HPC
Date: 11 March 2019 (Monday)
Time: 9.00am – 6.00pm
Venue: Room 302, Level 3, Suntec Singapore Convention & Exhibition Centre
Registration: Click here
Instructor Name & Bio

Mr. Jeff Adie (Principal Solutions Architect, APJI Region, NVIDIA)

Jeff is an HPC specialist with over 25 years of experience in developing, tuning, and porting scientific codes and architecting HPC solutions. His primary areas of expertise are CFD and NWP, having previously worked at the New Zealand Oceanographic Institute (now NIWA), at Toyota Motor Corporation, and on FEA/CFD analysis of America’s Cup class yachts for Team New Zealand. Prior to joining NVIDIA, Jeff worked for SGI in Asia for 16 years; before that, he worked for various post-production companies in his native New Zealand in visual effects artist, technical director, and software development roles. Jeff holds a post-graduate diploma in Computer Science from the University of Auckland, specialising in parallel programming and computer graphics.

Abstract

Presentation: Engineering an HPC cluster solution for GPU-accelerated workloads

GPU-accelerated computing has become an integral part of HPC over the last few years and, when coupled with a high performance Infiniband interconnect, it is important to properly architect a solution to maximise productivity. This talk will cover the requirements for GPU accelerators and present best practices for designing and deploying GPU-based HPC solutions to deliver the optimal results.

Instructor Name & Bio

Dr. Yang Jian, Fellow, AMD

Dr. Yang Jian graduated from the CAG&CG State Key Lab with a PhD in 2002. His previous industry experience includes several IC companies working on 3D graphics acceleration: Trident Multimedia Co. Ltd, Centrality Communications Co. Ltd, and S3 Graphics Co. Ltd. In 2006, Dr. Yang joined ATI/AMD, where he has built a strong team for performance verification, analysis, and optimization of modern GPUs; the team has completed more than 40 ASIC tape-outs. Dr. Yang now concentrates on computer architecture for HPC and artificial intelligence, deep learning algorithm optimization, the ROCm open-source platform, and HPC applications at AMD.

Abstract

Presentation: AMD Radeon Instinct™ Platforms For HPC and Machine Intelligence

AMD speeds up HW/SW platforms for virtualization, HPC, and machine intelligence with the 7nm “Rome” CPU and the 7nm MI60 and MI50 GPUs. The AMD Radeon Instinct™ MI60 delivers 7.4 TFLOPS of FP64 compute, 64 GB/s of PCIe Gen4 bandwidth, and 200 GB/s Infinity Fabric links. ROCm over OpenUCX provides low latency and high transmission bandwidth for intranode and internode MPI communication. The rapidly evolving ROCm open-source software stack supports rapid porting of HPC applications and many machine intelligence frameworks, and many math libraries and machine intelligence primitives have been developed and optimized in ROCm for AMD Radeon Instinct GPUs. AMD is working with many partners to promote ROCm for the computing market.

Agenda
  • 09:00 Opening
  • 09:10 The Key Technologies to Exascale HPC by Mr. Gilad Shainer, Chairman, HPC-AI Advisory Council
  • 10:00 GPU Computing in HPC System by Mr. Jeff Adie, Principal Solutions Architect, APJI Region, NVIDIA
  • 10:30 Tea Break
  • 11:00 NVMesh is a Storage Game Changer for Supercomputing by Mr. Oren Laadan, Chief Technical Officer, Excelero
  • 11:30 In-Network Computing in HPC System by Mr. Avi Telyas, Director of System Engineering, Mellanox
  • 12:00 MPI Acceleration in HPC System by Mr. Richard Graham., HPC Scale Special Interest Group Chair, HPC-AI Advisory Council
  • 12:30 Lunch
  • 13:30 AMD Radeon Instinct™ Platforms For HPC and Machine Intelligence by Dr. Yang Jian, Fellow, AMD
  • 14:00 In-Network Computing in HPC System by Richard Graham, HPC Scale Special Interest Group Chair, HPC-AI Advisory Council
  • 14:30 Storage Computing in HPC System by Mr. Ziyan Ori, Chief Executive Officer and Co-founder, E8 Storage
  • 15:00 Disaggregation in ML/AI Hyperscale Data Center by Mr. Ma Shaowen, Director, Ethernet Switches, APAC, Mellanox
  • 15:30 Tea Break
  • 16:00 Exascale HPC Fabric Optimization by Mr. Ashrut Ambastha, Sr. Staff Solution Architect, Mellanox
  • 16:30 Exascale HPC Fabric Topology by Mr. Qingchun Song, HPC-AI Advisory Council
  • 17:00 Panel Discussion and Q&A
  • 17:30 Closing by Mr. Qingchun Song, HPC-AI Advisory Council (30 mins)
Altair – PBS Works User Group
Date: 11 March 2019 (Monday)
Time: 9.00am – 12.30pm
Venue: Room 303, Level 3, Suntec Singapore Convention & Exhibition Centre
Registration: Click here
Abstract

Altair PBS Works™ is the market leader in comprehensive, secure workload management for high-performance computing (HPC) and cloud environments. It allows HPC users to simplify the management of their HPC infrastructure while optimizing system utilization, improving application performance, and maximizing ROI on hardware and software investments.

PBS User Group meetings are a regular feature in the USA. Asia, with its rapidly growing user base, more than qualifies to have its own event, with many organisations now adopting this technology of choice.

Apart from introducing the latest features and solutions from the PBS stable and discussing new acquisitions and partnerships, the user group provides a platform for knowledge and experience sharing amongst peers. Past user groups in other geographies have been as much a learning experience for us as for the PBS users, and have resulted in product enhancements based on user feedback. With a critical mass of expert PBS Works users now in Asia, the user group also offers you an opportunity to list the requirements and features that can aid your business, and gives us insight into what we can do to help you achieve them.

The event will end by recognising some of the key contributors and users of the PBS suite of products; a small token of our appreciation for those contributing to the adoption of HPC in general, and PBS in particular, in the region.

Agenda
  • 09:00 PBS User Group Inauguration by Mr. Bill Nitzberg, CTO, PBS Works
  • 10:00 Customer Presentation 1: National University of Singapore
  • 10:30 Tea Break
  • 11:00 Customer Presentation 2: National Supercomputing Centre (NSCC) Singapore
  • 11:20 PBS Works Custom Solutions on Various HPC Platforms by Mr. Piush Patel, VP, Corporate Development for HPC and Cloud
  • 11:50 User Group Open Forum Discussion
  • 12:30 Closing Notes and Awards (10 mins)

For enquiries, please contact Mr Manjunath Doddam at [email protected]

DDN User Group
Date: 11 March 2019 (Monday)
Time: 1.30pm – 6.00pm
Venue: Room 303, Level 3, Suntec Singapore Convention & Exhibition Centre
Registration: Click here
Abstract

Since the 1950s, the increasing use of computers has transformed many domains of research—ranging from molecular dynamics to climate research or astronomy—through increasingly powerful computer simulations. Over the past decades, advances in instruments and sensors for data ingest, together with the spread of GPU computing, and enhanced by new approaches to data analysis and artificial intelligence, have provided scientists with a new set of tools that connect large empirical data and computation in a novel, and exciting way.

Scalable storage environments that provide sufficient performance for data ingest and analysis are an important ingredient in this new approach to research computing. Our focus in this DDN User Group is on new applications and approaches, supported by scalable high-performance storage, that harness the power of Big Data and AI in industry and research.

Agenda
  • 13:30 Opening DDN & Whamcloud User Group by Mr. Atul Vidwansa, DDN
  • 13:40 ABCI and its Storage Architecture by Hirotaka OGAWA, Ph.D., Leader, AI Cloud Research Team, AI Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Japan
  • 14:20 Customer Keynote: “GPGPU Driven Data Warehousing for Enterprise Messaging Services on All-Flash Storage” by Mr. Ajit Singh, Sr Director, ACL-Mobile
  • 14:50 Partner Update: NVIDIA DGX Ready Data Centre by Mr. Dennis Ang, NVIDIA
  • 15:15 High-Performance File Storage for Containerized Environments by Mr. Shuichi Ihara, DDN
  • 15:40 Tea Break
  • 16:00 Partner Update: GPU-Enhanced, Accurate, Scalable Genomic Analysis for the Planet by Mr. Mehrzad Samadi, CEO, Parabricks
  • 16:20 DDN Product & Roadmap Update by Mr. Nobu Hashizume, DDN
  • 16:35 ExaScaler for HPC, Big Data & AI by Mr. Carlos Thomaz, DDN
  • Impact of AI on the Future of HPC
  • 16:55 HPC, Big Data, and AI in Singapore
  • 17:10 HPC, Big Data, and AI in India by Mr. Sanjay Wandekar, CDAC and NSM India
  • HPC, Big Data, and AI in Life Sciences
  • 17:25 Advancing Microscopy with HPC, Big Data, and AI
  • 17:40 Advancing Genomics with HPC, Big Data, and AI
  • 17:55 Closing Remarks by Mr. Atul Vidwansa, DDN (5 mins)

For enquiries, please contact [email protected]

*Note: This agenda is subject to change.

IBM Spectrum Scale User Group
Date: 11 March 2019 (Monday)
Time: 8.00am – 5.30pm
Venue: Room 304, Level 3, Suntec Singapore Convention & Exhibition Centre
Registration: Click here
Abstract

The Spectrum Scale User Group is an event for Spectrum Scale users to gather and share their experience with the product. Spectrum Scale experts from around the world will join the event to share the latest Spectrum Scale news with the audience.

If you have experience with HPC, don’t hesitate to join this event. If you are an IBM Spectrum Scale user, you are welcome to listen to our presenters as well as share your experience with the user group.

Agenda
  • 08:00 Registration and Networking
  • 09:00 Welcome
  • 09:15 Keynote: Faster insights with GPUs
  • 09:35 Accelerating AI workloads with IBM Spectrum Scale by Ms. Par Hettiga, IBM
  • 09:55 Accelerating NVIDIA DGX workloads with IBM Spectrum Scale by Ms. Par Hettiga, IBM
  • 10:05 Software Defined Infrastructure for Data Intensive Science
  • 10:25 Meet the Developers
  • 10:30 Coffee and Networking
  • 11:00 ‘What is new in Spectrum Scale?’ by Mr. Wei Gong, IBM
  • 11:30 ‘What is new in ESS?’ by Mr. Chris Maesta, IBM
  • 11:45 ‘What is new in Support?’
  • 12:00 Spectrum Scale on AWS (Live Demo) by Ms. Smita Raut, IBM
  • 12:30 Lunch and Networking
  • 13:30 Customer/Partner Talk
  • 13:50 Customer/Partner Talk
  • 14:10 Enabling Precision Medicine with IBM Spectrum Scale by Mr. Frank Lee, IBM
  • 14:40 Customer/Partner Talk – Life Science
  • 15:30 Coffee and Networking
  • 15:50 Running Spark/Hadoop workload on Spectrum Scale by Mr. Wei Gong, IBM
  • 16:15 Spectrum Scale support for Container by Ms. Smita Raut, IBM
  • 16:40 Audit Logging/Watch Folder or Security or Object Access
  • 17:05 Open Discussion
  • 17:25 Wrap-up by Mr. Chris Schlipalius (User Group) (5 mins)

For enquiries, please contact Chris at [email protected].
