Dr Dario Gil is the Director of IBM Research, one of the world’s largest and most influential
corporate research labs. He is the 12th Director in its 74-year history.
IBM Research is a global organization with over 3,000 researchers at 19 locations on six
continents advancing the future of computing. Dr Gil leads innovation efforts at IBM, directing
research strategies in Quantum, AI, Hybrid Cloud, Security, Industry Solutions, and
Semiconductors and Systems.
Prior to his current appointment, Dr Gil served as Chief Operating Officer of IBM Research and
the Vice President of AI and Quantum Computing, areas in which he continues to have broad
responsibilities across IBM. Under his leadership, IBM was the first company in the world to
build programmable quantum computers and make them universally available through the
cloud. An advocate of collaborative research models, he co-chairs the MIT-IBM Watson AI Lab,
a pioneering industrial-academic laboratory with a portfolio of more than 50 projects focused
on advancing fundamental AI research to the broad benefit of industry and society.
A passionate advocate of scientific discovery and education, Dr Gil is a member of the
President’s Council of Advisors on Science and Technology (PCAST) and is a Trustee of the New
York Hall of Science, which provides schools, families and underserved communities in the
New York City area with exposure to science, technology, engineering and math (STEM).
Dr Gil received his Ph.D. in Electrical Engineering and Computer Science from MIT.
Ushering in a New Decade of Computing for Smarter Cities
Being a “smarter city” is a journey, not an overnight transformation. Governments and urban councils must prepare for changes that will be revolutionary rather than evolutionary as they put in place next-generation systems which work in entirely new ways. This level of disruption, led by AI, high performance computing and even quantum, requires a surrounding compute architecture which can make the entire setup work efficiently and sustainably. Hear from the Global Head of IBM Research how the future of tech is nearer than we think and is being deployed across different cities in the world, making them better places to live.
Senior Computer Scientist, Director of the Urban Center for Computation and Data, USA
Charles Catlett is a Senior Computer Scientist at the U.S. Department of Energy’s Argonne National Laboratory and a Senior Fellow at the University of Chicago’s Mansueto Institute for Urban Innovation. His current research focuses on urban data analytics, urban modeling, and the design and use of sensing and “edge” computing and software technologies embedded in urban infrastructure. He is the principal investigator of the NSF-funded “Array of Things” (AoT), an experimental urban infrastructure to measure the city’s environment with sensors and embedded (“edge”), remotely programmable artificial intelligence hardware. Operating at over 130 locations in Chicago, AoT is an experimental instrument for measuring the urban environment, air quality, street activity, and other factors. He is a co-principal investigator of a new NSF-funded Mid-Scale Research Infrastructure project, “SAGE,” that extends the underlying cyberinfrastructure used with AoT, creating “software-defined sensors” for not only urban but also environmental and emergency management deployments.
Catlett has served as Argonne’s Chief Information Officer and before joining UChicago and Argonne in 2000, he was Chief Technology Officer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. From NCSA’s founding in 1985 he participated in the development of NSFNET, one of several early national networks that evolved into what we now experience as the Internet. During the exponential growth of the web following the release of NCSA’s Mosaic web browser, his team developed and supported NCSA’s scalable web server infrastructure. He is the founding director of the Urban Center for Computation and Data at the University of Chicago and is a Computer Engineering graduate of the University of Illinois at Urbana-Champaign.
Understanding Cities through HPC Supporting Embedded AI, Modeling, and Predictive Analytics
Urbanization is one of the great challenges and opportunities of this century, inextricably tied to global challenges ranging from climate change to sustainable use of energy and natural resources, and from personal health and safety to accelerating innovation and education. There is a growing science community—spanning nearly every discipline—pursuing research related to these challenges. For many urban questions, there is a need for measurements with greater spatial and temporal resolution than is currently available for understanding air quality, microclimate, vibration, noise, and other factors. Concurrently, many factors—from the flow of people through a public space to the impact of at-grade rail crossings on emergency response—require more sophisticated “measurements” involving embedded, or “edge,” computing within the urban infrastructure. Ultimately, these and other data sources are also critical to effectively modeling individual urban processes and systems as well as their interactions—requiring high-performance computation. With exascale systems, such coupled models can provide exploratory capabilities through both high-fidelity models and ensembles, such as identifying buildings that are particularly vulnerable to extreme heat waves, or transportation investments and policies that are most likely to address congestion and associated heat and emissions challenges. Catlett will discuss initiatives and projects that provide a glimpse into how high-performance cyberinfrastructure will be increasingly central to capturing the opportunities, and addressing the challenges, of urbanization and climate change.
Director of the Australian National Computational Infrastructure (NCI)
Sean Smith is Director of the Australian National Computational Infrastructure (NCI) and conjointly Professor of computational nanomaterials science and technology at the Australian National University. He has extensive theoretical and computational research experience in chemistry, nanomaterials and nano-bio science and technology. He returned to Australia in 2014 to UNSW Sydney, where he founded and directed the Integrated Materials Design Centre to drive an integrated program of materials design, discovery and characterization. Prior to this, he directed the US Department of Energy funded Center for Nanophase Materials Sciences (CNMS) at Oak Ridge National Laboratory, one of five major DOE nanoscience research and user facilities in the US, through its 2011-2013 triennial phase. During his earlier career, he joined The University of Queensland as junior faculty in 1993 after post-doctoral research at UC Berkeley (1991-1993) and Universität Göttingen (Humboldt Fellow 1989-1991); became Professor and Director of the Centre for Computational Molecular Science 2002-2011; and built up the computational nanobio science and technology laboratory at the Australian Institute for Bioengineering and Nanotechnology (AIBN) at UQ 2006-2011. He worked with colleagues in the ARC Centre of Excellence for Functional Nanomaterials 2002-2011 as Program Leader (Computational Nanoscience) and Deputy Director (Internationalisation).
Professor, Applied Mathematics, Stony Brook University, USA
Yuefan Deng earned his BA (1983) in Physics from Nankai University and his Ph.D. (1989) in Theoretical Physics from Columbia University. He has been a professor of applied mathematics at Stony Brook University in New York since 1998 and is the associate director of its Institute of Engineering-Driven Medicine. Prof. Deng’s research covers parallel computing, molecular dynamics, Monte Carlo methods, and biomedical engineering. His latest focus is on the multiscale modeling of platelet activation and aggregation (funded by the US NIH) on supercomputers, parallel optimization algorithms, and supercomputer network topologies. He publishes widely in diverse fields of physics, computational mathematics, and biomedical engineering. He holds 13 patents.
Fast and Accurate Multiscale Modeling of Platelets Aided by Machine Learning
Multiscale modeling in biomedical engineering is gaining momentum because of progress in supercomputing, applied mathematics, and quantitative biomedical engineering. For example, scientists in various disciplines have been advancing, slowly but steadily, the simulation of blood, including its flow and the physiological properties of components such as red blood cells, white blood cells, and platelets. Platelet activation and aggregation stimulate the blood clotting that results in heart attacks and strokes, causing nearly 20 million deaths each year. To reduce such deaths, we must discover new drugs. To discover new drugs, we must understand the mechanism of platelet activation and aggregation. Modeling platelet dynamics involves setting the basic space and time discretizations across huge ranges of 5-6 orders of magnitude, resulting from the relevant fundamental interactions at the atomic, molecular, cell, and fluid scales. To achieve the desired accuracy at minimal computational cost, we must select the correct physiological parameters in the force fields, as well as the spatial and temporal discretizations, by machine learning. We demonstrate our results of speeding up a multiscale platelet aggregation simulation by orders of magnitude, while maintaining the desired accuracies, compared with a traditional algorithm that uses the smallest temporal and spatial scales in order to capture the finest details of the dynamics. We present our analyses of the accuracies and efficiencies of the representative modeling. We will also outline the general methodologies of multiscale modeling of cells at atomic resolution guided by machine learning.
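To make the role of machine learning in discretization selection concrete, here is a minimal, illustrative Python sketch: a surrogate regressor trained on short calibration runs predicts discretization error, and the coarsest timestep whose predicted error stays under tolerance is chosen. The data, model choice, and numbers are hypothetical, not the speaker’s actual method.

```python
# Illustrative sketch: using a learned surrogate to pick the coarsest
# timestep that still meets an accuracy target, in the spirit of
# ML-guided multiscale discretization. All names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: candidate timesteps (ns) and the error each
# produced in short calibration runs against the finest-scale reference.
dt_train = rng.uniform(0.001, 1.0, size=(200, 1))
err_train = 0.05 * dt_train[:, 0] ** 1.5 + rng.normal(0, 0.002, 200)

# Surrogate model: predicts discretization error from the timestep.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(dt_train, err_train)

# Pick the largest candidate timestep whose predicted error stays below
# the tolerance; coarser steps mean fewer force evaluations overall.
tolerance = 0.02
candidates = np.linspace(0.001, 1.0, 500).reshape(-1, 1)
ok = surrogate.predict(candidates) < tolerance
best_dt = candidates[ok].max() if ok.any() else candidates[0, 0]
print(f"selected timestep: {best_dt:.4f} ns")
```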
Senior Vice President of Supercomputing Technology, Mellanox
Gilad Shainer is an HPC evangelist who focuses on high-performance computing, high-speed interconnects, leading-edge technologies and performance characterization. He serves as a board member of the OpenPOWER, CCIX, OpenCAPI and UCF organizations, a member of the IBTA and a contributor to the PCISIG PCI-X and PCIe specifications. Gilad is the Senior Vice President of Marketing at Mellanox as well as the President of the UCF Consortium. He holds multiple patents in the field of high-speed networking and is a recipient of the 2015 R&D100 award for his contribution to the CORE-Direct collective offload technology. Gilad holds M.Sc. and B.Sc. degrees in Electrical Engineering from the Technion – Israel Institute of Technology.
In-Network Computing – The Next Generation of Supercomputing
The latest revolution in HPC and AI is In-Network Computing, the result of a collaborative industry and academia effort to reach Exascale performance by taking a holistic, system-level approach to fundamental performance improvements. The latest generations of smart interconnects offload from the CPU both network functions and selected data algorithms. This allows users to run algorithms on data while it is being transferred within the system interconnect, rather than waiting for the data to reach the CPU. This technology is referred to as In-Network Computing. In-Network Computing transforms the data center interconnect into a “distributed CPU” and “distributed memory,” overcoming performance bottlenecks and enabling faster and more scalable data analysis.
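For readers unfamiliar with the collectives involved, the sketch below shows a standard MPI allreduce, the kind of reduction that In-Network Computing can execute inside the switch fabric instead of on the CPU. The code itself is ordinary mpi4py and runs on any interconnect; whether the reduction is offloaded is a property of the fabric, not of this code.

```python
# Minimal sketch: an MPI allreduce, the kind of collective operation that
# In-Network Computing offloads from the CPU into the interconnect.
# Run with, e.g.: mpirun -np 4 python allreduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes a local partial result...
local = np.full(4, rank, dtype=np.float64)
total = np.empty_like(local)

# ...and the reduction can be performed as the data moves through the
# fabric, rather than by gathering everything to a CPU first (when
# offload is available; otherwise MPI uses host-based algorithms).
comm.Allreduce(local, total, op=MPI.SUM)

if rank == 0:
    print("sum across ranks:", total)
```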
In-Network Computing and IPUs Technology for Compute Intensive Applications
The ever-increasing demand for higher computational performance drives the creation of new datacenter accelerators and processing units. Previously, CPUs and GPUs were the main sources of compute power. The exponential increase in data volume and problem complexity drove the creation of a new processing unit: the I/O processing unit, or IPU. IPUs are interconnect elements that include specific and generic In-Network Computing engines that can analyse application data as it is being transferred within the data center, or at the edge. The combination of CPUs, GPUs, and IPUs creates the next generation of data center and edge computing architectures.
Director, Strategy Planning, Cloud and AI, Futurewei Technologies, USA
Francis brings 20+ years of IT industry experience specializing in server systems design, HPC and cloud-scale computing architecture. Before joining Futurewei Technologies as Director of Strategy Planning in 2019, Francis served at Huawei Enterprise USA as Director of Product Management and Hardware Architect. Francis is responsible for driving the future direction of cloud and AI compute infrastructure.
Prior to joining Futurewei, Francis spent 20+ years with industry-leading IT solution providers such as Hewlett-Packard, Sun Microsystems and Oracle.
Empowering Smart Cities with an Intelligent Computing Infrastructure
Smart cities drive the need for a massive computing infrastructure, from the cloud to the edge. A new generation of broadband fibre and 5G networks connects and aggregates tremendous amounts of data. An AI-native, distributed, cloud-scale compute infrastructure is becoming essential to process and analyse this data and to make decisions. High performance computing plays a critical role in transforming data centres to meet the needs of such a demanding intelligent future. HPC empowers and enables smart cities to function. This talk examines the key success factors and practical considerations in designing such an intelligent computing infrastructure, the backbone of smart cities.
Vice President & General Manager, HPC & AI Solution Segment, Hewlett Packard Enterprise, Singapore
Bill Mannel is Vice President and General Manager of High-Performance Computing (HPC) and Artificial Intelligence (AI), for Hybrid IT, Hewlett Packard Enterprise.
Bill joined HPE in 2014 and is a seasoned veteran of the server and high performance computing industry. For Bill’s first three years at HPE, the HPC business grew significantly faster than the overall market. HPE acquired Silicon Graphics International Corp. (SGI), a pure-play HPC company, closing the integration in November 2017. At the Supercomputing 2015 show, Bill was named “Person to Watch in 2016” by HPCwire.
Bill joined HPE from SGI, where he was the VP and GM for Compute and Storage products. Prior to SGI, Bill worked at NASA and the U.S. Air Force on aircraft programs such as the X-29, F-16, B-1B, and AFTI-111.
Bill holds a Bachelor’s degree in Mechanical Engineering from Duke University and an MBA from California State University.
Directions for the Exascale Era
An ocean of data is being generated by billions of connected devices at the Edge, as well as by larger and larger HPC systems driving simulation and modeling. The world clearly needs new approaches to extract maximum value from all of this data. This session lays out the challenges and suggests solutions.
Corporate Vice President and CTO, AMD Datacenter Solutions, USA
Raghu Nambiar is the Corporate Vice President and CTO of Datacenter Ecosystems and Solutions at AMD.
High Performance Computing with AMD
AMD is all about innovation and our mission is to deliver products that help to solve the world’s toughest challenges – in life sciences, earth science, energy, manufacturing, fundamental research, oil and gas, machine intelligence and many more.
Creating an inflection point with trailblazing performance and unprecedented scalability for today’s HPC workloads, AMD EPYC processors and AMD Radeon Instinct accelerators mark the next milestone in exascale computing. This session will cover our roadmap, ecosystem partnerships and solutions to address the performance and scalability demands of emerging HPC workloads.
Distinguished Expert – VP, Head of HPC, AI & Quantum Business Operations, Atos
After an engineering degree in Modeling and Scientific Computing from Polytech Lyon (France), Damien Déclat joined the HPC community in 1999 at Argonne National Laboratory (Chicago, USA). He then spent five years in the Global Change group at Cerfacs (Toulouse, France), developing coupling software for climate modeling simulations. He joined the Atos Bull Group in 2004 as an Applications Expert and Pre-Sales Consultant. He then successively led the Applications & Performances group and Pre-Sales activities, and managed strategic HPC projects such as the deliveries of the Curie (Genci) and Meteo-France supercomputers. Damien is now Head of HPC, AI & Quantum Global Business Operations at Atos.
Mastering HPC & AI on the road to Exascale
Over the last 40 years, high performance computing (HPC) solutions have been able to sustain the growth of simulation demand mainly thanks to the evolution of CPU technologies. Today’s challenges, including the expansion of simulation domains, the management of the data deluge driven by IoT, and the Exascale era, require more efficient supercomputing solutions. Atos, as a global leader in digital transformation, develops and deploys innovative technological solutions enabling the convergence of HPC, Quantum and AI. The Atos approach will be illustrated through several use cases in various sectors worldwide.
Seetha Nookala is Director of APJ for HPC & AI solutions at Intel and has vast experience in the design and architecture of large scale HPC systems and their operation in both public and private cloud environments.
He received the Tata business innovation award in 2007 for building the world’s fourth-fastest system using Intel components with an in-house HPC stack. He also received the late President of India A.P.J. Abdul Kalam award for his contribution to indigenous supercomputing.
Challenges in the Era of Exascale Computing
Over the last few decades, there has been an exponential increase in computing requirements across segments ranging from the edge to supercomputers. While computing systems have shifted between distributed and centralised models, today's computing demand is growing at both the edge and the data centre. This demand on both sides is forcing the industry to innovate in both directions: distributed systems and central systems such as the cloud.
The industry has an important role to play across the broad spectrum of computing, to meet these converging needs and to transform itself from a single-segment player into one that addresses the spectrum from edge to data centre. Intel has capitalised on this change, shifting its focus beyond the CPU and beginning its transformation into today's data-centric company, staying competitive by innovating across a wide spectrum of computing.
In this direction, Intel has defined six pillars of innovation and is investing in those areas to address the demands of computing convergence. To meet the convergence of AI, big data and HPC, Intel has also invested in the Xe architecture. Intel's exascale plan is based on these two fundamental innovations. The speaker will shed light on the direction of computing at Intel and share the path toward exascale. Conquering exascale computing will require considerable advances in both hardware and software to pave the road for faster discovery of solutions to the challenges of humankind.
Prof.dr.ir. Cees de Laat chairs the System and Network Engineering (SNE) laboratory at the University of Amsterdam. The SNE lab conducts research on leading-edge computer systems of all scales, ranging from global-scale systems and networks to embedded devices. Across these multiple scales our particular interest is on extra-functional properties of systems, such as performance, programmability, productivity, security, trust, sustainability and, last but not least, the societal impact of emerging systems-related technologies. For current activities and projects see: http://delaat.net/
ICT to Support the Transformation of Science in the Roaring Twenties
The way science is done is profoundly changing. Machine Learning and Artificial Intelligence are now applied in most of the sciences to process data and understand (or not) the observed phenomena. This talk will address recent research directions and results with respect to data and data exchange to feed the AI-ML layer.
Senior Member of the Institute of Electrical and Electronics Engineers, Chief Technologist, Research Networks, CTO Group, Research Labs, Ciena Corporation
Mr. Wilson is responsible for Ciena’s leadership & global interactions with universities and the research community, including national research and education networks. Representing Ciena’s Technology Group, he orchestrates intersections between emerging technologies and Ciena research and development initiatives across the globe. Within Ciena he coordinates a program that extracts new technology innovation initiatives and helps prove viability through testbed trials and high performance demonstrations.
Prior to his Ciena roles, he was a senior advisor to the CTO at Nortel, and held other advanced technology roles during 13 years with the company, including Director of Broadband Switching and development leader of Optical Ethernet. During his time with Gandalf Corporation, he served as VP of Corporate Communications, Applications Engineer and Director of Marketing. Prior to this, he served as lead network architect for the University of Toronto’s global bibliographic research service. At Bell Canada, he helped build Canada’s first packet network. He was originally trained in Electrical Engineering at Ryerson University in Toronto, Ontario, and is a graduate of the Executive Management school at Stanford University in Palo Alto, California.
Mr. Wilson is active on a number of business and volunteer Boards. He is chairman of the Algonquin College Foundation Board of Directors; director, Institute of Electrical and Electronics Engineers, Communications Society NA; Industry representative for Polytechnics Canada and is a member of the Institute of Corporate Directors. In 2018, he was elected to the Board of ENCQOR, a $0.4b Public-Private partnership program to develop Canadian 5G wireless technology leadership. He is a recipient of the Queen Elizabeth II Jubilee medal for his service to Canada.
Geomesh Networks, Connecting Continents and Connecting Schools on a Global Scale
Mr. Wilson will highlight advances in highly scalable optical network transport systems that connect continents and connect research applications overland and under the sea. The focus will be on emerging photonic and software technologies that are speeding deployment of scalable diverse long reach systems.
Leader of Network Development Team of KREONET Center, KISTI
He is a Principal Researcher at the Korea Institute of Science and Technology Information (KISTI), the national supercomputing and advanced research network center, where he works in the Advanced KREONET Center. His research interests include Science DMZ & PRP, network QoS & network engineering, software-defined networking & the future Internet, cloud computing & network virtualization, and remote collaboration. He also chairs the APRP (Asia Pacific Research Platform) working group in APAN.
Networks & Communications Manager, King Abdullah University of Science and Technology (KAUST)
Kevin Sale is a network and security professional with 20 years in the telecommunications, education, finance and energy sectors. He currently serves as the Networks & Communications Manager at King Abdullah University of Science and Technology (KAUST), where he is responsible for strategic development of the network to ensure KAUST remains a global tier 1 research university. Prior to this he led the Information Security Governance, Risk, Compliance and Awareness practice, also at KAUST.
As the APRP and the GRP continue to expand their reach, the West Asia region including Saudi Arabia continues to be under-served. Too often the great science that is being conducted in the region is stifled by a lack of connectivity options and the difficulty of accessing data. As the model of global collaborative research continues to grow there is a widening gap between those who enjoy abundant connectivity and mature platforms and those who do not. This session will outline what the King Abdullah University of Science and Technology (KAUST) is doing in the region to bridge this gap.
Cloud Team Manager, National Computational Infrastructure Canberra Australia
Andrew has many decades of hands-on technical, diplomatic and logistics experience covering a wide range of standard and bespoke technologies, languages and applications within Industry, Government, Academia and Research, nationally and internationally.
He chairs the judging panel of the SupercomputingAsia Data Mover Challenge and co-chairs the Cloud Security Alliance HPC Cloud Security, APAN Program Committee, APAN e-Culture and APAN Asia Pacific Research Platform working groups.
His current role at the National Computational Infrastructure (NCI) involves working on high-performance networks, computing and cloud systems. He manages the NCI Cloud Team, supporting both the NCI private and National Nectar Research Clouds and the national data collections.
Director, International Network, National Supercomputing Centre (NSCC) Singapore
Bu Sung Lee received his B.Sc. (Hons) and PhD from the Electrical and Electronics Department, Loughborough University of Technology. He is currently an Associate Professor with the School of Computer Science and Engineering, Nanyang Technological University. He held a concurrent position as Director of HP Labs Singapore from 2010 to 2012, and has held a consultant position as Director, International Networks at the National Supercomputing Centre (Singapore) since 2015.
Bu Sung Lee has been actively involved in the research and education community. He was the founding President of the Singapore Advanced Research and Education Network (SingAREN), 2003-2007, and is currently the Vice-President of SingAREN. Since 2018 he has chaired the advisory committee of the Trans-Eurasia Information Network Cooperation Center (TEIN*CC), which manages the Asi@Connect EU aid grant of EUR 20 million. Globally, he is a member of the Global Network Architecture Policy and Strategy WG.
Bu Sung Lee has published widely in the fields of computer networks and distributed systems. He is the co-author of a number of best papers and holds a number of patents with NTU and HP.
Hiroyuki “HIRO” Itoh has been working in the R&D divisions of DAIKIN, including the Boston Technology Office. He worked on acoustic field analysis using the boundary element method in the 1980s, advanced adaptive control of air-conditioners with and without machine learning in the early 1990s, and business dynamics for environmental management by discrete-event simulation in the 2000s.
Currently, his mission is to establish a co-creative business environment with various entities. He is a member of the High-Performance Computing Infrastructure Planning & Promotion Committee of the Ministry of Education, Culture, Sports, Science and Technology, and a Fellow and the President of the KANSAI Branch of the Japan Society of Mechanical Engineers.
Professor, High Performance Computing Systems Group, Tokyo Institute of Technology
Prof Satoshi Matsuoka’s research is principally in system software for large scale supercomputers and similar infrastructures such as Clouds for HPC, and more recently, the convergence of Big Data/AI with HPC, as well as investigating Post-Moore technologies towards 2025. Over the years he has been involved in and led a number of large collaborative projects that worked on basic elements that are now significant for current and, more importantly, future exascale systems, such as fault tolerance, low power, strong scalability, programmability, and large-scale I/O.
Some of the major projects he has led are scalable implementations of parallel object-oriented languages on 1000-CPU-scale machines in the early 1990s (Fujitsu AP1000 and ETL EM-4); the GridRPC/Ninf project in collaboration with ETL/AIST (1995-2005); the “Titech Campus Grid” project (2002-2006), which deployed a 1300-CPU grid-of-clusters in production mode within the Tokyo Tech campus; the NAREGI project (2003-2007), a national project to develop deployable advanced grid middleware for national supercomputing centers; and Tsubame1 (2006-2010), a production supercomputer which was the #1 machine in Asia for 1.5 years. More recent projects include Ultra Low Power HPC (2007-2013), which aimed to achieve a 1000-fold power/performance improvement in supercomputers; Info-Plosion (2005-2011), which aimed to develop basic system software technologies for large-scale data and information processing; and Tsubame2.0 (2010-2014), which became the 4th fastest supercomputer in the world and the “Greenest Production Supercomputer in the World” on the Green 500 two consecutive times, heavily utilizing GPUs and other power-efficient technologies.
The most recent awards are 1) Billion-way fault tolerance in supercomputers (2011-2015), which investigated the key technologies for reliable exascale supercomputing; 2) Ultra Green Supercomputing (2011-2015) and its successor, the Post-Green Project (2015-2019), investigating low-power control and cooling technologies to achieve exascale power requirements, the most recent result being the design and construction of the oil-immersed Tsubame-KFC supercomputer that became No.1 in the world on the Green 500 for two consecutive editions; and 3) Extreme Big Data (2013-2018), which aims to converge HPC/AI and Big Data technologies in order to handle extreme-scale data that can only be processed with supercomputers.
Prof Matsuoka is in the process of designing a new 100 PetaFlops-scale AI-dedicated cloud supercomputer called ABCI for the AI Research Center hosted by AIST. Finally, his group is starting research on post-Moore technologies, where they claim that FLOPS-centric means of compute acceleration will be replaced by data- or BYTE-centric computing, and as such, the convergence of HPC and Big Data/AI is more fundamental, not just a matter of mutual needs.
Japan-Singapore Session 26 Feb 2020, Wednesday
Fugaku — A Centerpiece for the Japanese Society 5.0
Fugaku is not only one of the first ‘exascale’ supercomputers in the world, but is also slated to be the centerpiece for the rapid realization of the so-called Japanese ‘Society 5.0’ as defined by Japanese S&T national policy. Indeed, the computing capacity of Fugaku is massive, almost equaling the aggregate compute capability of all the servers (including those in the cloud) operated in Japan (approximately 300,000 units). At the same time, it is a pinnacle of the Arm ecosystem, being software compatible with billions of Arm processors sold worldwide in everything from smartphones to refrigerators, and it will run a standard software stack as is the case for x86 servers. As such, Fugaku’s immense power is directly applicable not only to traditional scientific simulation applications, but can also target Society 5.0 applications that encompass the convergence of HPC, AI and Big Data as well as Cyber (IDC & Network) and Physical (IoT) space, with immediate societal impact. A series of projects and developments have started at R-CCS and our partners to facilitate such Society 5.0 usage scenarios on Fugaku.
HPCI shared storage is a distributed file system that uses Gfarm and can be accessed from major Japanese supercomputers. It has been used in about 100 research projects and has accumulated about 10 PB of research results. HPCI shared storage is distributed across the University of Tokyo Kashiwa Campus and the RIKEN Center for Computational Science (R-CCS). Since 2018, data multiplexing has been promoted in HPCI shared storage, aiming for a highly available system. HPCI shared storage has achieved continuous non-stop operation for 15 months since October 2018. In FY2018, the system was shut down only twice, and it continued to operate even through natural disasters such as typhoons and lightning strikes. In this presentation, we will introduce the incidents and operations that occurred at the R-CCS hub of the HPCI shared storage system over the past 2 years, detailing where hardware failures occurred, which parts failed, and the operational errors encountered. In particular, all HDDs have been regularly inspected; the number of failures is small, and the observed MTBF is currently higher than the vendor’s nominal value. The network traffic of R-CCS, one-site operation, which is the key technology for continuous operation, and detailed procedures for failover of the master metadata server will also be presented in this poster presentation.
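For context on the MTBF comparison above, a back-of-the-envelope calculation looks like the following sketch; all numbers are hypothetical placeholders, not the HPCI system’s actual fleet size or failure counts.

```python
# Back-of-the-envelope sketch of the observed-vs-nominal MTBF comparison
# described above. All numbers here are hypothetical, not HPCI's figures.

n_drives = 1000          # drives in the storage system
hours = 15 * 30 * 24     # ~15 months of continuous operation, in hours
failures = 6             # observed drive failures in that window

# Observed MTBF: total accumulated drive-hours divided by failure count.
observed_mtbf = n_drives * hours / failures

vendor_mtbf = 1_200_000  # vendor's nominal MTBF in hours (hypothetical)

print(f"observed MTBF: {observed_mtbf:,.0f} h")
print(f"vendor nominal: {vendor_mtbf:,.0f} h")
print("observed exceeds nominal:", observed_mtbf > vendor_mtbf)
```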
Deputy Head of Research Organization for Information Science & Technology (RIST) KOBE center
· Deputy Head of Research Organization for Information Science & Technology (RIST) KOBE center
– Planning and management of the HPCI project, and Post-K computer resource management and user support
· In Fujitsu Ltd., Executive Architect of the Technical Computing Solutions Unit
– Fujitsu supervisor of national projects such as the K computer project and national grid projects, and of planning, marketing and support activities in the HPC business
– Development of computational science and engineering applications, such as CFD, crashworthiness, and MD
– Project leader of Parallel Computing Center of Fujitsu Lab
· Ph.D., Information Science : Japan Advanced Institute of Science and Technology
· Master of Nuclear Engineering : Nagoya University
Japan-Singapore Session 26 Feb 2020, Wednesday
Access from industry sector to Japanese HPCI
HPCI, the High Performance Computing Infrastructure of Japan, is a shared-use framework for world top-class supercomputing resources in Japan.
The HPCI call is open to the worldwide HPC community.
HPCI computing resources consist of the Tier 0 flagship supercomputer, Tier 1 university and research organization supercomputer systems, and large-scale shared storage. Fugaku, the Tier 0 flagship system, will start shared-use operation in FY2021.
Not only academic users but also industry users can access the HPCI resources for R&D in their own companies and organizations.
In my talk, I will present an outline of the HPCI and its industrial use during the era of the K computer, the former flagship system.
Director, Vanderbilt Center for Quantitative Sciences
Yu Shyr received his Ph.D. in biostatistics from the University of Michigan (Ann Arbor) in 1994 and subsequently joined the faculty at Vanderbilt University School of Medicine. At Vanderbilt, he has collaborated on numerous research projects; assisted investigators in developing clinical research protocols; collaborated on multiple grants funded through external peer-reviewed mechanisms; and developed biostatistical and bioinformatic methodologies for clinical trial design, high-dimensional data analysis, and experimental design.
Dr. Shyr is a Fellow of the American Statistical Association (ASA), an elected fellow of the American Association for the Advancement of Science (AAAS) and a US Food and Drug Administration (FDA) advisory committee voting member. He has published more than 470 peer-reviewed papers in a variety of journals (h-index = 100). Dr. Shyr was a member of the US National Academy of Medicine (IOM) Committee on Policy Issues in the Clinical Development of Biomarkers for Molecularly Targeted Therapies. He has served as a member of the US National Cancer Institute (NCI) Developmental Therapeutics Study Section, the Cancer Immunopathology and Immunotherapy Study Section and the Population and Patient-Oriented Training Study Section. Dr. Shyr was the co-course director for the AACR/ASCO Methods in Clinical Cancer Research Vail Workshop. He is an Associate Editor for JAMA Oncology and the Journal of Thoracic Oncology (JTO), and a Statistical Advisory Board Member for PLoS ONE. In addition, Dr. Shyr is the principal investigator of the NCI U01 grant for the Barrett’s esophagus translational research network coordinating center (BETRNetCC). Dr. Shyr’s current research interests focus on developing statistical bioinformatic methods for analyzing next-generation sequencing data based on single-cell technology, including a series of papers on estimating the sample size requirements for studies conducting DNA and RNA sequencing analysis.
HPC-AI in Health and Biomedical Sciences 25 Feb 2020, Tuesday
Innovative Technologies and Industrialization Prospects of Data Science in Precision Health
The key concepts of precision medicine are prevention and treatment strategies that take individual molecular profiles and clinical information into account. Single-cell next-generation sequencing (NGS) technologies, liquid biopsy for circulating tumor DNA (ctDNA) analysis, microbiomics, radiomics, and other types of high-throughput assays have exploded in popularity in recent years, thanks to their ability to produce an enormous volume of data quickly and at relatively low cost. The emergence of these big data has advanced the goals of precision medicine; however, across the entire continuum of big data capture and utilization, many more challenges lie ahead, from analysis of high-throughput biomarkers to maximum exploitation of the electronic health record (EHR), to the ultimate goal of clinical guidance based on a patient’s genome.
In recent years, almost all top biomedical journals have published major findings using advanced data science technologies, including complex statistical modeling, machine learning, and AI. Interpreting these results for patients and applying them for clinical guidance, however, remain significant challenges.
In this presentation, I will introduce the US NIH All of Us Research Program, a historic effort to gather data from one million or more people living in the US to accelerate research and improve health, and the recently launched Amazon Care, a virtual medical clinic for Amazon employees. In addition, I will offer some perspectives on the changing landscape for precision medicine, including the road map for choosing between statistical modeling and machine learning; the concept of treating unstructured text as quantitative data; and the need for physicians to adapt their mindset around the explosive growth in information technology, machine learning, and the AI revolution. These areas present great opportunities for medical researchers to strengthen their role in precision medicine. I will finish with some thoughts about future medical developments, including how to design and conduct pivotal trials, pragmatic trials, and real-world evidence studies in the precision medicine era.
AI Consultant, AI Industry Innovation, AI Singapore
Tern Poh is an AI Consultant at AI Singapore. He provides consulting services to enable teams to undertake the development and implementation of AI minimum viable models within their organisations. He is also on secondment to the National AI Office to provide his technical expertise on AI.
Prior to this, he was an instructor for the Data Analytics programme at the National University of Singapore School of Continuing and Lifelong Education.
Tern Poh started his career as a trainee trader at Bunge before becoming a Purchasing Manager at Procter & Gamble. Coming from a commercial background, he understands the importance of bridging stakeholders to translate analytics insights into impact at scale. He is excited about combining his AI knowledge and capability to create exponential value for businesses and society.
HPC-AI in Health and Biomedical Sciences 25 Feb 2020, Tuesday
Improving Hospitalisation Risk Prediction Among Kidney Patients
Patients undergoing dialysis have a high risk of hospitalisation. By the time they are hospitalised, their medical conditions have usually become full-blown and their mortality risks have increased. The ability to predict hospitalisation risk allows early medical intervention. Even though research has been done on key predictors of hospitalisation, the current process is fuzzy and depends on the experience of medical staff.
Through AI Singapore’s (AISG) flagship programme, 100 Experiments, AISG partnered with a regional kidney dialysis centre to develop a medical AI model that predicts the hospitalisation risk of hemodialysis patients. In this talk, AISG will provide some insights into the process and challenges of developing the medical AI model.
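As a hedged illustration of what such a model can look like, the sketch below trains a simple logistic regression risk classifier on synthetic data. It is not AISG’s actual model; the feature names (age, haemoglobin, albumin, interdialytic weight gain) are hypothetical stand-ins for the kind of routine measurements such a predictor might use.

```python
# Illustrative sketch of a hospitalisation-risk classifier for dialysis
# patients. Data and feature names are synthetic/hypothetical; the real
# AISG model and its features are not described in this abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000

# Hypothetical per-patient features from routine dialysis sessions.
X = np.column_stack([
    rng.normal(70, 15, n),    # age (years)
    rng.normal(11, 1.5, n),   # haemoglobin (g/dL)
    rng.normal(3.8, 0.5, n),  # serum albumin (g/dL)
    rng.normal(2.5, 0.8, n),  # interdialytic weight gain (kg)
])
# Synthetic label: hospitalised within 30 days (1) or not (0).
logit = 0.03 * (X[:, 0] - 70) - 0.5 * (X[:, 1] - 11) + rng.normal(0, 1, n)
y = (logit > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Risk scores let clinicians triage patients for early intervention.
risk = model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, risk), 3))
```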
Andreas has thirteen years of work experience in higher education and research institutes in Germany, Ireland and Singapore. Working at the intersection of technology, biomedical science and computer science, he developed highly cited open-source software packages for the analysis of biomedical samples. In his role as Bioinformatics core team lead at the Genome Institute of Singapore, Andreas and his team were responsible for developing scalable computational workflows for analyzing genomics big data on hybrid compute platforms, including data from the National Precision Medicine pilot program. Andreas is based in the Microsoft Singapore office, from where he supports academic institutions across Asia in their Azure cloud journey.
HPC-AI in Health and Biomedical Sciences 25 Feb 2020, Tuesday
Cloud-Based High-Throughput Compute Solutions for Biomedical Research
Biomedical sciences span a range of data-intensive disciplines, including genomics and medical imaging. Continued technological progress and cost reductions have led to a sustained increase in data production. In this talk I will discuss a number of systems and services for storing and analyzing biomedical datasets that not only scale, but also meet regulatory compliance and support reproducible research. I will discuss analysis workflow orchestration as well as container systems in the context of FAIR research and cloud-burst capabilities. I will furthermore touch on data-sharing challenges as well as accelerated data analysis services in the public cloud.
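As one concrete angle on reproducibility, the toy sketch below records input and output checksums for each workflow step, a small slice of what production orchestrators (e.g., Nextflow or Snakemake) provide alongside scheduling, containers, and scale. The pipeline, file names, and steps are hypothetical.

```python
# Toy sketch of workflow orchestration for reproducible analysis: each
# step records checksums of its inputs and output so a run can later be
# verified and reproduced (one small aspect of FAIR practice).
import hashlib
import json
from pathlib import Path
from typing import Callable, Dict, List

def sha256(path: Path) -> str:
    """Checksum a file for the provenance record."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def run_step(name: str, inputs: List[Path], output: Path,
             transform: Callable[[List[str]], str]) -> Dict:
    """Run one workflow step and return its provenance record."""
    data = [p.read_text() for p in inputs]
    output.write_text(transform(data))
    return {
        "step": name,
        "inputs": {str(p): sha256(p) for p in inputs},
        "output": {str(output): sha256(output)},
    }

# Tiny one-step "pipeline" over a toy input file.
raw = Path("sample.txt")
raw.write_text("ACGTACGT\n")
counts = Path("counts.txt")

provenance = [
    run_step("count_bases", [raw], counts,
             lambda d: json.dumps({b: d[0].count(b) for b in "ACGT"})),
]
Path("provenance.json").write_text(json.dumps(provenance, indent=2))
print(counts.read_text())
```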
Head, Bioinformatics Core, Cancer Science Institute, NUS
Dr Henry Yang is currently the head of the Bioinformatics Core at the Cancer Science Institute of Singapore, NUS. His main research focuses are the analysis, integration and interpretation of large-scale genomic/epigenomic data generated by individual PIs at CSI or campus-wide researchers, or downloaded from publicly available sites. To facilitate analyses for collaborators, the group has also developed several analysis tools, including analysis pipelines for alternative splicing, RNA modification, and alternative polyadenylation, and an NGS analysis portal.
HPC-AI in Health and Biomedical Sciences 25 Feb 2020, Tuesday
Integrated analysis platform for various large-scale NGS data
Numerous large-scale data sets, such as next-generation sequencing, are now available for biomedical research. Due to the complexity of biomedical systems, however, individual data sets describe biological/biomedical events from different aspects and alone can only provide information on a biomedical system from a limited viewpoint. Integrated viewpoints from disparate datasets are often required to better understand the complex biological regulatory mechanisms. In this presentation, a novel modular integration platform will be discussed which contains two steps: 1) an individual data analysis portal and 2) a modular data integration portal. Given a biological question, we can integrate different data at different stages, module by module, to more effectively answer the question asked.
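A minimal sketch of that two-step modular idea follows: step 1 analyzes each dataset independently, and step 2 integrates the module outputs gene-wise. The module functions, gene names, and the averaging rule are hypothetical stand-ins, not the platform’s actual components.

```python
# Minimal sketch of the two-step modular design described above: step 1
# analyzes each NGS dataset independently; step 2 integrates selected
# module outputs to address a specific biological question. All names
# and data structures are hypothetical.
from typing import Dict, List

def analyze_rnaseq(samples: List[str]) -> Dict[str, float]:
    """Step 1 module: per-gene expression scores (stub)."""
    return {f"gene{i}": float(i) for i in range(3)}

def analyze_methylation(samples: List[str]) -> Dict[str, float]:
    """Step 1 module: per-gene methylation scores (stub)."""
    return {f"gene{i}": float(3 - i) for i in range(3)}

def integrate(modules: List[Dict[str, float]]) -> Dict[str, float]:
    """Step 2: combine module outputs gene-wise (here, a simple average)."""
    genes = set().union(*modules)
    return {g: sum(m.get(g, 0.0) for m in modules) / len(modules)
            for g in genes}

samples = ["S1", "S2"]
combined = integrate([analyze_rnaseq(samples), analyze_methylation(samples)])
print(combined)
```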
Rikky Purbojati leads the high-performance computing facility at SCELSE, a research centre of excellence hosted at NTU that focuses on complex microbial communities in environmental and engineered systems.
He has over 9 years of experience in managing HPC and, in parallel, over 11 years of experience working as a bioinformatics specialist, specifically in environmental genomics. Previously he worked at First Base Pte. Ltd and NUS for 2 years performing a similar role.
One of his primary interests is to accelerate the time-to-result of data analysis and to enable scientists, Linux-proficient or not, to conduct complex data analysis on a large HPC system. Based on that idea, he has guided the development of SCELSE’s HPC system, including establishing pipelines and integrating the genomics and imaging labs.
HPC-AI in Health and Biomedical Sciences 25 Feb 2020, Tuesday
Petascale computing infrastructure for understanding genetic information and diversity in Asian populations
GenomeAsia100K is a large-scale population genomics project that aims to sequence 100,000 genomes from Asian populations, specifically South, East, and Southeast Asian. It was established because, while making up a significant portion of the human population, Asian genomes are currently underrepresented in genomics research across the world.
To meet the high demands of the project, the GenomeAsia100K consortium is collaborating with the National Supercomputing Centre of Singapore to utilize ASPIRE’s computational resources for comprehensive end-to-end analysis. This talk describes several advances and architectural decisions made to enable streamlined processing of the genomes, from data acquisition and data generation through data processing to the final dissemination of data to collaborators.
He was born in Los Angeles, California, United States. During his teenage years Miyahara began playing guitar and was impressed both by heavy metal and the burgeoning MTV culture of mainstream music in the 1980s. While Miyahara was still in school he developed his skills as a music producer. During this same period he traveled to Korea, where he became friends with other Asian-Americans of similar backgrounds and first began taking part in recording sessions as a producer. In 1999, he moved to Japan to start his full-scale music career. In 2002, Miyahara released his first song as a producer, “Tsubasa wo kudasai”, which was used as the theme song for Japan’s national team in the 2002 FIFA World Cup, co-hosted by Japan and Korea. Miyahara’s breakthrough success came in August 2008 with the release of Spontania feat. JUJU’s “Kiminosubeteni”, which he produced. The song reached number 1 on the USEN chart, fourth in the yearly iTunes ranking, and was downloaded over 5 million times. In 2009, another of Miyahara’s productions, JUJU feat. JAY’ED’s “Ashitagakurunara”, ranked No.1 on the Chaku Uta hit charts.
Computational Humanities 27 Feb 2020, Thursday
HPC for Entertainment and Education
From motion-capture for the creation of virtual characters, to the emerging use of VR, AR, A.I. and 5G, the potential applications of high-performance computing in the entertainment industry are proliferating. Come learn about (and experience) some of these ground-breaking initiatives. In addition, we will hear about Hitmaker University, a new Singapore-based educational institution, which promises to bring this intersection of entertainment and deep technology into a learning environment that can support the next generation of musicians and artists.
Howard Weiss is an expert on creating solutions for high-speed storage utilizing HDDs and SSDs/NVMe, job scheduling, interconnect technologies and container-based virtualization. Howard has served as Vice President of APAC for global IT companies such as DataDirect Networks (storage), Voltaire (interconnect, now Mellanox) and BakBone Software (data protection, now Quest/Dell). Howard was a co-founder of Cofio Software, which was acquired by Hitachi Data Systems (now Hitachi Vantara). Five years ago Howard incorporated Pacific Teck with a mission to provide cutting-edge products to the APAC supercomputer, machine learning and high-end enterprise market.
Howard has been a key influencer in the design and support of many of the Top500-class supercomputers around APAC, including AIST ABCI, currently the 7th most powerful supercomputer in the world. He helped improve the scheduling capabilities of the Tokyo Institute of Technology Tsubame 3 system, and works on some of the largest NVMe solid-state storage projects in APAC, including Australia’s CSIRO 2PB all-NVMe high-speed storage project.
Howard is a graduate of the University of Michigan. Fluent in Japanese, Howard has been based in Tokyo for close to 30 years.
Industry Track 25 Feb 2020, Tuesday
Advanced High Performance Storage, Virtualization/Containers and Job Management for Powering Smart Cities in the AI and Machine Learning Era
For HPC to power intelligent cities, key questions are how to fully utilize the ever-growing processing power of compute resources driven by the increase in the number and power of GPUs, how to remove storage bottlenecks with parallel file systems and NVMe drives, and how to run multiple jobs virtualized in containers on a node in a parallel environment. Howard Weiss will speak on experiences in supporting some of the largest supercomputers in Asia Pacific: implementing some of the fastest storage systems, managing complex jobs, and running jobs in a container system designed for parallel workloads. The experience leveraged from these projects can improve the efficiency of computing resources for environments large and small, and will help HPC power intelligent cities.
Dr. Bill Nitzberg is the CTO of PBS Works at Altair and “acting” community manager for the PBS Pro Open Source Project (www.pbspro.org). With over 25 years in the computer industry, spanning commercial software development to high-performance computing research, Dr. Nitzberg is an internationally recognized expert in parallel and distributed computing. Dr. Nitzberg served on the board of the Open Grid Forum, co-architected NASA’s Information Power Grid, edited the MPI-2 I/O standard, and has published numerous papers on distributed shared memory, parallel I/O, PC clustering, job scheduling, and cloud computing. When not focused on HPC, Bill tries to improve his running economy for his long-distance running adventures.
Industry Track 25 Feb 2020, Tuesday
Smart Cities are Built on HPC
Smart Cities encompass everything from greening traffic lights for emergency vehicles to fully autonomous, interconnected infrastructure that improves the productivity, health, and happiness of all citizens, 24x7x365. There’s a lot of hype. The real use cases being addressed today all leverage a core set of technologies — IoT, Data Analytics, and HPC — and Altair uniquely brings together expertise in all three domains. Building and deploying a new IoT device? SmartWorks makes it easier. Want to understand and visualize all the data? Knowledge Works makes it actionable. Need to process lots of calculations really quickly? PBS Works makes it efficient.
Jeff Ohshima is a technology executive of KIOXIA, formerly Toshiba Memory Corporation, which started operation under its new corporate identity as of October 1st 2019, where he focuses on SSD development and application engineering.
He was previously VP Memory Technology Executive at Toshiba America Electronic Components, currently Kioxia America, Inc., where he focused on flash memory with an emphasis on SSDs. He was also Senior Manager R&D in the Advanced NAND Flash Memory Design Department, responsible for 70 nm, 56 nm, 43 nm, and 32 nm designs. He has been engaged in memory technology for over 30 years, including 20 years on DRAM where he acted as a design lead for application specific memories and technical marketing.
He has served as a Visiting Research Scientist at Stanford University.
Industry Track 25 Feb 2020, Tuesday
Meeting HPC Application Needs With Advanced Storage Technology
Demands on flash storage for higher performance, lower latency, and higher density continue to grow, and the form factor best suited to a given system is increasingly being selected. With these evolutionary technologies, the cost performance of computing systems is improving significantly.
A new flash architecture that can support next-generation applications is essential to accommodate the wide range of storage needs of smartphones, mobile computing, and data centers. Kioxia offers leading-edge SSD technology with a new storage architecture that is best suited for on-premise and cloud data centers, and especially for HPC/supercomputing configurations that achieve high performance and excellent TCO.
Chin Guok joined ESnet as a network engineer in the Fall of 1997 after graduating from the University of Arizona with an M.S. in Computer Science. He was part of the core team that architected and deployed the ESnet4 backbone network in 2007 as well as the current ESnet5 network. He was the Principal Investigator of the ESnet On-demand Secure Circuits and Advanced Reservation System (OSCARS) project which received the R&D100 award in 2013, and the Department of Energy Secretary’s Honor Award in 2014. He currently serves as Head of the Planning and Architecture team, and lead architect for the next-generation ESnet6 network due to come online in 2021.
Industry Track 27 Feb 2020, Thursday
As data sets from DOE user facilities grow in both size and complexity, there is an urgent need for new capabilities to transfer, analyze, store and curate the data to facilitate scientific discovery. DOE supercomputing facilities have begun to expand services and provide new capabilities in support of experiment workflows via powerful computing, storage, and networking systems. The Superfacility concept introduces a framework for integrating experimental and observational instruments with computational and data facilities. The need to support large-scale distributed science workflows is the main driver of this work, which guides technical innovations in data management, scheduling, networking, and automation; facilitates new ways for experimental scientists to access HPC facilities; and informs future system design.
Chief Marketing Officer, Huawei Cloud Asia Pacific Region
Teck Guan has over 29 years of experience in the ICT industries, with primary focus on Cloud Computing, Smart Nation/City Solutions, 5G, Data Centre and IT Outsourcing in Asia Pacific. Recent projects include support for Singapore Smart Nation 2025 initiatives, working with IMDA on Strategic Partners Program (SPP), Talent Development in the areas of 5G, AI and Cloud, 5G Development in the Region, and the setting up of Huawei AI Lab in Singapore. He is currently focusing on developing the Huawei Cloud branding and supporting Cloud business growth in Asia Pacific.
Industry Track 25 Feb 2020, Tuesday
The “New Confluence” Drives the Fully-connected, Intelligent City
5G, AI, Cloud and IoT will be the “new confluence” of technologies that drives future innovations. Smart city is all about using these ICT technologies to build common city services that improve the overall efficiency, sustainability and safety of the city. In this session, Huawei will share the new confluence of technologies, the integrated ICT solutions and the ecosystem development that bring together smart city platforms and solutions to help countries around the world build smart and safe cities.
Managing Consultant, Huawei South Pacific, Huawei International Pte Ltd
Paul is a seasoned mobile telecommunications professional with 20+ years of international work experience in major telcos, consultancies, and network and device vendors, covering business and network consulting, solution marketing, interoperability testing and validation, R&D, and wireless network planning and optimisation. At Huawei, Paul has provided consulting across South Pacific countries, including Singapore, to enable telco clients’ business success. Furthermore, his solution marketing engagements include executive presentations, strategy summits, 5G industry use cases, the IoT ecosystem and Huawei AI Lab development.
Industry Track 25 Feb 2020, Tuesday
5G Gear Up
Everything is going mobile in the evolution of communications. 5G is native to richer experiences and industry digitalisation. 5G will connect everything and benefit all walks of life by combining big data, cloud computing, artificial intelligence, and many other innovative technologies. 5G marks the start of a new golden era that will bring disruptive change across the industry. New applications and new business models will generate new revenue streams as market leaders move quickly to introduce innovative 5G products and services. Globally, 5G is gaining strong momentum in its commercial adoption, and its ecosystem is developing much faster than expected.
Ashrut Ambastha is a Sr. Staff Architect at Mellanox responsible for defining network fabrics for large-scale InfiniBand clusters and high-performance datacenter fabrics. He is also a member of the application engineering team that works on product designs with Mellanox silicon devices. Prior to Mellanox, he worked for Tata Computational Research Labs in India and was involved in architecting the InfiniBand backbone for Tata’s HPC system “Eka”, which was ranked #4 on the Top500 list at SC07. Ashrut’s professional interests include network topologies, routing algorithms and PHY signal integrity analysis/simulations. He holds an MTech in Electrical Engineering from the Indian Institute of Technology-Bombay.
Industry Track 25 Feb 2020, Tuesday
InfiniBand In-Network Computing Technology and Roadmap
The latest generations of smart interconnects offload from the CPU both network functions and selected data algorithms. This allows users to run algorithms on the data while it is being transferred within the system interconnect, rather than waiting for the data to reach the CPU. This technology is referred to as In-Network Computing. HDR 200G InfiniBand In-Network Computing technology empowers the world’s leading supercomputers and is paving the road to exascale computing. The session will discuss InfiniBand In-Network Computing technology, performance results, and future plans.
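To make the idea concrete: a collective operation such as a sum across all nodes is a natural candidate for in-network offload, because the switches can reduce the data as it flows. A minimal sketch using mpi4py (it shows the collective that gets offloaded; on a SHARP-enabled fabric the offload itself is transparent, and this is not Mellanox-specific code):

    # allreduce_demo.py -- run with: mpirun -np 4 python allreduce_demo.py
    # On a SHARP-capable InfiniBand fabric, the reduction below can be executed
    # inside the switches while the data is in flight, instead of on the CPUs.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    local = np.full(4, comm.Get_rank(), dtype='d')   # each rank contributes its rank id
    total = np.empty(4, dtype='d')
    comm.Allreduce(local, total, op=MPI.SUM)         # the collective a smart fabric can offload
    if comm.Get_rank() == 0:
        print(total)                                 # with 4 ranks: [6. 6. 6. 6.] (0+1+2+3)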
Atul is an HPC industry veteran with nearly 20 years of experience in building parallel file system software and storage solutions. In his current role, Atul is responsible for revenue generation in India and the rest of Asia for DDN’s HPC as well as Enterprise business.
Industry Track 25 Feb 2020, Tuesday
Architecting AI, HPC and Big Data Storage Solutions for Next Gen Supercomputers
This talk will highlight storage technologies that will be available in the market in the near future, and how these technologies can be leveraged, along with intelligent software, to address the needs of next-generation supercomputers. The talk will focus on the convergence of HPC, AI and Big Data technologies in a unified platform, where applications can be accelerated with the help of software tuning on commodity hardware platforms.
Harshit Sharma leads Lux Research’s research services for the digital transformation of heavy industries such as oil and gas, mining, and marine sectors and is based in Lux Research’s Singapore office. His research focuses on technological developments and market trends impacting the energy sector, ranging from digital oilfield to crude refining and petrochemicals. Harshit has deep expertise in key emerging technologies, such as digital twins, oilfield autonomy, crude-to-chemicals, and gas detection and capture.
Prior to joining Lux Research, Harshit designed and manufactured completion tools as an R&D Engineer at Halliburton. Specifically, his work led to new innovations in liner hangers and swell packers for open hole completions. Harshit received his M.S. from the National University of Singapore in Mechanical Engineering with a focus on offshore technology catered for the subsea market in Malaysia.
The Fourth Industrial Revolution for Every Industry: A Taxonomy of Industry 4.0
Industry 4.0 encompasses the digitization and interlinking of the entire value chain of a physical, product-based industry. Industry 4.0 use cases directly impact the product lifecycle, and are motivated by either top-line revenue growth or bottom-line savings.
In this presentation, Lux discusses:
An industry-agnostic taxonomy for Industry 4.0 based on the product-centric value chain, spanning from product development to post-sales support.
Within each stage, we lay out key use cases, applications, and technologies that can be implemented today.
Additionally, we outline key considerations around ROI and ease of implementation when evaluating an Industry 4.0 project.
Dr. Seri Lee is currently the Chief Technology Officer of ERS. With over 30 years of experience, he has held positions in several academic institutions and corporate organizations, including NTU as Associate Professor and Intel Corporation as Senior Staff Engineer.
He also served as the General Chair of the IEEE SemiTherm International Symposium in 1998 and, in 2004, received the best paper award from the ASME Journal of Heat Transfer Division. Dr. Lee has published more than 60 technical papers and holds 16 US and international patents covering a wide range of thermal-related subjects in electronics.
Leveraging the power of HPC-driven CFD tools and A*STAR’s IHPC CFD expertise, ERS was able to quickly bring KoolLogix, its new and innovative passive data-centre cooling solution, to market. This presentation showcases examples of how the CFD simulations were conducted to validate the early proof-of-concept designs and to assess the performance scalability of the KoolLogix system.
Research Engineer, Seiko Instruments Inc., Corporate Technology Division, Production Engineering Center, Production Engineering
Mr. Watanabe joined Seiko Instruments Inc. as a research engineer in the Research and Development Division (HQ) after receiving a master’s degree from Chuo University, Japan, in 1998. He has 6 years of experience in R&D of ultrasonic micro-motors and was transferred to the Production Engineering Center (HQ) in 2004.
He is also responsible for providing support in product development using piezoelectric-mechanical and CFD simulation technologies. Recently, he has been focusing on simulations of inkjet print heads and watches.
Utilization of HPC for New Product Development in Seiko
This session will outline a successful collaborative project between Seiko Instruments Inc. (SII) and A*STAR’s Institute of High Performance Computing (IHPC) to develop a simulation method for inkjet heads and related components using open-source software. Formation of micro-sized droplets, ejection efficiency, and power consumption are some of the main considerations when designing an inkjet head made of piezoelectric material.
In order to better understand the flow phenomena involved and thus achieve an optimal design of the inkjet head, SII and IHPC have jointly developed a 2-way FSI solver using open-source software. Simulation time was shortened by performing calculations on High Performance Computing (HPC) systems, thereby improving the overall design turnaround time. This solver is expected to make a great contribution to new product development in the near future.
Deputy Executive Director, Institute of High Performance Computing, A*STAR
Prof Zhang Yong Wei is Deputy Executive Director at A*STAR’s Institute of High Performance Computing (IHPC). He is an Adjunct Professor at both the National University of Singapore and the Singapore University of Technology and Design. His research interests focus on using theory, modelling and computation as tools to study materials design, growth, processing and manufacturing, mechanical-thermal coupling, and mechanical-electronic coupling, among others.
He has published over 450 refereed journal papers and delivered over 80 invited/keynote/plenary talks and lectures. He serves as an Editorial Board Member for Advanced Theory and Simulations and the International Journal of Applied Mechanics. He was listed as a Global Highly Cited Researcher in 2018 and 2019.
A Multiscale Integrated Digital Twin Platform for Powder-bed Fusion Additive Manufacturing
Supercomputing has played a pivotal role in many important fields, such as weather forecasting, oil and gas exploration, DNA sequencing, and rocket and aircraft design. In modern additive manufacturing (AM), part property inconsistency and quality control are two major bottlenecks that hinder its wide industry adoption. Many factors, such as powder quality, powder size and distribution, laser power, scan speed, scan strategy, and hatch distance, can all affect part quality.
IHPC has developed an integrated digital twin platform for powder-bed fusion selective laser melting (SLM), with the aim of predicting printing outcomes, such as porosity, grain and phase microstructures, residual stress distribution and distortion, surface roughness, and mechanical properties, for given printing conditions.
HPC Architect, National University of Singapore, Centre for Advanced 2D Materials (CA2DM)
Miguel has a background in computational statistical physics, having obtained his PhD from the University of Porto, Portugal, in 2010.
He joined NUS CA2DM in 2012 to set up its Research Computing Support team, including both IT and HPC systems and services, and has been the centre’s “HPC guy” since.
Since 2017, he has also been one of the maintainers of EasyBuild (https://easybuild.readthedocs.io/), a software build and installation framework that allows one to manage scientific software on HPC systems in an efficient way.
Optimized software environments and workflows across heterogeneous HPC sites and clouds: Building 2DMatpedia, an open computational database of two-dimensional materials
Data Science and AI promise to upend how computational scientific research is done by exploiting patterns hidden within enormous amounts of data. However, aside from a few fields such as image processing, it is not always straightforward to generate a sufficiently large and coherent collection of data that can, for instance, serve as a training set for machine learning algorithms.
In many cases, each single example in a potential training set needs to be obtained from a numerical simulation that by itself requires considerable HPC resources, such that the total computational effort often exceeds what is available at a single site, while coordinating tasks across multiple sites creates its own challenges.
In this talk, we describe how we used modern software management and workflow tools across multiple HPC sites and clouds in order to build 2DMatpedia, an open computational database of two-dimensional materials.
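As a flavour of the software-management side of this, EasyBuild recipes (“easyconfigs”) are small Python-syntax files that make a software stack reproducible across sites. A minimal sketch with illustrative values only; this is not the actual 2DMatpedia toolchain:

    # zlib-1.2.11.eb -- a minimal EasyBuild easyconfig (illustrative)
    easyblock = 'ConfigureMake'   # generic configure / make / make install build
    name = 'zlib'
    version = '1.2.11'
    homepage = 'https://zlib.net'
    description = "zlib compression library"
    toolchain = {'name': 'GCC', 'version': '8.3.0'}   # compiler toolchain to build with
    source_urls = ['https://zlib.net/']
    sources = [SOURCE_TAR_GZ]     # EasyBuild template: zlib-1.2.11.tar.gz
    moduleclass = 'lib'
    # Install with:  eb zlib-1.2.11.eb --robot   (--robot resolves dependencies)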
Research Fellow, National University of Singapore (NUS)
Dr. Zhou Jun is a research fellow at the Department of Physics, National University of Singapore. He received his Ph.D. from the National University of Singapore in 2016, working on first-principles studies of complex oxide interfaces. Subsequently, he has been working at the National University of Singapore on data-driven two-dimensional materials discovery by high-throughput calculations.
2DMatPedia: A Library of 2D materials by top-down and bottom-up approaches
In an effort to discover 2D materials systematically, Dr. Zhou and the team have used both top-down and bottom-up approaches to generate 2D structures. On one hand, monolayers are theoretically exfoliated from layered three-dimensional structures by a topology-based algorithm. On the other hand, new 2D materials are systematically generated by chemical substitution of elements in the top-down 2D compounds with similar elements from the same group in the periodic table.
High-throughput first-principles calculations are carried out to study their physical properties. The progress of this project will be reported and some preliminary applications of the 2D materials database will be discussed.
Dr. Daniel Cheong is the Deputy Programme Director for A*STAR’s Industrial IoT Innovation (I³) programme, where he works closely with industry partners to drive the adoption of industrial IoT and to solve real industry pain points. He was previously a senior scientist at A*STAR’s Institute of High Performance Computing (IHPC). He has also worked in the medical device sector, including at a start-up developing surgical implants. He obtained his PhD in Chemical Engineering from Princeton University in 2006 and his MBA from Singapore Management University in 2014.
Leveraging the Industrial Internet-of-Things for Reliable Data Collection to Enable Predictive Maintenance
The ability to predict and forecast when a machine or piece of equipment is going to break down is one of the many benefits that AI promises to deliver to industry. Such a forecast allows maintenance to be scheduled before failure occurs, potentially increasing overall operational efficiency and productivity. However, the realization of such predictive maintenance depends on collecting relevant equipment data in a reliable and cost-efficient way to build the predictive model.
This talk will present how A*STAR’s Industrial IoT Innovation programme has been working with companies to address this “first mile of digitalization” – the acquisition and transmission of data from assets and equipment – including some of the typical challenges of such implementations and the ongoing work to overcome them.
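For a concrete sense of the end goal, here is a toy sketch of the kind of failure-prediction model that reliable sensor data enables (hypothetical features and synthetic data; not the programme’s actual pipeline):

    # Toy predictive maintenance: classify whether a machine will fail soon
    # from sensor readings. Data and features are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))        # columns: vibration, temperature, current draw
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.2).astype(int)   # synthetic "fails soon" label

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))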
Vice President of System Development Division, Next Generation Technical Computing Unit, Fujitsu Limited
Mr. Toshiyuki Shimizu is Vice President of the System Development Division, Next Generation Technical Computing Unit, at Fujitsu Limited. He has been deeply and continuously involved in the development of scalar parallel supercomputers, large SMP enterprise servers, and x86 cluster systems. His primary research interest is in interconnect architecture, most recently culminating in the development of the Tofu interconnect for the K computer and the PRIMEHPC series. Mr. Shimizu leads the development of Fujitsu’s high-end supercomputer PRIMEHPC series and the Post-K supercomputer. He received his Master of Computer Science degree from Tokyo Institute of Technology in 1988.
Industry Track 26 Feb 2020, Wednesday
The achievements and status of the supercomputer “Fugaku”, formerly known as Post-K, Japan’s new national supercomputer, will be discussed. Fugaku targets up to 100 times higher application performance than that of the K computer, with superior power efficiency. Fugaku employs the newly developed FUJITSU A64FX CPU, featuring the Armv8-A instruction set architecture and the Scalable Vector Extension (SVE), to widen application opportunities. Fugaku contributes to the Arm ecosystem for HPC applications as well as to science and society.
Head of Security Engineering, APAC, Check Point Software Technologies
Gary has 22 years’ experience in the IT industry, 20 of which have been dedicated to the security field. His experience ranges from performing audits, digital forensics for law enforcement and penetration testing to complex design and implementation of financial-industry network security. He regularly speaks at industry forums, online and on television, as a cybersecurity evangelist.
He brings a wealth of 17 years’ experience in cyber security technology consultancy with C-level executives, helping drive business outcomes for organizational initiatives. He spearheads cyber security projects with complex architecture designs, enabling organizations’ team members to focus on driving business value. Prior to Trend Micro, he worked at several security vendors, including McAfee, Sangfor, Mail & Web Marshal (TrustWave) and M.Tech.
Ariel Levanon is an expert on cyber security and managing complex intelligence projects. Mr. Levanon is currently Mellanox VP of Cyber Security, responsible for defining Mellanox’s cyber security architecture, roadmap and solutions for cloud security. His areas of expertise include cyber threat intelligence as well as cyber security network analysis. He specializes in cyber security for complex systems and networks, including hardware and software technologies. Mr. Levanon brings with him 20 years of working experience in cyber and intelligence positions. Ariel served as a Major in the IDF’s intelligence unit for 17 years, where he led a number of cyber and intelligence groups dealing with highly complex technological projects that received special awards for excellence. Mr. Levanon holds a B.Sc. in Electrical and Software Engineering and an M.B.A. from Tel Aviv University, specializing in Technological Management & Entrepreneurship, Cyber-Security and Marketing Management.
Fellow: Singapore Computer Society & Vice Chair SGTech Cloud & Data
Raju Chellam is a Fellow of the Singapore Computer Society (SCS), Vice Chairman of the Cloud & Data Chapter at SGTech, Vice President of the Cloud Chapter at SCS and Vice President of New Technologies at Fusionex International. He is also the author of Organ Gold, published by the Straits Times Press, on the illegal trade in human organs on the Dark Web.
Joe is a client technical professional for IBM whose key charter is to help customers adopt advanced technologies, translating into value for customers through IBM’s professional solutions, services and consulting businesses. The ever-challenging IT landscape has impelled him to work with the team to drive customer mind-share on an information-centric security strategy to deal with the evolving cyber threat landscape. Prior to joining IBM, Joe worked at Symantec in a similar capacity, where he worked with customers to help formulate and adopt solutions to address complex IT challenges within their business environments. Joe holds a Bachelor degree in Applied Science (Information Technology) from the Royal Melbourne Institute of Technology (RMIT).
Quantum computing holds the promise of delivering new insights that could lead to medical breakthroughs and scientific discoveries across a number of disciplines. It could also become a double-edged sword, as quantum computing may create new exposures, such as the ability to quickly solve the difficult math problems that are the basis of some forms of encryption. While large-scale, fault-tolerant quantum computers are likely years if not decades away, data can be harvested today, then stored and decrypted in the future with a powerful enough quantum computer. And while the industry is still finalizing post-quantum cryptography standards, businesses and other organizations can start preparing today. IBM Research scientists have announced the development of new “quantum safe” encryption techniques, donated the quantum-safe algorithms to OpenQuantumSafe.org for developing additional open standards, and submitted them to NIST for standardization. Learn how you can start the preparation today.
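For readers who want to experiment now, the Open Quantum Safe project publishes Python bindings to its liboqs library. A minimal key-encapsulation sketch (the algorithm name and its availability depend on the installed liboqs version):

    # Quantum-safe key exchange with the Open Quantum Safe Python bindings (module: oqs).
    # Kyber512 is one lattice-based KEM from the NIST post-quantum standardization process.
    import oqs

    with oqs.KeyEncapsulation("Kyber512") as client, \
         oqs.KeyEncapsulation("Kyber512") as server:
        public_key = client.generate_keypair()                     # client publishes a key
        ciphertext, secret_srv = server.encap_secret(public_key)   # server wraps a shared secret
        secret_cli = client.decap_secret(ciphertext)               # client recovers the same secret
        assert secret_cli == secret_srv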
Rob has been architecting and implementing storage and data management solutions for more than two decades for many large organisations. He has extensive experience architecting petascale high performance computer systems, high performance file systems, and large-scale persistent storage environments, and is proficient at architecting high performance parallel file systems and performance-driven storage archive solutions that are aligned with customer workflow requirements. Rob’s experience covers all aspects of the system life cycle, from requirements analysis, infrastructure design, capacity planning, continuity planning and integration planning to benchmarking for new systems. His experience with ‘Big Data’ storage solutions has delivered real outcomes to researchers, enabling them to focus on their science rather than the movement and management of data. Rob has a complete understanding of the end-to-end data path, from data acquisition to data at rest; this ensures any potential bottlenecks or data integrity issues are easily identified.
This knowledge is combined with more than a decade of real-world customer experience within the Science, Education and Government sector, working for the CSIRO and the iVEC (Pawsey) Supercomputing Centre, designing and implementing their petascale storage initiative.
Prior to joining HPE (SGI), Rob worked for DataDirect Networks (DDN) as the primary systems design engineer for ANZ, where he architected high performance parallel file system solutions for HPC centres around the Asia Pacific region, focusing on Lustre and GPFS. Rob has a Diploma in Computer Systems Engineering from Swinburne University.
Industry Track 26 Feb 2020, Wednesday
Managing and protecting exabytes of data whilst maintaining availability and performance.
HPC, AI and the Internet of Things are advancing at an unprecedented pace, creating and collecting huge quantities of data. This deluge of data is exacerbating the challenges of storing data cost-effectively, at scale. Come along and listen as we explore the challenges of storing exabytes of data while ensuring it is protected from loss or corruption and available for use when required.
Lim Chin Lee is currently a manager in Engineering Design, Basic Design. He joined Sembcorp Marine (SCM) in 2007 and leads the design and analysis of floating structures and vessels. He studied mechanical engineering at the University of Leeds. He received a PhD degree on the topic “non-linear dynamic analysis of flexible rotor systems using the finite element method”.
Creating Next Generation Marine Vessels through Advanced Simulation and Artificial Intelligence
In the face of stiff competition, cost reduction pressures and globalisation, the marine industry today requires a quantum leap in the entire ship design-and-build process to keep abreast of, and to differentiate itself in, a marketplace that is fast moving towards digital technologies. Advanced simulation and artificial intelligence can be applied effectively to shorten product design processes and speed time to market.
This presentation provides an overview of how Sembcorp Marine is embracing these latest technologies to create next-generation innovative products and services with greater impact. It is envisioned that advanced simulation and artificial intelligence can transform the entire design, manufacturing and operation process into an integrated, smart, through-life product for ships in the near future.
Dr. Wang Yi is a computer scientist specializing in knowledge-centric AI. He holds a PhD from the Hong Kong University of Science & Technology. He has over 15 years’ experience in researching, developing, and translating AI technologies to support decision making in various domains, including engineering, healthcare, bioinformatics, and socioeconomics.
He is currently a technical lead at Rolls-Royce, driving the development of disruptive AI capabilities to innovate the ways Rolls-Royce designs, makes, and services complex engineering products such as gas turbine engines and hybrid power systems.
Boosting Engineering Design with Artificial Intelligence: Opportunities and Challenges
Modern engineering design leans on physics-based simulations and field tests for verification and validation. Despite advances in High Performance Computing, these remain highly costly and are the bottlenecks of the design process.
In this talk, Yi Wang will share some thoughts on leveraging Artificial Intelligence as a complementary means to enable better design at a faster pace. He will discuss its use in transforming historical data into fast prediction capabilities and, more importantly, its potential in catalysing organizational engineering knowledge to support design decisions and harmonize analyses and tests. Key challenges will be highlighted that call for joint efforts from academic and industrial players to make a step change.
After receiving a Ph.D. in electronics and computer vision from Blaise Pascal University in France in 2016, with a thesis defending the dataflow model of computation for FPGA high-level synthesis in embedded machine learning applications, Cedric Bourrasset is now AI Product Manager in the Atos Bull division, in charge of the Atos Codex AI Suite software solution.
Industry Track 26 Feb 2020, Wednesday
Atos Smart AI strategy for cities
Today, cities face numerous challenges in becoming smart and safe, requiring the deployment of large-scale AI-based systems for video protection and edge computing solutions.
Atos, as a global leader in digital transformation, develops and deploys IT services and innovative technological products in many domains, such as HPC, Big Data and Cyber Security, to help cities move towards integrating more AI-based solutions.
Chief Technology Officer of Atos Big Data & Security, also Distinguished Expert and member of the Atos Scientific Community
Philippe Duluc graduated from Ecole Polytechnique in Paris (France), initially working as a military engineer for the French Ministry of Defense and for the Prime Minister’s office. After 20 years of public service, he joined the private sector, first as Chief Security Officer for Orange Group, then as manager of cybersecurity operations at Bull. He is now CTO of Atos’ Big Data & Security Division. He has been an adviser to the European Network and Information Security Agency, and has a keen interest in the scientific and technical domains involved in digital transformation: cryptography, cyber-defense, advanced computing, data science, artificial intelligence, and promising quantum technologies.
Industry Track 26 Feb 2020, Wednesday
Atos Quantum: an application-oriented program ready for NISQ era
Supercomputing will enter exaflopic realms in the coming years, after decades of regular increases in power thanks to chip technology improvements. However, we are reaching the boundaries of Moore’s law and must find new directions to provide more compute power. This is why Atos has launched its disruptive Quantum program, as described during SupercomputingAsia 2019, including the Atos Quantum Learning Machine [Atos QLM] offer, a complete development environment for coding, optimizing and testing through realistic simulation. New in 2020 is the progress of NISQ algorithms towards quantum advantage and the imminent arrival of NISQ accelerators embedded in hybrid computing. To tackle these challenges, the Atos QLM has evolved. We will present these enhancements and how some of our Atos QLM customers are developing innovative NISQ strategies to fight global warming.
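For a taste of the programming model, Atos also distributes myQLM, a free Python counterpart to the QLM environment. A minimal Bell-state sketch (API as in the public myQLM documentation; not an example from this talk):

    # Build and simulate a two-qubit Bell state with myQLM (pip package: myqlm).
    from qat.lang.AQASM import Program, H, CNOT
    from qat.qpus import get_default_qpu

    prog = Program()
    qbits = prog.qalloc(2)
    prog.apply(H, qbits[0])                 # put qubit 0 in superposition
    prog.apply(CNOT, qbits[0], qbits[1])    # entangle qubit 1 with qubit 0
    job = prog.to_circ().to_job()

    for sample in get_default_qpu().submit(job):
        print(sample.state, sample.probability)   # |00> and |11>, ~0.5 each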
Senior Solutions Architect, ASEAN, Red Hat Asia Pacific Limited
Li Ming is a Senior Solutions Architect at Red Hat, ASEAN. At Red Hat, Li Ming is the chief architect for the public sector in Red Hat Singapore on Cloud, Container and Automation technologies.
Prior to joining Red Hat, Li Ming worked in engineering, R&D, and technology leadership roles. He spent the last 15 years in Open Source, 10 of which were spent actively in High Performance Computing (HPC) and Big Data. He has also held various technology leadership roles spanning engineering, R&D, and consulting.
Industry Track 26 Feb 2020, Wednesday
Why Use Containers, Kubernetes, and OpenShift for Artificial Intelligence/Machine Learning Workloads?
In this session, we will discuss the challenges of reproducible AI/ML workloads and how Kubernetes is changing the game. Learn how data scientists and data engineers are using Kubernetes to deploy ML frameworks and achieve true workload portability. We will share best practices for applying DevOps concepts to machine learning (MLOps) on top of the Red Hat OpenShift Container Platform, and for accelerating your workloads with GPUs to maximize your investment in hardware.
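As a sketch of what this looks like in practice, here is a GPU-backed training pod submitted with the Kubernetes Python client (the image, names and namespace are placeholders; OpenShift layers its own tooling on top of this core API):

    # Launch a one-off GPU training pod (pip package: kubernetes).
    from kubernetes import client, config

    config.load_kube_config()   # use the current kubectl/oc context
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="train-job"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(
                name="trainer",
                image="tensorflow/tensorflow:latest-gpu",   # placeholder image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"},          # schedule onto a GPU node
                ),
            )],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)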
Leading the Dell Technologies high performance computing and artificial intelligence systems division in Asia Pacific, I work hand-in-hand with industrial, academic and government clients to build the computational foundations critical to their scientific advancement and global economic competitiveness.
Industry Track 27 Feb 2020, Thursday
Architecting Parallel File Systems with NVMe
This session will cover best practices for architecting and designing parallel file system storage for simulation and modelling workloads, leveraging modular building blocks with NVMe.
Dr. Mileidy Giraldo is the Global Lead for Genomics R&D at Lenovo HPC & AI. Dr. Giraldo has about 20 years of experience in HPC for Population-level Genomics, Next Generation Sequencing (NGS), and Biological Databases applied to infectious diseases, cancer, biomarker discovery, and vaccine design. Prior to joining Lenovo, Dr. Giraldo worked as a Bioinformatics scientist at NCBI, NIH and actively collaborated with researchers at NCI and NIAID. Currently, Dr. Giraldo leads Lenovo’s Genomics R&D Group focusing on evaluating the latest HPC technologies to bring innovation and reduce time to scientific insight. With a vertical team of researchers, engineers, and expanding partnerships, Dr. Giraldo’s group builds robust capabilities around NGS, Precision Medicine, and Healthcare AI.
Industry Track 27 Feb 2020, Thursday
Accelerating and Sizing HPC for Population-level Genomics
Given the new affordability of Next-Generation Sequencing methods and the increased computing and storage capacities of the last decade, large national genomics initiatives, such as the UK Biobank and the All of Us program, are emerging all around the world. The greatest challenge such population genomics efforts face is scale: scaling up in input data (from exomes to whole genomes) as well as scaling up production (from a handful to tens of thousands of samples), and the corresponding challenges these create for the HPC infrastructure. In this talk, I will discuss a systematic study of the performance of genomics workflows against hundreds of permutations of hardware, system tunings, and software implementations. As a result, we reduced workflow execution from 150 to 5.5 hours for whole genomes and from 4 hours to 4 minutes for exomes (a 40X improvement). We captured the lessons from our benchmarking in a scaling tool to size HPC for custom workloads. These findings are helping datacenters around the world accelerate workflows and plan HPC resources more effectively as they perform genomics at scale.
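A back-of-envelope illustration of why such sizing matters, using the workflow time quoted above (the cluster size and one-workflow-per-node assumption are hypothetical inputs, exactly the kind of variables a sizing tool sweeps):

    # How many whole genomes per day can a cluster finish end to end?
    hours_per_genome = 5.5    # optimized whole-genome workflow time (from the talk)
    nodes = 100               # hypothetical cluster size, one workflow per node
    genomes_per_day = nodes * 24 / hours_per_genome
    print(f"{genomes_per_day:.0f} genomes/day")   # about 436/day under these assumptions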
Dr. Timothy Costa is Product Manager for HPC Software at NVIDIA, responsible for C, C++ and Fortran Compilers, Math Libraries and Communication Libraries. Prior to joining NVIDIA, Tim worked as a performance library architect and HPC application engineer, and owned enabling efforts for CFD applications. Dr. Costa obtained his Ph.D. in Mathematics from Oregon State University, working on the development and analysis of numerical methods for fluid flow in stochastic or evolving porous media.
Industry Track 26 Feb 2020, Wednesday
Evolution of NVIDIA’s HPC Software Architecture
NVIDIA opened the GPU to general-purpose programming with the release of CUDA in 2007. The CUDA programming model has steadily evolved to include support for C++, Unified Memory, Tensor Cores and many other features which expand the base of applications that can be accelerated, simplify GPU programming and offer even better performance. In addition to proving effective as a parallel programming model, CUDA served as a platform for building higher-level interfaces such as industry-standard math libraries, class library-based programming models like Thrust, and directive-based programming models like OpenACC. In this talk we’ll review the latest developments in the NVIDIA HPC Software Architecture that continue to open GPU computing to a wider audience of developers and users, including automatic acceleration and tensor core programmability in standard languages and novel libraries for compute and communication.
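To ground the idea of higher-level GPU programming, here is a CUDA kernel written and launched from Python with Numba (one of several high-level routes onto CUDA; a sketch of the programming model rather than the specific standard-language or tensor-core features the talk covers):

    # Vector addition as a CUDA kernel, expressed in Python via Numba.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def vadd(a, b, out):
        i = cuda.grid(1)          # global thread index
        if i < out.size:          # guard threads beyond the array bounds
            out[i] = a[i] + b[i]

    n = 1 << 20
    a = np.arange(n, dtype=np.float32)
    b = 2 * a
    out = np.zeros_like(a)
    threads = 256
    blocks = (n + threads - 1) // threads
    vadd[blocks, threads](a, b, out)   # Numba copies the arrays to the GPU and back
    assert np.allclose(out, a + b)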
Executive IT Architect, IBM Systems Worldwide Client Experience Center
Clarisse Taaffe-Hedglin is an Executive IT Architect in the IBM Systems Worldwide Client Experience Center with expertise in application performance and benchmarking. As a senior member of the Artificial Intelligence Center of Competency, she works with clients across many industries to shape their computing strategy, define architectural designs and prototype solutions in High Performance Computing, analytics, machine learning and deep learning to enhance insights from their data. Clarisse has many years’ experience evaluating client datasets to tune systems, applications and models for high performance and scalability. Her current focus is on bringing AI into the HPC workflow for increased productivity. She also works with universities to bring in hands-on learning opportunities through labs and hackathons, and is an IBM Quantum Ambassador. She previously held cross-platform technical management roles in IBM and started her career as a Mainframe S/390 Vector Facility developer. Clarisse has a background in numerical analysis and parallel systems optimization, with a Master of Science degree in Mathematics from Purdue University.
Industry Track 25 Feb 2020, Tuesday
AI at Scale: Better Models, Bigger Clusters and Smarter Deployments
As the adoption of AI technologies increases and matures, the focus will shift to time to market and productivity. Scaling AI model development and providing a complete, collaborative platform and tools for rapid solution deployment are key focus areas for expanding data scientist teams. The drive for performance, productivity acceleration, and smarter infrastructure resource utilization and management efficiencies can be seen across all industries and modernization initiatives, including intelligent cities. This talk will cover the challenges and innovations of AI at scale and explore how AI technologies such as visual inspection and video analytics engines can contribute to making our world smarter.
Christopher D. Maestas is a Senior Architect for IBM File and Object Storage Solutions with over 20 years of experience deploying and designing IT systems for clients in various spaces. He has experience scaling performance and availability with a variety of file system technologies, and has developed benchmark frameworks to test systems for reliability and validate research performance data. He has also led global enablement sessions, online and face-to-face, discussing how best to position mature technologies like Spectrum Scale alongside emerging technologies in the Cloud, Object and AI spaces. He resides in Albuquerque, New Mexico, in the United States.
Industry Track 25 Feb 2020, Tuesday
The Secret [Weapon] Behind the World’s Smartest Supercomputers
IBM Spectrum Scale, the industry-leading parallel file system behind the most powerful supercomputers in both the private and public sectors, continues to advance. The latest enhancements make it ideal for HPC, Big Data, AI, and other demanding workloads where there is no room for compromise in either performance or scalability. In this session, we will cover the advances of the past year that have made IBM Spectrum Scale easier to use and more flexible across a variety of workloads and platforms. Plus, with the introduction of Erasure Code Edition, Spectrum Scale brings those same world-leading enterprise capabilities to storage-rich servers with commodity hardware. We will highlight a handful of installations to explore usage patterns.
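To illustrate the idea behind erasure coding: parity is stored alongside data so that a lost block can be rebuilt from the survivors. A toy single-parity (RAID-5-style) demonstration; Spectrum Scale’s actual erasure codes are considerably more sophisticated:

    # Toy single-parity erasure code: lose any one block, rebuild it from the rest.
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    data = [b"blk0", b"blk1", b"blk2"]     # equal-sized data blocks
    parity = xor_blocks(data)              # stored on a separate device

    lost = data.pop(1)                     # the device holding "blk1" fails
    rebuilt = xor_blocks(data + [parity])  # XOR of survivors + parity = lost block
    assert rebuilt == lost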
Keith Strier is NVIDIA’s VP, Worldwide AI Initiatives, helping governments leverage accelerated computing and AI to advance their policy agenda, including industrial innovation, economic growth and citizen well-being. Previously, Keith was a Senior Partner and Global AI Leader for Ernst & Young, where he co-authored multiple national AI programs in Europe. In 2018, WSJ profiled Keith’s work advising the National CIO of Estonia on their AI program. In 2019, Keith addressed the United Nations on the emerging role of AI in crime and terrorism. Keith is a Special Advisor to the UN Centre for AI & Robotics at The Hague.
Industry Track 26 Feb 2020, Wednesday
The Rise of City-Scale AI: Why Cities Need Supercomputers
The world is moving downtown. Three million people migrate to cities every week, rapidly making cities the primary address for humanity. As cities swell, they struggle with congestion in many forms, impacting the local economy as well as the health and well-being of residents. Cities are starting to invest in digital connectivity and artificial intelligence to improve the quality and safety of urban life. This requires a new urban architecture, city-scale AI, that will inevitably put a supercomputer at the heart of every city.
With more than two decades of experience as a practitioner serving clients on transformation projects, Patrick has led multiple projects across a broad range of industries and geographies. His key skills lie in technical leadership, architecture methods (e.g., TOGAF, UML, UMF), project management methodology, the Dragon1 method for visual enterprise architecture, advanced design experience with enterprise IT environments, HPC, and security audits.
How to ensure the security of the HPC workloads on the IBM Cloud
IBM compute and infrastructure have been the brains behind many supercomputers for decades. Most of these were dedicated and fit for purpose. What could you do if you had additional compute capacity and time? Nowadays, clients also want to embrace the infinite scale-out capability of the cloud and to create and execute their HPC workloads quickly, efficiently and easily. In this 20-minute session, we will explain the choices and best practices available for choosing the right software and infrastructure environment service on IBM Cloud for trending HPC needs. And since data security and cybersecurity bring forward some key requirements, we will go into the mechanisms provided to fulfil these necessary conditions.
Head of Security, Networking and Enterprise Specialists (ML, Gaming, Media), Google Cloud APAC
Mark is a passionate security advocate who leads Google Cloud’s team of Security, Networking and Enterprise specialists (ML, GSuite, Gaming, Media) in Asia Pacific and Japan. With over 18 years of security domain expertise across the financial, government, telecommunications and education industries, he helps organisations adopt cloud solutions with a focus on managing appropriate risk and shared responsibility.
Hing Yan LEE has over 30 years of ICT working experience in both the public and private sectors. He was global director of the STAR program at the Cloud Security Alliance (CSA) for 6 months in 2017. Prior to that, he was Director of National Cloud Computing Office at Infocomm Development Authority (IDA) for 9+ years, where he was responsible for the national program for, inter alia, developing the cloud ecosystem, promoting cloud adoption by government agencies and private enterprises, and building a trusted environment (which included developing the Multi-Tier Cloud Security standards and Cloud Outage Incident Response guidelines).
He was previously Deputy Director of the National Grid Office at the Agency for Science, Technology & Research (A*STAR), Principal Scientist at the Institute for Infocomm Research (I2R), Director of Knowledge Lab and Deputy Director of Japan-Singapore Artificial Intelligence Centre at the Kent Ridge Digital Labs as well as Deputy Director at Information Technology Institute (the applied R&D arm of the National Computer Board). He oversaw and managed industry collaborations and applied R&D in machine language translation, spoken language dialogue, expert systems, knowledge discovery, data mining, data visualization, and other knowledge-driven efforts.
Hing Yan co-founded two high-tech companies – Language Tapestry and eXage. He was an adjunct associate professor at the National University of Singapore, served on the School of Digital Media & Infocomm Technology Advisory Committee at the Singapore Polytechnic, Engineering Accreditation Board team member (2014), co-chair of the National Infocomm Competency Framework Technical Committee on Cloud Computing as well as member of the Cloud Computing Standards Coordinating Task Force of the Singapore Infocomm Standards Committee. He was also a member of the NatSteel Corporate R&D Advisory Panel, an advisor/member to the Singapore National Archives Board, and the Australia-Singapore Joint ICT Council. Hing Yan is a Fellow and Vice President of the Cloud Chapter in the Singapore Computer Society.
He graduated from the University of Illinois at Urbana-Champaign with PhD and MS degrees in Computer Science. He previously studied at Imperial College London in the UK, where he obtained a BSc (Eng.) with 1st Class Honours in Computing and an MSc in Management Science.
Head of New Services and Cybersecurity, NSCC & Co-Chair, HPC Cloud Security WG, CSA
ONG Guan Sin has over twenty years of diverse experience in the IT industry, spanning roles in IT management of end-user organisations, fast-paced startup businesses and IT vendors. The companies he has worked for include DBS Bank, National University of Singapore, i-Email.net/i-DNS.net, SCS Ltd (now NCS Pte Ltd) and Technology Reserve (Canada). He has experience working with the processes in IETF and SPEC. Currently serving as head of new services and cybersecurity in National Supercomputing Centre, Singapore, he is chartered to promote the adoption of high-performance computing (HPC) by the industry with the appropriate security provisions and standards. He also sees opportunity in using HPC for security applications. He believes that IT security must align with the business goals and the appetite for risks of the organisation concerned.
The Cloud Security Alliance’s High Performance Computing (HPC) Cloud Security Working Group (WG) was recently formed with the intention of developing a holistic security framework for cloud infrastructure architected for HPC needs. Because of the performance required by HPC workloads, ‘close to metal’ operations are often demanded; at the same time, the security of those workloads is also an area of concern. The WG seeks to produce a set of recommendations to secure the HPC cloud while retaining the performance HPC is known for. This session introduces the WG and its scope, and CSA Research in general.
Vice President for Information, Kasetsart University, Bangkok, Thailand
Dr. Putchong Uthayopas is Vice President for Information of Kasetsart University, responsible for IT strategy, policy and regulation, IT operations, digital transformation, and HPC-AI research infrastructure. Dr. Putchong is also a member of the board of administration of the Digital Government Agency (DGA), a strategic committee member of the National Innovation Agency (NIA), and a Data Integration Committee member of the Parliament of Thailand. He has researched and worked on Cloud and HPC technology for more than 20 years, with numerous publications.
Jessie Yu is a Senior Product Lead at Alibaba Cloud, where she leads the product go-to-market strategy for the international market, including product lifecycle management, growth opportunity assessment, channel enablement and product roadmap planning. Prior to joining Alibaba Cloud, Jessie was a principal analyst at Frost & Sullivan, a leading global growth consulting firm with extensive coverage of information and communication technologies. She is a seasoned business strategy and technology consulting leader with 10+ years’ experience in digital consulting advisory, customer experience strategy, market entry and advisory on technology-backed business decisions. Jessie holds a master’s degree in social science from the National University of Singapore. In her spare time, Jessie enjoys training for marathons, going on hikes, and planning the next adventure with her husband and kids.
Haojie is Director of Research for APAC at the Cloud Security Alliance (CSA), where his current portfolio includes delivering initiatives from CSA Working Groups, CSA Corporate Members, standards developing organizations, governments, and other strategic partners. Prior to joining CSA, Haojie was Manager at the Intelligent Computing Lab, Technology Solutions Division of the Infocomm Media Development Authority (IMDA) of Singapore. During this tenure, he led the development of artificial intelligence strategic roadmaps, and collaborative projects with industry and research community partners to co-develop machine learning driven solutions, aimed at tackling productivity and sectoral challenges. When Haojie was with the National Cloud Computing Office under the former Infocomm Development Authority (IDA) of Singapore, he conceptualized & managed data-related initiatives for industry, and led awareness creation efforts for cloud computing in Singapore. Haojie obtained his B.Eng. (Hons) in Computer Engineering from the National University of Singapore, beginning his career with a brief stint as an events photographer, before getting involved in the research & design of networking protocols for multi-hop underwater networks at the Institute for Infocomm Research (I2R), Agency for Science, Technology & Research (A*STAR). Whenever possible, Haojie prefers inline skating to walking.
Senior Member of Technical Staff (AMD HPC Center of Excellence), AMD Boston Design Center, Boxborough MA, USA.
David has worked for AMD since 2002, joining just before the AMD Opteron processor was launched. At AMD he has helped grow the Opteron HPC business behind many large machines high on the Top500 list, in the USA and around the world, and is now repeating this success with the AMD EPYC processors.
Before joining AMD he had almost 20 years of experience in start-up companies. He has worked on the Inmos transputer, parallel language compilers, internet security companies, and an AI machine intelligence startup that developed a model that successfully predicted the company would go out of business – this towards the end of the dot-com bubble.
He has also worked in the AMD Research group and is co-inventor of the 2019 AMD patent “Hybrid analog-digital floating point number representation and arithmetic”. He has an MA from Cambridge University, where he studied Natural Sciences and Computer Science.
Explanation of the nickname “Boris”: a nickname he picked up in the 1970s when rock climbing. The name comes from the Who song “Boris the Spider” (The Who live: https://www.youtube.com/watch?v=Vgx7abJH6uI).
Industry Track 25 Feb 2020, Tuesday
AMD EPYC progress in past year
A review of the AMD EPYC 7002 series processors and the power and performance features that make them attractive for HPC. The presentation will also include some of the public AMD OEM partner wins of the past year and the reasons why customers selected machines based on AMD EPYC processors. Typically, customer procurement metrics look at total cost of ownership, maximizing work done for the least overall cost. Relating to the conference theme, we will touch on a novel holistic approach to using green power and making use of “waste” heat at a partner site.
Senior Principal Research Scientist at Centre for Climate Research Singapore
Dr Xiang-Yu (Hans) Huang leads the Weather Branch of CCRS (Centre for Climate Research Singapore). He has extensive experience in data assimilation research, observation system design and assessment, and NWP (Numerical Weather Prediction) system development. He was the lead scientist of the HIRLAM (High Resolution Limited Area Modelling) data assimilation system in Europe and the manager of the WRF (Weather Research and Forecasting) data assimilation system in the U.S. He was the MSS (Meteorological Service Singapore) lead of the multi-year project, in collaboration with the UK Met Office, which developed SINGV – the MSS operational convective-scale numerical weather prediction modelling and data assimilation system.
Urban & Environment 25 Feb 2020, Tuesday
Computing weather and climate
Challenges in convective-scale weather and climate modelling, especially over the deep tropics, have long been known to the research community. Efforts have been made internationally to push forward research in this direction. To join this effort, the Meteorological Service Singapore (MSS) and the UK Met Office started a collaborative project in 2013 to develop a convective-scale weather and climate modelling system, SINGV. SINGV is now the key modelling system for convective-scale numerical weather prediction at MSS. It has also been used as an urban model to study the urban impact on weather and climate. Work has started to further develop SINGV as a regional climate model to be used in the next Singapore national climate change (projection) assessment, due to be completed in 2022.
Senior Director (Special Projects), Surbana Jurong
Eugene joined the Surbana Jurong Group on 1 January 2017 as Senior Director, Group CEO’s office, to spearhead and oversee the Group’s special projects and strategic initiatives, in particular, the Group’s Key Account Management Office, Digital Management Office, and Sustainability and Resiliency Office. He is also a Director on the Boards of Threesixty Cost Management and Threesixty Contract Advisory, for which his knowledge and expertise help to fortify the business thrust of the Group.
Eugene graduated with a Bachelor of Science (BSc) (1st Class Hons) from the University of Reading in 1999 and a BSc in Technology Management and Computing (1st Class Hons) from the University of Portsmouth in 2002. He also completed a Master of Science (MSc) in Construction Law and Arbitration from the joint NUS and King’s College London programme, an MSc in Sustainable Building Design from the University of Nottingham, and the Senior Management Development Program at Harvard Business School.
Apart from his extensive experience in Quantity Surveying, Eugene injects sustainability value management and green approaches into his work to achieve project efficiency. He is a Green Mark professional and is knowledgeable in the field of sustainable buildings. He is also an Accredited Adjudicator under the Singapore Mediation Centre and a registered Mediator in the Singapore Institute of Surveyors and Valuers.
Eugene works on enhancing productivity in design and construction and researches Computational BIM, xR and other productivity-enhancement topics for the Surbana Jurong Group. His efforts have won several awards for SJ, including RICS Construction Professional of the Year (2019).
Passionate about teaching, Eugene is also an Adjunct Associate Professor at the Department of Building, School of Design and Environment, National University of Singapore.
Urban & Environment 25 Feb 2020, Tuesday
How can Skynet help in Productivity in Construction – a Perspective
For those of you who have watched Terminator, you will understand what Skynet can do: it is an artificial intelligence that became self-aware and could build, defend and grow itself. Taking ideas from the movie, can there be programs that help us make decisions in design and engineering, and that help us with our processes? This presentation discusses using programming and computational BIM to help designers increase productivity, as well as the different approaches to such programming.
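A toy flavour of such a design-helper program: a scripted sweep over floor-plate options scored against simple rules (the rules and numbers below are entirely hypothetical, for illustration only):

    # Hypothetical computational-design sweep: rank floor-plate options by a
    # crude daylight-per-cost score and surface the best candidates to a designer.
    from itertools import product

    def score(width, depth, window_ratio):
        area = width * depth                               # floor area (m^2)
        daylight = window_ratio * (width + depth) / depth  # crude daylight proxy
        cost = area * 1000 + window_ratio * area * 350     # crude cost model
        return daylight / cost

    options = product(range(20, 41, 5),    # width (m)
                      range(10, 21, 5),    # depth (m)
                      (0.3, 0.4, 0.5))     # window-to-wall ratio
    for width, depth, ratio in sorted(options, key=lambda o: -score(*o))[:3]:
        print(f"width={width}m  depth={depth}m  glazing={ratio:.0%}")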
Science Research Specialist, Department of Science and Technology – Advanced Science and Technology Institute (DOST-ASTI)
Mr. Matira currently leads the Technical Operations group of the DOST-ASTI’s Computing and Archiving Research Environment (COARE) Team. He holds a degree in Applied Physics, with a major in Instrumentation, from the University of Santo Tomas. At present, he oversees the operations of the COARE Facility, as well as the provision of advanced technical support to COARE Facility users. He also collaborates with the Research & Development group of the COARE Team, providing insights on research interests that include high-performance computing, cloud computing, data analytics, and automation.
Supercomputing Frontiers Asia (SCFA) 26 Feb 2020, Wednesday
Correcting Job Walltime in a Resource-Constrained Environment
The National Supercomputing Centre (NSCC) Singapore was established in 2015 and manages Singapore’s first national petascale facility, providing high performance computing (HPC) resources to support the science and engineering computing needs of the academic, research and industry communities.