Dr Dario Gil is the Director of IBM Research, one of the world’s largest and most influential
corporate research labs. He is the 12th Director in its 74-year history.
IBM Research is a global organization with over 3,000 researchers at 19 locations on six
continents advancing the future of computing. Dr Gil leads innovation efforts at IBM, directing
research strategies in Quantum, AI, Hybrid Cloud, Security, Industry Solutions, and
Semiconductors and Systems.
Prior to his current appointment, Dr Gil served as Chief Operating Officer of IBM Research and
the Vice President of AI and Quantum Computing, areas in which he continues to have broad
responsibilities across IBM. Under his leadership, IBM was the first company in the world to
build programmable quantum computers and make them universally available through the
cloud. An advocate of collaborative research models, he co-chairs the MIT-IBM Watson AI Lab,
a pioneering industrial-academic laboratory with a portfolio of more than 50 projects focused
on advancing fundamental AI research to the broad benefit of industry and society.
A passionate advocate of scientific discovery and education, Dr Gil is a member of the
President’s Council of Advisors on Science and Technology (PCAST) and is a Trustee of the New
York Hall of Science, which provides schools, families and underserved communities in the
New York City area with exposure to science, technology, engineering and math (STEM).
Dr Gil received his Ph.D. in Electrical Engineering and Computer Science from MIT.
Ushering in a New Decade of Computing for Smarter Cities
Being a “smarter city” is a journey, not an overnight transformation. Governments and urban councils must prepare for changes that will be revolutionary rather than evolutionary as they put in place next-generation systems that work in entirely new ways. This level of disruption, led by AI, high performance computing and even quantum, requires a surrounding compute architecture that can make the entire setup work efficiently and sustainably. Hear from the Director of IBM Research how the future of technology is nearer than we think and is already being deployed in cities across the world, making them better places to live.
Senior Computer Scientist, Director of the Urban Center for Computation and Data, USA
Charles Catlett is a Senior Computer Scientist at the U.S. Department of Energy’s Argonne National Laboratory and a Senior Fellow at the University of Chicago’s Mansueto Institute for Urban Innovation. His current research focuses on urban data analytics, urban modeling, and the design and use of sensing and “edge” computing and software technologies embedded in urban infrastructure. He is the principal investigator of the NSF-funded “Array of Things” (AoT), an experimental urban infrastructure to measure the city’s environment with sensors and embedded (“edge”), remotely programmable artificial intelligence hardware. Operating at over 130 locations in Chicago, AoT is an experimental instrument for measuring the urban environment, air quality, street activity, and other factors. He is a co-principal investigator of a new NSF-funded Mid-Scale Research Infrastructure project, “SAGE,” that extends the underlying cyberinfrastructure used with AoT, creating “software-defined sensors” for not only urban but also environmental and emergency management deployments.
Catlett has served as Argonne’s Chief Information Officer and before joining UChicago and Argonne in 2000, he was Chief Technology Officer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. From NCSA’s founding in 1985 he participated in the development of NSFNET, one of several early national networks that evolved into what we now experience as the Internet. During the exponential growth of the web following the release of NCSA’s Mosaic web browser, his team developed and supported NCSA’s scalable web server infrastructure. He is the founding director of the Urban Center for Computation and Data at the University of Chicago and is a Computer Engineering graduate of the University of Illinois at Urbana-Champaign.
Understanding Cities through HPC Supporting Embedded AI, Modeling, and Predictive Analytics
Urbanization is one of the great challenges and opportunities of this century, inextricably tied to global challenges ranging from climate change to sustainable use of energy and natural resources, and from personal health and safety to accelerating innovation and education. There is a growing science community—spanning nearly every discipline—pursuing research related to these challenges. For many urban questions, there is a need for measurements with greater spatial and temporal resolution than is currently available for understanding air quality, microclimate, vibration, noise, and other factors. Concurrently, many factors—from the flow of people through a public space to the impact of at-grade rail crossings on emergency response—require more sophisticated “measurements” involving embedded, or “edge” computing within the urban infrastructure. Ultimately, these and other data sources are also critical to effectively modeling individual urban processes and systems as well as their interactions—requiring high-performance computation. With exascale systems, such coupled models can provide exploratory capabilities through both high-fidelity models as well as ensembles, such as identifying buildings that are particularly vulnerable to extreme heat waves, or transportation investments and policies that are most likely to address congestion and associated heat and emissions challenges. Catlett will discuss initiatives and projects that provide a glimpse into how high-performance cyberinfrastructure will be increasingly central to capturing the opportunities, and addressing the challenges, of urbanization and climate change.
Director of the Australian National Computational Infrastructure (NCI)
Sean Smith is Director of the Australian National Computational Infrastructure (NCI) and conjointly Professor of computational nanomaterials science and technology at the Australian National University. He has extensive theoretical and computational research experience in chemistry, nanomaterials and nano-bio science and technology. He returned to Australia in 2014, joining UNSW Sydney to found and direct the Integrated Materials Design Centre, driving an integrated program of materials design, discovery and characterization. Prior to this, he directed the US Department of Energy funded Center for Nanophase Materials Sciences (CNMS) at Oak Ridge National Laboratory, one of five major DOE nanoscience research and user facilities in the US, through its 2011-2013 triennial phase. Earlier in his career, he joined The University of Queensland as junior faculty in 1993 after post-doctoral research at UC Berkeley (1991-1993) and Universität Göttingen (Humboldt Fellow, 1989-1991); became Professor and Director of the Centre for Computational Molecular Science (2002-2011); and built up the computational nanobio science and technology laboratory at the Australian Institute for Bioengineering and Nanotechnology (AIBN) at UQ (2006-2011). He worked with colleagues in the ARC Centre of Excellence for Functional Nanomaterials (2002-2011) as Program Leader (Computational Nanoscience) and Deputy Director (Internationalisation).
Professor, Applied Mathematics, Stony Brook University, USA
Yuefan Deng earned his BA (1983) in Physics from Nankai University and his Ph.D. (1989) in Theoretical Physics from Columbia University. He has been a professor of applied mathematics at Stony Brook University in New York since 1998, where he is also the associate director of the Institute of Engineering-Driven Medicine. Prof. Deng’s research covers parallel computing, molecular dynamics, Monte Carlo methods, and biomedical engineering. His latest focus is on the multiscale modeling of platelet activation and aggregation on supercomputers (funded by the US NIH), parallel optimization algorithms, and supercomputer network topologies. He publishes widely in physics, computational mathematics, and biomedical engineering, and has received 13 patents.
Fast and Accurate Multiscale Modeling of Platelets Aided by Machine Learning
Multiscale modeling in biomedical engineering is gaining momentum because of progress in supercomputing, applied mathematics, and quantitative biomedical engineering. For example, scientists across disciplines have been advancing, slowly but steadily, the simulation of blood, including its flow and the physiological properties of components such as red blood cells, white blood cells, and platelets. Platelet activation and aggregation stimulate blood clotting, which results in heart attacks and strokes that cause nearly 20 million deaths each year. To reduce such deaths, we must discover new drugs; to discover new drugs, we must understand the mechanism of platelet activation and aggregation. Modeling platelet dynamics involves setting the basic space and time discretization across ranges spanning 5-6 orders of magnitude, arising from the relevant fundamental interactions at the atomic, molecular, cell, and fluid scales. To achieve the desired accuracy at minimal computational cost, we must select the correct physiological parameters in the force fields, as well as the spatial and temporal discretization, by machine learning. We demonstrate results that speed up a multiscale platelet aggregation simulation by orders of magnitude, while maintaining the desired accuracy, compared with a traditional algorithm that uses the smallest temporal and spatial scales throughout in order to capture the finest details of the dynamics. We present our analyses of the accuracy and efficiency of the representative models, and we will also outline general methodologies for machine-learning-guided multiscale modeling of cells at atomic resolution.
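The payoff of adaptively chosen discretization can be illustrated with a toy integrator (a minimal sketch, not the authors' method): a fixed fine timestep everywhere, versus a hand-written policy standing in for a learned model that enlarges the step when the state evolves slowly.

```python
import math

def simulate(dt_policy, t_end=10.0):
    """Integrate x'' = -x (a stand-in for fine-scale dynamics) with
    velocity Verlet, choosing each timestep via dt_policy(x, v)."""
    x, v, t, steps = 1.0, 0.0, 0.0, 0
    while t < t_end:
        dt = min(dt_policy(x, v), t_end - t)
        a = -x
        x += v * dt + 0.5 * a * dt * dt
        v += 0.5 * (a + (-x)) * dt   # average of old and new acceleration
        t += dt
        steps += 1
    return x, steps

# Baseline: fixed fine timestep everywhere -- accurate but expensive.
x_fine, n_fine = simulate(lambda x, v: 1e-3)

# Stand-in for a learned policy: larger steps when the state changes
# slowly, never coarser than 10x the fine step.
x_adapt, n_adapt = simulate(lambda x, v: max(1e-3, 1e-2 / (1.0 + abs(v))))

exact = math.cos(10.0)  # analytic solution of x'' = -x at t_end = 10
```

Both runs land close to the analytic answer, but the adaptive run takes far fewer steps; in the real platelet models the "policy" is learned and the per-step cost is vastly higher, which is where the orders-of-magnitude savings come from.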
Gilad Shainer is an HPC evangelist who focuses on high-performance computing, high-speed interconnects, leading-edge technologies and performance characterization. He serves as a board member of the OpenPOWER, CCIX, OpenCAPI and UCF organizations, is a member of the IBTA, and contributed to the PCI-SIG PCI-X and PCIe specifications. Gilad is Senior Vice President of Marketing at Mellanox as well as President of the UCF Consortium. He holds multiple patents in the field of high-speed networking and is a recipient of the 2015 R&D 100 award for his contribution to the CORE-Direct collective offload technology. Gilad holds M.Sc. and B.Sc. degrees in Electrical Engineering from the Technion - Israel Institute of Technology.
In-Network Computing – The Next Generation of Supercomputing
The latest revolution in HPC and AI is In-Network Computing, the result of a collaborative industry and academia effort to reach Exascale performance by taking a holistic, system-level approach to fundamental performance improvements. The latest generations of smart interconnects offload from the CPU both network functions and selected data algorithms, allowing users to run algorithms on data while it is being transferred within the system interconnect, rather than waiting for the data to reach the CPU. In-Network Computing transforms the data center interconnect into a “distributed CPU” and “distributed memory,” overcoming performance bottlenecks and enabling faster and more scalable data analysis.
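The core idea, aggregating data inside the switch hierarchy instead of shipping every node's raw data to one host, can be sketched in a few lines (an illustrative toy, not Mellanox's implementation; the radix-4 switch tree and vector sizes are invented for the example):

```python
def host_reduce(node_data):
    """Baseline: every node ships its full vector to the host CPU,
    which then sums element-wise."""
    elems_at_root = sum(len(v) for v in node_data)
    return [sum(col) for col in zip(*node_data)], elems_at_root

def in_network_reduce(node_data, radix=4):
    """In-network sketch: each switch in a radix-4 tree sums its
    children's vectors, so only one reduced vector per subtree moves up."""
    level, fan_in = node_data, 0
    while len(level) > 1:
        groups = [level[i:i + radix] for i in range(0, len(level), radix)]
        fan_in = sum(len(v) for v in groups[-1])  # traffic into the top switch
        level = [[sum(col) for col in zip(*g)] for g in groups]
    return level[0], fan_in

data = [[n] * 8 for n in range(16)]  # 16 nodes, 8-element vectors each
cpu_sum, cpu_elems = host_reduce(data)
net_sum, net_elems = in_network_reduce(data)
# Same result, but the top of the tree sees 32 elements instead of 128.
```

The reduced result is identical either way; what changes is where the arithmetic happens and how much data converges on any single point, which is exactly the bottleneck In-Network Computing removes.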
Director, Strategy Planning, Cloud and AI, Futurewei Technologies, USA
Francis brings 20+ years of IT industry experience specializing in server systems design, HPC and cloud-scale computing architecture. Before joining Futurewei Technologies as Director of Strategy Planning in 2019, Francis served at Huawei Enterprise USA as Director of Product Management and Hardware Architect. Francis is responsible for driving the future direction of cloud and AI compute infrastructure.
Prior to joining Futurewei, Francis gained 20+ years of experience with industry-leading IT solution providers such as Hewlett-Packard, Sun Microsystems and Oracle.
Empowering Smart Cities with an Intelligent Computing Infrastructure
Smart cities drive the need for a massive computing infrastructure, from the cloud to the edge. A new generation of broadband fibre and 5G networks connects and aggregates tremendous amounts of data, and an AI-native, distributed, cloud-scale compute infrastructure is becoming essential to process and analyse that data and make decisions on it. High performance computing plays a critical role in transforming data centres to meet the needs of such a demanding, intelligent future; HPC empowers and enables smart cities to function. This talk examines the key success factors and practical considerations in designing such an intelligent computing infrastructure, the backbone of smart cities.
Vice President & General Manager, HPC & AI Solution Segment, Hewlett Packard Enterprise, Singapore
Bill Mannel is Vice President and General Manager of High-Performance Computing (HPC) and Artificial Intelligence (AI), for Hybrid IT, Hewlett Packard Enterprise.
Bill joined HPE in 2014 and is a seasoned veteran of the server and high performance computing industry. During Bill’s first three years at HPE, the HPC business grew significantly faster than the overall market. HPE acquired Silicon Graphics International Corp. (SGI), a pure-play HPC company, closing the integration in November 2017. At the Supercomputing 2015 show, Bill was named a “Person to Watch in 2016” by HPCwire.
Bill joined HPE from SGI, where he was the VP and GM for Compute and Storage products. Prior to SGI, Bill worked at NASA and with the U.S. Air Force on aircraft programs such as the X-29, F-16, B-1B, and AFTI/F-111.
Bill holds a Bachelor’s degree in Mechanical Engineering from Duke University and an MBA from California State University.
Directions for the Exascale Era
An ocean of data is being generated by billions of connected devices at the Edge, as well as by larger and larger HPC systems driving simulation and modeling. The world clearly needs new approaches to extract maximum value from all of this data. This session lays out the challenges and suggests solutions.
Corporate Vice President and CTO, AMD Datacenter Solutions, USA
Raghu Nambiar is the Corporate Vice President and CTO of Datacenter Ecosystems and Solutions at AMD.
High Performance Computing with AMD
AMD is all about innovation and our mission is to deliver products that help to solve the world’s toughest challenges – in life sciences, earth science, energy, manufacturing, fundamental research, oil and gas, machine intelligence and many more.
Creating an inflection point with trailblazing performance and unprecedented scalability for today’s HPC workloads, AMD EPYC processors and AMD Radeon Instinct mark the next milestone in exascale computing. This session will cover our roadmap, ecosystem partnerships and solutions to address the performance and scalability needs of emerging demands of HPC workloads.
Prof.dr.ir. Cees de Laat chairs the System and Network Engineering (SNE) laboratory at the University of Amsterdam. The SNE lab conducts research on leading-edge computer systems of all scales, ranging from global-scale systems and networks to embedded devices. Across these multiple scales our particular interest is on extra-functional properties of systems, such as performance, programmability, productivity, security, trust, sustainability and, last but not least, the societal impact of emerging systems-related technologies. For current activities and projects see: http://delaat.net/
ICT to Support the Transformation of Science in the Roaring Twenties
The way science is done is profoundly changing. Machine Learning and Artificial Intelligence are now applied across most of the sciences to process data and understand (or not) the observed phenomena. This talk will address recent research directions and results on data and data exchange to feed the AI/ML layer.
Senior Member of the Institute of Electrical and Electronics Engineers, Chief Technologist, Research Networks, CTO Group, Research Labs, Ciena Corporation
Mr. Wilson is responsible for Ciena’s leadership and global interactions with universities and the research community, including national research and education networks. Representing Ciena’s Technology Group, he orchestrates intersections between emerging technologies and Ciena research and development initiatives across the globe. Within Ciena, he coordinates a program that identifies new technology innovation initiatives and helps prove their viability through testbed trials and high performance demonstrations.
Prior to his Ciena roles, he was a senior advisor to the CTO at Nortel and held other advanced technology roles during 13 years with the company, including director of broadband switching and development leader of Optical Ethernet. During his time with Gandalf Corporation, he served as VP of corporate communications, Applications Engineer and director of Marketing. Before that, he served as lead network architect for the University of Toronto’s global bibliographic research service, and at Bell Canada he helped build Canada’s first packet network. He was originally trained in Electrical Engineering at Ryerson University in Toronto, Ontario, and is a graduate of the Executive Management school at Stanford University in Palo Alto, California.
Mr. Wilson is active on a number of business and volunteer Boards. He is chairman of the Algonquin College Foundation Board of Directors; director, Institute of Electrical and Electronics Engineers, Communications Society NA; Industry representative for Polytechnics Canada and is a member of the Institute of Corporate Directors. In 2018, he was elected to the Board of ENCQOR, a $0.4b Public-Private partnership program to develop Canadian 5G wireless technology leadership. He is a recipient of the Queen Elizabeth II Jubilee medal for his service to Canada.
Geomesh Networks, Connecting Continents and Connecting Schools on a Global Scale
Mr. Wilson will highlight advances in highly scalable optical network transport systems that connect continents and connect research applications overland and under the sea. The focus will be on emerging photonic and software technologies that are speeding deployment of scalable diverse long reach systems.
Leader of Network Development Team of KREONET Center, KISTI
He is a Principal Researcher at the Korea Institute of Science and Technology Information (KISTI), the national supercomputing and advanced research network center, where he works in the Advanced KREONET Center. His research interests include Science DMZ and PRP, network QoS and network engineering, software-defined networking and the future Internet, cloud computing and network virtualization, and remote collaboration. He also chairs the Asia Pacific Research Platform (APRP) working group in APAN.
Networks & Communications Manager, King Abdullah University of Science and Technology (KAUST)
Kevin Sale is a network and security professional with 20 years of experience in the telecommunications, education, finance and energy sectors. He currently serves as the Networks & Communications Manager at King Abdullah University of Science and Technology (KAUST), where he is responsible for the strategic development of the network to ensure KAUST remains a global tier-1 research university. Prior to this he led the Information Security Governance, Risk, Compliance and Awareness practice, also at KAUST.
As the APRP and the GRP continue to expand their reach, the West Asia region including Saudi Arabia continues to be under-served. Too often the great science that is being conducted in the region is stifled by a lack of connectivity options and the difficulty of accessing data. As the model of global collaborative research continues to grow there is a widening gap between those who enjoy abundant connectivity and mature platforms and those who do not. This session will outline what the King Abdullah University of Science and Technology (KAUST) is doing in the region to bridge this gap.
Cloud Team Manager, National Computational Infrastructure Canberra Australia
Andrew has many decades of hands-on technical, diplomatic and logistics experience covering a
wide range of standard and bespoke technologies, languages and applications within Industry,
Government, Academia and Research nationally and internationally.
He chairs the judging panel of the SuperComputing Asia Data Mover Challenge and Co-Chairs the
Cloud Security Alliance: HPC Cloud Security, APAN Program Committee, APAN E-Culture and
APAN Asia Pacific Research Platform working groups.
His current role at the National Computational Infrastructure (NCI) involves working on High Performance
Networks, Computing and Cloud systems. He manages the NCI Cloud Team supporting both the NCI
private and the National Nectar Research Clouds and National Data collections.
Howard Weiss is an expert in creating solutions for high speed storage utilizing HDDs and SSDs/NVMe, job scheduling, interconnect technologies and container-based virtualization. Howard has served as Vice President of APAC for global IT companies such as DataDirect Networks (storage), Voltaire (interconnect, now Mellanox) and BakBone Software (data protection, now Quest/Dell). Howard was a co-founder of Cofio Software, which was acquired by Hitachi Data Systems (now Hitachi Vantara). Five years ago Howard incorporated Pacific Teck with a mission to provide cutting edge products to the APAC supercomputer, machine learning and high end enterprise market.
Howard has been a key influencer in the design and support of many of the Top500-class supercomputers around APAC, including AIST’s ABCI, currently the 7th most powerful supercomputer in the world. He helped improve the scheduling capabilities of the Tokyo Institute of Technology’s Tsubame 3 system, and he works on some of the largest NVMe solid state storage projects in APAC, including Australia’s CSIRO 2PB all-NVMe high speed storage project.
Howard is a graduate of the University of Michigan. Fluent in Japanese, Howard has been based in Tokyo for close to 30 years.
Advanced High Performance Storage, Virtualization/Containers and Job Management for Powering Smart Cities for the AI and Machine Learning Era
For HPC to power intelligent cities, a key question is how to fully utilize the ever growing processing power of compute resources driven by the increase in the number and power of GPUs, how to remove storage bottlenecks with parallel file systems and NVMe drives, and how to run multiple jobs virtualized in containers on a node in a parallel environment. Howard Weiss will speak on experiences in supporting some of the largest supercomputers in Asia Pacific: implementing some of the fastest storage systems, managing complex jobs, and running jobs in a container system designed for parallel workloads. The experience leveraged from these projects can improve the efficiency of computing resources for environments large and small, and will help HPC power intelligent cities.
Dr. Bill Nitzberg is the CTO of PBS Works at Altair and “acting” community manager for the PBS Pro Open Source Project (www.pbspro.org). With over 25 years in the computer industry, spanning commercial software development to high-performance computing research, Dr. Nitzberg is an internationally recognized expert in parallel and distributed computing. Dr. Nitzberg served on the board of the Open Grid Forum, co-architected NASA’s Information Power Grid, edited the MPI-2 I/O standard, and has published numerous papers on distributed shared memory, parallel I/O, PC clustering, job scheduling, and cloud computing. When not focused on HPC, Bill tries to improve his running economy for his long-distance running adventures.
Jeff Ohshima is a technology executive of KIOXIA, formerly Toshiba Memory Corporation, which started operation under its new corporate identity as of October 1st 2019, where he focuses on SSD development and application engineering.
He was previously VP Memory Technology Executive at Toshiba America Electronic Components, currently Kioxia America, Inc., where he focused on flash memory with an emphasis on SSDs. He was also Senior Manager R&D in the Advanced NAND Flash Memory Design Department, responsible for 70 nm, 56 nm, 43 nm, and 32 nm designs. He has been engaged in memory technology for over 30 years, including 20 years on DRAM where he acted as a design lead for application specific memories and technical marketing.
He has served as a Visiting Research Scientist at Stanford University.
Meeting HPC Application Needs With Advanced Storage Technology
Demand for flash storage with ever higher performance, lower latency, and higher density continues to grow, and the form factor best suited to each system is increasingly being selected. With these evolving technologies, the cost performance of computing systems is improving significantly.
A new flash architecture able to serve next-generation applications is essential to accommodate the wide range of storage needs of smartphones, mobile computing, and data centers. Kioxia offers leading edge SSD technology with a new storage architecture best suited to on-premise and cloud data centers, and especially to HPC/supercomputing configurations, achieving high performance and excellent TCO.
Chin Guok joined ESnet as a network engineer in the Fall of 1997 after graduating from the University of Arizona with an M.S. in Computer Science. He was part of the core team that architected and deployed the ESnet4 backbone network in 2007 as well as the current ESnet5 network. He was the Principal Investigator of the ESnet On-demand Secure Circuits and Advanced Reservation System (OSCARS) project which received the R&D100 award in 2013, and the Department of Energy Secretary’s Honor Award in 2014. He currently serves as Head of the Planning and Architecture team, and lead architect for the next-generation ESnet6 network due to come online in 2021.
As data sets from DOE user facilities grow in both size and complexity, there is an urgent need for new capabilities to transfer, analyze, store and curate the data to facilitate scientific discovery. DOE supercomputing facilities have begun to expand services and provide new capabilities in support of experiment workflows via powerful computing, storage, and networking systems. The Superfacility concept introduces a framework for integrating experimental and observational instruments with computational and data facilities. The need to support large scale distributed science workflows is the main driver of this work, guiding technical innovations in data management, scheduling, networking, and automation. The talk will cover the new ways experimental scientists are accessing HPC facilities and the implications for future system design.
Chief Marketing Officer, Huawei Cloud Asia Pacific Region
Teck Guan has over 29 years of experience in the ICT industries, with primary focus on Cloud Computing, Smart Nation/City Solutions, 5G, Data Centre and IT Outsourcing in Asia Pacific. Recent projects include support for Singapore Smart Nation 2025 initiatives, working with IMDA on Strategic Partners Program (SPP), Talent Development in the areas of 5G, AI and Cloud, 5G Development in the Region, and the setting up of Huawei AI Lab in Singapore. He is currently focusing on developing the Huawei Cloud branding and supporting Cloud business growth in Asia Pacific.
The “New Confluence” Drives the Fully-connected, Intelligent City
5G, AI, Cloud and IoT will be the “new confluence” of technologies that drives future innovations. A smart city is all about using these ICT technologies to build common city services that improve the overall efficiency, sustainability and safety of the city. In this session, Huawei will share this new confluence of technologies and the integrated ICT solutions and ecosystem development that bring together smart city platforms and solutions to help countries around the world build smart and safe cities.
Managing Consultant, Huawei South Pacific, Huawei International Pte Ltd
Paul is a seasoned mobile telecommunications professional with 20+ years of international work experience in major telcos, consultancies, network and device vendors, covering business and network consulting, solution marketing, interoperability testing and validation, R&D, and wireless network planning and optimisation. In Huawei, Paul has provided consulting across South Pacific regional countries including Singapore, to enable telco client business success. Furthermore, his solution marketing engagements include executive presentations, strategy summits, 5G industry use cases, IoT ecosystem and Huawei AI Lab development.
5G Gear Up
Everything is going mobile in the evolution of communications. 5G is built for richer experiences and the digitalisation of industries. It will connect everything and benefit all walks of life by combining big data, cloud computing, artificial intelligence, and many other innovative technologies. 5G marks the start of a new golden era that will bring disruptive change across the industry: new applications and new business models will generate new revenue streams as market leaders move quickly to introduce innovative 5G products and services. Globally, 5G is gaining strong momentum in commercial adoption, and its ecosystem is developing much faster than expected.
Ashrut Ambastha is a Sr. Staff Architect at Mellanox, responsible for defining network fabrics for large scale InfiniBand clusters and high-performance datacenter fabrics. He is also a member of the application engineering team that works on product designs with Mellanox silicon devices. Prior to Mellanox, he worked for Tata Computational Research Labs in India, where he was involved in architecting the InfiniBand backbone for Tata’s HPC system “Eka”, ranked #4 on the Top500 list at SC07. Ashrut’s professional interests include network topologies, routing algorithms and PHY signal integrity analysis and simulation. He holds an MTech in Electrical Engineering from the Indian Institute of Technology Bombay.
InfiniBand In-Network Computing Technology and Roadmap
The latest generations of smart interconnects offload from the CPU both network functions and selected data algorithms, allowing users to run algorithms on data while it is being transferred within the system interconnect, rather than waiting for the data to reach the CPU. This technology is referred to as In-Network Computing. HDR 200G InfiniBand In-Network Computing technology empowers the world’s leading supercomputers and is paving the road to Exascale computing. The session will discuss InfiniBand In-Network Computing technology, performance results, and future plans.
Dr. Richard Graham is Senior Director, HPC Technology at Mellanox Technologies, Inc.
His primary focus is on HPC network software and hardware capabilities for current and future HPC technologies. Prior to moving to Mellanox, Rich spent thirteen years at Los Alamos National Laboratory and Oak Ridge National Laboratory in computer science technical and administrative roles, with a technical focus on communication libraries and application analysis tools. He is a co-founder of the Open MPI collaboration and was chairman of the MPI 3.0 standardization effort.
In-Network Computing and IPUs Technology for Compute Intensive Applications
Ever increasing demands for higher computational performance drive the creation of new datacenter accelerators and processing units. Previously, CPUs and GPUs were the main sources of compute power; the exponential increase in data volume and problem complexity drove the creation of a new processing unit, the I/O processing unit or IPU. IPUs are interconnect elements that include specific and generic In-Network Computing engines that can analyse application data as it is being transferred within the data center, or at the edge. The combination of CPUs, GPUs, and IPUs creates the next generation of data center and edge computing architectures.
Harshit Sharma leads Lux Research’s research services for the digital transformation of heavy industries such as oil and gas, mining, and marine sectors and is based in Lux Research’s Singapore office. His research focuses on technological developments and market trends impacting the energy sector, ranging from digital oilfield to crude refining and petrochemicals. Harshit has deep expertise in key emerging technologies, such as digital twins, oilfield autonomy, crude-to-chemicals, and gas detection and capture.
Prior to joining Lux Research, Harshit designed and manufactured completion tools as an R&D Engineer at Halliburton. Specifically, his work led to innovations in liner hangers and swell packers for open hole completions. Harshit received his M.S. in Mechanical Engineering from the National University of Singapore, with a focus on offshore technology catering to the subsea market in Malaysia.
The Fourth Industrial Revolution for Every Industry: A Taxonomy of Industry 4.0
Industry 4.0 encompasses the digitization and interlinking of the entire value chain of a physical, product-based industry. Industry 4.0 use cases directly impact the product lifecycle, and are motivated by either top-line revenue growth or bottom-line savings.
In this presentation, Lux discusses:
An industry-agnostic taxonomy for Industry 4.0 based on the product-centric value chain, spanning from product development to post-sales support.
Within each stage, we lay out key use cases, applications, and technologies that can be implemented today.
Additionally, we outline key considerations around ROI and ease of implementation when evaluating an Industry 4.0 project.
Dr. Seri Lee is currently the Chief Technology Officer of ERS. With over 30 years of experience, he has held positions in several academic institutions and corporate organizations, including NTU as Associate Professor and Intel Corporation as Senior Staff Engineer.
He also served as General Chair of the IEEE SemiTherm International Symposium in 1998 and, in 2004, received the best paper award from the ASME Journal of Heat Transfer Division. Dr. Lee has published more than 60 technical papers and holds 16 US and international patents covering a wide range of thermal-related subjects in electronics.
Leveraging the power of HPC-driven CFD tools and A*STAR IHPC's CFD expertise, ERS was able to quickly bring KoolLogix, its new and innovative passive data-centre cooling solution, to market. This presentation showcases examples of how CFD simulations were conducted to validate the early proof-of-concept designs and to assess the performance scalability of the KoolLogix system.
Research Engineer, Seiko Instruments Inc., Corporate Technology Division, Production Engineering Center, Production Engineering
Mr. Watanabe joined Seiko Instruments Inc. as a research engineer in the Research and Development Division (HQ) after receiving a master’s degree from Chuo University, Japan, in 1998. He spent six years in R&D of ultrasonic micro-motors before transferring to the Production Engineering Center (HQ) in 2004.
He is also responsible for providing support in product development using piezoelectric-mechanical and CFD simulation technologies. Recently, he has been focusing on simulations of inkjet print heads and watches.
Utilization of HPC for New Product Development in Seiko
This session will outline a successful collaborative project between Seiko Instruments Inc. (SII) and A*STAR’s Institute of High Performance Computing (IHPC) to develop a simulation method for inkjet print heads using open-source software. Formation of micro-sized droplets, ejection efficiency, and power consumption are some of the main considerations when designing an inkjet head made of piezoelectric material.
To better understand the flow phenomena involved and thus achieve an optimal inkjet head design, SII and IHPC jointly developed a two-way fluid-structure interaction (FSI) solver using open-source software. Simulation time was shortened by performing the calculations on high-performance computing (HPC) systems, thereby improving the overall design turnaround time. This solver is expected to make a great contribution to new product development in the near future.
Deputy Executive Director, Institute of High Performance Computing, A*STAR
Prof Zhang Yong Wei is the Deputy Executive Director of A*STAR’s Institute of High Performance Computing (IHPC). He is an Adjunct Professor at both the National University of Singapore and the Singapore University of Technology and Design. His research interests focus on using theory, modelling, and computation to study materials design, growth, processing and manufacturing, mechanical-thermal coupling, and mechanical-electronic coupling.
He has published over 450 refereed journal papers and delivered over 80 invited, keynote, and plenary talks and lectures. He serves as an Editorial Board Member for Advanced Theory and Simulations and the International Journal of Applied Mechanics, and was listed among the Global Highly Cited Researchers in 2018 and 2019.
A Multiscale Integrated Digital Twin Platform for Powder-bed Fusion Additive Manufacturing
Supercomputing has played a pivotal role in many important fields, such as weather forecasting, oil and gas exploration, DNA sequencing, and rocket and aircraft design. In modern additive manufacturing (AM), part property inconsistency and quality control are two major bottlenecks that hinder wide industry adoption. Many factors, such as powder quality, powder size and distribution, laser power, scan speed, scan strategy, and hatch distance, can all affect part quality.
IHPC has developed an integrated digital twin platform for powder-bed fusion selective laser melting (SLM), with the aim of predicting printing outcomes such as porosity, grain and phase microstructures, residual stress distribution and distortion, surface roughness, and mechanical properties for given printing conditions.
HPC Architect, National University of Singapore, Centre for Advanced 2D Materials (CA2DM)
Miguel has a background in computational statistical physics, having obtained his PhD from the University of Porto, Portugal, in 2010.
He joined NUS CA2DM in 2012 to set up its Research Computing Support team, covering both IT and HPC systems and services, and has been the centre’s “HPC guy” ever since.
Since 2017, he has also been one of the maintainers of EasyBuild (https://easybuild.readthedocs.io/), a software build and installation framework that allows one to manage scientific software on HPC systems in an efficient way.
Optimized software environments and workflows across heterogeneous HPC sites and clouds: Building 2DMatpedia, an open computational database of two-dimensional materials
Data Science and AI promise to upend how computational scientific research is done by exploiting patterns hidden within enormous amounts of data. However, aside from a few fields such as image processing, it is not always straightforward to generate a sufficiently large and coherent collection of data that can, for instance, serve as a training set for machine learning algorithms.
In many cases, each single example in a potential training set needs to be obtained from a numerical simulation that by itself requires considerable HPC resources, such that the total computational effort often exceeds what is available at a single site, while coordinating tasks across multiple sites creates its own challenges.
In this talk, we describe how we used modern software management and workflow tools across multiple HPC sites and clouds to build 2DMatpedia, an open computational database of two-dimensional materials.
Dr. Daniel Cheong is the Deputy Programme Director for A*STAR’s Industrial IoT Innovation (I³) programme, where he works closely with industry partners to drive the adoption of industrial IoT and to solve real industry pain points. He was previously a senior scientist at the A*STAR’s Institute of High Performance Computing (IHPC). He has also worked in the medical device sector, including a start-up developing surgical implants. He obtained his PhD in Chemical Engineering from Princeton University in 2006 and his MBA from Singapore Management University in 2014.
Leveraging the Industrial Internet-of-Things for Reliable Data Collection to Enable Predictive Maintenance
The ability to predict when a machine or piece of equipment will break down is one of the many benefits that AI promises to deliver to industry. Such a forecast allows maintenance to be scheduled before failure occurs, potentially increasing overall operational efficiency and productivity. However, realizing such predictive maintenance depends on collecting relevant equipment data in a reliable and cost-efficient way to build the predictive model.
This talk will present how A*STAR’s Industrial IoT Innovation programme has been working with companies to address this “first mile of digitalization” – the acquisition and transmission of data from assets and equipment – including some of the typical challenges of such implementations and the ongoing work to overcome them.
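Once equipment data is collected reliably, even simple statistics can flag the drift that precedes failure. The sketch below is purely illustrative (not A*STAR's method, and the signal is synthetic): a rolling z-score marks vibration readings that deviate sharply from recent history.

```python
# Illustrative predictive-maintenance sketch on made-up sensor data.
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Return indices where a reading deviates strongly from the
    mean of the preceding `window` readings (a rolling z-score)."""
    flags = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Stable vibration signal with a sudden spike at index 10 (synthetic data).
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 5.0]
assert flag_anomalies(signal) == [10]
```

In practice such a flag would trigger a maintenance work order before the fault escalates; the hard part, as the talk emphasizes, is the reliable, cost-efficient acquisition of the readings in the first place.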
Vice President of System Development Division, Next Generation Technical Computing Unit, Fujitsu Limited
Mr. Toshiyuki Shimizu is Vice President of System Development Division, Next Generation
Technical Computing Unit, at Fujitsu Limited. He has been deeply and continuously involved in
the development of scalar parallel supercomputers, large SMP enterprise servers, and x86
cluster systems. His primary research interest is in interconnect architecture, most recently
culminating in the development of the Tofu interconnect for the K computer and PRIMEHPC
series. Mr. Shimizu leads the development of Fujitsu’s high-end supercomputer PRIMEHPC series and
the Post-K supercomputer. Mr. Shimizu received his Master’s degree in Computer Science from Tokyo Institute of Technology in 1988.
Head of Security Engineering, APAC, Check Point Software Technologies
Gary has 22 years’ experience in the IT industry, 20 of which have been dedicated to the security field. His experience ranges from performing audits, digital forensics for law enforcement, and penetration testing to the complex design and implementation of financial industry network security. He regularly speaks at industry forums, online, and on television as a cybersecurity evangelist.
He brings 17 years of experience in cyber security technology consultancy, working with C-level executives to drive business outcomes for organizational initiatives, and spearheads cyber security projects with complex architecture designs that free organizations’ teams to focus on driving business value. Prior to Trend Micro, he worked at several security vendors, including McAfee, Sangfor, Mail & Web Marshal (TrustWave), and M.Tech.
Ariel Levanon is an expert in cyber security and the management of complex intelligence projects. Mr. Levanon is currently Mellanox VP of Cyber Security, responsible for defining Mellanox’s cyber security architecture, roadmap, and solutions for cloud security. His areas of expertise include cyber threat intelligence and cyber security network analysis, and he specializes in cyber security for complex systems and networks, spanning both hardware and software technologies. Mr. Levanon brings 20 years of working experience in cyber and intelligence positions. He served as a Major in the IDF’s intelligence unit for 17 years, where he led a number of cyber and intelligence groups dealing with highly complex technological projects that received special awards for excellence. Mr. Levanon holds a B.Sc. in Electrical and Software Engineering and an M.B.A. from Tel Aviv University, specializing in Technological Management & Entrepreneurship, Cyber-Security, and Marketing Management.
Rob has been architecting and implementing storage and data management solutions for more than
two decades for many large organisations. Rob has extensive experience architecting Petascale high
performance computer systems, high performance file systems, and large-scale persistent storage
environments. Rob is proficient at architecting high performance parallel file systems and
performance driven storage archive solutions that are aligned with customer workflow requirements.
Rob’s experience covers all aspects of the system life cycle from requirements analysis, infrastructure
design, capacity planning, continuity planning, integration planning and benchmarking for new
systems. Rob’s experience with ‘Big Data’ storage solutions has delivered real outcomes to
researchers, enabling them to focus on their science rather than the movement and management of
data. Rob has a complete understanding of the end-to-end data path, from data acquisition to data at
rest; this ensures any potential bottlenecks or data integrity issues are easily identified.
This knowledge is combined with more than a decade of real-world customer experience within the Science, Education and Government sectors, working for CSIRO and the iVEC (Pawsey) Supercomputing Centre, designing and implementing their Petascale Storage initiative.
Prior to joining HPE (SGI), Rob worked for DataDirect Networks (DDN) as the primary systems design engineer for ANZ, where he architected high performance parallel file system solutions for HPC centres around the Asia Pacific region, focusing on Lustre and GPFS. Rob has a Diploma in Computer Systems Engineering from Swinburne University.
Managing and Protecting Exabytes of Data Whilst Maintaining Availability and Performance
HPC, AI, and the Internet of Things are advancing at an unprecedented pace, creating and collecting huge quantities of data. This deluge is exacerbating the challenge of storing data cost-effectively at scale. Come along and listen as we explore the challenges of storing exabytes of data while ensuring it is protected from loss or corruption and available for use when required.
The National Supercomputing Centre (NSCC) Singapore was established in 2015 and manages Singapore’s first national petascale facility, providing high performance computing (HPC) resources to support the science and engineering computing needs of academic, research, and industry communities.