SUPERCOMPUTING ASIA 2023

POSTER PRESENTATION SESSIONS

CALL FOR POSTERS (CLOSED)

Attention!!

SCA2023/HPC Asia 2023 is a fully in-person event; there are no online platforms such as Zoom or Gather.town. All accepted poster authors must attend on-site and present in the poster session.

Important Dates

1-page Extended Abstract Submission Due:
20 January 2023 (AoE)

Notice of Acceptance:
27 January 2023 (AoE)

Poster Draft Due:
14 February 2023 (AoE)

Conference:
27 February – 2 March 2023 (SGT)

Poster presentations will take place in person at the Singapore Expo Convention & Exhibition Centre.

High performance computing is a key technology for solving large problems in science, engineering, and business by harnessing continuously evolving computing power. HPC Asia, an international conference series on HPC technologies in the Asia-Pacific region, has been held several times in various countries across Asia to discuss issues in HPC and to exchange information on research and development results. For the first time ever, HPC Asia 2023 (HPCA23) will be co-located and held in conjunction with the international SupercomputingAsia 2023 (SCA23) conference. The SCA conference is co-organised by supercomputing centres of the region, including those in Australia, Japan, Singapore and Thailand, and anchored by the National Supercomputing Centre (NSCC) Singapore. SCA incorporates a number of important supercomputing and allied events that together aim to promote a vibrant and shared high-performance computing (HPC) ecosystem, for both the public and private sectors, in Asia. Like HPCA23, SCA23 also seeks to nurture an exchange of ideas, case studies, and research results related to all issues of HPC.


HPC Asia 2023 consists of four tracks:

  • APPLICATIONS & ALGORITHMS
  • PROGRAMMING MODELS & SYSTEM SOFTWARE
  • DATA, STORAGE & VISUALISATION
  • ARCHITECTURES & NETWORKS

Topics of interest include, but are not limited to:

APPLICATIONS & ALGORITHMS 

  • High performance applications (high speed, low memory, low power simulations)
  • Computational science
  • Numerical linear algebra and its applications
  • High performance library and software framework for applications
  • Parallel and vectorization algorithms
  • Hybrid/heterogeneous/accelerated algorithms
  • Fault-tolerant algorithms
  • Graph algorithms


PROGRAMMING MODELS & SYSTEM SOFTWARE

  • Programming languages and compilation techniques
  • Tools and libraries for performance and productivity
  • Performance portability
  • System management, resource management and scheduler
  • Optimization for communication and memory
  • Techniques for testing, debugging, reproducibility and determinism
  • Techniques for fault tolerance and energy efficiency


DATA, STORAGE & VISUALISATION

  • Big data processing with emerging hardware
  • Parallel and distributed file systems
  • Storage networks
  • Storage systems
  • Visualization and image processing
  • Reliability and fault tolerance
  • Scalable data management
  • Transaction processing
  • Integration of non-volatile memory
  • I/O performance tuning, benchmarking and evaluation
  • Provenance
  • Experience and application studies on large-scale storage architectures


ARCHITECTURES & NETWORKS

  • Memory architectures
  • Interconnect/Network architectures
  • Acceleration technologies (e.g., GPUs, FPGAs)
  • Power/Energy-aware high-performance computing
  • Dependable high-performance computing
  • Architectures for emerging device technologies
 
Poster Presentation Format (Tentative)
  • The PDF file of the 1-page extended abstract of each accepted submission will be uploaded to the conference website but NOT included in the conference proceedings published by ACM.
  • The PDF file of each poster will also be uploaded to the conference website.
  • Each poster is given a number and a designated space with a display board. The number of each poster will be on its display board in the posters area to help you locate your space.
  • Pushpins are provided for affixing all poster materials to display boards.
  • A student poster presentation award is planned:
    • Eligibility: a poster presenter who is the first author of the poster and a student in a BS, MS, or Ph.D. course.
    • Evaluation: members of the award committee will comprehensively evaluate the 1-page extended abstract, the poster draft, and the discussion during the poster session core time.

Poster Submission

Authors are invited to submit a 1-page extended abstract (including references) of their posters. We welcome any posters on themes you would like to discuss at HPC Asia 2023. (A poster on research that was already reported in other journals and/or conferences is also acceptable, but authors must resolve any copyright issues themselves.) Each abstract must be formatted in the double-column style shown in the following sample and submitted through Linklings. The review process is single-blind.

NOTE: HPC Asia 2023 does not include the poster manuscript in the proceedings, but instead includes:
– a 1-page extended abstract (PDF)
– a poster (PDF)

Templates
Notice
  • The abstract must include at least the e-mail addresses of the presenter and corresponding author(s).
Contact

Please contact the Poster Chairs at [email protected] with any questions or clarifications about the Call for Posters.

For queries on posters, please put ‘<POSTERS>’ at the start of your email’s subject line.

STUDENT POSTER AWARD

1st place

Dimitrios Lialios (Barcelona Supercomputing Center)

Poster [#115]: “Coupling of poro-aniso-hyperelastic and solute transport finite element models in a High-Performance Computing framework, for the study of Intervertebral Disc Degeneration” / Dimitrios Lialios (Barcelona Supercomputing Center), Mariano Vazquez (Barcelona Supercomputing Center), Beatriz Eguzkitza (Barcelona Supercomputing Center), Eva Casoni (Barcelona Supercomputing Center)

2nd place

Xuan Yang (Kogakuin University)

Poster [#120]: “Applying Automatic Tuning to Hyper-parameter Optimization of Machine Learning Programs for Super-Resolution” / Xuan Yang (Kogakuin University), Sorataro Fujika (Kogakuin University), Yuga Yajima (Kogakuin University), Akihiro Fujii (Kogakuin University), Teruo Tanaka (Kogakuin University), Kazutoshi Akita (Toyota Technological Institute), Norimichi Ukita (Toyota Technological Institute), Satoshi Ohshima (Nagoya University)

3rd place

Ryo Sagayama (Kogakuin University)

Poster [#117]: “TSC method using semi-implicit method for spring mass simulation” / Ryo Sagayama (Kogakuin University), Akihiro Fujii (Kogakuin University), Teruo Tanaka (Kogakuin University), Takumi Washio (Tokyo University), Takeshi Iwashita (Hokkaido University)


Poster presentations

  • Presenters marked with * (asterisk) are eligible for the Student Poster Award. The winner will be announced at the “Awarding & Closing” session, 17:10–17:30 on 1 March.

 

[#101] Introduction of Open OnDemand to Supercomputer Fugaku

  • Authors: Masahiro Nakao (RIKEN R-CCS), Shin’ichi Miura (RIKEN R-CCS), Keiji Yamamoto (RIKEN R-CCS)
  • Abstract: Since using HPC clusters requires a lot of prerequisite knowledge, the learning cost for beginners is high. To make HPC resources easier to use, we introduce Open OnDemand to the supercomputer Fugaku, the flagship supercomputer in Japan. Open OnDemand allows users to access HPC resources from a web browser instead of SSH. Furthermore, interactive operations of GUI applications running on the compute nodes of the HPC cluster can be easily performed. Open OnDemand supports various schedulers, but Fujitsu TCS, the job scheduler on Fugaku, is not supported. Therefore, we develop an adapter to use Fujitsu TCS with Open OnDemand.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 

[#102] Quantification of Nonlocality in Quantum Information with Massively Parallel Computing

  • Authors: Hoon Ryu (Korea Institute of Science and Technology Information), Junghee Ryu (Korea Institute of Science and Technology Information)
  • Abstract: Entanglement is a key resource that is essential to make some quantum information applications advantageous over their classical counterparts. The motivation for entanglement quantification is thus obvious, as it can be used to computationally explore the potential practicality of certain quantum circuits or states. The marginal operational quasiprobability (OQ) function is one of the computational methods that can characterize the entanglement strength of quantum states by quantifying their nonlocality. OQ is advantageous over the well-known full-state tomography method as it involves only directly measurable operators and generally requires fewer measurements for verification. Its computing cost, however, is still non-negligible and increases exponentially as the quantum-bit (qubit) size of the target states grows, so the utilization of high performance computing resources must be pursued. In this work, we present the details of how the computational process of OQ can be easily parallelized with the Message Passing Interface (MPI), and show a case study focusing on the operational verification of the entanglement swapping protocol. The parallel efficiency of OQ is also tested for verification of well-known N-qubit GHZ states with up to 2,048 computing nodes on the NURION supercomputer.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 

[#103] Real-time Local Weather Forecasting Using CReSS on GPU Clusters

  • Authors: Naoya Nomura (Mitsubishi Electric Corporation), Masato Gocho (Mitsubishi Electric Corporation), Takuya Uesugi (Mitsubishi Electric Corporation), Kei Akama (Mitsubishi Electric Corporation), Tetsutaro Yamada (Mitsubishi Electric Corporation), Hiroshi Sakamaki (Mitsubishi Electric Corporation)
  • Abstract: In this study, we developed and evaluated a weather forecasting system on a single graphics processing unit (GPU) toward real-time local weather forecasting on a GPU cluster. Local weather forecasting systems are needed to build safety systems for industrial and transportation infrastructure, logistics, and unmanned traffic management systems (UTMs). In such cases, it is important to obtain data recorded by accurate sensors, such as weather radars and water vapor LIDARs, and to use forecasting simulators to predict future rainy/windy conditions. However, weather forecasting with such simulators has a high computational cost. To overcome this problem, we developed a novel cloud-resolving storm simulator (CReSS) on GPU clusters. In this study, we evaluate the execution time of the prototype version of our CReSS on a single-GPU system.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 

[#104] Designing a Cloud and HPC Based Model & Simulation Platform to Investigate the IVD Disease Mechanisms

  • Authors: *Maria Paola Ferri (Barcelona Supercomputing Center), Laia Codó Tarraubella (Barcelona Supercomputing Center), Josep LLui Gelpi Buchaca (Barcelona Supercomputing Center)
  • Abstract: The development of an automated and specialized platform for multi-factorial diseases can represent the best hybrid technology for healthcare data management and the exploitation of computational environments, given the use of Cloud infrastructures in healthcare facilities. Rendering automatic not only the database but also the prediction and simulation models, within a user-friendly integrated system, may facilitate difficult diagnoses and subsequent therapy, especially considering the various forces at play in a multi-omics data analysis of this kind.

    Based on the European Open Science Cloud (EOSC) vision, and within the HORIZON MSCA Disc4All project, the expected platform would be Cloud-based, furnished with data management and access-provider components, hosting data and analysis tools specific to Models and Simulations (M&S) in Intervertebral Disc Degeneration (IVD), and equipped with a front end to guarantee reproducibility, accessibility, and ease of use for experts and non-experts.

    The 2D/3D simulation tools, as well as ML/AI and image analysis tools, would be granted special access to HPC facilities through a specific technology, to ensure the necessary resources for simulations running in the range of hundreds simultaneously.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 

[#105] MEX: CXL-Based Memory EXpander With Hardware Acceleration

  • Authors: Seon Young Kim (Electronics and Telecommunications Research Institute), Hoo Young Ahn (Electronics and Telecommunications Research Institute), Yoo Mi Park (Electronics and Telecommunications Research Institute)
  • Abstract: As data-intensive applications, represented by artificial intelligence, attract great attention, the amount of memory required for computing systems is rapidly increasing, especially in HPC (High-Performance Computing) systems. According to a study that analyzed HPL (High-Performance Linpack) performance in an HPC system, the memory capacity per CPU core needed to obtain the theoretical HPL performance tends to increase in proportion to the number of CPU cores constituting the system. This result suggests that as the number of CPU cores in the system increases, the memory capacity required by the system increases even more steeply.

    Though the required memory capacity is increasing day by day, the memory capacity of a computing node is limited by hardware characteristics such as the number of CPU memory channels. To overcome this limitation, various memory expanders that allow a computing node to utilize additional expanded memory beyond its own local memory are being unveiled by major memory vendors such as Samsung. These memory expanders provide expanded memory that can be accessed in a cache-coherent manner through state-of-the-art interconnects such as CXL. In addition, they provide hardware acceleration through a built-in accelerator.

    In this poster, we introduce our ongoing research to develop our own CXL-based memory expander called MEX (Memory EXpander). MEX not only provides expanded memory through CXL, but also provides hardware acceleration for K-NN (K-Nearest Neighbor), a key operation for similarity search.
  • 1-page extended abstract (PDF file)
 
[#106] Software Development for a Full-stack Quantum Computer
  • Authors: Inho Jeon (Korea Institute of Science and Technology Information), Jieun Choi (Korea Institute of Science and Technology Information), Hoon Ryu (Korea Institute of Science and Technology Information)
  • Abstract: Quantum computing (QC) is attracting huge attention due to its strong potential for significant computational advantages over classical digital computers, e.g., the parallel efficiency that can be naturally driven by the superposition property of quantum bits (qubits). As exemplified by IBM Quantum in the United States, fully programmable circuit-based quantum computers are already available via cloud-based services. The European Union is putting massive effort into developing quantum computers through the European Quantum Initiative program, launched in 2018. Recognizing the promise of QC for revolutionizing high performance computing, the Republic of Korea (ROK) has also joined the race to develop a full-stack gate-based quantum computer with the support of the Ministry of Science and ICT. In this work, we briefly discuss the national flagship QC project of ROK with a particular focus on the software development being carried out by the Korea Institute of Science and Technology Information (KISTI).
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 
[#107] ADIOS2: Performance Benefits over MPIIO and HDF5?
  • Author: *Shrey Bhardwaj (EPCC)
  • Abstract: The I/O bottleneck is a key problem that must be solved to reach the era of exascale computation in HPC. ADIOS2 presents itself as a new I/O library designed to deliver scalable I/O performance. In this work, ADIOS2 and its proprietary formats BP4 and BP5 were benchmarked against established I/O libraries such as HDF5 and MPIIO. Strong and weak scaling tests were used to measure the I/O bandwidths of the different I/O libraries. A C-based benchmarking tool, benchmark_c, was created to compare the libraries using the same amount of data written to disk. ARCHER2, the UK national supercomputing service, was used to conduct these tests. To analyse the benchmarking results, the DARSHAN I/O profiler was used by adding the profiler's file path to the job submission on ARCHER2. Using a custom plotting tool developed in Python, the file-write sizes and frequencies of each I/O library were compared. It was observed that the ADIOS2 formats BP4 and BP5 scaled better than MPIIO and HDF5 in both strong and weak scaling tests. The DARSHAN results showed that the ADIOS2 formats consistently issued smaller file writes, much more frequently, than the other I/O libraries. Lastly, data was written to the burst buffer on ARCHER2 to obtain the speedup of writing to the burst buffer compared to the disk.
  • 1-page extended abstract (PDF file)
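The bandwidth figures compared in such scaling tests are derived from bytes written and elapsed write time; a minimal sketch with hypothetical numbers (not measurements from this work):

```python
def write_bandwidth(bytes_written, seconds):
    """Effective I/O bandwidth in GiB/s."""
    return bytes_written / seconds / 2**30

# Hypothetical: 64 ranks each writing 256 MiB, completing in 1.6 s total.
total_bytes = 64 * 256 * 2**20
print(f"{write_bandwidth(total_bytes, 1.6):.1f} GiB/s")
```

In a strong scaling test `total_bytes` is held fixed as ranks increase; in a weak scaling test the per-rank size is held fixed instead.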
 
[#108] An Evaluation of Reducing Power Consumption in Taiwania 3 Supercomputer
  • Author: Kuan-Chih Wang (National Center for High-performance Computing)
  • Abstract: As a result of the ongoing compound global energy and recession-inflation crises, the rising electricity cost presents an unforeseen challenge for HPC system operators like NCHC.
    Built in late 2020, Taiwania 3 is NCHC’s current in-service HPC system, consisting of 900 CPU compute nodes. The average system utilization is about 75% of the maximum capacity. However, we observe that the system utilization exhibits distinct temporal variability on both diurnal and seasonal scales. Furthermore, even when compute nodes are idle, the CPUs still operate at the all-core turbo frequency, which unnecessarily wastes energy. This finding motivates us to investigate and pursue additional opportunities for reducing energy consumption without disrupting our users.
    We implement the energy saving measures from two aspects: 1. Advanced BIOS Configuration Tuning; 2. Enabling System Sleep. Results show that as much as 65% reduction in idle power consumption can be achieved. Implementation details and additional analysis are presented in the poster.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 
[#109] High Performance Object Storage for HPC, AI and Analytics
  • Author: Madhu Thorat (IBM)
  • Abstract: Today a new world of modern application workloads has emerged as adoption of High Performance Computing (HPC), Artificial Intelligence (AI) and Analytics solutions has accelerated. Applications in these domains are generating tremendous amounts of unstructured data. In fact, a study shows that 80% of data generated worldwide will be unstructured by 2025. This has created urgency around solving two big problems:
    • Storage systems are needed that can store and manage unstructured data.
    • In addition, these storage systems should provide access to huge amounts of data at high speed and be scalable.
    These problems can be addressed by High Performance Object Storage systems, which can store large volumes of unstructured data and meet the demands for high speed, low latency, and scale. Moreover, object storage systems provide low-cost storage for large data capacities and are suitable for HPC, AI, Analytics and Cloud-native applications. Also, in the past few years, adoption and management of object storage has been simplified with the Amazon S3 API, which has become the standard protocol for leveraging object storage.

    This poster presents High Performance Object Storage and its typical features, which can help meet the growing storage demands of HPC, AI and Analytics applications.

  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 
[#110] Visualization of Computational Fluid Dynamics OpenFOAM Free Surface Flow Simulations
  • Author: *Yun Hang Cho (University of Sheffield, A*STAR Singapore)
  • Abstract: With climate change increasing flood risk, it is critical to improve our understanding of how waterway designs interact with flow turbulence, pollutant transport and overall flow-carrying capacity. One common type of waterway used to transport ground water is the open channel. Computational Fluid Dynamics (CFD) software such as OpenFOAM is a good tool for simulating open channel flows, but it does not include a graphical user interface or any visualization tools. Whilst this gives the user more freedom in visualization, it also makes it more difficult for beginners to learn how to display and interact with their work. Visualization is a vital part of the CFD workflow, as a single image of a setup or result can reveal a critical error that may not be evident in a thousand lines of code. The most used visualization tool with OpenFOAM is ParaView. For supercomputer users, it also presents additional challenges that are not normally covered in the online manuals. This poster presents some of the challenges OpenFOAM users face when visualizing their work, together with their solutions. The topics have been selected based on the likelihood of the problem arising as well as the difficulty of finding a suitable answer through online forums. For HPC Asia, the topics also draw on the author’s experience deploying OpenFOAM simulations on supercomputing facilities such as the National Supercomputing Centre’s Aspire-1 supercomputer.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 
[#111] An FPGA-based Sound Field Renderer for High-Precision Sound Field Auralization
  • Author: Yiyu Tan (Iwate University), Xin Lu (Iwate University), Guanghui Liu (Iwate University), Peng Chen (National Institute of Advanced Industrial Science and Technology), Truong Thao Nguyen (National Institute of Advanced Industrial Science and Technology), Yusuke Tanimura (National Institute of Advanced Industrial Science and Technology)
  • Abstract: In sound field auralization, a highly accurate room impulse response is critical to achieving precise auralization results. To date, geometrical methods have been widely applied in real-time sound field auralization because of their low computational load, at the price of accuracy. In contrast, wave-based methods provide high accuracy but require much higher computational capability, since spatial grids are oversampled to suppress dispersion errors. In this work, an FPGA-based sound field renderer was investigated for high-precision real-time sound field auralization, in which a hardware-oriented FDTD algorithm was applied to obtain the room impulse response accurately, and dedicated hardware was developed to speed up computation. Compared with a software simulation performed on a desktop machine with 512 GB of DRAM and an Intel Xeon Gold 6212U 24-core processor running at 2.4 GHz, the FPGA-based sound field renderer achieved an 11-fold gain in computational performance.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
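The FDTD scheme at the heart of such a renderer advances a pressure grid with a leapfrog stencil; a minimal 1-D sketch of the generic method (not the hardware-oriented variant of the poster), with an illustrative grid size and Courant number:

```python
def fdtd_1d(n=64, steps=100, c=0.5):
    """Leapfrog FDTD for the 1-D wave equation; c is the Courant number (stable for c <= 1)."""
    prev = [0.0] * n
    curr = [0.0] * n
    curr[n // 2] = 1.0  # impulse source at the centre of the grid
    for _ in range(steps):
        nxt = [0.0] * n  # fixed (Dirichlet) boundaries stay at zero
        for i in range(1, n - 1):
            nxt[i] = (2 * curr[i] - prev[i]
                      + c**2 * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
        prev, curr = curr, nxt
    return curr

field = fdtd_1d()
print(max(abs(p) for p in field))  # amplitude stays bounded for c <= 1
```

In 3-D room acoustics the same update runs over a volumetric grid, which is what makes dedicated parallel hardware attractive.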
 
[#112] Passerelle Forêt-Climat: a Data Portal for the Largest Academic Forest in the World
  • Author: Julie Faure-Lacroix (Université Laval)
  • Abstract: Data stewardship tactically coordinates and implements data management to ensure the accessibility of high-quality data in an organization. It is now an area of focus at Laval University, which has been overseeing forestry and research operations for over 60 years at Montmorency Forest, the largest experimental forest in the world, located 75 km north of Québec City, Canada. We created an online portal (Passerelle Forêt-Climat) to address three targeted needs: 1. Fast and reliable accessibility and sharing of data with collaborators within the institution, the country, and internationally; 2. Spatial, temporal, and real-time data visualization; 3. Access to on-premises data analysis and a network of high-performance computing (HPC) facilities. The online portal runs on a local Dell server (4x A100, 4x A40, 4x Xeon Gold) and is powered by Esri ArcGIS Enterprise. Data sharing, visualization, and analysis can be done directly in the web interface or locally on the user’s personal computer using ArcGIS Pro. Heavy workloads can be sent to any HPC facility in Canada or abroad that is connected to high-performance networks such as RISQ and CANARIE. Along with the online data platform, we filled the technological gap in the research done at Forêt Montmorency by acquiring sensors and equipment that will improve our understanding of the long-term ecological and societal impacts of forest harvesting. The introduction of real-time data will broaden the scope of research that can be done in a forested environment and will benefit research projects focusing on machinery, artificial intelligence, and digital twins.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 
[#113] Ultrasonic Wavefront Computing for Fourier Analysis
  • Author: Kevin Tshun Chuan Chai (IME)
  • Abstract: The Fourier transform (FT) is a ubiquitous tool in engineering and science. The FT can be computed with hardware such as multicores, graphics processors and digital signal processors, but this computing hardware becomes inefficient for arbitrarily large datasets. An alternative such as an optical processor, which uses optics to produce the FT naturally, is comparatively more efficient, but it struggles with complex amplitudes because of the difficulty of measuring phase with typical camera sensors. On the other hand, the acoustic method (ultrasound), which also operates by the principles of wave mechanics, can be operated at CMOS-compatible frequencies (GHz) to acquire the phase information required for a complex FT. We have developed a compact model of our ultrasound wavefront computing (WFC) machine to investigate its performance (latency and power consumption) for combinations of different FT data sizes and WFC arrays. Our simulations show that the WFC method can perform on par with, or orders of magnitude better than, digital systems for large FT computations. The WFC method computes the FT more efficiently because the computations are naturally formed in a highly parallelized manner, whereas fully digital electronic systems rely on radix decimation. Consequently, as the dataset becomes larger, the benefits of WFC become more apparent.
  • 1-page extended abstract (PDF file)
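For reference, the operation being computed is the discrete Fourier transform; a naive O(N²) sketch in Python makes concrete what digital systems compute via radix decimation (FFT) and what a wavefront processor forms physically:

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform of a real or complex sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A constant (DC-only) signal: all energy lands in frequency bin 0.
spectrum = dft([1.0, 1.0, 1.0, 1.0])
print([round(abs(v), 6) for v in spectrum])
```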
 
 [#114] 2.5D Photonic Interposer for High-Performance Computing
  • Author: Wen Lee (Institute of Microelectronics), Nanxi Li (Institute of Microelectronics), Hong Cai (Institute of Microelectronics), Feng Xu (Institute of Microelectronics), Ting-Ta Chi (Institute of Microelectronics), Landobasa Tobing (Institute of Microelectronics), Zhenyu Li (Institute of Microelectronics), King Jien Chui (Institute of Microelectronics), Ser Choong Chong (Institute of Microelectronics), Teck Guan Lim (Institute of Microelectronics), Lennon Yao Ting Lee (Institute of Microelectronics)
  • Abstract: The demands of high-performance computing (HPC) with photonically connected data servers are growing to fulfil heavy computing and data-transfer loading requirements. This work aims at creating a photonic engine capable of integrating on-chip laser sources, multiple electronic integrated circuits (EIC), photonic integrated circuits (PIC), a fiber array unit (FAU), an interposer, and a 2.5D HPC chip-on-wafer-on-substrate (CoWoS) package.
  • 1-page extended abstract (PDF file)
 
[#115] A Poro-aniso-hyperelastic Model Coupled with Solute Transfer Model for the In-silico Study of Intervertebral Disc Degeneration, within an HPC Framework
  • Author: *Dimitrios Lialios (Barcelona Supercomputing Center), Mariano Vazquez (Barcelona Supercomputing Center), Beatriz Eguzkitza (Barcelona Supercomputing Center), Eva Casoni (Barcelona Supercomputing Center)
  • Abstract: The main objectives of the current work are the design of a poromechanical Finite Element (FE) solver and its coupling with a solute transfer FE solver, within an HPC framework. This new and complex methodology is implemented in a high performance computational mechanics FE code, Alya, which is a multi-physics, parallel and highly scalable code. This endeavour targets the study of the intervertebral disc (IVD) and its degeneration. IVD degeneration is likely dictated by intricate spatio-temporal events that require the aggregation of multi-physics models. The simulation of such models is computationally expensive; hence large-scale computing infrastructures would be mandatory for in-silico cohort simulations.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 
[#116] Digital Pathology Annotation & Image Quality Assessment Platform for Developing and Training Various AI Cancer Models and Realizing the Intelligent Clinical Assessment
  • Author: Kok Haur Ong (BII), Laurent Gole (IMCB), Longjie Li (BII), Xinmi Huo (BII), Kah Weng Lau (NUH), David M. Young (IMCB), Susan Swee Shan Hue (NUH), Char Loo Tan (NUH), Gabriel Pik Liang Marini (BII), Hao Han (IMCB), Malay Singh (BII), Haoda Lu (BII), Soo Yong Tan (NUH), Weimiao Yu (BII)
  • Abstract: AI-driven computational pathology diagnosis is an emerging but rapidly growing field. It applies computational algorithms such as artificial intelligence and machine learning solutions to classify cancer and other diseases from digital pathology images. A high-quality database of digital pathology images and annotations is fundamental to the development of AI-based solutions. Annotation requires proper training by experienced pathologists because most successful models are based on supervised learning. Specific regions must be annotated to describe the disease before a learning model can be built. However, the limited number of trained pathologists and the high clinical workload make it difficult to obtain extensive digital pathology image annotation datasets for developing accurate supervised learning models. To make matters worse, the quality of digital pathology images is almost impossible to assess quantitatively with the naked eye. To address this, we have developed AimagQC, a fully automated histology image quality assessment tool. Histology laboratories can easily integrate AimagQC into their workflow between image acquisition and analysis. To facilitate advances in computational pathology, a cloud-based structural annotation platform (A!HistoNotes) was developed to enable pathologists to select high-quality digital pathology images for annotation. Based on this cloud-based solution, we are building an annotation dataset for prostate cancer, linking annotations with ontology information to enable accurate AI-based pathological diagnosis. As a result, clinical validation and decision-making by pathologists can be faster, easier, and more accurate. Here, we demonstrate this platform to facilitate prostate cancer diagnosis using H&E images.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 
 [#117] TSC Method using Semi-Implicit Method for Spring Mass Simulation
  • Author: *Ryo Sagayama (Kogakuin University), Akihiro Fujii (Kogakuin University), Teruo Tanaka (Kogakuin University), Takumi Washio (Tokyo University), Takeshi Iwashita (Hokkaido University)
  • Abstract: There are two possible ways to parallelize time evolution simulations: in the time direction and in the spatial direction. The TSC method parallelizes the simulation in the time direction. It approximately calculates the Jacobi matrix of the whole-time-step equations and iteratively updates all variables over all time steps. However, the approximated Jacobi matrix often becomes unstable, especially when the number of time steps becomes large. The semi-implicit method was proposed to increase the time step width in molecular dynamics simulations; it stabilizes the Jacobi matrix by neglecting the negative-eigenvalue parts of the elemental Hessians. We incorporate the stabilized Jacobi matrix calculation of the semi-implicit method into the TSC method, and evaluate the performance.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
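For context, a plain spring-mass time step with the common semi-implicit (symplectic) Euler scheme is sketched below; this is the generic per-step integrator such simulations build on, not the Jacobi-matrix-stabilized method of the poster, and the parameters are illustrative:

```python
def spring_mass_step(x, v, k=1.0, m=1.0, dt=0.1):
    """One semi-implicit (symplectic) Euler step for a spring-mass system."""
    v = v + dt * (-k / m) * x  # update velocity from the current position
    x = x + dt * v             # update position from the *new* velocity
    return x, v

x, v = 1.0, 0.0  # start displaced by 1, at rest
for _ in range(10):
    x, v = spring_mass_step(x, v)
print(x, v)
```

Updating position with the already-updated velocity is what keeps the oscillation amplitude bounded over long runs, unlike explicit Euler.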
 
 [#118] Efficient Allreduce Algorithm for Large-Scale Deep Learning on Distributed Loop Networks
  • Author: Truong Thao Nguyen (National Institute of Advanced Industrial Science and Technology), Peng Chen (National Institute of Advanced Industrial Science and Technology), Yusuke Tanimura (National Institute of Advanced Industrial Science and Technology)
  • Abstract: Training a Deep Learning model on High-Performance Computing systems is becoming a de-facto standard in deep learning. One of the key factors limiting the growth of large-scale training is the collective communication overhead between processing elements (PEs), e.g., an Allreduce operation. With the continual increase in model sizes, e.g., 100s of GB, and the number of PEs, e.g., 1,000s of GPUs, communication becomes a major bottleneck. In this context, we aim at finding a network topology and a corresponding collective algorithm that achieves bandwidth optimality in O(log(𝑃)) steps with minimal (or no) network contention. Our prior work proposed the use of a family of network topologies that exploit small-world network models, e.g., the Distributed Loop Network (DLN), and a collective algorithm named Shifted Halving-Doubling (SHD) which improves the utilization of all the inter-switch links of the DLN topology. In this study, we generalize the SHD algorithm and propose a 2-D DLN which considers both the communication performance and the cost when implemented in a server room.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
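Recursive halving-doubling, the family of algorithms SHD builds on, completes an allreduce in log2(P) exchange steps; a toy simulation over in-memory "ranks" (the generic algorithm, not the SHD variant):

```python
def allreduce_recursive_doubling(values):
    """Sum-allreduce via recursive doubling; len(values) must be a power of 2."""
    vals = list(values)
    p = len(vals)
    steps = 0
    dist = 1
    while dist < p:
        # Each rank r exchanges with partner r XOR dist; both keep the sum.
        vals = [vals[r] + vals[r ^ dist] for r in range(p)]
        dist *= 2
        steps += 1
    return vals, steps

vals, steps = allreduce_recursive_doubling([1, 2, 3, 4, 5, 6, 7, 8])
print(vals[0], steps)  # every rank holds the global sum after log2(8) = 3 steps
```

On a real network, which physical links each r-to-(r XOR dist) exchange traverses is exactly what topology-aware variants like SHD optimize.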
 
 [#119] Designing a Monitoring System and Collecting Statistics for HPCI Shared Storage
  • Author: Hidetomo Kaneyama (RIKEN R-CCS)
  • Abstract: The HPCI Shared Storage is a large-scale network storage system for the High Performance Computing Infrastructure (HPCI), which enables researchers from Japanese institutions to access and utilize supercomputers. As of January 2023, HPCI Shared Storage offers 50 PB of logical storage space for the storage and management of research data generated by HPCI supercomputers in Japan. To ensure stable operation of this storage system, the monitoring environment has been updated since 2020 to enhance information collection and to update the automatic alerting system. We have also developed a mechanism to provide usage information for each user and group. This monitoring environment has enhanced the collection of statistical information and can collect stored-data information and other metrics at high frequency. The statistical information shows that there are many files that have not been accessed for more than 3 years (dark files). These dark files may have already been forgotten because members left the research group; our goal is to encourage users to delete or use such data. Some users may also be storing large numbers of small files, whose transfer may be delayed by metadata access load and other factors. For huge files, replication and similar operations may take a long time, and the data may remain unused after writing. To address these issues for users, we are currently implementing compression and split transfers. This enhanced collection of statistical information promotes active use and improvement of the storage system.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
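Dark files of the kind described can be located by comparing access times against a threshold; a minimal standard-library sketch (the root path and 3-year cutoff are illustrative, and real deployments would query the storage system's metadata service rather than walk the tree):

```python
import os
import time

def find_dark_files(root, max_idle_years=3):
    """Yield paths under `root` not accessed for more than `max_idle_years`."""
    cutoff = time.time() - max_idle_years * 365 * 24 * 3600
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                yield path

# e.g. for p in find_dark_files("/path/to/shared/storage"): print(p)
```

Note that `st_atime` is only meaningful if the filesystem actually records access times (many are mounted with `noatime` or `relatime`).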
 
 [#120] Applying Automatic Tuning to Hyper-parameter Optimization of Machine Learning Programs for Super-Resolution
  • Author: *Xuan Yang (Kogakuin University), Sorataro Fujika (Kogakuin University), Yuga Yajima (Kogakuin University), Akihiro Fujii (Kogakuin University), Teruo Tanaka (Kogakuin University), Kazutoshi Akita (Toyota Technological Institute), Norimichi Ukita (Toyota Technological Institute), Satoshi Ohshima (Nagoya University)
  • Abstract: To optimize hyperparameters for better learning results of machine learning programs, we applied our software auto-tuning tool “DSICE” (d-spline Iterative Collinear Exploration). To reduce the execution time, we propose a two-step learning method of pre-learning and fine-tuning, and run multiple jobs in parallel on a GPU cluster. In this research, applying pre-learning and fine-tuning reduced each execution time to 1/8, and the parallel processing environment of the GPU cluster supercomputer further reduced execution time to 1/27. Overall, the execution time was reduced to 1/216 of the original.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
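The overall figure follows from multiplying the two independent reductions (1/8 from the two-step learning, 1/27 from parallel execution):

```python
def combined_time_fraction(*factors):
    """Overall execution-time fraction from independent reduction factors."""
    frac = 1.0
    for f in factors:
        frac *= f
    return frac

print(combined_time_fraction(1 / 8, 1 / 27))  # 1/216 of the original time
```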
 
[#121] Towards an Easy-to-Use Visualization Environment on the Fugaku
  • Author: Jorji Nonaka (RIKEN R-CCS), Masaaki Terai (RIKEN R-CCS), Masahiro Nakao (RIKEN R-CCS), Keiji Yamamoto (RIKEN R-CCS), Hitoshi Murai (RIKEN R-CCS), Fumiyoshi Shoji (RIKEN R-CCS)
  • Abstract: We have worked on a large data visualization and analysis environment for the Fugaku, taking into consideration lessons learned from the K computer. Regarding the K computer, it is worth noting that the auxiliary post-processing system was a GPU-less system, and the Mesa 3D graphics library, necessary for running traditional OpenGL-based graphics applications, was not initially provided for the compute nodes (SPARC64 VIIIfx CPU). It is also worth mentioning the lack of a direct connection to the pre/post-processing servers and the compute nodes, despite the considerable interest in using client/server-based distributed visualization such as the PBVR Remote Visualization System developed by the Japan Atomic Energy Agency and HIVE developed at RIKEN. This poster describes the visualization environment developed on the Fugaku, and some initial impressions regarding the software deployment tool (Spack) and services (Fugaku VPN and Fugaku Open OnDemand) that were utilized to improve this visualization environment.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 
[#122] Parallel Performance Evaluation of MITgcm
  • Author: Rin IRIE (Nippon Telegraph and Telephone Corporation)
  • Abstract: MITgcm is an ocean-atmosphere general circulation model that has non-hydrostatic capabilities and can be used to study both small-scale phenomena such as convection and large-scale phenomena such as general global circulation. MITgcm supports parallel computation using multiple processors by dividing the computation area in the horizontal direction. However, no previous studies have evaluated MITgcm model scaling performance for different numbers of processors. In this paper, we measure model computing time as the number of processors is varied for a physical model benchmarking case (barotropic ocean gyre). Overall, MITgcm is found to exhibit strong scaling behavior up until the number of processors exceeds the number of cores in a single CPU. Potential causes for this scaling behavior, as well as scaling behavior for varying numbers of grid points over the simulation domain, are discussed.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
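Strong-scaling behaviour of the kind measured here is usually summarized as speedup and parallel efficiency; a minimal sketch with hypothetical timings (not MITgcm results):

```python
def strong_scaling(t1, tp, p):
    """Speedup and parallel efficiency on p processors, given serial time t1
    and parallel time tp for the SAME problem size."""
    speedup = t1 / tp
    return speedup, speedup / p

# Hypothetical: 100 s serial, 7 s on 16 processors.
s, e = strong_scaling(100.0, 7.0, 16)
print(f"speedup {s:.1f}, efficiency {e:.0%}")
```

A drop in efficiency once the process count exceeds the cores of one CPU, as observed in the poster, would show up directly in the second value.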
 
[#123] Acceleration of Kinetic Monte Carlo Simulation of Thin Film Deposition
  • Author: Yang Hao Lau (Institute of High Performance Computing), Bharathi Madurai Srinivasan (Institute of High Performance Computing), Gang Wu (Institute of High Performance Computing), Fong Yew Leong (Institute of High Performance Computing), Ramanarayan Hariharaputran (Institute of High Performance Computing)
  • Abstract: Kinetic Monte Carlo (KMC) has seen widespread use in simulating deposition and annealing of thin films for electronic semiconductor devices, optical coatings etc. While such simulation can shed light on how the final microstructure depends on process conditions, it may often require too much computational time to reach the time and length scales relevant to the film manufacturing process. As such, various strategies are employed to accelerate KMC simulation.

    One class of such strategies employs coarse graining of space, atoms or atomic events and suffers from loss in resolution and sometimes accuracy when simplifying approximations are made. For coarse-grained simulations, it is also often unclear how input parameters should be derived since such parameters are typically available only for non-coarse-grained simulations.

    Another class of strategies involves spatial parallelization, which is difficult and often sub-optimal because of KMC’s inherent asynchronous nature, where neighboring domains must often wait for each other to execute events. To circumvent this issue, we instead parallelize computation of event rates to obtain higher parallel efficiency.

    In addition to parallelization, we apply various optimizations to minimize computation and accelerate the KMC simulation. We measure the impact of these acceleration strategies on run-time and show how they scale with number of cores. The accelerated code allows simulation of larger domains over longer durations, approaching the scales necessary to guide the manufacturing process to optimize film quality.

  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
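The core of a KMC step is rate-weighted event selection with a stochastic time increment drawn from the total rate (Gillespie-style); a generic sketch below, where computing the per-event rates is the part the poster parallelizes (the example rates are illustrative):

```python
import math
import random

def kmc_step(rates, rng=random):
    """Select one event with probability proportional to its rate.

    Returns (event_index, time_increment)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total  # exponentially distributed waiting time
    return i, dt

random.seed(0)
event, dt = kmc_step([0.5, 1.5, 2.0])  # e.g. deposition / diffusion / desorption
print(event, dt)
```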
 
 [#124] Towards Optimization of Parallelized Mining of Subgraphs Sharing Common Items Using a Task-Parallel Language
  • Author: *Jing Xu (Kyoto University), Tasuku Hiraishi (Kyoto Tachibana University), Shingo Okuno (Kyoto University), Masahiro Yasugi (Kyushu Institute of Technology), Keiichiro Fukazawa (Kyoto University)
  • Abstract: This presentation proposes parallel implementations of a highly complex graph mining problem: extracting Closed Common Itemset connected subGraphs (CCIGs) from a graph whose vertices are labeled by their own sets of items (itemsets for short), where each CCIG satisfies the condition that the cardinality of its common itemset, i.e., the intersection of the itemsets of all its vertices, is not less than a given threshold. An efficient sequential algorithm and its implementation, called COPINE, were proposed for this problem. COPINE employs a backtracking search algorithm with pruning to reduce the search space. Later, a parallel extension of COPINE was proposed. However, in the existing implementations, the search space expands compared to a sequential search, mainly because a worker traversing a search tree cannot prune subtrees on the “left” side. Another problem is that checking and updating an information table for pruning at every search step incurs considerable overhead. To alleviate these issues, we propose two mechanisms for the parallel COPINE. First, we allow a worker to prune a subtree on the “left” side of a search tree node when certain conditions are satisfied. With this new algorithm, workers can prune sub-search trees more aggressively, and we expect the search space to be reduced. Second, we let a worker check and update the information table only when a certain custom condition is satisfied, rather than at every search step. This mechanism enlarges the search space, but the overhead of table references can be reduced.
  • 1-page extended abstract (PDF file)
  • Poster (PDF file)
 
 [#125] System Composability and Behavior Prediction of Genetic Networks from Characterized Models of Subsystems 
  •  Author: Vipul Singhal (Genome Institute of Singapore, California Institute of Technology)
  • Abstract: Composition of computational models of systems, such that the combined models are predictive of real world systems, is an important paradigm across traditional engineering disciplines. In the field of synthetic biology, which aims to bring engineering ideas to the design of biological systems, such composability has remained elusive, partly because of the phenomenon of structural non-identifiability of the parameters involved. We demonstrate tools for identifying joint distributions of system parameters, and for composing subsystem models in the presence of non-identifiability, such that the resulting composed models are predictive of the corresponding systems’ real-world behaviour.
  • 1-page extended abstract (PDF file)

Committee

Dr Jin Hongmei
(Co-Chair)


A*STAR Institute of High Performance Computing (IHPC), Singapore

Dr Zhang Gang
(Co-Chair)


A*STAR Institute of High Performance Computing (IHPC), Singapore
