Software for High Performance Computing

Overview

High-performance computing (HPC) systems, i.e., supercomputers, provide massive computing performance to solve diverse problems in science and engineering. At our laboratory, we research and develop software to fully utilize the performance of state-of-the-art HPC systems. We develop massively parallel scientific applications in close collaboration with domain scientists. Furthermore, we design and implement middleware for large-scale HPC systems.

Research Topics

Dynamic Interconnect Control using SDN

Modern HPC systems are based on the cluster architecture, in which multiple computers are connected through a high-performance network called an interconnect. In a cluster, the overall performance of applications heavily depends on the communication performance over the interconnect. However, hotspots are known to occur in the interconnect and degrade communication performance when many computers communicate simultaneously. We apply Software-Defined Networking (SDN), a networking paradigm that allows the network to be managed dynamically as a software resource, to control traffic in the interconnect and mitigate hotspots based on the communication patterns of applications. Please see here for details.
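As a rough illustration of the idea, the sketch below greedily assigns each flow in a known communication pattern to the candidate path that keeps the most-loaded link as light as possible. This is a toy example, not the lab's actual SDN controller: the topology, link names, and demand values are all invented for illustration.

```python
def assign_paths(demands, candidate_paths):
    """Greedily route each flow on the candidate path that minimizes
    the load of the most-loaded link along that path.

    demands: {flow: traffic volume}, candidate_paths: {flow: [path, ...]}
    where a path is a tuple of link identifiers.
    """
    load = {}     # link -> accumulated traffic
    routing = {}  # flow -> chosen path
    # Route the heaviest flows first, a common greedy heuristic.
    for flow, volume in sorted(demands.items(), key=lambda kv: -kv[1]):
        best = min(
            candidate_paths[flow],
            key=lambda path: max(load.get(link, 0) + volume for link in path),
        )
        for link in best:
            load[link] = load.get(link, 0) + volume
        routing[flow] = best
    return routing, load

# Toy two-tier topology: both flows can go via core switch c0 or c1.
demands = {("n0", "n2"): 10, ("n1", "n3"): 8}
candidate_paths = {
    ("n0", "n2"): [("l0-c0", "c0-l2"), ("l0-c1", "c1-l2")],
    ("n1", "n3"): [("l0-c0", "c0-l2"), ("l0-c1", "c1-l2")],
}
routing, load = assign_paths(demands, candidate_paths)
# The two flows end up on different core switches, avoiding a hotspot.
```

In an actual SDN deployment, the chosen paths would then be installed as flow rules on the switches (e.g., via OpenFlow), and the demand matrix would come from the application's communication pattern rather than fixed constants.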

Research on In-situ Scientific Workflows

The gap between computational performance and storage performance (bandwidth, capacity, etc.) in HPC systems is rapidly widening. One approach to tackling this storage bottleneck is in-situ visualization and analysis. In an in-situ workflow, the output of a simulation is sent directly to visualization and analysis applications without going through the parallel filesystem. We are working on optimizing resource allocation in in-situ workflows and on realizing workflows that span multiple heterogeneous HPC systems. Please see here for details.
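The essential structure of an in-situ workflow can be sketched as follows: the simulation hands each timestep directly to an analysis stage as it is produced, so no intermediate files ever touch the filesystem. This is a minimal single-process illustration; the simulation loop, field contents, and analysis function are all invented stand-ins, whereas real in-situ frameworks couple separate parallel programs over shared memory or the network.

```python
import statistics

def simulate(steps):
    """Toy 'simulation' yielding one field (a list of floats) per timestep."""
    field = [0.0] * 8
    for t in range(steps):
        field = [x + 0.5 for x in field]  # stand-in for real physics
        yield t, field  # hand the data off directly, no file I/O

def in_situ_mean(stream):
    """In-situ 'analysis': reduce each timestep as soon as it is produced."""
    return [(t, statistics.fmean(f)) for t, f in stream]

# The analysis consumes the simulation output directly, bypassing storage.
results = in_situ_mean(simulate(4))
```

The key property the generator models is that simulation and analysis overlap in time and exchange data in memory; resource allocation in such a workflow then becomes a question of how to split nodes and cores between the two stages.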

Optimization and Parallelization of Empirical Dynamic Modeling

Empirical Dynamic Modeling (EDM) is a nonlinear time series analysis framework based on Takens' embedding theorem on state-space reconstruction. EDM is used to predict the behavior of dynamical systems, evaluate the nonlinearity of a system, and find causal relationships between variables in diverse fields including ecology, sociology, and neuroscience. Despite its wide applicability, EDM has traditionally been applied only to relatively small datasets because of its high computational cost. We are developing an implementation of EDM targeted at modern GPU-centric HPC systems to enable processing of large-scale datasets. Please see here for details.
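The basic forecasting step of EDM, simplex projection, can be sketched in a few lines: reconstruct the state space with time-delay embedding, then predict a query point from the successors of its nearest neighbors. This is an illustrative serial implementation, not the lab's GPU code; the parameter names (E, tau) follow common EDM usage and the sine-wave data is synthetic.

```python
import math

def delay_embed(series, E, tau=1):
    """Time-delay embedding: x_t = (s_t, s_{t-tau}, ..., s_{t-(E-1)tau})."""
    start = (E - 1) * tau
    return [tuple(series[t - j * tau] for j in range(E))
            for t in range(start, len(series))]

def simplex_forecast(train, query_vec, E, tau=1):
    """Simplex projection: predict the value one step after query_vec
    as a distance-weighted average over its E+1 nearest delay vectors."""
    vecs = delay_embed(train, E, tau)
    # Pair each delay vector with the observed value one step ahead;
    # zip drops the final vector, whose successor is unknown.
    pairs = zip(vecs, train[(E - 1) * tau + 1:])
    dists = sorted((math.dist(v, query_vec), y) for v, y in pairs)
    nn = dists[:E + 1]
    d_min = nn[0][0] or 1e-12  # guard against division by zero
    weights = [math.exp(-d / d_min) for d, _ in nn]
    return sum(w * y for w, (_, y) in zip(weights, nn)) / sum(weights)

# Forecast a point of a sine wave from earlier observations only.
series = [math.sin(0.3 * t) for t in range(160)]
train, E = series[:140], 2
pred = simplex_forecast(train, (series[150], series[149]), E)
# pred approximates the true next value, series[151]
```

The computational bottleneck is the all-pairs nearest-neighbor search over the delay vectors, which grows quickly with the length of the time series; this is precisely the part that benefits from a GPU-centric implementation.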

Keichi Takahashi
Visiting Assistant Professor

Dr. Keichi Takahashi is a visiting assistant professor of computer science at SDLab. His research focuses mainly on high-performance computing and networking.