The First International Workshop on Coarse-Grained Reconfigurable Architectures for High-Performance Computing (CGRA4HPC)

Introduction

With the end of Dennard scaling and the impending demise of Moore's law, researchers are actively searching for alternative forms of computing to continue delivering better, faster, and less power-hungry systems. Today, several candidate architectures are emerging to fill the widening void, including radical (and intrusive) systems such as quantum and neuromorphic computers. However, of the many proposed architectures, perhaps none is as salient an alternative as Coarse-Grained Reconfigurable Architectures/Arrays (CGRAs).

CGRAs belong to the programmable logic device family of architectures, which aspire to provide some form of plasticity or reconfigurability. This reconfigurability allows the silicon to be specialized towards a particular application, reducing data movement and improving performance and energy efficiency. Unlike their cousins, Field-Programmable Gate Arrays (FPGAs), CGRAs provide reconfigurable Arithmetic Logic Units (ALUs) and a highly specialized yet versatile data path. This "coarsening" of reconfiguration allows CGRAs to achieve a significant (custom ASIC-like) reduction in power consumption and increase in operating frequency compared to FPGAs. At the same time, they avoid the expensive von Neumann (instruction-decoding) overhead that traditional general-purpose processors (CPUs) suffer from. In short, CGRAs strike a seemingly perfect balance between the reconfigurability of FPGAs and the performance of CPUs, with power-consumption characteristics closer to those of custom ASICs.

CGRAs have a long research lineage dating back to their inception some 25 years ago (with theory dating back to the 1970s). Recently, however, they have garnered renewed interest and importance in High-Performance Computing (HPC). Today, we see an explosion in the number of custom-built AI accelerators intended for use in data centers and the IoT-Edge-Cloud continuum. Many of these accelerators are CGRAs, such as those built by SambaNova or Cerebras. More importantly, there is an active and growing effort to use these AI accelerators for scientific applications on supercomputers, and many HPC centers already include CGRAs in their testbeds (e.g., the Cerebras-1 at ORNL or EPCC).

This workshop provides the first focused, interdisciplinary forum for CGRA hardware researchers and HPC/distributed computing researchers from academia and industry to come together and discuss state-of-the-art CGRA research for emerging HPC systems.

Important Dates

Tentative Program

Topics of Interest and Paper submission

We welcome authors to contribute full-length research papers on the topics of interest listed below. Contributions should be unpublished and not under consideration at other venues. Papers should not exceed eight single-spaced pages, including all figures, tables, and references. We will use a full, single-blind peer-review process, and at least three program committee members will review each contribution. Topics of interest include (but are not limited to):

CGRA Hardware and Architectures

Programming Models, Compilers, and Middleware

Use-Cases and Experiments

Organization

CGRA4HPC Organizers

CGRA4HPC Program Committee