The Fourth International Workshop on Coarse-Grained Reconfigurable Architectures for High-Performance Computing and AI (CGRA4HPCA)

Introduction

With the end of Dennard scaling and the impending end of Moore's law, researchers are actively searching for alternative forms of computing to continue delivering better, faster, and less power-hungry systems. Today, several candidate architectures are emerging to fill the widening void left by the end of Moore's law, including radical (and intrusive) systems such as quantum and neuromorphic computers. However, of the many proposed architectures, perhaps none is as salient an alternative as Coarse-Grained Reconfigurable Architectures/Arrays (CGRAs).

CGRAs belong to the programmable logic device family of architectures, which aspire to provide some form of plasticity or reconfigurability. Such reconfigurability allows the silicon to be specialized towards a particular application in order to reduce data movement and improve performance and energy efficiency. Unlike their cousins, Field-Programmable Gate Arrays (FPGAs), CGRAs provide reconfigurable Arithmetic Logic Units (ALUs) and a highly specialized yet versatile data path. This "coarsening" of reconfiguration allows CGRAs to achieve a significant (custom-ASIC-like) reduction in power consumption and increase in operating frequency compared to FPGAs. At the same time, they avoid the expensive von Neumann (instruction-decoding) overhead that traditional general-purpose processors (CPUs) suffer from. In short, CGRAs strike a seemingly perfect balance between the reconfigurability of FPGAs and the performance of CPUs, with power-consumption characteristics closer to those of custom ASICs.

CGRAs have a long research lineage dating back to their inception some 25 years ago (with theory dating back to the 1970s). Recently, however, they have garnered renewed interest and importance in High-Performance Computing (HPC). Today, we see an explosion in the number of custom-built AI accelerators intended for use in data centers and the IoT-Edge-Cloud continuum. Many of these accelerators are CGRAs, such as those built by SambaNova or Cerebras. More importantly, there is an active and growing effort to use these AI accelerators to accelerate scientific applications on supercomputers, and many HPC centers already include such CGRAs in their testbeds (e.g., the Cerebras CS-1 at ORNL or EPCC).

This workshop provides a focused interdisciplinary forum for CGRA hardware researchers and HPC/distributed-computing researchers from academia and industry to come together and discuss state-of-the-art CGRA research for use in emerging HPC systems and Artificial Intelligence (AI).

Important Dates

Invited talks

Will be announced at a later stage.

Workshop Program

CGRA4HPCA 2025 will be held in conjunction with IPDPS 2025 in Milano, Italy, on June 3rd.

The workshop program will be announced closer to the conference date.

Call for Papers

Topics of Interest

Topics of interest include (but are not limited to) the following:

Paper Submission

We welcome authors to contribute full-length research papers on the topics of interest described above. Contributions should be unpublished and not under consideration at other venues. Papers should not exceed eight (8) single-spaced pages, formatted as double-column pages using a 10-point font on 8.5x11-inch pages (IEEE conference style). We use a single-blind review process. Accepted papers will be included in the workshop proceedings, which will be distributed at the conference and submitted for inclusion in the IEEE Xplore Digital Library after the conference.

We also welcome presentations on new and emerging CGRA technologies from industry and startups. These will be presented in a special lightning session at the workshop. Please contact the workshop organizers if you are interested in participating in this session.

Submit your paper HERE

Organization

CGRA4HPCA Organizers

CGRA4HPCA Program Committee

Will be announced at a later stage.