Software Stack for Hardware Accelerators Workshop (SSHAW)

Workshop Format:

The workshop program will combine talks presenting peer-reviewed papers that were submitted to the workshop and reviewed by the program committee, and abstract-only presentations selected based solely on a submitted title and abstract. Abstract-only presentations need not be original work: they may have been presented at other conferences, or may report recent work that will be submitted to a peer-reviewed venue in the future.

The Software Stack for Hardware Accelerators Workshop (SSHAW) will be held in conjunction with the International Conference on Parallel Processing in Edmonton, Alberta, Canada.

The wide use of deep learning in industry has propelled the development of new hardware accelerators that aim to speed up the mathematical computations required for training and inference in neural networks. As these accelerators become more widely available, they are also starting to be used in other application areas where numerical-computing performance is critical.

The acceleration of matrix-multiplication and convolution operations via hardware specialization follows in the footsteps of the wide adoption of Graphics Processing Units (GPUs) for general-purpose processing. As with GPGPUs, software-managed caches are widespread in neural accelerators, and thus the staging of data movement and the selection and scheduling of instructions are critical for performance.

The goal of this workshop is to explore topics related to the construction of the software stack needed for efficient computation on these hardware accelerators. Topics to be discussed include:

  • Design of Hardware Accelerators
  • Instruction Set Architecture for Accelerators
  • Software Managed Memory Hierarchy
  • Programming Models for Deep Learning
  • Linear-Algebra Libraries
  • Construction of Optimizing Compilers
  • Program Analysis
  • Performance Evaluation
  • Intermediate Representations
  • Use of Neural Accelerators for High-Performance Computing

Important Dates:

  • For peer-reviewed papers:
      • Paper Submission: April 13, 2020, Anywhere on Earth
      • Author Notification: May 22, 2020
      • Camera Ready: June 8, 2020
  • For abstract-only talks:
      • Title/Abstract Submission: March 18, 2020
      • Notification of Acceptance: March 25, 2020
  • Workshop Date: August 17, 2020

Paper Submission

Submission Method to be Announced

Talk Submission

Submission Method to be Announced

A talk submission should include:

  • Title of the talk
  • Authors and their affiliation
  • Presenter
  • Abstract for the talk

Steering/Program Committee:

  • Tor Aamodt, University of British Columbia
  • J. Nelson Amaral, University of Alberta
  • Guido Araujo, University of Campinas
  • Christophe Dubach, McGill University
  • Yaoqing Gao, Huawei Canada
  • Ondřej Lhoták, University of Waterloo
  • Gennady Pekhimenko, University of Toronto
  • Fernando Magno Quintao Pereira, Universidade Federal de Minas Gerais
  • Arrvindh Shriraman, Simon Fraser University
  • Aaron Smith, Microsoft Research
  • Peng Wu, Futurewei Technologies