Rosa M. Badia
Manager of the Workflows and Distributed Computing
Research Group, Barcelona Supercomputing
Center (BSC), Spain
John Shalf
Department Head for Computer Science
Lawrence Berkeley National Laboratory, USA
Wednesday, May 15 from 5:45 - 6:30 pm
Moderated by Daniel Reed
REINVENTING HPC WITH SPECIALIZED ARCHITECTURES AND NEW APPLICATION WORKFLOWS
With the slowing of Moore's Law, the historical performance improvements offered by successive generations of HPC systems are waning, while costs for each new chip generation are growing. In the near term, the most practical path to continued performance growth will be architectural specialization in the form of many kinds of accelerators. New software implementations, and in many cases new mathematical models and algorithmic approaches, are necessary to advance the science that can be done with these specialized architectures. But applications are just one part of the HPC ecosystem, which is embedded in advanced scientific workflows. As application workflows grow increasingly complex, encompassing data analytics, AI, and HPC modeling and simulation, the demand for new programming models and tools will also grow. Each application and workflow may present unique requirements that will be difficult to address with a monolithic solution. Furthermore, AI's transformative role in these workflows, whether used in concert with traditional simulations (e.g., self-driving laboratories) or as a surrogate for traditional mechanistic models, brings ethical concerns that must be addressed, such as bias, explainability, and reproducibility. These emerging requirements will drive a new era of scientific and technological innovation.
The scientific community is conscious of these underlying technological transformations and is eager to deploy the advances to accelerate scientific discovery, but it needs to learn how to integrate the new technologies with existing research. Thus, scientists are turning to HPC expertise for innovative, pragmatic solutions. This keynote presentation, held in a panel format moderated by Daniel Reed with contributions from Rosa Badia and John Shalf, will focus on the emerging challenges and groundbreaking solutions at the forefront of reinventing HPC for scientific discovery.
Rosa M. Badia manages the Workflows and Distributed Computing research group at the Barcelona Supercomputing Center (BSC), and she is involved in several notable European projects, including AI-SPRINT, CAELESTIS, ICOS, CEEC CoE, PerMedCoE, and DT-GEO. She is the PI of the EuroHPC eFlows4HPC project. Her current research interest is programming models for complex platforms, from multicore CPUs and GPUs to Grid/Cloud.
She has published over 200 papers on her research topics in international conferences and journals. She received the Euro-Par Achievement Award 2019 for her contributions to parallel processing, the DonaTIC award (category Academia/Researcher) in 2019, and the HPDC Achievement Award 2021 for her innovations in parallel task-based programming models, workflow applications and systems, and leadership in the HPC research community. In 2023, she was named a member of the Institut d'Estudis Catalans.
John Shalf is the Department Head for Computer Science at Lawrence Berkeley National Laboratory. Since 2009, Shalf has worked behind the scenes to lay the groundwork for the US government's exascale computing ambitions. He also served as the Deputy Director for Hardware Technology on the US Department of Energy (DOE)-led Exascale Computing Project (ECP) before returning to his department head position at LBNL.
He has co-authored over 100 peer-reviewed publications on parallel computing software and HPC technology, including the widely cited report "The Landscape of Parallel Computing Research: A View from Berkeley" (with David Patterson and others). Before coming to Berkeley Lab, John worked at the National Center for Supercomputing Applications and the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, AEI), where he co-created the Cactus Computational Toolkit.