Dr. Anne C. Elster is a Professor and the Director of the Heterogeneous and Parallel Computing Lab (HPC-Lab) at the Dept. of Computer Science, Norwegian Univ. of Science and Technology (NTNU), an HPC Leader at the Center for Geophysical Forecasting at NTNU, and a Senior Research Fellow / Visitor at the Oden Institute at The University of Texas at Austin.
Born in Norway, she received her Bachelor of Computer Systems Engineering from UMass Amherst, and her Master's and PhD degrees in Electrical Engineering from Cornell University. Before joining NTNU in 2001, she worked at Schlumberger Austin Research and served as an Adjunct faculty member at UT Austin, where she taught courses in Algorithms and Operating Systems and a graduate course in Partial Differential Equations; she also taught a related course at WPAFB through her company Acenor, Inc.
Dr. Elster's current research includes developing methods and tools for parallelizing, optimizing, and auto-tuning codes for heterogeneous computing systems. She and her research group are especially known for their work on GPU acceleration dating back to 2006.
Anne has been the main advisor for over 100 master's students as well as several PhD students and Postdocs in the area of parallel and GPU computing. She has served on PhD evaluation committees in the Czech Republic, Denmark, Finland, Italy, Saudi Arabia (KAUST), and Spain. She is also credited with her linear bit-reversal algorithm and served on the original MPI Standards Committee. She is an Associate Editor of IEEE CiSE and has served on numerous program committees, including, recently, the Sidney Fernbach and Test-of-Time Award committees, the SIAM Career/Early Career Prize committee, and the ACM Thesis Awards committee. Anne is an IEEE Computer Society Distinguished Contributor Charter Member and was a Distinguished Speaker for the IEEE Computer Society (2019-2022).
Norwegian Univ. of Science and Technology
Center for Geophysical Forecasting
University of Texas at Austin
Parallel Computing and Geophysical Forecasting
The geosciences have long been a central application area for parallel computing, and, like the parallel technologies themselves, the need for speed and processing power has not waned.
Geophysical forecasting offers the opportunity to leverage some of the cutting-edge technologies from the oil and gas sector to improve, for instance, geohazard monitoring and the forecasting of sudden events along roads and railways. This also includes the use of new methods for monitoring and mapping life and geophysical events at sea and near the seabed. Modern seismic sensors and DAS (Distributed Acoustic Sensing) systems also generate vast datasets that will require both AI and parallel computing techniques to be fully exploited. These tasks thus offer many interesting research challenges related to parallel and distributed computing over the next several years.
This talk will highlight some of the ongoing work my group and colleagues are involved in at the Center for Geophysical Forecasting at NTNU. This includes our work on applying AI and HPC techniques such as autotuning, on combining real experimentation with modeling (and vice versa), and on how these approaches can impact applications.
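As a rough illustration of the empirical autotuning mentioned above (a minimal sketch with a toy kernel and a made-up parameter space, not the group's actual tools), the idea is to time candidate parameter values on the target machine and data, and keep the fastest:

# Minimal autotuning sketch (illustrative only): pick the best-performing
# variant of a kernel by timing candidate parameters on the actual hardware.
import time
import numpy as np

def blocked_sum(data, block):
    # Toy "kernel": sum the array in chunks of the given block size.
    return sum(np.sum(data[i:i + block]) for i in range(0, data.size, block))

def time_once(kernel, data, param):
    start = time.perf_counter()
    kernel(data, param)
    return time.perf_counter() - start

def autotune(kernel, data, candidates, repeats=3):
    # Keep the parameter value with the best (minimum) observed runtime.
    best_param, best_time = None, float("inf")
    for param in candidates:
        t = min(time_once(kernel, data, param) for _ in range(repeats))
        if t < best_time:
            best_param, best_time = param, t
    return best_param, best_time

if __name__ == "__main__":
    data = np.random.default_rng(0).standard_normal(2_000_000)
    best, t = autotune(blocked_sum, data, candidates=[1_000, 10_000, 100_000])
    print(f"best block size: {best} ({t * 1e3:.2f} ms)")

In practice the same search loop is applied to real kernel parameters (tile sizes, thread-block shapes, launch configurations), often with smarter search strategies than exhaustive timing.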
Jack Dongarra specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. He holds appointments at the University of Manchester, Oak Ridge National Laboratory, and the University of Tennessee, where he founded the Innovative Computing Laboratory. In 2019 he received the SIAM/ACM Prize in Computational Science and Engineering, and in 2020 he received the IEEE Computer Society Computer Pioneer Award. He is a Fellow of the AAAS, ACM, IEEE, and SIAM; a Foreign Member of the British Royal Society; and a member of the U.S. National Academy of Sciences and the U.S. National Academy of Engineering. Most recently, he received the 2021 ACM A.M. Turing Award for his pioneering contributions to numerical algorithms and software that have driven decades of extraordinary progress in computing performance and applications.
Professor Jack Dongarra
University of Tennessee
Oak Ridge National Laboratory
University of Manchester
An Overview of High Performance Computing and Responsibly Reckless Algorithms
In this talk, we examine how high-performance computing has changed over the last 10 years and look at trends for the future. These changes have had, and will continue to have, a significant impact on our software. Some of the software and algorithm challenges have already been encountered, such as the management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.
Mixed precision numerical methods are paramount for increasing the throughput of traditional and artificial intelligence (AI) workloads beyond riding the wave of the hardware alone. Reducing precision comes at the price of trading away some accuracy for performance (reckless behavior), but only in noncritical segments of the workflow (responsible behavior), so that the accuracy requirements of the application can still be satisfied.
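As a minimal sketch of this trade-off (illustrative only, not the specific methods of the talk), classic mixed-precision iterative refinement performs the expensive LU factorization in single precision and recovers accuracy by refining with residuals computed in double precision:

# Minimal sketch: mixed-precision iterative refinement for Ax = b.
# The LU factorization runs in float32 (the "reckless" but fast part), while
# residuals and the accumulated solution stay in float64 (the "responsible"
# part), so the application's accuracy requirements can still be met.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, max_iters=10, tol=1e-12):
    # Factor a single-precision copy of A once (cheap, low precision).
    lu, piv = lu_factor(A.astype(np.float32))
    x = np.zeros_like(b, dtype=np.float64)
    for _ in range(max_iters):
        # Residual in double precision (accuracy-critical step).
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Correction solved with the low-precision factors, accumulated in double.
        x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
    b = rng.standard_normal(n)
    x = mixed_precision_solve(A, b)
    print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))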
Frank Würthwein is the Director of the San Diego Supercomputer Center (SDSC). Würthwein leads Distributed High-Throughput Computing at SDSC, and he is a faculty member in the UC San Diego Department of Physics as well as a founding faculty member of the Halicioğlu Data Science Institute on campus. His research focuses on experimental particle physics, in particular the Compact Muon Solenoid experiment at the Large Hadron Collider. He continues to serve, as he has for many years, as Executive Director of the Open Science Grid, the premier national cyberinfrastructure for distributed high-throughput computing.
University of California San Diego
San Diego Supercomputer Center
Halicioglu Data Science Institute
AI Infrastructure for All
There are 20 million students attending college in the USA, across about 3,800 institutions. Given current trends, we estimate that around 4-10 million of these students will need access to compute and data infrastructure in the classroom. However, fewer than 5% of these institutions have the scale of computing needs that would warrant hiring the system administrators, cybersecurity experts, and research computing support personnel needed to operate such infrastructure and to train educators on how to use it in the classroom.
The solution to this problem lies in the aggregation of human resources to enable widespread ownership of AI infrastructure across all colleges in the USA. We describe how to accomplish this, and how to scale it out toward sustainability so that AI infrastructure becomes available to all for little more than the price of a beer per student per year.