Abstract:
The evolution of the cyberinfrastructure that supports scientific research in the US cannot be explained solely by observing advances in computing hardware. Rather, it represents a complex interaction among many factors, including advances in hardware, but also new deployment options, new tools and protocols, and advances in programming techniques. It takes an unusual confluence of factors to prompt a significant number of user communities to simultaneously consider abandoning an established set of codes in favor of a potentially risky new development project. Many of the factors involved in these decisions are well understood and frequently discussed. The evolution of processor speeds, new types of processors, new deployment methods utilizing distributed or virtualized resources, and the general shift in the focus of research technology away from intense computation and toward massive data are among the most common areas of discussion in HPC circles. Much less commonly discussed, and the topic of this paper, are the types of algorithms used in research applications. Specifically, we will look at the most frequently used HPC algorithms, along with a collection of bio-inspired computing algorithms that are not yet in routine use in HPC. Our question is whether these bio-inspired algorithms, in combination with ongoing sweeping changes in hardware, deployment, and data-centricity, will offer a sufficiently attractive opportunity to improve the efficiency and flexibility of scientific codes. We will also consider, should these algorithms become part of the next wave of scientific code development, what the implications will be for the kind of infrastructure that will be in demand to support the new codes.