Browsing by Author "Cockerill, Tim"
Item: Jetstream – performance, early experiences, and early results (2016-07-17)
Authors: Stewart, Craig A.; Hancock, David Y.; Vaughn, Matthew; Fischer, Jeremy; Cockerill, Tim; Liming, Lee; Merchant, Nirav; Miller, Therese; Lowe, John Michael; Stanzione, Daniel C.; Taylor, James; Skidmore, Edwin
Abstract: Jetstream is a first-of-a-kind system for the NSF: a distributed production cloud resource. The NSF awarded funds to create Jetstream in November 2014. Here we review the purpose for creating Jetstream, present the acceptance test results that define Jetstream's key characteristics, describe our experiences in standing up an OpenStack-based cloud environment, and share some of the early scientific results that have been obtained by researchers and students using this system. Jetstream offers a unique capability within the XSEDE-supported US national cyberinfrastructure, delivering interactive virtual machines (VMs) via the Atmosphere interface developed by the University of Arizona. As a multi-region deployment that operates as a single integrated system, Jetstream is proving effective in supporting modes and disciplines of research traditionally underrepresented on larger XSEDE-supported clusters and supercomputers. Already, researchers in biology, network science, economics, earth science, and computer science have used Jetstream to perform research, much of it in the "long tail of science."

Item: Jetstream: A self-provisioned, scalable science and engineering cloud environment (2015-07-26)
Authors: Stewart, Craig A.; Cockerill, Tim; Foster, Ian; Hancock, David Y.; Merchant, Nirav; Skidmore, Edwin; Stanzione, Daniel; Taylor, James; Tuecke, Steven; Turner, George; Vaughn, Matthew; Gaffney, Niall I.

Item: TeraGrid: Analysis of Organization, System Architecture, and Middleware Enabling New Types of Applications (IOS Press, 2008)
Authors: Catlett, Charlie; Allcock, William E.; Andrews, Phil; Aydt, Ruth; Bair, Ray; Balac, Natasha; Banister, Bryan; Barker, Trish; Bartelt, Mark; Beckman, Pete; Berman, Francine; Bertoline, Gary; Blatecky, Alan; Boisseau, Jay; Bottum, Jim; Brunett, Sharon; Bunn, Julian; Butler, Michelle; Carver, David; Cobb, John; Cockerill, Tim; Couvares, Peter F.; Dahan, Maytal; Diehl, Diana; Dunning, Thom; Foster, Ian; Gaither, Kelly; Gannon, Dennis; Goasguen, Sebastien; Grobe, Michael; Hart, Dave; Heinzel, Matt; Hempel, Chris; Huntoon, Wendy; Insley, Joseph; Jordan, Christopher; Judson, Ivan; Kamrath, Anke; Karonis, Nicholas; Kesselman, Carl; Kovatch, Patricia; Lane, Lex; Lathrop, Scott; Levine, Michael; Lifka, David; Liming, Lee; Livny, Miron; Loft, Rich; Marcusiu, Doru; Marsteller, Jim; Martin, Stuart; McCaulay, D. Scott; McGee, John; McGinnis, Laura; McRobbie, Michael; Messina, Paul; Moore, Reagan; Moore, Richard; Navarro, J.P.; Nichols, Jeff; Papka, Michael E.; Pennington, Rob; Pike, Greg; Pool, Jim; Reddy, Raghu; Reed, Dan; Rimovsky, Tony; Roberts, Eric; Roskies, Ralph; Sanielevici, Sergiu; Scott, J. Ray; Shankar, Anurag; Sheddon, Mark; Showerman, Mike; Simmel, Derek; Singer, Abe; Skow, Dane; Smallen, Shava; Smith, Warren; Song, Carol; Stevens, Rick; Stewart, Craig A.; Stock, Robert B.; Stone, Nathan; Towns, John; Urban, Tomislav; Vildibill, Mike; Walker, Edward; Welch, Von; Wilkins-Diehr, Nancy; Williams, Roy; Winkler, Linda; Zhao, Lan; Zimmerman, Ann
Abstract: TeraGrid is a national-scale computational science facility supported through a partnership among thirteen institutions, with funding from the US National Science Foundation [1]. Initially created through a Major Research Equipment Facilities Construction (MREFC [2]) award in 2001, the TeraGrid facility began providing production computing, storage, visualization, and data collections services to the national science, engineering, and education community in January 2004. In August 2005 NSF funded a five-year program to operate, enhance, and expand the capacity and capabilities of the TeraGrid facility to meet the growing needs of the science and engineering community through 2010. This paper describes TeraGrid in terms of the structures, architecture, technologies, and services used to provide national-scale, open cyberinfrastructure. The focus is specifically on the technology approach and the use of middleware, in order to discuss the impact of such approaches on scientific use of computational infrastructure. While there are many individual science success stories, we do not focus on them in this paper. Similarly, many software tools and systems are deployed in TeraGrid, but our coverage is limited to the basic system middleware and is not meant to be exhaustive of all technology efforts within TeraGrid. We look in particular at growth and events during 2006, when the user population expanded dramatically and reached an initial "tipping point" with respect to adoption of new "grid" capabilities and usage modalities.