Keynote Speech 1

Dan Reed

Scalable and Multicore Computing Strategist, Microsoft

"Computing at the Crossroads"

Abstract

Scientific and technical computing are at an inflection point, where system scale, complexity and operating costs, together with the explosive growth of data and the availability of high-speed networks, are changing the ways we will deliver computing services. Where is the technology going and what are the research implications? What architectures and policies are appropriate for different user bases? How do we build scalable infrastructure? How do we develop and support software, and what is the ecosystem of components in which it will operate? How do we optimize performance, power and reliability? Finally, how do we define appropriate cost and service level agreements (SLAs)? Let’s take a look at the future of clouds, multicore, services and large-scale systems.

Bio

Daniel A. Reed is Microsoft’s Scalable and Multicore Computing Strategist, responsible for re-envisioning the data center of the future. Previously, he was the Chancellor’s Eminent Professor at UNC Chapel Hill, as well as Director of the Renaissance Computing Institute (RENCI) and the Chancellor’s Senior Advisor for Strategy and Innovation. Dr. Reed has served as a member of the U.S. President’s Council of Advisors on Science and Technology (PCAST) and of the President’s Information Technology Advisory Committee (PITAC). He recently chaired a review of the U.S. networking and IT research portfolio and completed a term as chair of the board of directors of the Computing Research Association.
Earlier in his career, he was Head of the Department of Computer Science at the University of Illinois at Urbana-Champaign (UIUC) and Director of the National Center for Supercomputing Applications (NCSA) at UIUC, where he also led the National Computational Science Alliance. He was one of the principal investigators and chief architect of the NSF TeraGrid. He received his PhD in computer science in 1983 from Purdue University.

Keynote Speech 2

Bill Dally

Chief Scientist and VP of Research, NVIDIA
Bell Professor of Engineering, Stanford University

"The End of Denial Architecture and the Rise of Throughput Computing"

Abstract

Most modern processors are in denial about two critical aspects of machine organization: parallel execution and hierarchical memory organization. These processors present an illusion of sequential execution and uniform, flat memory. The evolution of these sequential, latency-optimized processors is at an end, and their performance is increasing only slowly over time. In contrast, the performance of throughput-optimized processors, like GPUs, continues to scale at historical rates. Throughput processors embrace, rather than deny, parallelism and memory hierarchy to realize their performance and efficiency advantage compared to conventional processors. Throughput processors have hundreds of cores today and will have thousands of cores by 2015. They will deliver most of the performance, and most of the user value, in future computer systems.
This talk will discuss some of the challenges and opportunities in the architecture and programming of future throughput processors. In these processors, performance derives from parallelism and efficiency derives from locality. Parallelism can take advantage of the plentiful and inexpensive arithmetic units in a throughput processor. Without locality, however, bandwidth quickly becomes a bottleneck. Communication bandwidth, not arithmetic, is the critical resource in a modern computing system; it dominates cost, performance, and power. The talk will illustrate the exploitation of parallelism and locality with examples drawn from the Imagine and Merrimac projects, from NVIDIA GPUs, and from three generations of stream programming systems.
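
To make the parallelism-and-locality argument concrete, here is a minimal sketch (an illustration only, not material from the talk) of a tiled matrix multiply written in CUDA: the grid of thread blocks supplies the parallelism, while staging tiles of the inputs in on-chip shared memory supplies the locality that keeps off-chip bandwidth from becoming the bottleneck. The tile size and matrix dimension are arbitrary values chosen for the example.

// Illustrative sketch: tiled matrix multiply (C = A * B) on a throughput processor.
// Parallelism: one thread per output element, launched across thousands of threads.
// Locality: tiles of A and B are staged in on-chip shared memory and reused.
#include <cstdio>
#include <cuda_runtime.h>

#define TILE 16  // tile edge; also the thread-block edge (arbitrary example value)

__global__ void matmul_tiled(const float* A, const float* B, float* C, int n) {
    __shared__ float As[TILE][TILE];  // on-chip copies: each element is fetched
    __shared__ float Bs[TILE][TILE];  // from DRAM once and then reused TILE times

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < n / TILE; ++t) {
        // Cooperative load: the whole block fills the two tiles from DRAM.
        As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();

        // Arithmetic runs entirely out of on-chip memory.
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * n + col] = acc;
}

int main() {
    const int n = 512;  // assumed to be a multiple of TILE for simplicity
    size_t bytes = (size_t)n * n * sizeof(float);
    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    dim3 block(TILE, TILE);
    dim3 grid(n / TILE, n / TILE);  // thousands of concurrent threads in flight
    matmul_tiled<<<grid, block>>>(A, B, C, n);
    cudaDeviceSynchronize();

    printf("C[0] = %.1f (expected %.1f)\n", C[0], 2.0f * n);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}

Each input element fetched from DRAM is reused TILE times out of shared memory, so the ratio of inexpensive arithmetic to expensive communication grows with the tile size.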

Bio

Bill Dally is Chief Scientist and Senior Vice President of Research at NVIDIA and the Willard R. and Inez Kerr Bell Professor of Engineering at Stanford University. Bill and his group have developed system architecture, network architecture, signaling, routing, and synchronization technology that can be found in most large parallel computers today. While at Bell Labs, Bill contributed to the BELLMAC32 microprocessor and designed the MARS hardware accelerator. At Caltech he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered wormhole routing and virtual-channel flow control. While he was a Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, his group built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanisms from programming models and demonstrated very low-overhead synchronization and communication mechanisms. At Stanford University his group has developed the Imagine processor, which introduced the concepts of stream processing and partitioned register organizations. Bill has worked with Cray Research and Intel to incorporate many of these innovations in commercial parallel computers, and with Avici Systems to incorporate this technology into Internet routers; he co-founded Velio Communications to commercialize high-speed signaling technology and co-founded Stream Processors, Inc. to commercialize stream processor technology. He is a Member of the National Academy of Engineering, a Fellow of the IEEE, a Fellow of the ACM, and a Fellow of the American Academy of Arts and Sciences. He has received numerous honors, including the IEEE Seymour Cray Award and the ACM Maurice Wilkes Award. He currently leads projects on computer architecture, network architecture, and programming systems. He has published over 200 papers in these areas, holds over 50 issued patents, and is an author of the textbooks Digital Systems Engineering and Principles and Practices of Interconnection Networks.

Keynote Speech 3

Justin R. Rattner

Senior Fellow and Vice President
Intel Chief Technology Officer

"Bringing Supercomputing to the Masses"

Abstract

After the explosive growth of the Internet, the computer industry is poised for a second transformation, in which supercomputing technology is headed to corporate data centers and desktop PCs, supplying them with unprecedented compute power. The technology that was once used by government research labs, top universities and a few huge corporations will soon be used to transform everyday computing. So what’s holding us back from combining advances in graphics and physics processing with the spread of high-speed Internet connections to create more immersive online worlds with realistic graphics and personalized information spaces? “What if logging onto the Web meant joining a global-scale physical simulation already in progress?”
In his keynote, Intel Chief Technology Officer Justin Rattner will talk about how we can usher in this new revolution by deftly combining the energy-efficiency and performance gains in hardware with advances in software and programming techniques. He will describe Intel’s research programs that are striving to bring this massive scale of computing to the masses. He will address the hardware challenges of scaling multi-core architectures, including the integration of programmable cores and fixed-function accelerators, flexible cache and memory hierarchies, and high-bandwidth on-die networks to ensure high throughput. The talk will complement the hardware discussion with research on parallel programming tools and thread-aware execution environments, and will showcase future applications such as integrated real-time physics and visualization and non-textual media mining. Finally, he will share the challenges of moving to exascale machines by 2018.

Bio

Justin Rattner, 59, is vice president and chief technology officer (CTO) of Intel. He is also an Intel Senior Fellow and head of Intel Labs. In the latter role, he directs Intel's global research efforts in microprocessors, systems, and communications, including the company's disruptive research activity. In 1989, Rattner was named Scientist of the Year by R&D Magazine for his leadership in parallel and distributed computer architecture. In December 1996, Rattner was featured as Person of the Week by ABC World News for his visionary work on the Department of Energy ASCI Red System, the first computer to sustain one trillion operations per second (one teraFLOPS) and the fastest computer in the world between 1996 and 2000. In 1997, Rattner was honored as one of the Computing 200, the 200 individuals having the greatest impact on the U.S. computer industry, and subsequently profiled in the book Wizards and Their Wonders from ACM Press.
Rattner has received two Intel Achievement Awards for his work in high performance computing and advanced cluster communication architecture. He is a member of the executive committee of Intel's Research Council and serves as the Intel executive sponsor for Cornell University, where he is a member of the External Advisory Board for the School of Engineering. Rattner is also a trustee of the Anita Borg Institute for Women and Technology.
Rattner joined Intel in 1973. He was named its first Principal Engineer in 1979 and its fourth Intel Fellow in 1988. Prior to joining Intel, Rattner held positions with Hewlett-Packard Company and Xerox Corporation. He holds B.S. and M.S. degrees in electrical engineering and computer science from Cornell University.