Keynote Speech 1

Michael J. Flynn

Affiliation: Maxeler Corporation and Stanford University, USA

Title: The future is parallel but it may not be easy

Abstract

Processor performance scaling through higher clock frequencies has now hit power limits. The new emphasis on multi-core architectures arises from the failure of frequency scaling, not from breakthroughs in parallel programming or architecture. Progress in automatic compilation of serial programs into multi-tasked ones has been slow. A look at parallel projects of the past illustrates the problems in performance and programmability. Solving these problems requires an understanding of underlying issues such as parallelizing control structures and dealing with the memory bottleneck. For many applications, performance comes at the price of programmability, and reliability comes at the price of performance.
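
To make the power wall concrete: as a back-of-the-envelope sketch (our illustration, using the standard first-order CMOS dynamic-power model, not a formula taken from the talk), dynamic power scales as

% Standard first-order CMOS dynamic-power model (illustrative sketch).
\[
  P_{\mathrm{dyn}} \;\approx\; \alpha \, C \, V_{dd}^{2} \, f ,
\]

where \(\alpha\) is the switching activity factor, \(C\) the switched capacitance, \(V_{dd}\) the supply voltage, and \(f\) the clock frequency. Since the highest usable \(f\) scales roughly with \(V_{dd}\), frequency gains historically drove power up roughly as \(f^{3}\); once supply voltages stopped shrinking, further frequency scaling became power-prohibitive, whereas adding cores at a fixed frequency raises power only about linearly.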

Bio

Michael Flynn is Senior Advisor to the Maxeler Corporation, an acceleration solutions company based in London. He received his Ph.D. from Purdue University and joined IBM, where he worked for ten years in the areas of computer organization and design; he was design manager of the System 360 Model 91 Central Processing Unit. Between 1966 and 1974, Prof. Flynn was a faculty member of Northwestern University and the Johns Hopkins University. From 1975 until 2000, he was a Professor of Electrical Engineering at Stanford University and served as the Director of the Computer Systems Laboratory from 1977 to 1983. He was founding chairman of both the ACM Special Interest Group on Computer Architecture and the IEEE Computer Society's Technical Committee on Computer Architecture. Prof. Flynn was the 1992 recipient of the ACM/IEEE Eckert-Mauchly Award for his technical contributions to computer and digital systems architecture, and the 1995 recipient of the IEEE-CS Harry Goode Memorial Award in recognition of his outstanding contribution to the design and classification of computer architecture. In 1998 he received the Tesla Medal from the International Tesla Society (Belgrade) and an honorary Doctor of Science from Trinity College (University of Dublin), Ireland. He is the author of three books and over 250 technical papers, and he is a fellow of the IEEE and the ACM.

Keynote Speech 2

David Keyes

Affiliation: Fu Foundation Professor, Applied Physics and Applied Mathematics, Columbia University, USA; Acting Director, Institute for Scientific Computing Research, LLNL, USA

Title: Petaflop/s, Seriously

Abstract

Sustained floating-point rates on real applications, as tracked by the Gordon Bell Prize, have increased by over five orders of magnitude from 1988, when 1 Gigaflop/s was reported on a structural simulation, to 2006, when 200 Teraflop/s were reported on a molecular dynamics simulation. Various versions of Moore's Law over the same interval provide only two to three orders of magnitude of improvement for an individual processor; the remaining factor comes from concurrency, which is of order 100,000 for the BlueGene/L computer, the platform of choice for the majority of recent Bell Prize finalists. As the semiconductor industry begins to slip relative to its own roadmap for silicon-based logic and memory, concurrency will play an increasing role in attaining the next order of magnitude, to arrive at the long-awaited milepost of 1 Petaflop/s sustained on a practical application, which should occur around 2009. Simulations based on Eulerian formulations of partial differential equations can be among the first applications to take advantage of petascale capabilities, but not the way most are presently being pursued. Only weak scaling can get around the fundamental limitation expressed in Amdahl's Law, and only optimal implicit formulations can get around another limitation on scaling that is an immediate consequence of Courant-Friedrichs-Lewy stability theory under weak scaling of a PDE. Many PDE-based applications and other lattice-based applications with petascale roadmaps, such as quantum chromodynamics, will likely be forced to adopt optimal implicit solvers. However, even this narrow path to petascale simulation is made treacherous by the imperative of dynamic adaptivity, which drives us to consider algorithms and queueing policies that are less synchronous than those in common use today. Drawing on the SCaLeS report (2003-04), the latest ITRS roadmap, some back-of-the-envelope estimates, and numerical experiences with PDE-based codes on recently available platforms, we will attempt to project the pathway to Petaflop/s for representative applications.
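
As an aside, the two scaling limits just invoked can be written down compactly (standard textbook formulas, not reproduced from the talk). With serial fraction \(s\) and \(P\) processors, Amdahl's Law bounds fixed-problem-size (strong-scaling) speedup, while weak scaling in the Gustafson sense grows the problem with \(P\):

% Amdahl (strong scaling) vs. Gustafson (weak scaling); illustrative sketch.
\[
  S_{\mathrm{strong}}(P) \;=\; \frac{1}{s + (1-s)/P} \;\le\; \frac{1}{s},
  \qquad
  S_{\mathrm{weak}}(P) \;=\; s + (1-s)\,P .
\]

The Courant-Friedrichs-Lewy limitation arises similarly: an explicit scheme for a transport-dominated PDE needs a time step \(\Delta t \lesssim \Delta x\), so weak scaling that refines the mesh by a factor \(k\) per dimension multiplies the number of time steps by roughly \(k\), and time-to-solution grows even though work per processor stays fixed. Implicit formulations, whose stable \(\Delta t\) is not tied to \(\Delta x\), sidestep this constraint, which is why optimal implicit solvers are singled out above.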

Bio

David E. Keyes is the Fu Foundation Professor of Applied Mathematics in the Department of Applied Physics and Applied Mathematics at Columbia University, an affiliate of the Computational Science Center (CSC) at Brookhaven National Laboratory, and Acting Director of the Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory. Keyes graduated summa cum laude with a B.S.E. in Aerospace and Mechanical Sciences and a Certificate in Engineering Physics from Princeton University in 1978, and received his Ph.D. in Applied Mathematics from Harvard University in 1984. He then held a postdoctoral appointment in the Computer Science Department at Yale University and taught there for eight years, as Assistant and Associate Professor of Mechanical Engineering, prior to joining Old Dominion University and the Institute for Computer Applications in Science & Engineering (ICASE) at the NASA Langley Research Center in 1993. At Old Dominion, Keyes was the Richard F. Barry Professor of Mathematics & Statistics and founding Director of the Center for Computational Science. Author or co-author of over 100 publications in computational science and engineering, numerical analysis, and computer science, Keyes has co-edited 10 conference proceedings concerned with parallel algorithms and has delivered over 200 invited presentations at universities, laboratories, and industrial research centers in over 20 countries and 35 states of the U.S. With backgrounds in engineering, applied mathematics, and computer science, and consulting experience with industry and national laboratories, Keyes works at the algorithmic interface between parallel computing and the numerical analysis of partial differential equations, across a spectrum of aerodynamic, geophysical, and chemically reacting flows. Newton-Krylov-Schwarz parallel implicit methods, introduced in a 1993 paper he co-authored at ICASE, are now widely used throughout engineering and computational physics and have been scaled to thousands of processors.

Keynote Speech 3

Vipin Kumar

Affiliation: William Norris Professor; Head of the Computer Science and Engineering Department, University of Minnesota, USA

Title: High Performance Data Mining - Application for Discovery of Patterns in the Global Climate System

Abstract

Advances in technology and high-throughput experimental techniques have resulted in the availability of large data sets in commercial enterprises and in a wide variety of scientific and engineering disciplines. Data sets in the terabyte range are not uncommon today and are expected to reach petabytes in the near future for many application domains in science, engineering, business, bioinformatics, and medicine. This has created an unprecedented opportunity to develop automated data-driven techniques for extracting useful knowledge. Data mining, an important step in this process of knowledge discovery, consists of methods that discover interesting, non-trivial, and useful patterns hidden in data. This talk will provide an overview of data mining research in our group aimed at understanding patterns in the global climate system, and of the computational challenges in addressing them.

Bio

Vipin Kumar is currently William Norris Professor and Head of Computer Science and Engineering at the University of Minnesota. His research interests include high-performance computing and data mining. He has authored over 200 research articles and co-edited or co-authored 9 books, including the widely used textbooks "Introduction to Parallel Computing" and "Introduction to Data Mining", both published by Addison-Wesley. Kumar has served as chair or co-chair for over a dozen conferences and workshops in the areas of data mining and parallel computing. Currently, he serves as chair of the steering committee of the SIAM International Conference on Data Mining and is a member of the steering committee of the IEEE International Conference on Data Mining. Kumar is founding co-editor-in-chief of the journal Statistical Analysis and Data Mining, editor-in-chief of the IEEE Intelligent Informatics Bulletin, and series editor of the Data Mining and Knowledge Discovery book series published by CRC Press/Chapman & Hall. Kumar is a Fellow of the AAAS, ACM, and IEEE. He received the 2005 IEEE Computer Society Technical Achievement Award for contributions to the design and analysis of parallel algorithms, graph partitioning, and data mining.

Keynote Speech 4

Yale Patt

Affiliation: Professor of Electrical and Computer Engineering, Ernest Cockrell, Jr. Centennial Chair in Engineering, University of Texas at Austin, USA

Title: The Transformation Hierarchy in the Era of Multi-Core

Abstract

The transformation hierarchy is the name I have given to the mechanism that converts problems stated in natural language (English, Spanish, Hindi, Japanese, etc.) to the electronic circuits of the computer that actually do the work of producing a solution. The problem is first transformed from a natural language description into an algorithm, then to a program in some mechanical language, then compiled to the ISA of the particular processor, which is implemented in a microarchitecture built out of circuits. At each step of the transformation hierarchy, there are choices. These choices enable one to optimize the process to accommodate some optimization criterion. Usually, that criterion is microprocessor performance. Up to now, optimizations have been done mostly within each of the layers, with artificial barriers in place between the layers. It has not been the case (with a few exceptions) that knowledge at one layer has been leveraged to impact optimization of other layers. I submit that, with the current growth rate of semiconductor technology, this luxury of operating within a transformation layer will no longer be the common case. This growth rate (more than a billion transistors on a chip is now possible) has ushered in the era of the chip multiprocessor. That is, we are entering Phase II of Microprocessor Performance Improvement, where improvements will come from breaking the barriers that separate the transformation layers. In this talk, I will suggest some of the ways in which this will be done.

Bio

Yale Patt is a teacher at The University of Texas at Austin, where he also directs the research of nine PhD students while enjoying an active consulting practice with several microprocessor manufacturers. He teaches the required freshman intro to computing course to 400 first-year students every other fall, and the advanced graduate course in microarchitecture to PhD students every other spring. His research ideas (HPS, branch prediction, etc.) have been adopted by almost every microprocessor manufacturer on practically every high-end chip design of the past ten years. Yale Patt has earned the appropriate degrees from reputable universities and has received more than his share of prestigious awards for his research and teaching. More detail on his interests and accomplishments can be obtained from his web site: www.ece.utexas.edu/~patt

Keynote Speech 5

Prabhakar Raghavan

Affiliation: Head, Yahoo! Research; Consulting Professor, Computer Science Department, Stanford University, USA

Title: Web Search: bridging information retrieval and microeconomic modeling

Abstract

Web search has come to dominate our consciousness as a convenience we take for granted, as a medium for connecting advertisers and buyers, and as a fast-growing revenue source for the companies that provide this service. Following a brief overview of the state of the art and how we got there, this talk covers a spectrum of technical challenges arising in web search, ranging from spam detection to auction mechanisms.
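
Since the talk pairs retrieval with microeconomics, one classic reference point is the generalized second-price (GSP) auction long used for sponsored search (an illustrative aside on our part; the talk's actual treatment of auction mechanisms may differ). If advertisers are ranked by bid and the k-th ranked advertiser wins the k-th slot, the per-click price is the next bid down:

% Unweighted GSP pricing (illustrative; quality-weighted variants also exist).
\[
  p_k \;=\; b_{k+1},
\]

so a winner cannot lower the price of the slot it holds by shading its own bid; still, unlike the single-item second-price auction, GSP is not truthful in general.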

Bio

Prabhakar Raghavan has been Head of Yahoo! Research since 2005. His research interests include text and web mining, and algorithm design. He is a Consulting Professor of Computer Science at Stanford University and Editor-in-Chief of the Journal of the ACM. Raghavan received his PhD from Berkeley and is a Fellow of the ACM and of the IEEE. Prior to joining Yahoo!, he was Chief Technology Officer at Verity; before that he held a number of technical and managerial positions at IBM Research.