INDUSTRY EXHIBITION
The HiPC-2004 industrial exhibition will run on Monday, December 20th and Tuesday, December 21st.
Interested exhibitors can contact [email protected].
INDUSTRY KEYNOTE SESSION
Monday, December 20, 2:30 pm - 5:30 pm
Chair: Sudheendra Hangal, Sun Microsystems
The Cray Cascade Project
John Feo, Cray Inc. and San Diego Supercomputer Center
In 2002, the US Department of Defense initiated the High Productivity
Computing Systems Project to develop a next generation computer system
capable of sustaining a petaflop. Cray is one of three vendors
funded for Phase II development work. We have proposed Cascade, a
revolutionary new computer system comprising custom processors, a
next-generation interconnection network, and an active memory system.
While still in design, the system is expected to include support for
both heavy-weight threads that exploit high temporal locality and
light-weight threads that exploit high spatial locality. The former
will execute on processors that tolerate memory latencies through a
combination of multithreading, vector, and stream processing. The
latter may execute in an active memory system with PIM-like
characteristics that may also be multithreaded to tolerate memory
latencies. The interconnection network may be a symmetric Cayley graph
network capable of high-bandwidth, low-latency communication. Memory
will be physically distributed, but shared.
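The contrast between the two thread classes is easiest to see in code.
Below is a minimal sketch in plain C of the two access patterns
(hypothetical kernels, not Cascade code or its programming interface):
a matrix-vector product whose operand vector is reused across every row
(high temporal locality, heavy-weight work), and a streaming scale that
touches each word exactly once in address order (high spatial locality
with no reuse, light-weight work).

    /* Illustrative access patterns only; not Cascade code or its API. */
    #include <stdio.h>

    #define N 512

    static double a[N][N], x[N], y[N];     /* matrix-vector operands */
    static double src[N * N], dst[N * N];  /* streaming operands */

    /* High temporal locality: every x[j] is reused for all N rows, so a
     * heavy-weight processor with caches, vector units, and latency-
     * hiding multithreading is a good match. */
    static void matvec(void)
    {
        for (int i = 0; i < N; i++) {
            double sum = 0.0;
            for (int j = 0; j < N; j++)
                sum += a[i][j] * x[j];     /* x reused N times */
            y[i] = sum;
        }
    }

    /* High spatial locality, no reuse: each word is touched once, in
     * address order, so caches gain little; light-weight threads running
     * in a PIM-style active memory system suit this kind of work. */
    static void scale(void)
    {
        for (int i = 0; i < N * N; i++)
            dst[i] = 2.0 * src[i];         /* single-touch streaming */
    }

    int main(void)
    {
        for (int i = 0; i < N; i++) {
            x[i] = 1.0;
            for (int j = 0; j < N; j++)
                a[i][j] = 1.0;
        }
        for (int i = 0; i < N * N; i++)
            src[i] = 1.0;
        matvec();
        scale();
        printf("y[0] = %.0f, dst[0] = %.0f\n", y[0], dst[0]); /* 512, 2 */
        return 0;
    }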
A sophisticated programming environment is proposed to help
application programmers automatically exploit the machine's unique
processing capabilities. We expect that the global shared memory and
the hardware's ability to tolerate memory latencies when executing
either heavy- or light-weight threads will eliminate many of the
programming challenges confronting scientific application developers
today.
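As a rough illustration of the difference this makes, consider a generic
shared-memory sketch in C with POSIX threads (Cascade's actual
environment is still being designed and is not shown here): every
thread simply indexes one global array, with no message passing and no
manual partitioning of data across nodes. On Cascade, the hardware's
latency tolerance is what would make such direct loads efficient even
when the addressed memory is physically remote.

    /* Generic shared-memory sketch (POSIX threads), not Cascade code.
     * Build with: cc sum.c -lpthread  (sum.c is a placeholder name) */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    #define NTHREADS 4

    static double data[N];            /* one global address space */
    static double partial[NTHREADS];  /* per-thread results */

    static void *sum_part(void *arg)
    {
        long t = (long)arg;
        double s = 0.0;
        /* Each thread indexes the shared array directly; there are no
         * send/receive pairs and no explicit data distribution. */
        for (long i = t; i < N; i += NTHREADS)
            s += data[i];
        partial[t] = s;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        for (long i = 0; i < N; i++)
            data[i] = 1.0;
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, sum_part, (void *)t);
        double total = 0.0;
        for (long t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += partial[t];
        }
        printf("sum = %.0f\n", total);   /* expect 1000000 */
        return 0;
    }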
In this talk, I will present the design goals for Cascade and describe
the architecture and programming environment as they are currently
envisioned.
A Convergence of Computing Paradigms
Reza Rooholamini, Director of Enterprise Solutions, Dell Product Group
High Performance Computing (HPC) has traditionally solved technical
problems within the research community using proprietary systems.
Standardization of hardware and software has afforded us an alternative
to this proprietary approach by replacing them with clusters built from
standard hardware and software building blocks. A cursory look at
the current Top500 list of the fastest supercomputers reveals that
291 of its entries are based on clustering technology. Within our
market segments, we have seen a growing number of cluster deployments
in Oil and Gas, Pharmaceuticals, Manufacturing, Finance, Weather
Modeling, Life Sciences, Entertainment, Government, and Academia.
In this talk, we
present the evolutionary steps in HPC to date, provide a few customer
case studies, identify three enablers for this paradigm (namely the
emergence of Linux as a viable operating system for the enterprise,
standardization of hardware and software building blocks, and adoption
of the "scale-out" architecture in the enterprise), and conclude by
observing that the boundaries between technical computing and
business computing are blurring, leading to our vision of a Scalable
Enterprise for our customers.
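One concrete sign of this standardization is that the basic software
building block is the same on every such cluster: a portable MPI
program, compiled with commodity tools, runs unchanged across vendors.
A minimal sketch using only standard MPI calls (hello.c is a
placeholder file name):

    /* Minimal MPI program: the same source runs on any standards-based
     * cluster, regardless of vendor. Build with: mpicc hello.c */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
        printf("node %d of %d reporting\n", rank, size);
        MPI_Finalize();
        return 0;
    }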
Instruction Sets, Operating Environments and Grids - Where Are We Going?
Frank Baetke, Global HPC-Technology Program Manager, Hewlett Packard
In recent years we have seen a dramatic change in the area of
high performance computing, primarily driven by the invasion of
commodity components at the processor, node and interconnect level. As
a consequence, the majority of high-performance computing architectures
can be put into very few categories and the number of vendors active in
high-performance computing is declining rather than increasing.
A trend towards consolidation can also be observed in the area of
operating systems. Very few professional Unix environments will
survive. Linux continues to carry the torch of the open-source
philosophy and has captured remarkable segments of professional
environments.
A similar trend can be observed at the level of application software
and again the number of supported operating environments - the
combination of an operating system and an instruction set - will
decline rather than increase.
Beyond the level of operating systems, Grids are emerging as a new
paradigm. Again, we are seeing a trend towards consolidation and a more
realistic view of remaining issues and future potential.
High Performance Computing Based on Intel Architecture
David S. Scott, APAC Technical Director for HPC, Intel Corporation
The momentum behind using Intel-based systems for High Performance
Computing continues to increase. Over half of the systems on the
June Top500 list were based on Intel architecture, including 61 based on
Itanium processors. Those numbers are expected to increase again
when the next list is released at Supercomputing 2004 in November.
This shift to commodity processors in HPC systems is driven by
several factors. The availability of Intel-based systems from
multiple vendors prevents users from being locked into proprietary
solutions. The absolute performance and price/performance of COTS
processors driven by Moore's law becomes ever more compelling.
The increasing availability of tuned software for the oil
industry, manufacturing, life sciences, and many other application
areas has also contributed. Finally, the availability of powerful and
sophisticated software development tools has made these systems ideally
suited to those researchers who are developing their own codes.
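To put rough numbers on the price/performance argument, here is a
back-of-the-envelope sketch using the commonly quoted 18-24 month
doubling period for Moore's law (illustrative arithmetic, not an Intel
projection):

    /* Moore's-law compounding: performance per dollar roughly doubles
     * every T months. Build with: cc moore.c -lm  (placeholder name) */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double months = 60.0;              /* a five-year horizon */
        for (double T = 18.0; T <= 24.0; T += 6.0) {
            double factor = pow(2.0, months / T);
            printf("doubling every %.0f months: %.1fx in 5 years\n",
                   T, factor);                   /* ~10.1x and ~5.7x */
        }
        return 0;
    }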
This talk will look at the hardware and software technologies that have
made Intel-based systems ideal for high-performance computing. It
will describe the technologies that will be coming in the future and
the success of several customers in India and Asia.
Compute Cluster on Windows: The Next-Generation Integrated Platform for HPC Solutions
Saptak Sen and Perraju Bendapudi, Microsoft
High Performance Computing (HPC) has traditionally solved technical
problems within the research community using fragile bindings of
non-integrated systems. The future of HPC lies in integrated workflows,
where server clusters and desktops will seamlessly process complex
parallel, distributed, and data-driven computations across a large
network of solutions. The user will be able to utilize best-of-breed
computational models to create complex, multi-stage simulations residing
anywhere within an enterprise or the Internet.
Microsoft will be delivering a new Windows edition next year targeted at the HPC
market. The product is designed to create a "personal supercomputing"
solution - a great out-of-the-box experience - surrounded by a wide
ecosystem of partners, products, and services to maximize business
value. Never before have the developer, IT professional, and user
experiences been so uniformly effortless.
Presentation slides for all the keynotes will be available from this
website after the event.