Tutorial I

High Performance Computing with CUDA

Presenters: Sanjiv Satoor and Punit Kishore, NVIDIA Corp.



Abstract

NVIDIA CUDA is a general-purpose, scalable parallel programming model for writing highly parallel applications that unleash the massive computing performance of NVIDIA's many-core GPUs. It provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. This model has proven quite successful at programming multi-threaded many-core GPUs and scales transparently to hundreds of cores; reported speed-ups on a range of applications run from 20x to 100x. This tutorial will cover the CUDA architecture and programming model, discuss the CUDA tools and libraries, and give a brief overview of CUDA applications in various domains.
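As a concrete illustration of the abstractions named above (a hierarchy of thread blocks, per-block shared memory, and barrier synchronization), here is a minimal sketch of a block-wise sum reduction. It is written with the Numba CUDA bindings for Python purely for brevity; the tutorial itself presumably uses CUDA C, and the kernel name, block size, and problem size below are all illustrative assumptions.

import numpy as np
from numba import cuda, float32

TPB = 256  # threads per block (illustrative choice)

@cuda.jit
def block_sum(x, partial):
    # One shared-memory tile per thread block.
    tile = cuda.shared.array(TPB, float32)
    tid = cuda.threadIdx.x  # index within the block
    gid = cuda.grid(1)      # global index across the whole grid
    tile[tid] = x[gid] if gid < x.size else float32(0.0)
    cuda.syncthreads()      # barrier: the whole block has loaded its tile
    # Tree reduction within the block, synchronizing at each step.
    stride = TPB // 2
    while stride > 0:
        if tid < stride:
            tile[tid] += tile[tid + stride]
        cuda.syncthreads()
        stride //= 2
    if tid == 0:
        partial[cuda.blockIdx.x] = tile[0]  # one result per block

x = np.ones(1 << 20, dtype=np.float32)
blocks = (x.size + TPB - 1) // TPB
partial = np.zeros(blocks, dtype=np.float32)
block_sum[blocks, TPB](x, partial)  # launch: grid of blocks, TPB threads each
print(partial.sum())                # expected: 1048576.0

Each block cooperates through its shared tile and the cuda.syncthreads() barrier, while blocks themselves run independently; that independence is what lets the same code scale transparently across GPUs with different core counts.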


Tutorial II

Hadoop - Delivering petabyte-scale computing and storage on commodity hardware

Presenters: Yahoo Bangalore Cloud Computing Team



Abstract

Hadoop is a framework for running applications on large clusters built from commodity hardware. Hadoop implements a computational paradigm named Map/Reduce, in which an application is divided into many small units of work, each of which may be executed on any node in the cluster (10,000+ nodes). In addition, it provides a distributed file system (HDFS) that stores data (petabytes) on the compute nodes, providing very high aggregate bandwidth across the cluster. This tutorial presents an overview of Hadoop and then dives under the hood to look at its architecture, implementation, related sub-projects, and case studies of applications being built and deployed on Hadoop.
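To make the Map/Reduce paradigm concrete, here is a toy single-process word count in Python. The function names are illustrative, not Hadoop's API; in a real deployment the map and reduce steps would run on many nodes, with HDFS supplying the input splits and the framework performing the sort/shuffle between them.

from itertools import groupby
from operator import itemgetter

def map_fn(line):
    # Map: emit an intermediate (key, value) pair for each word.
    for word in line.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # Reduce: combine all values that share a key.
    return (word, sum(counts))

lines = ["hadoop runs on commodity hardware",
         "hadoop stores data on the compute nodes"]

# Map phase: each line is an independent unit of work.
intermediate = [pair for line in lines for pair in map_fn(line)]
# Shuffle phase: group intermediate pairs by key (done across nodes by Hadoop).
intermediate.sort(key=itemgetter(0))
# Reduce phase: one call per distinct key.
for word, group in groupby(intermediate, key=itemgetter(0)):
    print(reduce_fn(word, (count for _, count in group)))

Because each map call touches only its own input and each reduce call only its own key group, the units of work can be scheduled on any node, which is what allows the paradigm to scale to clusters of the size the abstract mentions.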


Tutorials Chair

Rajeev Sivaram, Google, USA