Speaker: David Power, Boston Limited
Title: Lessons learned from HPC: The dark art of moving bottlenecks around systems architectures
Bio: With a background in Artificial Intelligence, Deep Learning and Distributed Genetic Algorithms, David started his High Performance Computing career at the University of Limerick. Since then, David has spent over fifteen years working on large-scale HPC systems with a history of bringing award-winning products to market.
Abstract: In this talk we will focus on how HPC architectures map to AI and talk through how to build a balanced system when a large number of GPUs are integrated into the overall architecture.
Speaker: Ashok Bhat, Senior Product Manager, HPC Software, Arm
Title: HPC on Arm
Bio: Ashok Bhat is a Senior Product Manager at Arm in the UK, covering HPC tools on Arm. He works with partners and customers worldwide on their journey to migrate their workloads to Arm.
Abstract: Over the years, there has been steady progress toward Arm being ready for HPC. In November 2018, Astra at Sandia National Laboratories in the USA became the first Arm-based supercomputer to enter the Top500 list. Earlier this year, Fujitsu unveiled details of its A64FX processor, to be used in the upcoming exascale-class Post-K supercomputer in Japan. In this talk, the presenter will provide an overview of HPC on Arm, covering the readiness of both the hardware and software ecosystems for HPC workloads.
Speaker: Kumar Deepak, Xilinx
Bio: Kumar Deepak is a Distinguished Engineer in the Software and IP group at Xilinx with over 20 years of experience in architecting complex software and hardware systems. He currently leads the development of verification tools for the Vivado and SDx software suites. He holds 18 patents in the areas of compilers, simulation, profiling and debugging. He is passionate about and deeply involved in the rollout of SDAccel-based compute acceleration using Xilinx FPGAs. He received his B.S. in Electronics and Communication Engineering from the Indian Institute of Technology, Kharagpur.
Title: Exploiting the Power of FPGA-based Acceleration in the Cloud
Abstract: Over the past ten years cloud computing has seen phenomenal growth in usage, size and diversity of services. Performance-sensitive applications like video conferencing, genomics and artificial intelligence are taking off and placing increasing demands on data centers, which need to scale their resources to accommodate millions of simultaneous users and/or IoT devices.
As a result, cloud services companies are starting to adopt custom compute accelerators that are optimized for specific applications and provide better performance, lower latency and lower power than traditional x86-based servers. FPGAs, with their inherent ability to implement massively parallel customized compute accelerators at much lower latency and power usage than CPUs and GPUs, offer exactly what cloud vendors need. Using adaptable devices, they can achieve an order-of-magnitude performance improvement at reduced power, and can quickly reconfigure the devices to handle the demands of specific workloads.
But FPGAs are notoriously difficult to program, and to fully realize their potential as data center accelerators, new high-level programming methods are required to meet the needs of a diverse set of users, from hardware engineers to algorithm designers to application developers.
This talk will provide some examples of application-level performance challenges, how FPGAs can be used to meet these challenges, and how new high-level programming tools can be used to harness the performance and power efficiency of FPGAs in the cloud.