Industry Birds of a Feather Sessions

The Birds of a Feather schedule and session details for our industry partners are provided below. All registered HiPC attendees are eligible to participate in these sessions.

Watch this page for updates closer to the conference.


Day 2 (December 18th)

1:30-3:30 PM SHELL
  • Simulating and Visualizing Fluid Flow: HPC challenges
  • Numerical modeling of complex flows – An HPC challenge
  • Challenges in Flow and Thermal Response Estimation for Detailed Design
  • Smoothed Particle Hydrodynamics (SPH) - An Innovative CFD Approach and its Computational Challenges
  • HPC bringing WOW factor to Visual world
  • Better physics and throughput in CFD with HPC
4:00-6:00 PM NVIDIA
  • Opening: New product announcement and recent updates
  • CUDA 7 and Beyond
  • Machine Learning with GPUs
7:00-9:00 PM MELLANOX
  • Mellanox EDR - Step into the world of 100Gb/s

Day 3 (December 19th)

1:30-3:30 PM AMD
  • OpenCL 2.0 and OpenMP 4.0, the open source programming frameworks which protect your software investments and ensure a smooth long term transition to SoC solutions
4:00-6:00 PM INTEL
  • Convergence of Big Data and HPC infrastructure via Lustre
  • Lustre PFS Ecosystems: Architecture and Use Cases
  • Performability Comparison of Lustre and HDFS for MR Applications

SHELL


Simulating and Visualizing Fluid Flow: HPC challenges

This session provides a common platform for effective discussion between CFD application engineers and the researchers and developers of HPC hardware and algorithms. In this session we intend to discuss best practices, current challenges, and the way forward for using CFD in an HPC environment to solve and visualize real-life complex fluid flow and heat transfer problems in domains ranging from aerospace and healthcare to oil & gas and many others. Additionally, we want to float the idea of starting an industry-driven research consortium in this area. Below are some of the topics covered in this session:

  • Computational Modeling of Multiphase Flow
  • Computational Modeling of Free Surface Flow
  • Fluid Structure Interaction
  • Aerodynamics
  • Aerothermics
  • Complex Heat Transfer and thermal management
  • Turbulence Modeling
  • Multiphysics Modeling
  • Complex Rheology of Non-Newtonian Fluids
  • CFD for Biomedical applications
  • High Performance computing (HPC) challenges for CFD

Numerical modeling of complex flows – An HPC challenge

Speaker(s):
Anindya Kanti De, Amol Thakre, Ishani Roy (GE Global Research – Bangalore)

The increasing dependence on predictive modeling to guide the design and development of products in a wide range of technology and engineering applications has continuously pushed the development of newer tools and methods that aid detailed simulation of complex physical phenomena. One such area currently going through a steady evolution in modeling methods is complex flows. These flows often involve varying levels of complexity, ranging from multi-phase mixtures (mostly) in the turbulent regime, length and time scales spanning several orders of magnitude, and particulate matter, to flow through deformable walls. Complex physics, high-resolution grids, and fine time scales demand high-performance computing infrastructure. This includes advanced hardware, architectures, and storage, as well as enhanced capabilities in pre-processing, solvers, post-processing, and visualization.

In this presentation, we will focus on some relevant challenges and learnings in the context of two applications: blood flow through a human heart, and multiphase flow-regime transitions in oil and gas applications. Complex flow modeling requires the simultaneous development of advanced algorithms and a steady evolution in processor architecture. Today, many supercomputing facilities are built on heterogeneous architectures that combine different processor types, enabling faster and more accurate computation. Recent developments have necessitated the formulation of novel algorithms that can move across architectures and platforms. The presentation will also briefly highlight two such methods, the Lattice-Boltzmann Method (LBM) and Smoothed Particle Hydrodynamics (SPH), applied to the complex flows mentioned above.

Challenges in Flow and Thermal Response Estimation for Detailed Design

Speaker(s):
Suman Basu (Samsung)

Acceptance of CFD as a design tool has grown fast. Improvements in computational power and parallel computation have enabled CFD modelling of complete products with all their details. However, unsteady computation, needed to predict the response time for a detailed design, remains a stumbling block. Even with the help of HPC, it is difficult at times to complete the computation within the time stipulated by a design process. Only a small fraction of the data generated through such computation is of any use for the design, so the experience of the CFD engineer plays a vital part in deciding which information is important and relevant. Multiphase flow and conjugate heat transfer problems pose a considerable challenge to CFD engineers in this respect. For CFD to be accepted as a design tool by the larger community, these difficulties must be effectively overcome.

Smoothed Particle Hydrodynamics (SPH) - An Innovative CFD Approach and its Computational Challenges

Speaker(s):
Aarthi Thyagarajan (Shell)

Smoothed-particle hydrodynamics (SPH) is a computational method used for simulating fluid flows and solid mechanics. It was initially developed for astrophysical problems and is gaining interest for complex situations such as free surface flows and fluid-structure interaction (FSI) involving large deformation. Other categories of problems where SPH can be used are those involving multi-physics (e.g., multi-phase flow, heat and mass transfer coupled with reaction), multi-scale effects (e.g., mesoscale and macroscale), and irregular or complex solid geometry. In terms of accuracy, SPH results are equivalent to those of grid-based methods for several benchmark cases. In this talk we will briefly address how SPH is advantageous compared to standard, conventional CFD techniques for the aforementioned problems. SPH is a particle-based method, and one of the main challenges in using it is computational efficiency. We will address in detail the scale of capability and computational complexity required to handle some of these problems, and how HPC plays a major role.

HPC bringing WOW factor to Visual world

Speaker(s):
Shajy Thomas (Moving Pictures)

Did you know it would take 445 years to render Godzilla on a single computer?

Did you know that creating Avatar's Pandora virtual world consumed petabytes of storage?

With the ever-rising expectations of viewers and directors, visual effects and animation movie production is becoming more and more challenging. While the real world is working towards sustainability, the reel world is working on showing unrealistic and futuristic dreams.

The technology requirements, whether for high-performance rendering, high-throughput storage, a strong network backplane, or review mechanisms, are increasing very rapidly. Any high-quality VFX or animation movie involves a large HPC cluster computing millions of render hours and millions of tasks to provide the best visual experience. Each project has its own complexities and requirements, which translate into a dynamic workload that must be managed with minimum error rates and optimum utilization.

In this session we will talk about these workflows, the behind-the-scenes HPC contribution that makes them possible, and a few of the complexities and challenges; how HPC is leveraged to address those challenges; how technology brings life to inert 3D models; and how the impossible shots are achieved with a realistic feel: what it takes for Godzilla to break a building, for Spiderman to leap from a tower, or for the Pandora virtual world to be created.

Better physics and throughput in CFD with HPC

Speaker(s):
Mohan Srinivasa (ANSYS)

For a practicing engineer, the charm and promise of HPC is the ability to simulate problems with more realism and detail, faster than was possible before. With examples from several industries, I will discuss how advances in CFD models and HPC have made it possible to gain more reliable results faster for a wide range of problems. I will also discuss how engineers are using advances in HPC to gain better insights and higher productivity with HPC solutions ranging from desktops and high-end workstations to shared compute clusters and enterprise HPC deployments.


NVIDIA


Opening: New product announcement and recent updates

CUDA 7 and Beyond

CUDA, NVIDIA's parallel computing platform and programming model, is extending its reach. CUDA support for GPU computing is expanding to systems based on x86, ARM64, and POWER CPUs, providing a choice of high-performance computing platforms. Programmers can program GPUs natively in the most popular programming languages: C, C++, Fortran, Python, and Java. New CUDA software features like Unified Memory, drop-in libraries, and powerful developer tools make high-performance GPU computing with CUDA easier than ever before. And NVIDIA's future GPU architectures and the NVLink interconnect will provide unprecedented efficiency for heterogeneous computing. This talk will present the latest developments in the NVIDIA CUDA computing platform and ecosystem, giving insight into the philosophy driving the development of CUDA.
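
As a rough illustration of what the Unified Memory feature mentioned above simplifies, the sketch below allocates a single managed buffer that both the CPU and the GPU can access without explicit copies; the array size, kernel, and launch configuration are illustrative assumptions, not material from the talk.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Hypothetical kernel: scale every element in place on the GPU.
    __global__ void scale(float *data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float *data = nullptr;

        // Unified Memory: one allocation visible to host and device,
        // so no explicit cudaMemcpy is needed in either direction.
        cudaMallocManaged(&data, n * sizeof(float));
        for (int i = 0; i < n; ++i) data[i] = 1.0f;      // initialize on the CPU

        scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // compute on the GPU
        cudaDeviceSynchronize();                         // wait before reading on the CPU

        printf("data[0] = %f\n", data[0]);               // expect 2.0
        cudaFree(data);
        return 0;
    }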

Machine Learning with GPUs

The GPU evolved from its humble beginnings as a VGA accelerator into a massively parallel general-purpose processor for heterogeneous computing systems. Driven by an insatiable hunger for more realism in computer graphics, it now executes extensive programs on every pixel more than 60 times a second. In recent times, the rise of the internet across different segments has accelerated the data explosion, a motivating force behind the great success of deep learning algorithms; this creates a big data problem for which heterogeneous high-performance computing is another key enabler of machine learning's success. This talk will cover the challenges in machine learning and the problems they create that can benefit from heterogeneous high-performance computing on GPUs. We will also talk about cuDNN, a GPU-accelerated library of primitives for deep neural networks that emphasizes performance, ease of use, and low memory overhead. NVIDIA cuDNN is designed to be integrated into higher-level machine learning frameworks, such as UC Berkeley's popular Caffe software. The simple, drop-in design allows developers to focus on designing and implementing neural net models rather than tuning for performance, while still achieving the high performance modern parallel computing hardware affords.
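
To give a flavor of the drop-in, primitive-level interface described above, the minimal sketch below simply creates a cuDNN context and a 4D tensor descriptor of the kind that frameworks such as Caffe hand to cuDNN routines; the tensor dimensions are illustrative assumptions, and error checking is omitted for brevity.

    #include <cstdio>
    #include <cudnn.h>

    int main() {
        // A cuDNN handle holds the library context for a GPU/stream.
        cudnnHandle_t handle;
        cudnnCreate(&handle);

        // Describe a batch of activations in NCHW layout:
        // 32 images, 3 channels, 224x224 pixels, single-precision floats.
        cudnnTensorDescriptor_t desc;
        cudnnCreateTensorDescriptor(&desc);
        cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                   32, 3, 224, 224);

        // Frameworks pass descriptors like this, together with device
        // pointers, to cuDNN primitives (convolution, pooling, activation),
        // and cuDNN selects a tuned implementation for the underlying GPU.

        cudnnDestroyTensorDescriptor(desc);
        cudnnDestroy(handle);
        printf("cuDNN context created and destroyed.\n");
        return 0;
    }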


MELLANOX


Mellanox EDR - Step into the world of 100Gb/s

Speaker(s):
Ashrut Ambastha – Staff System Engineer
Tong Lui – Director of Marketing (APAC)

The exponential growth in data and the ever-growing demand for higher performance to serve the requirements of leading scientific applications drive the need for Petascale systems and beyond, and the ability to connect tens of thousands of compute and co-processor nodes in a very fast and efficient way. The interconnect has become the enabler of data movement and of efficient simulations. Beyond throughput and latency, the data center interconnect needs to be able to offload the processing units from the communications work in order to deliver the desired efficiency and scalability. Mellanox demonstrated 100Gb/s cable solutions in March 2014 and announced the world's first 100Gb/s switch at the ISC'14 conference in June 2014. Furthermore, Mellanox has recently introduced the HPC-X software package, which provides a complete solution for MPI and PGAS/SHMEM/UPC environments with smart offloading techniques. The presentation will cover the latest technology and solutions from Mellanox that connect the world's fastest supercomputers, and a roadmap for the next generation of InfiniBand speeds.


AMD


OpenCL 2.0 and OpenMP 4.0, the open source programming frameworks which protect your software investments and ensure a smooth long term transition to SoC solutions

Speaker(s):
Bruno Stefanizzi, Senior Manager Software Development – Open CL
JC Baratault, Senior Business Development Manager – Compute and HPC

SoCs with integrated graphics based on x86 or ARM CPU architectures, like AMD APUs, have serial and parallel cores embedded in one die, sharing the same memory to eliminate time-expensive data transfers. These SoCs are much more power efficient than discrete GPUs plugged onto a PCIe bus and represent the long-term future of HPC. During this AMD BoF session you will learn how open source programming frameworks allow you to protect your legacy code investments and ensure their long-term evolution. With its FirePro S-series, AMD delivers a full line-up of professional graphics accelerators with an impressive sustained performance-per-watt ratio. These boards have been qualified in OEM and ODM servers from Supermicro, Asus, Gigabyte and Tyan. The hardware alone is meaningless if the right software stack is not available to users. This software stack must not only be feature-rich but must also allow users to select the hardware platform matching their real needs over the years. There is no future for proprietary programming frameworks in which users are locked to one supplier and are not free to choose their hardware solution. OpenCL 2.0 and OpenMP 4.0 are two key open source programming frameworks that allow users to obtain the full benefit of existing GPU accelerators while being ready for an effortless transition to SoC solutions over time. AMD experts will explore all of these benefits available to users of OpenCL 2.0 and OpenMP 4.0.


INTEL


Convergence of Big Data and HPC infrastructure via Lustre

Speaker(s):
Paresh Pattani

This talk focuses on how big data problems using Hadoop can be run on existing HPC infrastructure. The approach relies on connectors developed by Intel in the Intel Enterprise Edition of Lustre* Software to (1) replace HDFS with the Lustre parallel file system and (2) replace the YARN resource manager with the HPC-based SLURM scheduler. The advantages of this approach are: (1) using existing HPC infrastructure for Hadoop jobs, thus saving the cost of deploying and maintaining a separate Hadoop cluster; (2) availability of a POSIX-compliant file system, Lustre, instead of HDFS; (3) faster performance compared to HDFS; (4) no 3-way replication of the file system; and (5) the ability to use an HPC scheduler (SLURM), making it easy to integrate into an HPC environment.

Lustre PFS Ecosystems: Architecture and Use Cases

Speaker(s):
Atul Vidwansa

This talk focuses on current Lustre feature development by DDN in conjunction with Intel, along with the architectural advancements necessary to achieve TB/s performance at reasonable cost. It also covers real-life use cases of application acceleration achieved by injecting a burst buffer cache into the Lustre parallel filesystem. We will present application performance on Lustre, with the Lustre Burst Buffer Cache, for applications from engineering, cosmology, and life sciences backgrounds.

Performability Comparison of Lustre and HDFS for MR Applications

Speaker(s):
Rekha Singhal

With its simple principles for achieving parallelism and fault tolerance, the MapReduce framework has captured wide attention, from traditional high performance computing to marketing organizations. The most popular open source implementation of this framework is Hadoop, which works on the Hadoop Distributed File System (HDFS). Traditional high performance computing has always been at the forefront of developing and deploying cutting-edge technology and solutions, such as Lustre, a parallel I/O file system, to meet its ever-growing needs. To support new and upcoming use cases, there is a focus on tighter integration of Hadoop with existing HPC stacks. In this presentation, we share our work on one such integration: we analyze an FSI workload built using the MapReduce framework and evaluate the performance and performability (degraded performance during failures) of the application on an integrated Hadoop and Lustre stack, using Hadoop extensions developed by Intel such as the Hadoop Adapter for Lustre (HAL) and the HPC Adapter for MapReduce (HAM), and compare the performance against the Hadoop Distributed File System (HDFS).