Computer Science Seminar: Dr. Jee Choi from IBM TJ Watson
Host: Zhaojun Bai
When: Thursday, March 23rd at 3:10pm
Where: 1131 Kemper Hall
Title: High-Performance Tensor Decomposition for Data Analytics and Co-design
Many social and scientific domains give rise to data with multi-way relationships that can naturally be represented by tensors, or multi-dimensional arrays. Decomposing – or factoring – tensors can reveal latent properties that are otherwise difficult to see. However, because tensor decomposition has only recently gained popularity in HPC, the challenges of optimizing its performance are poorly understood. In this presentation, I will explain the steps taken to identify and isolate the major bottlenecks in tensor decomposition algorithms, and demonstrate significant speedups over the prior state of the art using various cache blocking mechanisms. I will also present our first-cut performance model for tensor decomposition, which paves the way for future work on a composable autotuning framework for building faster and more versatile tensor decomposition libraries, and on algorithm-architecture co-design for faster data analytics hardware.
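The talk's specific optimizations are not reproduced here, but the kind of factorization it concerns can be sketched in a few lines. The following is an illustrative NumPy implementation (all function names are my own, not the speaker's code) of the canonical polyadic (CP) decomposition via alternating least squares, whose matricized-tensor-times-Khatri-Rao-product step is the memory-bound kernel typically targeted by cache blocking:

```python
import numpy as np

def unfold(T, mode):
    """Matricize a 3-way tensor T along the given mode (C-order columns)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of A (I x R) and B (J x R) -> (I*J x R)."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cp_als(T, rank, n_iter=500, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor via alternating least squares.

    Returns factor matrices [A, B, C] such that
    T[i,j,k] ~= sum_r A[i,r] * B[j,r] * C[k,r].
    """
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((dim, rank)) for dim in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            # The two factor matrices held fixed in this sub-step.
            others = [factors[m] for m in range(3) if m != mode]
            # MTTKRP: unfolded tensor times Khatri-Rao of the other factors.
            kr = khatri_rao(others[0], others[1])
            # Gram matrix of the Khatri-Rao product, computed cheaply as an
            # elementwise product of the small R x R Gram matrices.
            G = (others[0].T @ others[0]) * (others[1].T @ others[1])
            factors[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(G)
    return factors
```

For example, a tensor built from known rank-2 factors with `np.einsum('ir,jr,kr->ijk', A, B, C)` is recovered by `cp_als(T, 2)` to high accuracy; on real data the factors expose the latent structure the abstract refers to.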
Jee Choi is a postdoctoral researcher at IBM TJ Watson. He received his PhD in Electrical and Computer Engineering from Georgia Tech, where he worked on all things HPC. His work on auto-tuning sparse matrix-vector multiplication for GPUs is one of the most cited papers in its area, and his PhD dissertation on energy and power modeling for HPC applications was one of the first to directly connect algorithmic properties to architectural parameters for energy and power. His latest endeavor is optimizing tensor decomposition algorithms for data analytics, as part of a larger co-design project at IBM to design the next-generation data processing system.