Optimize/tune a sequential code for square matrix multiply (a.k.a. dgemm: C = C + A*B, where A, B, and C have dimension m by m). Explain the optimizations you used and the performance characteristics you observed.
Matrix multiplication is a basic building block in many scientific computing codes. Since it is an O(m^3) algorithm, these codes often spend much of their time in matrix multiplication. The most naive code to multiply matrices is short, simple, and very slow:
    for i = 1 to m
      for j = 1 to m
        for k = 1 to m
          C(i,j) = C(i,j) + A(i,k) * B(k,j)
        end
      end
    end

We're providing implementations of the trivial unoptimized matrix multiply code in C and Fortran, and a very simple blocked implementation in C. We're also providing a wrapper for the BLAS (Basic Linear Algebra Subroutines) library. The BLAS is a standard interface for building blocks like matrix multiplication, and many vendors provide optimized versions of the BLAS for their platforms. You may want to compare your routines to codes that others have optimized; if you do, think about the relative difficulty of using someone else's library versus writing and optimizing your own routine. Knowing when and how to make use of standard libraries is an important skill in building fast programs!
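For concreteness, here is a minimal sketch of what that triple loop looks like in C. The function name, argument order, and column-major layout (matching Fortran/BLAS conventions) are assumptions; the provided basic_dgemm.c may differ in these details.

```c
/* Naive dgemm sketch: C = C + A*B, all matrices m-by-m, column-major.
 * The name and interface here are illustrative, not the provided code's. */
void naive_dgemm(int m, const double *A, const double *B, double *C)
{
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < m; ++j) {
            double cij = C[i + j * m];               /* C(i,j) */
            for (int k = 0; k < m; ++k)
                cij += A[i + k * m] * B[k + j * m];  /* A(i,k) * B(k,j) */
            C[i + j * m] = cij;
        }
}
```

Note that in this layout the innermost loop walks through A with stride m (a new column every iteration), which is one reason the naive code interacts so badly with the cache.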
The unblocked routines we provide do as poorly as you'd expect, and the blocked routine provided doesn't do much better.
The performance of the two routines is shown in the figure below. The naive unblocked routine is the "blocksize=1" curve, and is just the standard three nested loops. The blocked code (block size of 30, selected as the "best" in an exhaustive search of square tile sizes up to blocksize=256) is shown in green. The peak performance of the machine used is 667 MFlop/s, and both implementations achieve less than a quarter of that. However, the unblocked routine's performance completely dies on matrices larger than 255 x 255, while the blocked routine gives more consistent performance for all the tested sizes. Note the performance dips at powers of 2 (cache conflicts). The machine used for this diagram has a 2 Mbyte external cache, and 3 matrices * 256x256 entries * 8 bytes/entry is 1.5 Mbytes; with the additional overhead in both operation count and memory usage, that is roughly the point at which the working set stops fitting in the cache.
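As a rough back-of-the-envelope check (the L1 cache sizes below are assumptions for illustration, not measurements of the machine above): three b-by-b tiles of doubles fit in a cache of S bytes when

    3 * b^2 * 8 bytes <= S   =>   b <= sqrt(S / 24)

For the 2 Mbyte external cache this gives b up to about 295, consistent with the full 256x256 problem (just barely) fitting; for a hypothetical 16-32 Kbyte L1 cache it gives b in the range of roughly 26-37, which is in the same ballpark as the empirically chosen block size of 30.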
[Figure: MFlop/s vs. matrix dimension for the unblocked (blocksize=1) and blocked (blocksize=30) routines]
You may start with the following subroutines. You should get a lot of improvement from blocking/tiling (a minimal sketch is given after this list), but you may want to play with other optimizations after doing some independent reading.
- Basic (trivial, unoptimized) matrix multiply code in C: basic_dgemm.c
- Basic (trivial, unoptimized) matrix multiply code in Fortran: basic_fdgemm.f
- A very simple blocked implementation in C: blocked_dgemm.c
- A wrapper for calling the BLAS library: wrap_dgemm.c
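Here is a minimal sketch of the blocking/tiling idea, assuming the same column-major layout as the naive sketch above and a hypothetical BLOCK_SIZE constant; the provided blocked_dgemm.c and your own my_dgemm.c will differ in details such as loop order, edge handling, and block size.

```c
#define BLOCK_SIZE 30   /* hypothetical tile size; tune it for your cache */

/* Multiply one bi-by-bk tile of A by one bk-by-bj tile of B, accumulating
 * into a bi-by-bj tile of C.  Matrices are m-by-m, column-major. */
static void do_block(int m, int bi, int bj, int bk,
                     const double *A, const double *B, double *C)
{
    for (int j = 0; j < bj; ++j)
        for (int k = 0; k < bk; ++k) {
            double bkj = B[k + j * m];               /* B(k,j) within the tile */
            for (int i = 0; i < bi; ++i)
                C[i + j * m] += A[i + k * m] * bkj;
        }
}

/* Blocked dgemm sketch: C = C + A*B, looping over BLOCK_SIZE tiles so that
 * the three tiles currently being worked on stay resident in cache. */
void blocked_dgemm(int m, const double *A, const double *B, double *C)
{
    for (int jj = 0; jj < m; jj += BLOCK_SIZE)
        for (int kk = 0; kk < m; kk += BLOCK_SIZE)
            for (int ii = 0; ii < m; ii += BLOCK_SIZE) {
                int bi = (m - ii < BLOCK_SIZE) ? m - ii : BLOCK_SIZE;
                int bj = (m - jj < BLOCK_SIZE) ? m - jj : BLOCK_SIZE;
                int bk = (m - kk < BLOCK_SIZE) ? m - kk : BLOCK_SIZE;
                do_block(m, bi, bj, bk,
                         A + ii + kk * m,   /* tile of A at (ii,kk) */
                         B + kk + jj * m,   /* tile of B at (kk,jj) */
                         C + ii + jj * m);  /* tile of C at (ii,jj) */
            }
}
```

The j-k-i order inside each tile keeps the innermost loop striding through A and C with unit stride in this layout; experimenting with loop orders and block sizes is exactly the kind of tuning this assignment is about.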
Submit your homework report (or send me a link to your homework webpage). The write-up should
- explain the optimization techniques you used or attempted,
- try to account for any odd behavior in performance (e.g., dips), and for how the same kernel performs on a different hardware platform,
- show the results of your optimizations, including a graph comparing your ``my_dgemm.c'' with ``basic_dgemm.c'' (a simple timing sketch is given after this list), and
- try to explain any poor performance you observe; your explanation should rely heavily on knowledge of the memory hierarchy.
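A minimal sketch of how the MFlop/s numbers for such a graph might be measured is shown below. The square_dgemm name, the range of sizes, and the timer choice are assumptions for illustration; the 2*m^3 flop count is the usual convention for dgemm.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Assumed interface; link against basic_dgemm, blocked_dgemm, or my_dgemm. */
void square_dgemm(int m, const double *A, const double *B, double *C);

int main(void)
{
    /* Illustrative sweep of matrix sizes; compile with optimization,
     * e.g. gcc -O2 or -O3 (see the gcc manual). */
    for (int m = 32; m <= 512; m += 32) {
        double *A = malloc(sizeof(double) * m * m);
        double *B = malloc(sizeof(double) * m * m);
        double *C = malloc(sizeof(double) * m * m);
        for (int i = 0; i < m * m; ++i) {
            A[i] = rand() / (double)RAND_MAX;
            B[i] = rand() / (double)RAND_MAX;
            C[i] = 0.0;
        }

        clock_t start = clock();
        square_dgemm(m, A, B, C);
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

        /* dgemm performs 2*m^3 flops (one multiply and one add per term). */
        double mflops = 2.0 * m * m * m / seconds / 1e6;
        printf("%d %.1f\n", m, mflops);

        free(A); free(B); free(C);
    }
    return 0;
}
```

For small m a single multiply finishes too quickly to time reliably, so in practice you would repeat the call several times inside the timed region and divide by the repetition count.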
- Video segments 53:15--1:22:07 of Demmel's 1/19/2012 lecture and 1:14--28:00 of his 1/24/2012 lecture on optimizing matrix-matrix multiplication
- An early reference on the idea of cache blocking in the context of matrix multiply.
- BeBOP: a performance tuning and benchmarking project
- Need information on gcc compiler flags? Read the manual.