Progressive CUDA GEMM Optimization: From Memory-Bound to Swizzling

A step-by-step guide to optimizing FP32 CUDA GEMM kernels. Learn how to overcome warp-level memory-coalescing bottlenecks, eliminate 32-way shared memory bank conflicts with memory padding, and implement zero-waste XOR address swizzling.
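To make the bank-conflict claim concrete, here is a minimal sketch (in plain Python, not CUDA) of the address arithmetic involved. It assumes the standard CUDA shared memory model of 32 four-byte-wide banks and a row-major 32x32 float tile; the `bank` helper and tile size are illustrative, not taken from the article:

```python
# Assumption: CUDA shared memory has 32 banks, one 4-byte word per bank,
# with consecutive words mapped to consecutive banks.
TILE = 32
BANKS = 32

def bank(word_index):
    # Bank serving a given 4-byte word of shared memory.
    return word_index % BANKS

# Naive row-major layout: a warp reading column c touches word r*TILE + c
# for r = 0..31. All 32 accesses land in the same bank.
col = 5
naive_banks = {bank(r * TILE + col) for r in range(TILE)}
print(len(naive_banks))     # 1 -> a 32-way bank conflict

# XOR swizzle: store element (r, c) at column c ^ r instead of c.
# Storage footprint is unchanged (no padding), but a column read now
# touches 32 distinct banks, because c ^ r is a permutation of 0..31.
swizzled_banks = {bank(r * TILE + (col ^ r)) for r in range(TILE)}
print(len(swizzled_banks))  # 32 -> conflict-free
```

Padding (e.g. a 32x33 tile) buys the same conflict-free column access at the cost of one wasted word per row; the XOR swizzle is "zero-waste" because it only permutes where elements live inside the existing tile.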

Demystifying FlashAttention: Forward, Backward, and Triton Implementation

A breakdown of FlashAttention’s forward and backward passes, including Online Softmax, LogSumExp materialization, gradient recomputation, and core Triton implementations.
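The Online Softmax idea mentioned above can be sketched in a few lines of plain Python (a single-pass scalar version, not the blocked Triton kernel): keep a running max `m` and running denominator `l`, rescaling `l` whenever a larger score appears. The function name and interface here are illustrative:

```python
import math

def online_softmax(scores):
    # One pass over the scores, never materializing the full exponentials.
    m = float("-inf")  # running max
    l = 0.0            # running sum of exp(x - m)
    for x in scores:
        m_new = max(m, x)
        # Rescale the old sum to the new max before adding the new term.
        l = l * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # The backward pass only needs the LogSumExp L = m + log(l),
    # from which p_i = exp(x_i - L) can be recomputed.
    return [math.exp(x - m) / l for x in scores]

print(sum(online_softmax([1.0, 3.0, 2.0])))  # 1.0 (up to float rounding)
```

This is exactly why FlashAttention can materialize only the LogSumExp per row: storing `m + log(l)` is enough to reproduce the attention probabilities during the backward pass.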

Deep Dive into Triton GEMM Optimization: From Naive Tiling to Hopper TMA

A step-by-step guide to optimizing GEMM in Triton, covering Tiling, Autotuning, L2 Cache Optimizations, and Hopper TMA.
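The tiling scheme that guide starts from can be illustrated with a plain-Python blocked matmul (a toy sketch of the loop structure, not Triton code; block sizes `BM`/`BN`/`BK` are illustrative stand-ins for `BLOCK_M`/`BLOCK_N`/`BLOCK_K`):

```python
def tiled_matmul(A, B, M, N, K, BM=2, BN=2, BK=2):
    # C = A @ B, computed one BM x BN output tile at a time, accumulating
    # over K in BK-sized steps. In a real GEMM kernel each (m0, n0) tile is
    # one program/thread block, and the BK slices of A and B it touches are
    # staged in shared memory (or registers) for reuse.
    C = [[0.0] * N for _ in range(M)]
    for m0 in range(0, M, BM):            # one output tile per (m0, n0)
        for n0 in range(0, N, BN):
            for k0 in range(0, K, BK):    # accumulate along the K axis
                for m in range(m0, min(m0 + BM, M)):
                    for n in range(n0, min(n0 + BN, N)):
                        for k in range(k0, min(k0 + BK, K)):
                            C[m][n] += A[m][k] * B[k][n]
    return C
```

The outer (m0, n0) loop order is also where L2-cache "grouped" launch ordering applies: visiting output tiles in an order that reuses recently loaded A/B slices keeps them hot in L2.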