HPC-3-use-openmp(shared-memory-method)

Introduction to HPC: shared-memory parallelism using OpenMP. 1 The multicore system. The relationship with the L1-L3 caches: the L3 cache is shared, but every core has its own L1 and L2 caches. 2 Using OpenMP. #include "omp.h" Before using it, we need to define how many threads we want to use. On a Unix system: export OMP_NUM_THREADS=4 The directive: #pragma omp parallel If we put this pragma before one line of code or one block, the line or block will be executed by $OMP_NUM_THREADS threads. ...

April 16, 2024 · 4 min · 684 words · Xi Chen

HPC-1-divide-and-conquer-block-matrix-algorithm

week 2 block matrix algorithm 1. BLIS reference high-performance implementation vs. naive methods. 2. With different block sizes: this run uses MB = NB = PB = 40. But if the block size is too small, the performance is not as good as the naive PJI ordering. The order of the outer loop nest (JIP) does not affect the performance of the algorithm, because the computer spends its time inside each block computation. That means the register-level optimization targets the innermost loop nest, the Gemm_JPI function, and does not parallelize or optimize the outer loops of the block matrix-matrix multiplication. ...

April 15, 2024 · 9 min · 1852 words · Xi Chen

HPC-2-Memory-hierarchy-in-computer

Memory Hierarchy 1. Why use a memory hierarchy Because registers are much faster than main memory; in fact, the difference is about two orders of magnitude. And the performance gap keeps growing, because CPU speed increases faster than main-memory speed. In this situation, if we fetch data from main memory too many times, the cost becomes very high. So we add a level of memory that is faster than main memory but a little slower than the registers. We call it a cache. ...

April 15, 2024 · 15 min · 3002 words · Xi Chen