Does _mm_clflush really flush the cache?

Date: 2018-09-26 20:52:58

Tags: linux x86-64 cpu-architecture cpu-cache cache-locality

I am trying to understand how the hardware cache works by writing and running a test program:

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

#define LINE_SIZE   64

#define L1_WAYS     8
#define L1_SETS     64
#define L1_LINES    512

// 32K memory for filling in L1 cache
uint8_t data[L1_LINES*LINE_SIZE];

int main()
{
    volatile uint8_t *addr;
    register uint64_t i;
    unsigned int junk = 0;      // __rdtscp() takes an unsigned int *
    register uint64_t t1, t2;

    printf("data: %p\n", data);

    //_mm_clflush(data);
    printf("accessing 16 bytes in a cache line:\n");
    for (i = 0; i < 16; i++) {
        t1 = __rdtscp(&junk);
        addr = &data[i];
        junk = *addr;
        t2 = __rdtscp(&junk) - t1;
        printf("i = %2d, cycles: %ld\n", i, t2);
    }
}

I ran the code with and without _mm_clflush, and the results show that with _mm_clflush the first memory access is actually faster.

With _mm_clflush:

$ ./l1
data: 0x700c00
accessing 16 bytes in a cache line:
i =  0, cycles: 280
i =  1, cycles: 84
i =  2, cycles: 91
i =  3, cycles: 77
i =  4, cycles: 91

Without _mm_clflush:

$ ./l1
data: 0x700c00
accessing 16 bytes in a cache line:
i =  0, cycles: 3899
i =  1, cycles: 91
i =  2, cycles: 105
i =  3, cycles: 77
i =  4, cycles: 84

It just doesn't make sense that flushing the cache line makes the first access faster. Can anyone explain why this happens? Thanks.

---------------- Further experiments -------------------

Let's assume the 3899 cycles were caused by a TLB miss. To verify my understanding of cache hits and misses, I modified the code a bit to compare the memory access time of an L1 cache hit with that of an L1 cache miss.

This time, the second loop skips ahead by the cache line size (64 bytes) on each iteration, so every access lands in a different cache line.

*data = 1;
_mm_clflush(data);
printf("accessing 16 bytes in a cache line:\n");
for (i = 0; i < 16; i++) {
    t1 = __rdtscp(&junk);
    addr = &data[i];
    junk = *addr;
    t2 = __rdtscp(&junk) - t1;
    printf("i = %2d, cycles: %ld\n", i, t2);
}

// Invalidate and flush the cache line that contains p from all levels of the cache hierarchy.
_mm_clflush(data);
printf("accessing 16 bytes in different cache lines:\n");
for (i = 0; i < 16; i++) {
    t1 = __rdtscp(&junk);
    addr = &data[i*LINE_SIZE];
    junk = *addr;
    t2 = __rdtscp(&junk) - t1;
    printf("i = %2d, cycles: %ld\n", i, t2);
}

My machine has an 8-way set-associative L1 data cache with 64 sets, 32 KB in total. If I access memory every 64 bytes, every access should miss the cache. But it seems that many of the cache lines have already been cached:

$ ./l1
data: 0x700c00
accessing 16 bytes in a cache line:
i =  0, cycles: 273
i =  1, cycles: 70
i =  2, cycles: 70
i =  3, cycles: 70
i =  4, cycles: 70
i =  5, cycles: 70
i =  6, cycles: 70
i =  7, cycles: 70
i =  8, cycles: 70
i =  9, cycles: 70
i = 10, cycles: 77
i = 11, cycles: 70
i = 12, cycles: 70
i = 13, cycles: 70
i = 14, cycles: 70
i = 15, cycles: 140
accessing 16 bytes in different cache lines:
i =  0, cycles: 301
i =  1, cycles: 133
i =  2, cycles: 70
i =  3, cycles: 70
i =  4, cycles: 147
i =  5, cycles: 56
i =  6, cycles: 70
i =  7, cycles: 63
i =  8, cycles: 70
i =  9, cycles: 63
i = 10, cycles: 70
i = 11, cycles: 112
i = 12, cycles: 147
i = 13, cycles: 119
i = 14, cycles: 56
i = 15, cycles: 105

Is this caused by prefetching? Or is there something wrong with my understanding? Thanks.

3 answers:

Answer 0 (score: 1)

I guess this may be caused by a TLB miss at the beginning? Does _mm_clflush actually bring the translation for this virtual address into the TLB, so that I could be right about this? How can it be proved?
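
One way to test this would be to warm the page first without touching the line being timed. A minimal sketch, reusing data, addr, junk, t1 and t2 from the question's code (my illustration, with an arbitrary warm-up offset that stays inside the same 4 KiB page):

data[LINE_SIZE * 4] = 1;     // touch a different line in the same page:
                             // pages it in and warms the TLB entry
_mm_mfence();                // wait for the warming store to complete
t1 = __rdtscp(&junk);
addr = &data[0];             // now time the first access to the untouched line
junk = *addr;
t2 = __rdtscp(&junk) - t1;
printf("first access after warming the page: %lu cycles\n", t2);

If this prints a few hundred cycles (a plain cache miss) instead of ~3899, the large number came from the page fault / TLB fill rather than from the cache.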

Answer 1 (score: 1)

I added an extra reference to data before the line _mm_clflush(data), and it shows that clflush really does flush the cache line. The modified code (shown here with the clflush commented out again, i.e. the "without clflush" version):


#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

#define LINE_SIZE   64
#define L1_WAYS     8
#define L1_SETS     64
#define L1_LINES    512

// 32K memory for filling in L1 cache
uint8_t data[L1_LINES*LINE_SIZE];

int main()
{
    volatile uint8_t *addr;
    register uint64_t i;
    unsigned int junk = 0;      // __rdtscp() takes an unsigned int *
    register uint64_t t1, t2;

    printf("data: %p", data);
    data[0] = 1;
    //_mm_clflush(data);

    printf("accessing 16 bytes in a cache line:\n");
    for (i = 0; i < 16; i++) {
        t1 = __rdtscp(&junk);
        addr = &data[i];
        junk = *addr;
        t2 = __rdtscp(&junk) - t1;
        printf("i = %2d, cycles: %ld\n", i, t2);
    }
}

I ran the modified code on my machine (Intel(R) Core(TM) i5-8500 CPU); without the clflush it prints:

data: 0000000000407980
accessing 16 bytes in a cache line:
i =  0, cycles: 64
i =  1, cycles: 46
i =  2, cycles: 49
i =  3, cycles: 48
i =  4, cycles: 46

Answer 2 (score: 1)

Without the clflush, the first load takes about 3899 cycles, which is roughly the time it takes to handle a minor page fault. rdtscp serializes the load operations, which ensures that all of the later loads to the same line hit in the L1 cache. Now, when you add the clflush just before the loop, the page fault is triggered and handled outside the loop. When the page fault handler returns and clflush is re-executed, the target cache line gets flushed. On Intel processors, rdtscp ensures that the line has been flushed before the first load in the loop is issued. Therefore, the first load misses in all levels of the cache hierarchy and its latency will be about that of a memory access. Just like in the previous case, the later loads are serialized by rdtscp, so they all hit in the L1D.
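
As a side note, the ordering that rdtscp provides can be made explicit with fences. A minimal sketch of a single serialized load measurement using __rdtsc and _mm_lfence (an illustration of the same idea, not code from the original answer):

_mm_mfence();                // drain earlier stores, including a pending clflush
_mm_lfence();                // keep the timed region from starting early
t1 = __rdtsc();
junk = *addr;                // the load being measured
_mm_lfence();                // wait until the load has completed
t2 = __rdtsc() - t1;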

Even taking the overhead of rdtscp into account, though, the measured L1D hit latency is too high. Did you compile with -O3?
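
A quick way to check how much of that is harness overhead is to time two back-to-back __rdtscp calls with nothing in between; a sketch, reusing the question's variables:

uint64_t overhead = ~0ULL;
for (i = 0; i < 1000; i++) {
    t1 = __rdtscp(&junk);
    t2 = __rdtscp(&junk) - t1;
    if (t2 < overhead)
        overhead = t2;       // keep the minimum as the baseline
}
printf("rdtscp overhead: %lu cycles\n", overhead);

Subtracting this baseline from the measured values gives a better estimate of the actual load latency.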

I was not able to reproduce your results (i.e., the minor page fault) with gcc 5.5.0 on Linux 4.4.0-154 when the array is statically allocated; I could only reproduce them when using mmap. If you tell me your compiler version and kernel version, maybe I can investigate further.
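
For reference, a sketch of the mmap case (the flags are the usual ones for an anonymous mapping and are my assumption, not quoted from the answer). A freshly mapped anonymous page is only populated on first touch, so the first access takes a minor page fault:

#include <sys/mman.h>

uint8_t *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (buf == MAP_FAILED)
    return 1;
t1 = __rdtscp(&junk);
addr = buf;                  // the volatile pointer from the question's code
junk = *addr;                // first touch: the minor page fault happens here
t2 = __rdtscp(&junk) - t1;
printf("first touch of a fresh mmap page: %lu cycles\n", t2);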

Regarding the second question: the way you are measuring load latency cannot distinguish an L1D hit from an L2 hit, because the measurement error can be as large as the difference in latency between them. You can check using the MEM_LOAD_UOPS_RETIRED.L1_HIT and MEM_LOAD_UOPS_RETIRED.L2_HIT performance counters. The L1 and L2 hardware prefetchers easily detect a sequential access pattern, so getting hits is not surprising if you don't turn the prefetchers off.
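
Those counters can be read with perf. The exact event names vary by microarchitecture (on the Skylake-derived i5-8500 they are spelled mem_load_retired.l1_hit and mem_load_retired.l2_hit; check perf list on your machine), so roughly:

$ perf stat -e mem_load_retired.l1_hit,mem_load_retired.l2_hit ./l1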
