Timing performance of SIMD assembly: the longer loop executes faster

Date: 2016-04-15 16:33:12

标签: loops assembly time simd

I recently started learning SIMD in assembly (x86_64) and got some unexpected results. It boils down to the following.

I have two programs that each run a loop many times. The first contains a loop that executes 4 SIMD instructions; the second contains the exact same loop plus one additional instruction. The code looks like this:

First program:

section .bss
doublestorage: resb 8

section .text
    global _start

_start:
    mov rax, 0x0000000100000001
    mov [doublestorage], rax
    cvtpi2pd xmm1, [doublestorage]
    cvtpi2pd xmm2, [doublestorage]
    cvtpi2pd xmm3, [doublestorage]
    cvtpi2pd xmm4, [doublestorage]
    cvtpi2pd xmm5, [doublestorage]
    cvtpi2pd xmm6, [doublestorage]
    cvtpi2pd xmm7, [doublestorage]

    mov rax, (1 << 31)
loop:
    movupd xmm1, xmm3
    movupd xmm2, xmm5
    divpd xmm1, xmm2
    addpd xmm4, xmm1
    dec rax
    jnz loop

    mov rax, 60
    mov rdi, 0
    syscall

Second program:

section .bss
doublestorage: resb 8

section .text
    global _start

_start:
    mov rax, 0x0000000100000001
    mov [doublestorage], rax
    cvtpi2pd xmm1, [doublestorage]
    cvtpi2pd xmm2, [doublestorage]
    cvtpi2pd xmm3, [doublestorage]
    cvtpi2pd xmm4, [doublestorage]
    cvtpi2pd xmm5, [doublestorage]
    cvtpi2pd xmm6, [doublestorage]
    cvtpi2pd xmm7, [doublestorage]

    mov rax, (1 << 31)
loop:
    movupd xmm1, xmm3
    movupd xmm2, xmm5
    divpd xmm1, xmm2
    addpd xmm4, xmm1
    movupd xmm6, xmm7
    dec rax
    jnz loop

    mov rax, 60
    mov rdi, 0
    syscall
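In scalar terms, both loops compute the same running sum; the extra movupd in the second program only writes xmm6, which nothing else reads. A Python sketch of the data flow (a small trip count stands in for 1 << 31 here):

```python
x3 = x5 = (1.0, 1.0)          # set up by the cvtpi2pd block
acc = (1.0, 1.0)              # xmm4 also starts at (1.0, 1.0)
for _ in range(1000):         # the real loop runs 1 << 31 times
    x1 = x3                                   # movupd xmm1, xmm3
    x2 = x5                                   # movupd xmm2, xmm5
    q = (x1[0] / x2[0], x1[1] / x2[1])        # divpd  xmm1, xmm2
    acc = (acc[0] + q[0], acc[1] + q[1])      # addpd  xmm4, xmm1
print(acc)                    # (1001.0, 1001.0)
```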

Now, my reasoning was as follows: the second program has more instructions to execute, so it should take noticeably longer. However, when I time both programs, the second one takes less time than the first. I ran each program a total of 100 times; the results are:

Runtime first program: mean: 5.6129 s, standard deviation: 0.0156 s
Runtime second program: mean: 5.5056 s, standard deviation: 0.0147 s
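(For reference, the mean and sample standard deviation above can be computed from the raw per-run wall-clock times like this; the numbers below are hypothetical placeholders, not the actual measurements:)

```python
import statistics

times = [5.61, 5.60, 5.63, 5.59, 5.62]  # hypothetical wall-clock times, one per run
mean = statistics.mean(times)            # 5.61
sd = statistics.stdev(times)             # sample standard deviation, ~0.0158
print(f"mean: {mean:.4f} s, standard deviation: {sd:.4f} s")
```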

I conclude that the second program runs faster. These results seem counterintuitive to me, so I'd like to know what could be causing this behavior.

For completeness: I'm running Ubuntu 15.10, assembling with NASM (-f elf64), on an Intel Core i7-5600. I also checked the disassembly, and the assembler didn't perform any optimizations:

Objdump of the first program:

exec/instr4:     file format elf64-x86-64


Disassembly of section .text:

00000000004000b0 <.text>:
  4000b0:   48 b8 01 00 00 00 01    movabs $0x100000001,%rax
  4000b7:   00 00 00 
  4000ba:   48 89 04 25 28 01 60    mov    %rax,0x600128
  4000c1:   00 
  4000c2:   66 0f 2a 0c 25 28 01    cvtpi2pd 0x600128,%xmm1
  4000c9:   60 00 
  4000cb:   66 0f 2a 14 25 28 01    cvtpi2pd 0x600128,%xmm2
  4000d2:   60 00 
  4000d4:   66 0f 2a 1c 25 28 01    cvtpi2pd 0x600128,%xmm3
  4000db:   60 00 
  4000dd:   66 0f 2a 24 25 28 01    cvtpi2pd 0x600128,%xmm4
  4000e4:   60 00 
  4000e6:   66 0f 2a 2c 25 28 01    cvtpi2pd 0x600128,%xmm5
  4000ed:   60 00 
  4000ef:   66 0f 2a 34 25 28 01    cvtpi2pd 0x600128,%xmm6
  4000f6:   60 00 
  4000f8:   66 0f 2a 3c 25 28 01    cvtpi2pd 0x600128,%xmm7
  4000ff:   60 00 
  400101:   b8 00 00 00 80          mov    $0x80000000,%eax
  400106:   66 0f 10 cb             movupd %xmm3,%xmm1
  40010a:   66 0f 10 d5             movupd %xmm5,%xmm2
  40010e:   66 0f 5e ca             divpd  %xmm2,%xmm1
  400112:   66 0f 58 e1             addpd  %xmm1,%xmm4
  400116:   48 ff c8                dec    %rax
  400119:   75 eb                   jne    0x400106
  40011b:   b8 3c 00 00 00          mov    $0x3c,%eax
  400120:   bf 00 00 00 00          mov    $0x0,%edi
  400125:   0f 05                   syscall 

Objdump of the second program:

exec/instr5:     file format elf64-x86-64


Disassembly of section .text:

00000000004000b0 <.text>:
  4000b0:   48 b8 01 00 00 00 01    movabs $0x100000001,%rax
  4000b7:   00 00 00 
  4000ba:   48 89 04 25 2c 01 60    mov    %rax,0x60012c
  4000c1:   00 
  4000c2:   66 0f 2a 0c 25 2c 01    cvtpi2pd 0x60012c,%xmm1
  4000c9:   60 00 
  4000cb:   66 0f 2a 14 25 2c 01    cvtpi2pd 0x60012c,%xmm2
  4000d2:   60 00 
  4000d4:   66 0f 2a 1c 25 2c 01    cvtpi2pd 0x60012c,%xmm3
  4000db:   60 00 
  4000dd:   66 0f 2a 24 25 2c 01    cvtpi2pd 0x60012c,%xmm4
  4000e4:   60 00 
  4000e6:   66 0f 2a 2c 25 2c 01    cvtpi2pd 0x60012c,%xmm5
  4000ed:   60 00 
  4000ef:   66 0f 2a 34 25 2c 01    cvtpi2pd 0x60012c,%xmm6
  4000f6:   60 00 
  4000f8:   66 0f 2a 3c 25 2c 01    cvtpi2pd 0x60012c,%xmm7
  4000ff:   60 00 
  400101:   b8 00 00 00 80          mov    $0x80000000,%eax
  400106:   66 0f 10 cb             movupd %xmm3,%xmm1
  40010a:   66 0f 10 d5             movupd %xmm5,%xmm2
  40010e:   66 0f 5e ca             divpd  %xmm2,%xmm1
  400112:   66 0f 58 e1             addpd  %xmm1,%xmm4
  400116:   66 0f 10 f7             movupd %xmm7,%xmm6
  40011a:   48 ff c8                dec    %rax
  40011d:   75 e7                   jne    0x400106
  40011f:   b8 3c 00 00 00          mov    $0x3c,%eax
  400124:   bf 00 00 00 00          mov    $0x0,%edi
  400129:   0f 05                   syscall 

1 Answer:

Answer 0 (score: 1):

There's no such thing as an "i7 5600". I assume you mean the i7-5600U, a low-power (15W TDP) CPU with a 2.6GHz base / 3.2GHz turbo clock.

Can you double-check that this is reproducible? Make sure the CPU clock speed stays constant across both tests, because a low-power CPU may not be able to run at full turbo with the divide unit busy the whole time.

It would also be useful to measure core clock cycles with performance counters (e.g. perf stat ./a.out). (Not "reference" cycles: you want to count the actual cycles at whatever speed the clock actually ran.)

IACA only goes up to Haswell. For both loops it predicts nothing other than one iteration per 14c, bottlenecked on divider throughput. (Agner Fog measured divpd at one per 8-14c throughput on Haswell, one per 8c on Broadwell.)

There was a recent question about Broadwell throughput, but that one was about saturating the frontend.

This loop should bottleneck purely on divpd throughput (one per 8c on Broadwell). If the effect is real, the only explanation I can think of is that one of the movupd insns isn't always eliminated, and sometimes steals a cycle of p0 from divpd.
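As a sanity check on that bottleneck claim: 2^31 iterations at one divpd per 8 cycles, at the 3.2GHz turbo clock, lands close to the measured ~5.5 s (a back-of-the-envelope estimate, not a cycle-accurate model):

```python
iterations = 1 << 31
cycles_per_iter = 8          # divpd throughput on Broadwell (Agner Fog)
clock_hz = 3.2e9             # i7-5600U max turbo
runtime = iterations * cycles_per_iter / clock_hz
print(f"{runtime:.2f} s")    # ~5.37 s, in the right ballpark of the measured 5.5-5.6 s
```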

The three unfused-domain uops in the loop all run on different ports, so they can't delay each other. (divpd on p0, addpd on p1, and the predicted-taken macro-fused dec/jnz on p6.)
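A toy port-pressure model (assuming the port assignments above, and a hypothetical non-eliminated reg-reg move going to a port the loop doesn't otherwise use) shows why the extra move shouldn't change the 8c-per-iteration bound:

```python
from collections import Counter

DIV_THROUGHPUT = 8  # assumed cycles between divpd issues on Broadwell

def cycles_per_iter(uop_ports):
    """Lower bound on cycles/iteration: the busiest port, or the divider."""
    pressure = Counter(uop_ports.values())
    return max(DIV_THROUGHPUT, max(pressure.values()))

loop1 = {"divpd": "p0", "addpd": "p1", "dec/jnz": "p6"}
loop2 = {**loop1, "movupd xmm6,xmm7": "p5"}   # the extra move, if not eliminated

print(cycles_per_iter(loop1), cycles_per_iter(loop2))  # 8 8 -> same bound
```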

Actually, even that theory doesn't hold water. A non-eliminated movaps xmm,xmm uses port 5 on Broadwell, and I assume the odd choice of movupd xmm,xmm also decodes to a port-5 uop. (Agner Fog doesn't even list an entry for the register-register form of movups / movupd, because everyone uses movaps. Or movapd, if they like matching the insn type to the data, even though it's a byte longer and no existing uarch has s-vs-d bypass delays; the convention is simply movaps for float/double and movdqa for integer.)

Interestingly, my 2.4GHz E6600 (Conroe/Merom microarch) runs your loop in 4.5 seconds. Agner Fog's tables list divpd on Merom at one per 5-31c; 1.0/1.0 presumably hits the 5c best case. Sandybridge's best-case division is noticeably slower than Nehalem's, and only with Skylake does best-case throughput get back down to Merom speed. (Skylake's throughput is fixed at one per 4c for 128b divpd.)
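The Merom number is consistent with that table entry: 4.5 s at 2.4GHz over 2^31 iterations works out to about 5 cycles per iteration, i.e. the 5c best case:

```python
iterations = 1 << 31
cpi = 4.5 * 2.4e9 / iterations   # seconds * Hz / iterations = cycles per iteration
print(round(cpi, 2))             # ~5.03 cycles per iteration
```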

By the way, here's a version of your code that uses several of the more usual ways to get FP data into registers:

(For IACA, I used the IACA_start / IACA_end marker macros, which are defined further down.)

    default REL              ;  use RIP-relative for access to globals by default, so you don't have to write `[rel my_global]`

section .rodata
    ; ALIGN 16               ; only desirable if we're doing a 128b load instead of a 64b broadcast-load
one_dp: dq   1.0

section .text

global _start
_start:
    mov      rax, 0x0000000100000001
    mov      [rsp-16], rax     ; if you're going to store/reload, use the stack for scratch space, not a static location that will probably cache-miss.
    cvtpi2pd xmm1, [rsp-16]    ; this is the form with xmm, mm/m64 args.   Interestingly, for the memory operand form, this should perform the same but saves a byte in the encoding.
    cvtdq2pd xmm8, [rsp-16]    ; this is the "normal" insn, with  xmm, xmm/m64 args.
    movq     xmm9, rax
    cvtdq2pd xmm9, xmm9
    ; Fun fact: 64bit int <-> FP is only available for scalar until AVX-512 introduces packed conversions for qword integer vectors.

    ;mov      eax, 1      ; still 1 from earlier
    cvtsi2sd xmm2, eax
    unpcklpd xmm2, xmm2   ; broadcast

    pcmpeqw  xmm3,xmm3          ; generate the constant on the fly, from Agner Fog's asm guide
    psllq    xmm3, 54
    psrlq    xmm3, 2            ; (double)1.0 in both halves.

    movaps   xmm4, xmm3         ; duplicate the data instead of re-doing the conversion.  Shorter and cheaper.
    movaps   xmm5, xmm3
    movaps   xmm6, xmm3

    ;movaps   xmm7, [ones]      ; load 128b constant
    movddup   xmm7, [one_dp]        ; broadcast-load 

    mov eax, (1 << 31)          ; 1<<31 fits in a 32bit reg just fine.
;IACA_start
.loop:
    ;movupd   xmm1, xmm3
    ;movupd   xmm2, xmm5
    movaps   xmm1, xmm3      ; prob. no perf diff, but weird to see movupd used for reg-reg moves
    movaps   xmm2, xmm5
    divpd    xmm1, xmm2
    addpd    xmm4, xmm1
    ;movaps  xmm6, xmm7
    dec      eax
    jnz    .loop
;IACA_end

    mov eax, 60
    xor edi,edi
    syscall                     ; exit(0)
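The pcmpeqw / psllq / psrlq trick in the setup above builds the IEEE-754 bit pattern of (double)1.0 without touching memory; the same bit arithmetic, checked in Python:

```python
import struct

MASK64 = (1 << 64) - 1
x = MASK64                     # pcmpeqw xmm3, xmm3  -> all-ones
x = (x << 54) & MASK64         # psllq   xmm3, 54    -> top 10 bits set
x >>= 2                        # psrlq   xmm3, 2     -> 0x3FF0000000000000
val = struct.unpack('<d', x.to_bytes(8, 'little'))[0]
print(hex(x), val)             # 0x3ff0000000000000 1.0
```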

The IACA_start / IACA_end markers are NASM macros:

%macro  IACA_start 0
     mov ebx, 111
     db 0x64, 0x67, 0x90
%endmacro
%macro  IACA_end 0
     mov ebx, 222
     db 0x64, 0x67, 0x90
%endmacro

Have you tried stripping down the loop even more?

.loop:
    movapd   xmm1, xmm3         ; replace the only operand that div writes
    divpd    xmm1, xmm2
    addpd    xmm4, xmm1
    dec      eax
    jnz      .loop

Then the loop is only 4 fused-domain uops. It should make zero difference, since divpd throughput should still be the only bottleneck.

Or with AVX, the non-destructive 3-operand form removes the register copy entirely:

.loop:
    vdivpd   xmm1, xmm3, xmm2
    vaddpd   xmm4, xmm4, xmm1
    dec      eax
    jnz      .loop