Low register usage despite launch bounds directive

Time: 2013-03-11 12:53:14

Tags: cuda

While optimizing my kernel I try to give each thread as many registers to work with as possible. I have a grid of 1300 points that I can split arbitrarily into blocks for concurrent processing. Given that my CUDA device (a GTX 460, compute capability 2.1) provides 32,768 registers per SM, my arithmetic tells me that two blocks of 672 threads each should leave at most

32,768 / 1344 = 24

registers per thread.
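For reference, the same budgeting arithmetic done at runtime looks roughly like this (a minimal sketch; device 0 is assumed, and on compute capability 2.x the regsPerBlock property reports the 32,768-register file of an SM):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);               // device 0 assumed

    const int threadsPerBlock = 672;                 // as in __launch_bounds__(672, 2)
    const int blocksPerSM     = 2;
    const int totalThreads    = threadsPerBlock * blocksPerSM;   // 1344

    // On compute capability 2.x, regsPerBlock equals the per-SM register file (32,768).
    const int regsPerThread = prop.regsPerBlock / totalThreads;  // 32768 / 1344 = 24

    printf("%d registers / %d threads = %d registers per thread\n",
           prop.regsPerBlock, totalThreads, regsPerThread);
    return 0;
}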

Compiling my kernel via
__global__ void
__launch_bounds__(672, 2)
moduleB3(...)

results in

ptxas : info : Compiling entry function '_Z8moduleB3PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_PiS_S_S_S_S_S_ffffiffffiiffii' for 'sm_20'
ptxas : info : Function properties for _Z8moduleB3PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_PiS_S_S_S_S_S_ffffiffffiiffii
48 bytes stack frame, 84 bytes spill stores, 44 bytes spill loads
ptxas : info : Used 20 registers, 184 bytes cmem[0], 24 bytes cmem[16]

When launch_bounds() is not given, register usage is much higher. In fact I have several kernels, and the maximum number of registers used in any of them is 20, where I would have suspected 24. Any educated guesses as to where my reasoning is flawed?

Edit: The point is that register usage drops when launch bounds are specified. Here is the compiler output without launch bounds:

ptxas : info : Compiling entry function '_Z10moduleA2_1PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_fffffffiiii' for 'sm_21'
ptxas : info : Function properties for _Z10moduleA2_1PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_fffffffiiii
0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas : info : Used 56 registers, 140 bytes cmem[0], 40 bytes cmem[16]

And here with __launch_bounds__(672, 2):

ptxas : info : Compiling entry function '_Z10moduleA2_1PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_fffffffiiii' for 'sm_21'
ptxas : info : Function properties for _Z10moduleA2_1PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_fffffffiiii
120 bytes stack frame, 156 bytes spill stores, 124 bytes spill loads
ptxas : info : Used 20 registers, 140 bytes cmem[0], 40 bytes cmem[16]

The way I understand it, the compiler would rather use more registers but cannot because of the resource limit. However, the registers in use do not add up to the available 32,768. As stated above, the cap should be 24 registers per thread. I could understand it if the compiler chose a lower count for some of my kernels, but none of them, even though they all use more registers without the launch bounds, requests more than 20.

I don't think posting the kernels will be of much use, but of course you are welcome to have a look. Here is (hopefully) the simplest one:

__global__ void 
__launch_bounds__(672, 2)
moduleA2_1(float *d_t, float *d_x, float *d_p, float *d_rho, float *d_b, float *d_u, 
float *d_ua, float *d_us, float *d_qa, float *d_qs, float *d_dlna,
float *d_cs, float *d_va, float *d_ma, float *d_uc2, float *d_rhs,
float k_b, float m_h, float gamma, float PI, float Gmsol, float r_sol, float fourpoint_constant, int radius, int numNodes, int numBlocks_A2_1, int numGridsPerSM)
{
int idx, idg, ids;

//input
float t, p, rho, b, u, ua, us, qa, qs, dlna;
//output
float a2, cs, va, ms, ma, vs12, vs22, uc2, dlna2, rhs; 

extern volatile __shared__ float smemA21[];
float volatile *s_lna2;
s_lna2 = &smemA21[0];

ids = blockIdx.x / numBlocks_A2_1;
idx = (blockIdx.x % numBlocks_A2_1) * (blockDim.x - 2*radius) + threadIdx.x - radius;
idg = numGridsPerSM * ids;

while(idg < numGridsPerSM * (ids + 1))
{
    if(idx >= 0 && idx < numNodes)
    {
        t = d_t[idg * numNodes + idx];
        p = d_p[idg * numNodes + idx];
        rho = d_rho[idg * numNodes + idx];
        b = d_b[idg * numNodes + idx];
        u = d_u[idg * numNodes + idx];
        ua = d_ua[idg * numNodes + idx];
        us = d_us[idg * numNodes + idx];
        qa = d_qa[idg * numNodes + idx];
        qs = d_qs[idg * numNodes + idx];
        dlna = d_dlna[idg * numNodes + idx];
    }

    //computeA2(i);      // isothermal sound speed (squared)
    a2 = k_b / m_h * t;

    //computeLna2(i);
    s_lna2[threadIdx.x] = (float)log(a2);

    //computeCs(i);      // adiabatic sound speed
    cs = gamma * p / rho;
    d_checkInf(&cs);
    cs = sqrt(cs);

    //computeVa(i);      // Alfven speed
    va = b  / (float)sqrt(4*PI*1E-7*rho);
    d_checkInf(&va);

    //computeMs(i);      // sonic Mach number
    ms = u / cs;
    d_checkInf(&ms);
    if(ms < FLT_MIN)
        ms = FLT_MIN;

    //computeMa(i);      // Alfven Mach number
    ma = u / va;
    d_checkInf(&ma);
    if(ma < FLT_MIN)
        ma = FLT_MIN;

    //computeUc2(i);     // critical speed (squared)
    uc2 = a2 + ua / (4 * rho) * (1 + 3 * ma)/(1 + ma) + 8 * us / (3 * rho) * (ms)/(1 + ms);

    //computeVs12(i);    // support value 1
    vs12 = us / (3 * rho) * (1 - 7 * ms)/(1 + ms);

    //computeVs22(i);    // support value 2
    vs22 = 4 * us / (3 * rho) * (ms - 1)/(ms + 1);

    __syncthreads();
    //fourpointLna2(i); 
    if((threadIdx.x > radius-1) && (threadIdx.x < blockDim.x - radius) && (idx < numNodes))
    {
        if (idx == 0)  // FO-forward difference
            dlna2 = (s_lna2[threadIdx.x+1] - s_lna2[threadIdx.x])/(d_x[idg * numNodes + idx+1] - d_x[idg * numNodes + idx]);
        else if (idx == numNodes - 1)  // FO-rearward difference
            dlna2 = (s_lna2[threadIdx.x] - s_lna2[threadIdx.x-1])/(d_x[idg * numNodes + idx] - d_x[idg * numNodes + idx-1]);
        else if (idx == 1 || idx == numNodes - 2) //SO-central difference
            dlna2 = (s_lna2[threadIdx.x+1] - s_lna2[threadIdx.x-1])/(d_x[idg * numNodes + idx+1] - d_x[idg * numNodes + idx-1]);
        else if(idx > 1 && idx < numNodes - 2 && threadIdx.x > 1 && threadIdx.x < blockDim.x - 2)
            dlna2 = fourpoint_constant * ((s_lna2[threadIdx.x+1] - s_lna2[threadIdx.x-1])/(d_x[idg * numNodes + idx+1] - d_x[idg * numNodes + idx-1])) + (1-fourpoint_constant) * ((s_lna2[threadIdx.x+2] - s_lna2[threadIdx.x-2])/(d_x[idg * numNodes + idx+2] - d_x[idg * numNodes + idx-2]));
        else
            dlna2 = 0;
    }

    //par_computeRhs();
    if(idx >= 0 && idx < numNodes)
    {
        if (u == 0)
            rhs = - Gmsol / (float)pow(d_x[idg * numNodes + idx] + r_sol, 2) + (uc2 + vs12) * dlna - (a2 + vs22) * dlna2;
        else
            rhs = - Gmsol / (float)pow(d_x[idg * numNodes + idx] + r_sol, 2) + (uc2 + vs12) * dlna - (a2 + vs22) * dlna2 + 1 / rho * (qa / (2.0f*(u + va)) + 4.0f * qs / (3.0f*(u + cs)));
    }

    //par_calcSurfaceValues(); 

    if(threadIdx.x > radius-1 && threadIdx.x < blockDim.x - radius && idx < numNodes)
    {
        d_cs[idg * numNodes + idx] = cs;
        d_va[idg * numNodes + idx] = va;
        d_ma[idg * numNodes + idx] = ma;
        d_uc2[idg * numNodes + idx] = uc2;
        d_rhs[idg * numNodes + idx] = rhs;
    }
    idg++;
}
}
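Just for illustration, this is the launch configuration that the indexing above seems to imply; the halo width, the node count and the rounding are my assumptions, not the actual host code:

#include <cstdio>

int main()
{
    const int threadsPerBlock = 672;    // matches __launch_bounds__(672, 2)
    const int radius          = 2;      // assumed halo width; the four-point stencil reads +/- 2
    const int numNodes        = 1300;   // grid points, as stated above

    // Each block advances blockDim.x - 2*radius interior points (see the idx formula).
    const int interiorPerBlock = threadsPerBlock - 2 * radius;
    const int numBlocks_A2_1   = (numNodes + interiorPerBlock - 1) / interiorPerBlock;   // = 2
    const size_t smemBytes     = threadsPerBlock * sizeof(float);   // backs s_lna2

    printf("blocks per 1-D grid: %d, dynamic shared memory per block: %zu bytes\n",
           numBlocks_A2_1, smemBytes);
    return 0;
}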

Thanks for taking the time.

1 Answer:

Answer 0 (score: 1)

This looks like a bug in ptxas. As a workaround, you could compile your kernel to PTX and then, near the top of the kernel code, change the lines

.maxntid 672, 1, 1
.minnctapersm 2

to

.maxnreg 24

and then compile the PTX file. That will give you a kernel that really does use 24 registers.
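In case it helps, a rough sketch of that workflow (file names are placeholders, adjust the architecture flags to your build):

nvcc -arch=sm_21 -ptx moduleA2_1.cu -o moduleA2_1.ptx        # emit PTX only
# edit moduleA2_1.ptx: swap the .maxntid/.minnctapersm lines for .maxnreg 24
ptxas -arch=sm_21 moduleA2_1.ptx -o moduleA2_1.cubin         # assemble the edited PTX
# load the resulting cubin at runtime via the driver API (cuModuleLoad / cuModuleGetFunction)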

By the way, it would be interesting to profile this kernel to find out whether it can indeed run two blocks per SM, or whether some unaccounted-for factor prevents that.
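If you can move to a newer toolkit than the 2013-era one, the occupancy API is a quick way to check whether the compiled resource usage even permits two resident blocks per SM; a minimal sketch, assuming it sits in the same .cu file as moduleA2_1 and that the block uses 672 floats of dynamic shared memory:

#include <cstdio>
#include <cuda_runtime.h>

void reportOccupancy()
{
    int maxBlocksPerSM = 0;
    const size_t smemBytes = 672 * sizeof(float);    // dynamic shared memory per block (assumed)

    cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &maxBlocksPerSM, moduleA2_1, 672, smemBytes);

    printf("theoretical resident blocks per SM at 672 threads/block: %d\n", maxBlocksPerSM);
}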