CUDA: __syncthreads() before shared memory operations?

Asked: 2014-04-08 06:43:11

Tags: memory concurrency cuda shared

I'm in the rather unfortunate situation of not being able to use the CUDA debugger. I'm getting some strange results when using __syncthreads in an application that uses a single shared array (deltas). The following code is executed in a loop:

__syncthreads(); //if I comment this out, things get funny
deltas[lex_index_block] = intensity - mean;
__syncthreads(); //this line doesn't seem to matter regardless of whether the first sync is commented out or not
//after sync: do something with the values of delta written in this threads and other threads of this block

Basically, I have code with overlapping blocks (required by the nature of the algorithm). The program does compile and run, but somehow I get systematically wrong values in the vertically overlapping regions. This is very confusing to me, because I thought the correct way to synchronize was to sync after the threads have performed my writes to shared memory.
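For reference, a minimal sketch of the write-then-sync-then-read pattern described above (illustrative names, not taken from the code below): every thread first writes its own slot, the block synchronizes, and only then does any thread read slots written by its neighbours.

    template <int BLOCKSIZE>
    __global__ void neighbour_sum(const float* in, float* out, int n){
        __shared__ float buf[BLOCKSIZE];
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        //each thread writes its own slot (guard the global load at the edge)
        buf[threadIdx.x] = (i < n) ? in[i] : 0.0f;
        //make all writes visible to the whole block before anyone reads a neighbour's slot
        __syncthreads();

        if (i < n){
            int right = (threadIdx.x + 1) % blockDim.x;
            out[i] = buf[threadIdx.x] + buf[right];
        }
    }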

Here is the whole function:

//XC without repetitions
template <int blocksize, int order>
__global__ void __xc(unsigned short* raw_input_data, int num_frames, int width, int height,
                 float * raw_sofi_data, int block_size, int order_deprecated){

//we make a distinction between real pixels and virtual pixels
//real pixels are pixels that exist in the original data

//overlap correction: every new block has a margin of 3 threads doing less work (only computing deltas)
int x_corrected = global_x() - blockIdx.x * 3;
int y_corrected = global_y() - blockIdx.y * 3;

//if the thread is responsible for any real pixel
if (x_corrected < width && y_corrected < height){

    //        __shared__ float deltas[blocksize];
    __shared__ float deltas[blocksize];

    //the outer pixels of a block do not update SOFI values as they do not have sufficient information available
    //they are used only to compute mean and delta
    //also, pixels at the global edge have to be thrown away (as there is not sufficient data to interpolate)
    bool within_inner_block =
            threadIdx.x > 0
            && threadIdx.y > 0
            && threadIdx.x < blockDim.x - 2
            && threadIdx.y < blockDim.y - 2
            //global edge
            && x_corrected > 0
            && y_corrected > 0
            && x_corrected < width - 1
            && y_corrected < height - 1
            ;


    //init virtual pixels
    float virtual_pixels[order * order];
    if (within_inner_block){
        for (int i = 0; i < order * order; ++i) {
            virtual_pixels[i] = 0;
        }
    }


    float mean = 0;
    float intensity;
    int lex_index_block = threadIdx.x + threadIdx.y * blockDim.x;



    //main loop
    for (int frame_idx = 0; frame_idx < num_frames; ++frame_idx) {

        //shared memory read and computation of mean/delta
        intensity = raw_input_data[lex_index_3D(x_corrected,y_corrected, frame_idx, width, height)];

        __syncthreads(); //if I comment this out, things break
        deltas[lex_index_block] = intensity - mean;
        __syncthreads(); //this doesn't seem to matter

        mean = deltas[lex_index_block]/(float)(frame_idx+1);

        //if the thread is responsible for correlated pixels, i.e. not at the border of the original frame
        if (within_inner_block){
            //WORKING WITH DELTA STARTS HERE
            virtual_pixels[0] += deltas[lex_index_2D(
                        threadIdx.x,
                        threadIdx.y + 1,
                        blockDim.x)]
                    *
                    deltas[lex_index_2D(
                        threadIdx.x,
                        threadIdx.y - 1,
                        blockDim.x)];

            virtual_pixels[1] += deltas[lex_index_2D(
                        threadIdx.x,
                        threadIdx.y,
                        blockDim.x)]
                    *
                    deltas[lex_index_2D(
                        threadIdx.x + 1,
                        threadIdx.y,
                        blockDim.x)];

            virtual_pixels[2] += deltas[lex_index_2D(
                        threadIdx.x,
                        threadIdx.y,
                        blockDim.x)]
                    *
                    deltas[lex_index_2D(
                        threadIdx.x,
                        threadIdx.y + 1,
                        blockDim.x)];

            virtual_pixels[3] += deltas[lex_index_2D(
                        threadIdx.x,
                        threadIdx.y,
                        blockDim.x)]
                    *
                    deltas[lex_index_2D(
                        threadIdx.x+1,
                        threadIdx.y+1,
                        blockDim.x)];
            //                xc_update<order>(virtual_pixels, delta2, mean);
        }
    }

    if (within_inner_block){
        for (int virtual_idx = 0; virtual_idx < order*order; ++virtual_idx) {
            raw_sofi_data[lex_index_2D(x_corrected*order + virtual_idx % order,
                                       y_corrected*order + (int)floorf(virtual_idx / order),
                                       width*order)]=virtual_pixels[virtual_idx];
        }
    }
}
}

1 Answer:

Answer 0 (score: 3):

From what I can see, there may be a hazard in your application between loop iterations. The write to deltas[lex_index_block] in loop iteration frame_idx+1 can map to the same location as the read of deltas[lex_index_2D(threadIdx.x, threadIdx.y - 1, blockDim.x)] by a different thread in iteration frame_idx. The two accesses are unordered, so the result is nondeterministic. Try running the app under cuda-memcheck with the racecheck tool (cuda-memcheck --tool racecheck).
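To illustrate that failure mode, here is a minimal sketch (hypothetical names, not the asker's kernel) of a loop that writes a shared slot and then reads neighbours' slots in every iteration. Two barriers per iteration are needed: one after the write, before the reads, and one before the write of the next iteration, which is exactly the barrier that looks like it sits "before the shared memory operation" in the question. Removing that first barrier lets iteration frame+1 overwrite a slot that a neighbouring thread is still reading in iteration frame.

    template <int BLOCKSIZE>
    __global__ void iterated_neighbour_read(const float* in, float* out,
                                            int num_frames, int n){
        //one shared slot per thread, reused every iteration
        __shared__ float buf[BLOCKSIZE];
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        float acc = 0.0f;

        for (int frame = 0; frame < num_frames; ++frame) {
            //barrier A: the previous iteration's reads must finish before buf is overwritten
            __syncthreads();
            buf[threadIdx.x] = (i < n) ? in[frame * n + i] : 0.0f;
            //barrier B: this iteration's writes must finish before any neighbour slot is read
            __syncthreads();

            int right = (threadIdx.x + 1) % blockDim.x;
            acc += buf[right]; //reads a slot written by another thread
        }

        if (i < n){
            out[i] = acc;
        }
    }

Note that in this sketch the barriers are executed unconditionally by every thread of the block, which is what __syncthreads() requires.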