Counting CUDA kernel executions with nvprof

Date: 2017-03-09 21:38:34

Tags: cuda nvprof

Is it possible to use nvprof to count CUDA kernel executions (i.e., how many kernels were launched)?

Right now, when I run nvprof, this is what I see:

==537== Profiling application: python tf.py
==537== Profiling result:
Time(%)      Time     Calls       Avg       Min       Max  Name
 51.73%  91.294us        20  4.5640us  4.1280us  6.1760us  [CUDA memcpy HtoD]
 43.72%  77.148us        20  3.8570us  3.5840us  4.7030us  [CUDA memcpy DtoH]
  4.55%  8.0320us         1  8.0320us  8.0320us  8.0320us  [CUDA memset]

==537== API calls:
Time(%)      Time     Calls       Avg       Min       Max  Name
 90.17%  110.11ms         1  110.11ms  110.11ms  110.11ms  cuDevicePrimaryCtxRetain
  6.63%  8.0905ms         1  8.0905ms  8.0905ms  8.0905ms  cuMemAlloc
  0.57%  700.41us         2  350.21us  346.89us  353.52us  cuMemGetInfo
  0.55%  670.28us         1  670.28us  670.28us  670.28us  cuMemHostAlloc
  0.28%  347.01us         1  347.01us  347.01us  347.01us  cuDeviceTotalMem
...

1 Answer:

Answer 0 (score: 1)

Yes, it is possible. In case you are not aware of it, you can use the documentation as well as the command-line help (nvprof --help).

The simplest usage of nvprof provides what you are asking for:

nvprof ./my_application

This will output (among other things) a list of kernels by name, the number of times each was launched, and the percentage of GPU time each accounts for.

Here is an example:

$ nvprof ./t1288
==12904== NVPROF is profiling process 12904, command: ./t1288
addr@host: 0x402add
addr@device: 0x8
run on device
func_A is correctly invoked!
run on host
func_A is correctly invoked!
==12904== Profiling application: ./t1288
==12904== Profiling result:
Time(%)      Time     Calls       Avg       Min       Max  Name
 98.93%  195.28us         1  195.28us  195.28us  195.28us  run_on_device(Parameters*)
  1.07%  2.1120us         1  2.1120us  2.1120us  2.1120us  assign_func_pointer(Parameters*)

==12904== Unified Memory profiling result:
Device "Tesla K20Xm (0)"
   Count  Avg Size  Min Size  Max Size  Total Size  Total Time  Name
       1  4.0000KB  4.0000KB  4.0000KB  4.000000KB  3.136000us  Host To Device
       6  32.000KB  4.0000KB  60.000KB  192.0000KB  34.20800us  Device To Host
Total CPU Page faults: 3

==12904== API calls:
Time(%)      Time     Calls       Avg       Min       Max  Name
 98.08%  321.35ms         1  321.35ms  321.35ms  321.35ms  cudaMallocManaged
  0.93%  3.0613ms       364  8.4100us     278ns  286.84us  cuDeviceGetAttribute
  0.42%  1.3626ms         4  340.65us  331.12us  355.60us  cuDeviceTotalMem
  0.38%  1.2391ms         2  619.57us  113.13us  1.1260ms  cudaLaunch
  0.08%  251.20us         4  62.798us  57.985us  70.827us  cuDeviceGetName
  0.08%  246.55us         2  123.27us  21.343us  225.20us  cudaDeviceSynchronize
  0.03%  98.950us         1  98.950us  98.950us  98.950us  cudaFree
  0.00%  8.9820us        12     748ns     278ns  2.2670us  cuDeviceGet
  0.00%  6.0260us         2  3.0130us     613ns  5.4130us  cudaSetupArgument
  0.00%  5.7190us         3  1.9060us     490ns  4.1130us  cuDeviceGetCount
  0.00%  5.2370us         2  2.6180us  1.2100us  4.0270us  cudaConfigureCall
$

In the example above, run_on_device and assign_func_pointer are the kernel names. The documentation I linked also contains sample output.
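
For reference, here is a minimal, hypothetical program (a sketch of my own, not the t1288.cu profiled above, whose source is not shown) illustrating how the Calls column in the nvprof summary reflects the number of launches of each kernel:

// count_demo.cu -- minimal sketch: two trivial kernels launched a
// different number of times, so nvprof's summary should report a
// separate "Calls" count next to each kernel name.

__global__ void kernel_a(int *d) { d[threadIdx.x] += 1; }
__global__ void kernel_b(int *d) { d[threadIdx.x] += 2; }

int main() {
    int *d = nullptr;
    cudaMalloc((void **)&d, 32 * sizeof(int));
    cudaMemset(d, 0, 32 * sizeof(int));

    // kernel_a is launched 3 times, kernel_b once; the expectation is
    // Calls = 3 for kernel_a and Calls = 1 for kernel_b in the summary.
    for (int i = 0; i < 3; ++i)
        kernel_a<<<1, 32>>>(d);
    kernel_b<<<1, 32>>>(d);

    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}

Compiling and profiling it would look something like:

nvcc -o count_demo count_demo.cu
nvprof ./count_demo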