How to diagnose out-of-memory errors on a TPU

Date: 2019-07-16 13:26:20

Tags: tensorflow tpu

I am trying to train a variant of U-Net on a TPU, and three operations appear to be using 24 GB of memory between them. Given how large the network is, I can't tell where they are. How do you find the actual operations that these opaque stack traces refer to?

RuntimeError: Compilation failed: Compilation failure: Ran out of memory in memory space hbm. Used 27.90G of 16.00G hbm. Exceeded hbm capacity by 11.90G.

Total hbm usage >= 27.90G:
    reserved        528.00M
    program          27.38G
    arguments       unknown size

Output size unknown.

Program hbm requirement 27.38G:
    reserved          12.0K
    scoped             1.0K
    HLO temp         27.38G (5.6% utilization, 0.0% fragmentation (1.14M))

  Largest program allocations in hbm:

  1. Size: 8.00G
     Operator: op_type="CrossReplicaSum" op_name="tpu_139655909282424/CrossReplicaSum"
     Shape: f32[256,512,128,2]{3,2,1,0}
     Unpadded size: 128.00M
     Extra memory due to padding: 7.88G (64.0x expansion)
     XLA label: %cross-replica-sum = f32[256,512,128,2]{3,2,1,0} cross-replica-sum(f32[256,512,128,2]{3,2,1,0} %bitcast.1), replica_groups={{0,1,2,3,4,5,6,7}}, barrier="custom:0", to_apply=%sum.902, metadata={op_type="CrossReplicaSum" op_name="tpu_139655909282424/CrossRep...
     Allocation type: HLO temp
     ==========================

  2. Size: 8.00G
     Operator: op_type="Mul" op_name="tpu_139655909282424/mul_1"
     Shape: f32[8,32,512,128,2]{4,3,2,1,0}
     Unpadded size: 128.00M
     Extra memory due to padding: 7.88G (64.0x expansion)
     XLA label: %fusion.4 = (f32[8,32,512,128,2]{4,3,2,1,0}, f32[8,32,512,128,2]{4,3,2,1,0}) fusion(f32[8]{0} %fusion.1265, f32[32,512,128,2]{3,2,1,0} %reshape.319, f32[32,512,128,2]{3,2,1,0} %copy.5), kind=kLoop, calls=%fused_computation.4, metadata={op_type="Mul" op_nam...
     Allocation type: HLO temp
     ==========================

  3. Size: 8.00G
     Operator: op_type="Mul" op_name="tpu_139655909282424/mul_1"
     Shape: f32[8,32,512,128,2]{4,3,2,1,0}
     Unpadded size: 128.00M
     Extra memory due to padding: 7.88G (64.0x expansion)
     XLA label: %fusion.4 = (f32[8,32,512,128,2]{4,3,2,1,0}, f32[8,32,512,128,2]{4,3,2,1,0}) fusion(f32[8]{0} %fusion.1265, f32[32,512,128,2]{3,2,1,0} %reshape.319, f32[32,512,128,2]{3,2,1,0} %copy.5), kind=kLoop, calls=%fused_computation.4, metadata={op_type="Mul" op_nam...
     Allocation type: HLO temp
     ==========================

1 answer:

Answer 0 (score: 1)

You can find the traceback to the point where an op was defined through its traceback / traceback_with_start_lines attributes. For example, to print the traceback for an op you could write a function like this:

def print_op_traceback(op):  # Takes a tf.Operation; for a tf.Tensor, use tensor.op
    # Each traceback entry is a (filename, line number, function name, source line) tuple.
    for f, lno, func, line in op.traceback:
        print(f'{f}:{lno} ({func})\n    {line}')
        # Or, before Python 3.6:
        # print('{}:{} ({})\n    {}'.format(f, lno, func, line))

Then you can use get_operation_by_name to look at the traceback of the problematic op:

op = tf.get_default_graph().get_operation_by_name('tpu_139655909282424/CrossReplicaSum')
print_op_traceback(op)
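
Since the report above names several large allocations, you could also scan every op in the graph and print the traceback of each one whose name matches. A minimal sketch, assuming the TF 1.x graph API; reported_names is a hypothetical set you fill in from the OOM report yourself:

import tensorflow as tf

# Hypothetical: op names copied from the "Largest program allocations" entries above.
reported_names = {
    'tpu_139655909282424/CrossReplicaSum',
    'tpu_139655909282424/mul_1',
}

# Walk the default graph and print a traceback for every matching op.
for op in tf.get_default_graph().get_operations():
    if op.name in reported_names:
        print(f'--- {op.name} (type: {op.type}) ---')
        print_op_traceback(op)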