TensorFlow gather or gather_nd

Date: 2017-05-15 13:43:04

Tags: indexing tensorflow mapping

Here is my problem. I have two tensors: one of shape (batch_size=128, height=48, width=48, depth=1) that should contain indices (from 0 to 32x32-1), and another of shape (batch_size=128, height=32, width=32, depth=1) containing the values I should map onto. In this second tensor, each matrix in the batch contains its own values.

I would like to map, for example, the third "index matrix" using the third "map matrix", keeping in mind that within each item of the batch the indices range from 0 to 32x32-1. The same step should be applied to every item in the batch. Since all of this has to happen inside a loss function, where we work on whole batches, how can I accomplish this task? I think tf.gather could help, since I have already used it, but only in a simple case (such as a constant array); I do not know how to use it in this more complex case.

EDIT:

Let's suppose I have:
[
   [
    [1,2,0,3],
    [4,2,4,0],
    [1,3,3,1],
    [1,2,4,8]
   ], 
   [
    [3,2,0,0],
    [4,5,4,2],
    [7,6,3,1],
    [1,5,4,8]
   ] 
]  that is a (2,4,4,1) and a tensor
[
  [
   [0.3,0.4,0.6],
   [0.9,0.2,0.5],
   [0.1,0.2,0.1]
  ] , 
  [
   [0.1,0.4,0.5],
   [0.8,0.1,0.6],
   [0.2,0.4,0.3]
  ]
]  that is a (2,3,3,1). 
The first contains the indexes of the second.
I would like an output:
[
   [
    [0.4,0.6,0.3,0.9],
    [0.2,0.6,0.2,0.3],
    [0.4,0.9,0.9,0.4],
    [0.4,0.6,0.2,0.1],
   ],
   [
    [0.8,0.5,0.1,0.1],
    [0.1,0.6,0.1,0.5],
    [0.4,0.2,0.8,0.4],
    [0.4,0.6,0.1,0.3]
   ] 
]

So the indices should refer to a single item of the batch. Do I also have to provide derivatives for this transformation?
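The desired mapping above can be reproduced in plain NumPy (a sketch for reference only, using the example data from the question; the `idx`/`vals`/`out` names are illustrative, not from the original post):

```python
import numpy as np

# Example data from the question: a (2, 4, 4) index tensor and a (2, 3, 3) value tensor
idx = np.array([[[1, 2, 0, 3], [4, 2, 4, 0], [1, 3, 3, 1], [1, 2, 4, 8]],
                [[3, 2, 0, 0], [4, 5, 4, 2], [7, 6, 3, 1], [1, 5, 4, 8]]])
vals = np.array([[[0.3, 0.4, 0.6], [0.9, 0.2, 0.5], [0.1, 0.2, 0.1]],
                 [[0.1, 0.4, 0.5], [0.8, 0.1, 0.6], [0.2, 0.4, 0.3]]])

# For each batch item, index the flattened value matrix with that item's indices
out = np.stack([vals[b].ravel()[idx[b]] for b in range(idx.shape[0])])
print(out[0, 0])  # → [0.4 0.6 0.3 0.9]
```

This matches the expected output shown above, e.g. the first row of the first batch item is [0.4, 0.6, 0.3, 0.9].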

1 Answer:

Answer 0 (score: 2)

If I have understood your problem correctly, you will want to use

output = tf.gather_nd(tensor2, indices)

where indices is a matrix of shape (batch_size, 48, 48, 3) such that

indices[sample][i][j] = [sample, row, col]

where (row, col) are the coordinates of the value you want to fetch in tensor2. They are a translation of what is given in tensor1, with 2 numbers instead of 1:

(row, col) = (tensor1[i, j] // 32, tensor1[i, j] % 32)

To create indices dynamically, this should do it:

batch_size = tf.shape(tensor1)[0]
i_mat = tf.transpose(tf.reshape(tf.tile(tf.range(batch_size), [48*48]), [48, 48, batch_size]))
# i_mat should be such that i_mat[i, j, k] = i
mat_32 = tf.fill(dims=[batch_size, 48, 48], value=tf.constant(32, dtype=tf.int32))
row_mat = tf.floor_div(tensor1, mat_32)
col_mat = tf.mod(tensor1, mat_32)
indices = tf.stack([i_mat, row_mat, col_mat], axis=-1)
output = tf.gather_nd(tensor2, indices)
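As a sanity check, the gather_nd approach can be run on the small example from the question (a sketch assuming TF 2.x eager mode; side length 3 stands in for 32, and tf.broadcast_to is used here as a simpler way to build the batch-index matrix than the tile/transpose combination above):

```python
import tensorflow as tf

tensor1 = tf.constant([[[1, 2, 0, 3], [4, 2, 4, 0], [1, 3, 3, 1], [1, 2, 4, 8]],
                       [[3, 2, 0, 0], [4, 5, 4, 2], [7, 6, 3, 1], [1, 5, 4, 8]]])  # (2, 4, 4) indices
tensor2 = tf.constant([[[0.3, 0.4, 0.6], [0.9, 0.2, 0.5], [0.1, 0.2, 0.1]],
                       [[0.1, 0.4, 0.5], [0.8, 0.1, 0.6], [0.2, 0.4, 0.3]]])  # (2, 3, 3) values

batch_size, h, w = 2, 4, 4
side = 3  # 32 in the original problem

# Batch index for every (i, j) position: i_mat[b, i, j] = b
i_mat = tf.broadcast_to(tf.reshape(tf.range(batch_size), [batch_size, 1, 1]),
                        [batch_size, h, w])
row_mat = tensor1 // side
col_mat = tensor1 % side
indices = tf.stack([i_mat, row_mat, col_mat], axis=-1)  # (2, 4, 4, 3)
output = tf.gather_nd(tensor2, indices)
print(output[0, 0].numpy())  # → [0.4 0.6 0.3 0.9]
```

The result matches the expected output in the question. Since gather_nd only rearranges values, gradients flow through tensor2 automatically; no custom derivative is needed (tensor1 itself receives no gradient, as integer indexing is not differentiable).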

编辑2

The code above has been slightly changed.

The code above assumes that your input tensors actually have shapes (batch_size, 48, 48) and (batch_size, 32, 32), not (batch_size, 48, 48, 1) and (batch_size, 32, 32, 1). To correct this, drop the trailing depth-1 axis, using for instance

tensor1 = tf.squeeze(tensor1, axis=[3])
tensor2 = tf.squeeze(tensor2, axis=[3])

before the code above, and finally restore it on the result:

output = tf.expand_dims(output, axis=-1)
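The shape fix described above amounts to a squeeze/expand round trip around the gather. A minimal sketch (TF 2.x assumed; the zero tensors are placeholders standing in for the real inputs and the gather_nd result):

```python
import tensorflow as tf

tensor1 = tf.zeros([128, 48, 48, 1], dtype=tf.int32)  # index tensor with depth-1 axis
tensor2 = tf.zeros([128, 32, 32, 1])                  # value tensor with depth-1 axis

# Drop the trailing depth-1 axis before building the gather indices...
t1 = tf.squeeze(tensor1, axis=-1)  # (128, 48, 48)
t2 = tf.squeeze(tensor2, axis=-1)  # (128, 32, 32)

# ...and restore it on the gathered result afterwards.
gathered = tf.zeros([128, 48, 48])            # stand-in for the gather_nd output
output = tf.expand_dims(gathered, axis=-1)    # (128, 48, 48, 1)
print(t1.shape, t2.shape, output.shape)
```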