Caffe's transformer.preprocess takes a very long time to complete

Date: 2017-07-25 05:39:00

Tags: caffe pycaffe

I wrote a simple script to test a model with PyCaffe, and I found it to be extremely slow, even on the GPU! My test set has 82K samples of size 256x256, and when I run the code given below, it takes hours to complete.

I even used batches of images instead of single images, but nothing changed. At this point it has been running for 5 hours and has only processed 50K samples! What can I do to make it faster?

Can I avoid using transformer.preprocess altogether? If so, how?

Here is the excerpt:

import caffe
import lmdb
import numpy as np

#run on gpu
caffe.set_mode_gpu()

#Extract mean from the mean image file
mean_blobproto_new = caffe.proto.caffe_pb2.BlobProto()
f = open(args.mean, 'rb')
mean_blobproto_new.ParseFromString(f.read())
mean_image = caffe.io.blobproto_to_array(mean_blobproto_new)
f.close()

predicted_lables = []
true_labels = []
misclassified = []
class_names = ['unsafe', 'safe']
count = 0
correct = 0
batch = []
plabe_ls = []
batch_size = 50

net1 = caffe.Net(args.proto, args.model, caffe.TEST) 
transformer = caffe.io.Transformer({'data': net1.blobs['data'].data.shape})
transformer.set_transpose('data', (2,0,1))  
transformer.set_mean('data', mean_image[0].mean(1).mean(1))
transformer.set_raw_scale('data', 255)      
transformer.set_channel_swap('data', (2,1,0)) 
net1.blobs['data'].reshape(batch_size, 3,224, 224)
data_blob_shape = net1.blobs['data'].data.shape
data_blob_shape = list(data_blob_shape)
i=0

mu = np.array([ 104,  117,  123])#imagenet mean

#check and see if its lmdb or leveldb
if(args.db_type.lower() == 'lmdb'):
    lmdb_env = lmdb.open(args.db_path)
    lmdb_txn = lmdb_env.begin()
    lmdb_cursor = lmdb_txn.cursor()
    for key, value in lmdb_cursor:
        count += 1 
        datum = caffe.proto.caffe_pb2.Datum()
        datum.ParseFromString(value)
        label = int(datum.label)
        image = caffe.io.datum_to_array(datum).astype(np.uint8)
        if(count % 5000 == 0):
            print('count: ',count)
        if(i < batch_size):
            i+=1
            inf= key,image,label
            batch.append(inf)
        if(i >= batch_size):
            #process n image 
            ims=[]
            for x in range(len(batch)):
                ims.append(transformer.preprocess('data',batch[x][1]))# - mean_image[0].mean(1).mean(1) )
            net1.blobs['data'].data[...] = ims[:]
            out_1 = net1.forward()
            plbl = np.asarray( out_1['pred'])   
            plbl = plbl.argmax(axis=1)
            for j in range(len(batch)):
                if (plbl[j] == batch[j][2]):
                    correct+=1
                else:
                    misclassified.append(batch[j][0])

                predicted_lables.append(plbl[j])
                true_labels.append(batch[j][2]) 
            batch.clear()
            i=0

Update

Replacing

for x in range(len(batch)):
    ims.append(transformer.preprocess('data', batch[x][1]))
net1.blobs['data'].data[...] = ims[:]

with

for x in range(len(batch)):
    img = batch[x][1]
    ims.append(img[:, 0:224, 0:224])

82K samples were processed in under a minute. The culprit was indeed the preprocess method, and I have no idea why it behaves like this!

In any case, I can't use the mean file this way. I tried

ims.append(img[:,0:224,0:224] - mean.mean(1).mean(1))

but ran into this error:

ValueError: operands could not be broadcast together with shapes (3,224,224) (3,)
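For reference, this broadcast fails because NumPy aligns trailing axes: (3, 224, 224) against (3,) tries to match 224 with 3. Reshaping the per-channel mean to (3, 1, 1) lines it up with the channel axis instead. A minimal sketch, using dummy values in place of the real image and mean file:

```python
import numpy as np

# Dummy stand-ins for the real image and per-channel mean (illustrative values).
img = np.ones((3, 224, 224), dtype=np.float32) * 120.0
mean = np.array([104.0, 117.0, 123.0], dtype=np.float32)  # shape (3,)

# img - mean would fail: trailing axes 224 and 3 don't match.
# Reshaping the mean to (3, 1, 1) broadcasts it over the channel axis:
out = img - mean.reshape(3, 1, 1)
print(out.shape)  # (3, 224, 224)
```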

I also need to find a better way to crop the image. I don't know whether I need to resize it back to 224, or whether I should crop it the way Caffe does.
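As for cropping, a center crop like the one Caffe applies at test time can be done with plain NumPy slicing. A minimal sketch, assuming a channels-first (C, H, W) array; the crop sizes are illustrative:

```python
import numpy as np

def center_crop(img, cropx=224, cropy=224):
    """Center-crop a channels-first (C, H, W) array to (C, cropy, cropx)."""
    _, h, w = img.shape
    starty = h // 2 - cropy // 2
    startx = w // 2 - cropx // 2
    return img[:, starty:starty + cropy, startx:startx + cropx]

# A dummy 256x256 image stands in for a real sample.
img = np.zeros((3, 256, 256), dtype=np.float32)
print(center_crop(img).shape)  # (3, 224, 224)
```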

1 Answer:

Answer 0 (score: 2)

I finally got it working! Here is the code that runs much faster:

import caffe
import lmdb
import numpy as np

predicted_lables = []
true_labels = []
misclassified = []
class_names = ['unsafe', 'safe']
count = 0
correct = 0
batch = []
plabe_ls = []
batch_size = 50
cropx = 224
cropy = 224
i = 0

# Extract mean from the mean image file
mean_blobproto_new = caffe.proto.caffe_pb2.BlobProto()
f = open(args.mean, 'rb')
mean_blobproto_new.ParseFromString(f.read())
mean_image = caffe.io.blobproto_to_array(mean_blobproto_new)
f.close()

caffe.set_mode_gpu() 
net1 = caffe.Net(args.proto, args.model, caffe.TEST) 
net1.blobs['data'].reshape(batch_size, 3, 224, 224)
data_blob_shape = net1.blobs['data'].data.shape

#check and see if its lmdb or leveldb
if(args.db_type.lower() == 'lmdb'):
    lmdb_env = lmdb.open(args.db_path)
    lmdb_txn = lmdb_env.begin()
    lmdb_cursor = lmdb_txn.cursor()
    for key, value in lmdb_cursor:
        count += 1 
        datum = caffe.proto.caffe_pb2.Datum()
        datum.ParseFromString(value)
        label = int(datum.label)
        image = caffe.io.datum_to_array(datum).astype(np.float32)
        #key,image,label
        #buffer n image
        if(count % 5000 == 0):          
            print('{0} samples processed so far'.format(count))
        if(i < batch_size):
            i += 1
            inf= key,image,label
            batch.append(inf)
            #print(key)                 
        if(i >= batch_size):
            #process n image 
            ims=[]              
            for x in range(len(batch)):
                img = batch[x][1]
                #img has c,h,w shape! its already gone through transpose
                #and channel swap when it was being saved into lmdb!
                #method I: crop the both the image and mean file 
                #ims.append(img[:,0:224,0:224] - mean_image[0][:,0:224,0:224] )
                #Method II : resize the image to the desired size(crop size) 
                #img = caffe.io.resize_image(img.transpose(2,1,0), (224, 224))
                #Method III : use center crop just like caffe does in test time
                #center crop
                c,w,h = img.shape
                startx = h//2 - cropx//2
                starty = w//2 - cropy//2
                img = img[:, startx:startx + cropx, starty:starty + cropy]                  
                #transpose the image so we can subtract from mean
                img = img.transpose(2,1,0)
                img -= mean_image[0].mean(1).mean(1)
                #transpose back to the original state
                img = img.transpose(2,1,0)
                ims.append(img)        

            net1.blobs['data'].data[...] = ims[:]
            out_1 = net1.forward()
            plabe_ls = out_1['pred']
            plbl = np.asarray(plabe_ls)
            plbl = plbl.argmax(axis=1)
            for j in range(len(batch)):
                if (plbl[j] == batch[j][2]):
                    correct += 1
                else:
                    misclassified.append(batch[j][0])

                predicted_lables.append(plbl[j])        
                true_labels.append(batch[j][2]) 
            batch.clear()
            i = 0               

Although I didn't get exactly the same accuracy, I came very close to it (98.65 vs. the 98.61% I got! I don't know what causes this difference!).

Update
The reason transformer.preprocess took so long to complete is its resize_image() method. resize_image() expects the image in H,W,C form, but in my case the images had already been transposed and channel-swapped (into C,W,H form) when they were saved into the LMDB dataset. This forced resize_image() to fall back to its slowest resizing path, taking about 0.6 seconds per image. Knowing this, transposing the image into the correct layout solves the problem. I had to do:

ims.append(transformer.preprocess('data',img.transpose(2,1,0))) 

Note that it is still slower than the method above, but much faster than before!
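As a quick sanity check of the layout issue described above (pure NumPy, no Caffe needed): transpose(2, 1, 0) reverses all three axes, turning a C,W,H array into the H,W,C form that resize_image() expects.

```python
import numpy as np

# A dummy LMDB-stored image in C,W,H layout (as described above).
img = np.zeros((3, 256, 256), dtype=np.float32)

# transpose(2, 1, 0) reverses the axes: (C, W, H) -> (H, W, C),
# the layout that transformer.preprocess / resize_image expects.
hwc = img.transpose(2, 1, 0)
print(hwc.shape)  # (256, 256, 3)
```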