Training a CNN with tiny-dnn to recognize NIST digits

Asked: 2018-04-12 06:50:39

Tags: machine-learning neural-network ocr tiny-dnn

I have been trying to train a CNN for digit recognition using the tiny-dnn library. The dataset is NIST 19, with 1000 samples per class for training and 30 per class for testing, so the total number of training samples is 1000 * 10 = 10000. OpenCV is used for the image processing.

The maximum accuracy obtained is 40%. Is this due to the small number of samples? How can I improve the accuracy?

The code is as follows:

void ConvolutionalNN::train()
{
    network<sequential> net;

    // add layers
    net << conv(32, 32, 5, 1, 6) << tiny_dnn::activation::tanh()  // in:32x32x1, 5x5conv, 6fmaps
        << ave_pool(28, 28, 6, 2) << tiny_dnn::activation::tanh() // in:28x28x6, 2x2pooling
        << fc(14 * 14 * 6, 120) << tiny_dnn::activation::tanh()   // in:14x14x6, out:120
        << fc(120, 10);                     // in:120,     out:10

    assert(net.in_data_size() == 32 * 32);
    assert(net.out_data_size() == 10);

    DatabaseReader db;
    db.readTrainingFiles();

    // hold labels -> training filenames
    std::vector<int> labels = db.getTrainLabels();
    std::vector<std::string> trainingFilenames = db.getTrainFileNames();

    std::vector<label_t> train_labels;
    std::vector<vec_t> train_images;

    // loop over training files
    for (size_t index = 0; index < trainingFilenames.size(); index++)
    {
        // print which label/file we are processing
        std::cout << "Analyzing label -> file: " << labels[index] << "|" << trainingFilenames[index] << std::endl;

        // read image file (grayscale)
        cv::Mat imgMat = cv::imread(trainingFilenames[index], 0);

        // invert so the digit is non-zero, then crop to its bounding box
        cv::Mat nonZero;
        cv::Mat invert = 255 - imgMat;
        cv::findNonZero(invert, nonZero);
        cv::Rect bb = cv::boundingRect(nonZero);
        cv::Mat img = invert(bb);

        int w = 32, h = 32;  // network input size
        cv::Mat resized;
        cv::resize(img, resized, cv::Size(w, h));

        cv::imshow("img", resized);
        cv::waitKey(30);
        // convert to float and rescale pixel values to [-1, 1]
        resized.convertTo(resized, CV_32FC1);
        cv::normalize(resized, resized, -1, 1, cv::NORM_MINMAX);

        // convert to tiny_dnn::vec_t (flat copy; the Mat is continuous after convertTo)
        vec_t d;
        tiny_dnn::float_t *ptr = resized.ptr<tiny_dnn::float_t>(0);
        d = tiny_dnn::vec_t(ptr, ptr + resized.cols * resized.rows);

        train_images.push_back(d);
        train_labels.push_back(labels[index]);


    }

    // declare optimization algorithm
    adagrad optimizer;

    cout << "Training Started" << endl;

    // train (50-epoch, 30-minibatch)
    net.train<mse, adagrad>(optimizer, train_images, train_labels, 30, 50);

    cout << "Training Completed" << endl;

    // save the trained weights
    net.save("net");

}

Thanks,
Amal

2 Answers:

Answer 0 (score: 0)

Why do you want to predict only 10 classes? If I remember correctly, NIST 19 has 62 classes, so you should create an output layer with 62 neurons and a softmax activation. By the way, tanh activation is quite dated; I recommend using ReLU instead.

You could also try some data augmentation: keep generating random perturbations of the input images as you train the CNN.

Answer 1 (score: 0)

The problem was the high learning rate. When the learning rate was changed from the default 0.01 to 0.001, the accuracy increased from 40% to 90%.

The changed line is:

optimizer.alpha = static_cast<tiny_dnn::float_t>(0.001);

Thanks,

Amal