SWIG error when templates span multiple lines

Date: 2017-10-08 13:32:51

Tags: python c++ templates swig

I am trying to wrap a piece of code in Python using SWIG. Here is the interface file:

/* dnn_face_embedding_extractor.ii */
%module dnn_face_embedding_extractor
%{
/* Put header files here or function declarations like below */
extern template <template <int,template<typename>class,int,typename> class      block, int N, template<typename>class BN, typename SUBNET>;
extern template <template <int,template<typename>class,int,typename> class block, int N, template<typename>class BN, typename SUBNET>;
extern template <int N, template <typename> class BN, int stride, typename SUBNET> ;
extern std::vector<matrix<float,0,1>> am_main(char* image_name);
%}

%template <template <int,template<typename>class,int,typename> class block, int N, template<typename>class BN, typename SUBNET>
%template <template <int,template<typename>class,int,typename> class block, int N, template<typename>class BN, typename SUBNET>
%template <int N, template <typename> class BN, int stride, typename SUBNET> ;
extern std::vector<matrix<float,0,1>> am_main(char* image_name);

I get the following error: dnn_face_embedding_extractor.ii:11: Error: Syntax error in input(1). Can anyone help me?

P.S. The code I want to wrap is below; it has no errors and I can compile it. It is also worth noting that the code is inspired by one of the examples in Dlib.

#include <dlib/dnn.h>
#include <dlib/gui_widgets.h>
#include <dlib/clustering.h>
#include <dlib/string.h>
#include <dlib/image_io.h>
#include <dlib/image_processing/frontal_face_detector.h>
using namespace dlib;
using namespace std;

template <template <int,template<typename>class,int,typename> class block, int N, template<typename>class BN, typename SUBNET>
using residual = add_prev1<block<N,BN,1,tag1<SUBNET>>>;

template <template <int,template<typename>class,int,typename> class block, int N, template<typename>class BN, typename SUBNET>
using residual_down = add_prev2<avg_pool<2,2,2,2,skip1<tag2<block<N,BN,2,tag1<SUBNET>>>>>>;

template <int N, template <typename> class BN, int stride, typename SUBNET> 
using block  = BN<con<N,3,3,1,1,relu<BN<con<N,3,3,stride,stride,SUBNET>>>>>;

template <int N, typename SUBNET> using ares      = relu<residual<block,N,affine,SUBNET>>;
template <int N, typename SUBNET> using ares_down = relu<residual_down<block,N,affine,SUBNET>>;

template <typename SUBNET> using alevel0 = ares_down<256,SUBNET>;
template <typename SUBNET> using alevel1 = ares<256,ares<256,ares_down<256,SUBNET>>>;
template <typename SUBNET> using alevel2 = ares<128,ares<128,ares_down<128,SUBNET>>>;
template <typename SUBNET> using alevel3 = ares<64,ares<64,ares<64,ares_down<64,SUBNET>>>>;
template <typename SUBNET> using alevel4 = ares<32,ares<32,ares<32,SUBNET>>>;

using anet_type = loss_metric<fc_no_bias<128,avg_pool_everything<
                        alevel0<
                        alevel1<
                        alevel2<
                        alevel3<
                        alevel4<
                        max_pool<3,3,2,2,relu<affine<con<32,7,7,2,2,
                        input_rgb_image_sized<150>
                        >>>>>>>>>>>>;

std::vector<matrix<float,0,1>> am_main(char* image_name) try{
    // The first thing we are going to do is load all our models.  First, since we need to
    // find faces in the image we will need a face detector:
    frontal_face_detector detector = get_frontal_face_detector();
    // We will also use a face landmarking model to align faces to a standard pose:  (see face_landmark_detection_ex.cpp for an introduction)
    shape_predictor sp;
    deserialize("shape_predictor_5_face_landmarks.dat") >> sp;
    // And finally we load the DNN responsible for face recognition.
    anet_type net;
    deserialize("dlib_face_recognition_resnet_model_v1.dat") >> net;

    matrix<rgb_pixel> img;
    load_image(img, image_name);

    // Run the face detector on the image of our action heroes, and for each face extract a
    // copy that has been normalized to 150x150 pixels in size and appropriately rotated
    // and centered.
    std::vector<matrix<rgb_pixel>> faces;
    for (auto face : detector(img))
    {
        auto shape = sp(img, face);
        matrix<rgb_pixel> face_chip;
        extract_image_chip(img, get_face_chip_details(shape,150,0.25), face_chip);
        faces.push_back(move(face_chip));
    }

    if (faces.size() == 0)
    {
        cout << "No faces found in image!" << endl;
        std::vector<matrix<float,0,1>> face_descriptors;
        return face_descriptors;
    }

    // This call asks the DNN to convert each face image in faces into a 128D vector.
    // In this 128D vector space, images from the same person will be close to each other
    // but vectors from different people will be far apart.  So we can use these vectors to
    // identify if a pair of images are from the same person or from different people.  
    std::vector<matrix<float,0,1>> face_descriptors = net(faces);
    return face_descriptors;
    // return 0;
}
catch (std::exception& e)
{
    cout << e.what() << endl;
    // Return an empty result so the function has a value on the error path.
    return std::vector<matrix<float,0,1>>();
}
int main(int argc, char** argv) {
    return 0;
}

1 Answer:

Answer 0: (score: 0)

See the SWIG documentation at http://www.swig.org/Doc3.0/SWIGDocumentation.html#SWIGPlus_nn30, section 6.18 Templates. You need to instantiate your templates in the interface file; the parser expects an instantiation at line 11. Try something like:

%template (MyStdVector) std::vector<matrix<float,0,1> >;

and similarly for the other templates.
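To make this concrete, a minimal interface file might look something like the sketch below. This is untested against the project; the `%template` name `MyStdVector` is an arbitrary choice, and the headers included inside `%{ ... %}` are assumptions about how the C++ code is organized. Note that the template aliases (`residual`, `block`, `alevel0`, ...) are implementation details of `am_main` and do not need to appear in the interface file at all: SWIG only needs declarations and instantiations for the types that actually cross the Python/C++ boundary.

```cpp
/* dnn_face_embedding_extractor.i -- a sketch, not a tested build. */
%module dnn_face_embedding_extractor

%{
/* Headers needed to compile the wrapper; adjust to the real project layout. */
#include <vector>
#include <dlib/matrix.h>
using namespace dlib;

/* Declaration of the only function exposed to Python. */
extern std::vector<matrix<float,0,1>> am_main(char* image_name);
%}

%include "std_vector.i"

/* Every template the SWIG parser sees must be instantiated with
   %template(name) type;  (SWIG documentation, section 6.18). */
%template(MyStdVector) std::vector<dlib::matrix<float,0,1> >;

extern std::vector<dlib::matrix<float,0,1> > am_main(char* image_name);
```

The inner `dlib::matrix<float,0,1>` type would likely also need its own `%template` (or a typemap) before the wrapped vector is fully usable from Python, but the syntax error itself comes from writing bare template parameter lists where the parser expects concrete `%template` instantiations.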
