Cannot compile the Boost Spirit word_count_lexer example

Posted: 2015-09-25 16:36:45

Tags: c++ boost boost-spirit boost-spirit-qi boost-spirit-lex

I am continuing to learn the Boost Spirit library and have run into a compile problem with an example I cannot build. You can find the example source here: source place. You can also view the code and the compile result on Coliru.
#include <boost/config/warning_disable.hpp>
#include <boost/spirit/include/lex_lexertl.hpp>

//#define BOOST_SPIRIT_USE_PHOENIX_V3
#include <boost/spirit/include/phoenix_operator.hpp>
#include <boost/spirit/include/phoenix_statement.hpp>
#include <boost/spirit/include/phoenix_algorithm.hpp>
#include <boost/spirit/include/phoenix_core.hpp>

#include <string>
#include <iostream>

namespace lex = boost::spirit::lex;

struct distance_func
{
    template <typename Iterator1, typename Iterator2>
    struct result : boost::iterator_difference<Iterator1> {};

    template <typename Iterator1, typename Iterator2>
    typename result<Iterator1, Iterator2>::type 
    operator()(Iterator1& begin, Iterator2& end) const
    {
        return std::distance(begin, end);
    }
};
boost::phoenix::function<distance_func> const distance = distance_func();

//[wcl_token_definition
template <typename Lexer>
struct word_count_tokens : lex::lexer<Lexer>
{
    word_count_tokens()
      : c(0), w(0), l(0)
      , word("[^ \t\n]+")     // define tokens
      , eol("\n")
      , any(".")
    {
        using boost::spirit::lex::_start;
        using boost::spirit::lex::_end;
        using boost::phoenix::ref;

        // associate tokens with the lexer
        this->self 
            =   word  [++ref(w), ref(c) += distance(_start, _end)]
            |   eol   [++ref(c), ++ref(l)] 
            |   any   [++ref(c)]
            ;
    }

    std::size_t c, w, l;
    lex::token_def<> word, eol, any;
};
//]

///////////////////////////////////////////////////////////////////////////////
//[wcl_main
int main(int argc, char* argv[])
{
  typedef 
        lex::lexertl::token<char const*, lex::omit, boost::mpl::false_> 
     token_type;

/*<  This defines the lexer type to use
>*/  typedef lex::lexertl::actor_lexer<token_type> lexer_type;

/*<  Create the lexer object instance needed to invoke the lexical analysis 
>*/  word_count_tokens<lexer_type> word_count_lexer;

/*<  Read input from the given file, tokenize all the input, while discarding
     all generated tokens
>*/  std::string str;
    char const* first = str.c_str();
    char const* last = &first[str.size()];

/*<  Create a pair of iterators returning the sequence of generated tokens
>*/  lexer_type::iterator_type iter = word_count_lexer.begin(first, last);
    lexer_type::iterator_type end = word_count_lexer.end();

/*<  Here we simply iterate over all tokens, making sure to break the loop
     if an invalid token gets returned from the lexer
>*/  while (iter != end && token_is_valid(*iter))
        ++iter;

    if (iter == end) {
        std::cout << "lines: " << word_count_lexer.l 
                  << ", words: " << word_count_lexer.w 
                  << ", characters: " << word_count_lexer.c 
                  << "\n";
    }
    else {
        std::string rest(first, last);
        std::cout << "Lexical analysis failed\n" << "stopped at: \"" 
                  << rest << "\"\n";
    }
    return 0;
}

When I try to compile it I get a lot of errors; see the full list on Coliru.

What is wrong with this example? What needs to be changed, and why, to make it compile?

1 answer:

Answer 0 (score: 1)

Apparently something changed in Lex's internals, and the iterators are now sometimes rvalues.
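The underlying language rule is not Spirit-specific: a temporary (rvalue) cannot bind to a non-const lvalue reference, but it can bind to a const reference or be passed by value. A minimal standalone illustration, using std::string iterators as a stand-in for the iterators the lexer hands to the semantic action:

// Minimal illustration of the rule the fix relies on (not Spirit-specific):
// rvalues cannot bind to non-const lvalue references.
#include <iterator>
#include <string>

std::ptrdiff_t by_lvalue_ref(std::string::iterator& b, std::string::iterator& e)
{
    return std::distance(b, e);
}

std::ptrdiff_t by_const_ref(std::string::iterator const& b, std::string::iterator const& e)
{
    return std::distance(b, e);
}

int main()
{
    std::string s = "hello";
    // by_lvalue_ref(s.begin(), s.end()); // error: s.begin() returns an rvalue
    by_const_ref(s.begin(), s.end());     // fine: const& binds to rvalues
}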

You need to adjust distance_func's operator() to either

operator()(Iterator1 begin, Iterator2 end) const

or

operator()(Iterator1 const& begin, Iterator2 const& end) const

Then it works. See it Live On Coliru.
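Putting the suggested signature change back into the posted functor gives a sketch like the following; everything else in the original example stays as posted:

// Sketch of the adjusted functor based on the fix above; only the
// operator() parameters change so they can also bind to rvalue iterators.
struct distance_func
{
    template <typename Iterator1, typename Iterator2>
    struct result : boost::iterator_difference<Iterator1> {};

    template <typename Iterator1, typename Iterator2>
    typename result<Iterator1, Iterator2>::type
    operator()(Iterator1 const& begin, Iterator2 const& end) const
    {
        return std::distance(begin, end);
    }
};
boost::phoenix::function<distance_func> const distance = distance_func();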