Can't compile boost spirit word_count_lexer example


I am continuing to learn the Boost Spirit library and have run into a compilation problem with an example I cannot compile. You can find the example's source code here: source location. You can also view this code and the compilation result on Coliru.
#include <boost/config/warning_disable.hpp>
#include <boost/spirit/include/lex_lexertl.hpp>
//#define BOOST_SPIRIT_USE_PHOENIX_V3
#include <boost/spirit/include/phoenix_operator.hpp>
#include <boost/spirit/include/phoenix_statement.hpp>
#include <boost/spirit/include/phoenix_algorithm.hpp>
#include <boost/spirit/include/phoenix_core.hpp>
#include <string>
#include <iostream>
namespace lex = boost::spirit::lex;
struct distance_func
{
    template <typename Iterator1, typename Iterator2>
    struct result : boost::iterator_difference<Iterator1> {};
    template <typename Iterator1, typename Iterator2>
    typename result<Iterator1, Iterator2>::type 
    operator()(Iterator1& begin, Iterator2& end) const
    {
        return std::distance(begin, end);
    }
};
boost::phoenix::function<distance_func> const distance = distance_func();
//[wcl_token_definition
template <typename Lexer>
struct word_count_tokens : lex::lexer<Lexer>
{
    word_count_tokens()
      : c(0), w(0), l(0)
      , word("[^ \t\n]+")     // define tokens
      , eol("\n")
      , any(".")
    {
        using boost::spirit::lex::_start;
        using boost::spirit::lex::_end;
        using boost::phoenix::ref;
        // associate tokens with the lexer
        this->self 
            =   word  [++ref(w), ref(c) += distance(_start, _end)]
            |   eol   [++ref(c), ++ref(l)] 
            |   any   [++ref(c)]
            ;
    }
    std::size_t c, w, l;
    lex::token_def<> word, eol, any;
};
//]
///////////////////////////////////////////////////////////////////////////////
//[wcl_main
int main(int argc, char* argv[])
{
  typedef 
        lex::lexertl::token<char const*, lex::omit, boost::mpl::false_> 
     token_type;
/*<  This defines the lexer type to use
>*/  typedef lex::lexertl::actor_lexer<token_type> lexer_type;
/*<  Create the lexer object instance needed to invoke the lexical analysis 
>*/  word_count_tokens<lexer_type> word_count_lexer;
/*<  Read input from the given file, tokenize all the input, while discarding
     all generated tokens
>*/  std::string str;
    char const* first = str.c_str();
    char const* last = &first[str.size()];
/*<  Create a pair of iterators returning the sequence of generated tokens
>*/  lexer_type::iterator_type iter = word_count_lexer.begin(first, last);
    lexer_type::iterator_type end = word_count_lexer.end();
/*<  Here we simply iterate over all tokens, making sure to break the loop
     if an invalid token gets returned from the lexer
>*/  while (iter != end && token_is_valid(*iter))
        ++iter;
    if (iter == end) {
        std::cout << "lines: " << word_count_lexer.l 
                  << ", words: " << word_count_lexer.w 
                  << ", characters: " << word_count_lexer.c 
                  << "\n";
    }
    else {
        std::string rest(first, last);
        std::cout << "Lexical analysis failed\n" << "stopped at: \"" 
                  << rest << "\"\n";
    }
    return 0;
}

When I try to compile it, I get a lot of errors; see the full list on Coliru.

What is wrong with this example? What needs to be changed to make it compile, and why?

Apparently something changed in Lex's internals, and the iterators are now sometimes rvalues.

You need to adapt distance_func to read

operator()(Iterator1 begin, Iterator2 end) const

or

operator()(Iterator1 const& begin, Iterator2 const& end) const

That fixes it. See it Live On Coliru.