Extracting actions performed on objects from sentences in R

Asked: 2017-09-27 06:58:22

Tags: r nlp opennlp

I want to extract the actions performed on objects from a list of sentences in R. Here is a small overview.

S = “The boy opened the box. He took the chocolates. He ate the chocolates. 
     He went to school”

I am looking for combinations like the following:

Opened box
Took chocolates
Ate chocolates
Went school

I am able to extract the verbs and nouns individually, but I cannot find a way to combine them to get insights like these.

library(openNLP)
library(openNLPmodels.en)
library(NLP)

s = as.String("The boy opened the box. He took the chocolates. He ate the 
               chocolates. He went to school")

tagPOS <- function(x, ...) {
  s <- as.String(x)
  word_token_annotator <- Maxent_Word_Token_Annotator()
  a2 <- Annotation(1L, "sentence", 1L, nchar(s))
  a2 <- annotate(s, word_token_annotator, a2)
  a3 <- annotate(s, Maxent_POS_Tag_Annotator(), a2)
  a3w <- a3[a3$type == "word"]
  POStags <- unlist(lapply(a3w$features, `[[`, "POS"))
  POStagged <- paste(sprintf("%s/%s", s[a3w], POStags), collapse = ",")
  list(POStagged = POStagged, POStags = POStags)
}

nouns = c("/NN", "/NNS","/NNP","/NNPS")
verbs = c("/VB","/VBD","/VBG","/VBN","/VBP","/VBZ")

s = tolower(s)
s = gsub("\n","",s)
s = gsub('"',"",s)

tags = tagPOS(s)
tags = tags$POStagged
tags = unlist(strsplit(tags, split=","))

nouns_present = tags[grepl(paste(nouns, collapse = "|"), tags)]
nouns_present = unique(nouns_present)
verbs_present = tags[grepl(paste(verbs, collapse = "|"), tags)]
verbs_present = unique(verbs_present)
nouns_present <- gsub("^(.*?)/.*", "\\1", nouns_present)
verbs_present <- gsub("^(.*?)/.*", "\\1", verbs_present)
nouns_present <- paste("'", as.character(nouns_present), "'", collapse = ",", sep = "")
verbs_present <- paste("'", as.character(verbs_present), "'", collapse = ",", sep = "")

The idea is to build a network graph where clicking on a verb node shows all the objects attached to it, and vice versa. Any help with this would be great.
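One minimal way to combine the verbs and nouns already extracted above is to pair each verb with the first noun that follows it in the tagged sequence. A sketch of that idea, working over the `word/TAG` strings that `tagPOS` produces (the `pair_actions` helper below is illustrative, not part of openNLP):

```r
# Illustrative helper: pair each verb with the first noun that follows it.
# Expects a character vector of "word/TAG" strings, e.g. c("opened/VBD", "box/NN").
pair_actions <- function(tags) {
  words <- sub("/.*$", "", tags)   # strip the tag, keep the word
  pos   <- sub("^.*/", "", tags)   # strip the word, keep the tag
  is_verb <- pos %in% c("VB", "VBD", "VBG", "VBN", "VBP", "VBZ")
  is_noun <- pos %in% c("NN", "NNS", "NNP", "NNPS")
  pairs <- character(0)
  for (i in which(is_verb)) {
    j <- which(is_noun & seq_along(tags) > i)   # nouns occurring after this verb
    if (length(j) > 0) pairs <- c(pairs, paste(words[i], words[j[1]]))
  }
  pairs
}

pair_actions(c("opened/VBD", "the/DT", "box/NN"))  # "opened box"
```

The resulting verb-noun pairs could then feed the edge list of the network graph. This heuristic assumes the object follows its verb before the next sentence's noun appears, which holds for the example text but not for arbitrary prose.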

1 answer:

Answer 0 (score: 0)

I assume you also want to get the words before and after the key action verbs. I was able to achieve this using the tidytext package. (Reference: https://uc-r.github.io/word_relationships)

library(tidytext)
library(tidyverse)

# First create another column with the text split into n-grams
# (here n = 2, so every two adjacent words are paired together).
# `comments` is a data frame holding the text in its `Response` column.
mydf <- unnest_tokens(comments, "tokens", Response, token = "ngrams", n = 2, to_lower = TRUE, drop = FALSE)

#remove stopwords:
mydf %>%
  separate(tokens, c("word1", "word2"), sep = " ") %>%
  filter(!word1 %in% stop_words$word,
         !word2 %in% stop_words$word) %>%
  count(word1, word2, sort = TRUE) %>%
  view()
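Put together as a self-contained sketch on the question's text (the `comments` data frame and `Response` column are illustrative names, assumed to mirror the answer's setup):

```r
library(tidytext)  # unnest_tokens, stop_words
library(dplyr)     # filter, count, %>%
library(tidyr)     # separate

# Illustrative data frame standing in for the answer's `comments`.
comments <- data.frame(
  Response = "The boy opened the box. He took the chocolates. He ate the chocolates. He went to school",
  stringsAsFactors = FALSE
)

bigrams <- comments %>%
  unnest_tokens(tokens, Response, token = "ngrams", n = 2, to_lower = TRUE) %>%
  separate(tokens, c("word1", "word2"), sep = " ") %>%
  filter(!word1 %in% stop_words$word,
         !word2 %in% stop_words$word) %>%
  count(word1, word2, sort = TRUE)
```

Two caveats worth noting: stop-word filtering removes any bigram containing a stop word, so a pair like "went to school" leaves no surviving bigram, and adjacency-based bigrams cannot link a verb to an object separated by intervening words. For strict verb-object pairs, the POS-tagging route in the question remains the more precise approach.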