Sanitizing text from Mechanical Turk?

Asked: 2012-07-05 13:20:26

Tags: r mechanicalturk

Is there a pre-existing function to sanitize the character columns of a data.frame for Mechanical Turk? Here is an example of where it chokes:

x <- "Duke\U3e32393cs B or C, no concomittant malignancy, ulcerative colitis, Crohn\U3e32393cs disease, renal, heart or liver failure"

I assume those are Unicode characters, but MT won't let me proceed with them in place. I could obviously strip them out by hand easily enough, but since I use MT a fair amount, I'm hoping for a more general solution that removes all non-ASCII characters.
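For example, something along these lines is the kind of general solution I mean (`strip_non_ascii` is just an illustrative name, not an existing function):

```r
# Hypothetical helper: keep printable ASCII (plus tab/newline), drop the rest
strip_non_ascii <- function(x) {
  gsub("[^ -~\t\r\n]", "", x)
}

strip_non_ascii("Crohn\u00e9s disease")
```

This drops every character outside the printable-ASCII range rather than transliterating it, which is fine for MT's purposes.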

Edit

I can strip out the encoded characters like this:

> iconv(x,from="UTF-8",to="latin1",sub=".")
[1] "Duke......s B or C, no concomittant malignancy, ulcerative colitis, Crohn......s disease, renal, heart or liver failure"

But this still leaves me without a more general solution for vectors in which only some elements carry a non-UTF-8 encoding.

> dput(vec)
c("Colorectal cancer patients Duke\U3e32393cs B or C, no concomittant malignancy, ulcerative colitis, Crohn\U3e32393cs disease, renal, heart or liver failure", 
"Patients with Parkinson\U3e32393cs Disease not already on levodopa", 
"hi")

Note that the regular text has encoding "unknown" and is not converted to "latin1", so the simple iconv solution fails. I attempted a more nuanced solution below, but I'm not happy with it.
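To illustrate, string literals behave like this (a small sketch; text read in from files may carry different declared encodings):

```r
vec <- c("hi", "Crohn\u00e9s disease")
# Plain-ASCII elements are declared "unknown"; only the second element
# actually carries a UTF-8 declaration for iconv() to convert from
Encoding(vec)
```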

1 Answer:

Answer 0 (score: 4)

To answer my own question, and in the hope that someone has a better way, since I don't trust this to handle all funky text:

sanitize.text <- function(x) {
  stopifnot(is.character(x))
  # Leave elements whose encoding is "unknown" alone (iconv has no source
  # encoding to convert from); convert everything else to latin1, dropping
  # characters that can't be represented
  sanitize.each.element <- function(elem) {
    ifelse(
      Encoding(elem)=="unknown",
      elem,
      iconv(elem,from=as.character(Encoding(elem)),to="latin1",sub="")
    )
  }
  x <- sapply(x, sanitize.each.element)
  names(x) <- NULL
  x
}

> sanitize.text(vec)
[1] "Colorectal cancer patients Dukes B or C, no concomittant malignancy, ulcerative colitis, Crohns disease, renal, heart or liver failure"
[2] "Patients with Parkinsons Disease not already on levodopa"                                                                              
[3] "hi"   

And a function that handles MT's other import quirks as well:

library(taRifx)
write.sanitized.csv <- function( x, file="", ... ) {
  sanitize.text <- function(x) {
    stopifnot(is.character(x))
    sanitize.each.element <- function(elem) {
      ifelse(
        Encoding(elem)=="unknown",
        elem,
        iconv(elem,from=as.character(Encoding(elem)),to="latin1",sub="")
      )
    }
    x <- sapply(x, sanitize.each.element)
    names(x) <- NULL
    x
  }
  # Sanitize every character column (japply applies FUN only to the
  # selected columns), then make the column names MT-safe
  x <- japply( df=x, sel=sapply(x,is.character), FUN=sanitize.text)
  colnames(x) <- gsub("[^a-zA-Z0-9_]", "_", colnames(x) )
  write.csv( x, file, row.names=FALSE, ... )
}

Edit

For lack of a better place to put this code: here is how you can figure out which element of a character vector is causing problems that even the function above can't fix:

#' Function to locate a non-ASCII character
#' @param txt A character vector
#' @return A logical of length length(txt); TRUE marks a problem element
locateBadString <- function(txt) {
  vapply(txt, function(x) {
    # substr() errors on byte sequences that are invalid in the declared
    # encoding, so a "try-error" identifies the offending element
    class( try( substr( x, 1, nchar(x) ), silent=TRUE ) )=="try-error"
  }, TRUE )
}
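To see what "causing problems" looks like, here is a self-contained sketch that builds an invalid string and flags it the same way (`checkString` is an illustrative name; `try(silent = TRUE)` keeps the error from printing):

```r
# Build a string whose bytes are invalid for its declared encoding:
bad <- "fa\xE7ile"            # latin1 bytes...
Encoding(bad) <- "UTF-8"      # ...deliberately mislabeled as UTF-8

# substr()/nchar() error on such strings; a "try-error" flags the element
checkString <- function(x) {
  class(try(substr(x, 1, nchar(x)), silent = TRUE)) == "try-error"
}

vapply(c("hi", bad), checkString, TRUE, USE.NAMES = FALSE)  # flags element 2
```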

EDIT 2

I think this should just work:

iconv(x, to = "latin1", sub="")

Thanks to @Masoud for the answer: https://stackoverflow.com/a/20250920/636656
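Along the same lines (my own variation rather than a quote from that answer), converting straight to ASCII drops anything MT could object to:

```r
# Sketch: convert to plain ASCII, silently dropping unrepresentable characters
iconv("Crohn\u00e9s disease", from = "UTF-8", to = "ASCII", sub = "")
```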