MSE in anomaly detection with h2o

Date: 2016-01-11 15:46:53

Tags: r machine-learning deep-learning h2o

I am working through the ECG anomaly detection example provided by h2o. When I try to compute the MSE manually, I get a different result. To show the difference I use the last test case, but all 23 cases disagree. The complete code is attached:

Thanks, Li.

suppressMessages(library(h2o))
localH2O = h2o.init(max_mem_size = '6g', # use 6GB of RAM of *GB available
                    nthreads = -1)       # use all CPUs (8 on my personal computer :3)

# Download and import ECG train and test data into the H2O cluster
train_ecg <- h2o.importFile(path = "http://h2o-public-test-data.s3.amazonaws.com/smalldata/anomaly/ecg_discord_train.csv",
                            header = FALSE,
                            sep = ",")
test_ecg <- h2o.importFile(path = "http://h2o-public-test-data.s3.amazonaws.com/smalldata/anomaly/ecg_discord_test.csv",
                           header = FALSE,
                           sep = ",")
# Train deep autoencoder learning model on "normal"
# training data, y ignored
anomaly_model <- h2o.deeplearning(x = names(train_ecg),
                                  training_frame = train_ecg,
                                  activation = "Tanh",
                                  autoencoder = TRUE,
                                  hidden = c(50, 20, 50),
                                  l1 = 1e-4,
                                  epochs = 100)

# Compute reconstruction error with the Anomaly
# detection app (MSE between output layer and input layer)
recon_error <- h2o.anomaly(anomaly_model, test_ecg)

# Pull reconstruction error data into R and
# plot to find outliers (last 3 heartbeats)
recon_error <- as.data.frame(recon_error)
recon_error
plot.ts(recon_error)
test_recon <- h2o.predict(anomaly_model, test_ecg)

t <- as.vector(test_ecg[23,])
r <- as.vector(test_recon[23,])
mse.23 <- sum((t-r)^2)/length(t)
mse.23
recon_error[23,]

> mse.23
[1] 2.607374
> recon_error[23,]
[1] 8.264768

2 Answers:

Answer 0 (score: 1)

This is not really an answer, but I did what @Arno Candel suggested. I tried merging the test and training data and normalizing them to 0-1. I then split the merged, normalized data back into test and training sets and ran the script from the OP. However, I still get a different MSE from the manual calculation. When I normalized the test and training data separately, the MSEs also differed. What can I do to make the manual calculation correct?

suppressMessages(library(purrr))
suppressMessages(library(dplyr))
suppressMessages(library(h2o))

localH2O = h2o.init(max_mem_size = '6g', # use 6GB of RAM of *GB available
                    nthreads = -1)       # use all CPUs (8 on my personal computer :3)

# Download and import ECG train and test data into the H2O cluster
train_ecg <- h2o.importFile(path = "http://h2o-public-test-data.s3.amazonaws.com/smalldata/anomaly/ecg_discord_train.csv",
                            header = FALSE,
                            sep = ",")
test_ecg <- h2o.importFile(path = "http://h2o-public-test-data.s3.amazonaws.com/smalldata/anomaly/ecg_discord_test.csv",
                           header = FALSE,
                           sep = ",")
### added section
# normalize the merged data to 0-1
train_ecg <- as.data.frame(train_ecg)
test_ecg <- as.data.frame(test_ecg)

dat <- rbind(train_ecg,test_ecg)

get_desc <- function(x) {
  map(x, ~list(
    min = min(.x),
    max = max(.x),
    mean = mean(.x),
    sd = sd(.x)
  ))
}

normalization_minmax <- function(x, desc) {
  map2_dfc(x, desc, ~(.x - .y$min)/(.y$max - .y$min))
}

desc <- dat %>%
  get_desc()

dat <- dat %>%
  normalization_minmax(desc)

# split back and convert to H2OFrames (h2o.deeplearning needs an H2OFrame, not a matrix)
train_ecg <- as.h2o(dat[1:20,]) ; test_ecg <- as.h2o(dat[21:43,])

# Train deep autoencoder learning model on "normal"
# training data, y ignored
anomaly_model <- h2o.deeplearning(x = names(train_ecg),
                                  training_frame = train_ecg,
                                  activation = "Tanh",
                                  autoencoder = TRUE,
                                  hidden = c(50, 20, 50),
                                  l1 = 1e-4,
                                  epochs = 100)

# Compute reconstruction error with the Anomaly
# detection app (MSE between output layer and input layer)
recon_error <- h2o.anomaly(anomaly_model, test_ecg)

# Pull reconstruction error data into R and
# plot to find outliers (last 3 heartbeats)
recon_error <- as.data.frame(recon_error)
recon_error
plot.ts(recon_error)
test_recon <- h2o.predict(anomaly_model, test_ecg)

t <- as.vector(test_ecg[23,])
r <- as.vector(test_recon[23,])
mse.23 <- sum((t-r)^2)/length(t)
mse.23
recon_error[23,]


> mse.23
[1] 23.14947
> recon_error[23,]
[1] 8.076866

Answer 1 (score: 0)

For autoencoders in H2O, the MSE math is done in the normalized space to avoid numerical scaling issues. For example, if you have categorical features or very large numbers, the neural network autoencoder cannot operate directly on those values; instead, it first does dummy one-hot encoding of categoricals and normalization of numeric features, and then it does the forward/backward propagation and the computation of reconstruction errors (in the normalized and expanded space). For purely numerical data, you can manually divide each column by its range (max - min) first, and the results should match.
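
A minimal R sketch of that manual check, reusing the objects from the question above (anomaly_model, train_ecg, test_ecg, test_recon). It assumes the per-column range is taken from the training frame; since the MSE is computed on differences, only the scale matters, not the shift. Treat it as an illustration of the idea rather than an exact reproduction of H2O's internal bookkeeping.

# Sketch: recompute the per-row reconstruction MSE in the normalized space,
# assuming each column is rescaled by its range (max - min) from the training data
train_df <- as.data.frame(train_ecg)
test_m   <- as.matrix(as.data.frame(test_ecg))
recon_m  <- as.matrix(as.data.frame(test_recon))

col_range <- apply(train_df, 2, function(x) max(x) - min(x))

# scale the per-column differences, then average the squares across each row
diff_scaled <- sweep(test_m - recon_m, 2, col_range, "/")
mse_manual  <- rowMeans(diff_scaled^2)

mse_manual[23]                                             # manual value for the last heartbeat
as.data.frame(h2o.anomaly(anomaly_model, test_ecg))[23, ]  # value reported by H2O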

Here is a JUnit test that does this check explicitly (on that very dataset): https://github.com/h2oai/h2o-3/blob/master/h2o-algos/src/test/java/hex/deeplearning/DeepLearningAutoEncoderTest.java#L86-L104

You can also have a look at https://0xdata.atlassian.net/browse/PUBDEV-2078 for more details.