What happens when Spark calls ShuffleBlockFetcherIterator?

Time: 2015-12-17 02:13:47

Tags: apache-spark apache-spark-sql

My Spark job seems to spend a lot of time fetching blocks. Sometimes it does this for an hour or two. My dataset has 1 partition, so I'm not sure why it is doing so much shuffling. Does anyone know exactly what is going on here?

15/12/16 18:05:27 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:05:27 INFO ShuffleBlockFetcherIterator: Getting 4 non-empty blocks out of 4 blocks
15/12/16 18:05:27 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:05:40 INFO ShuffleBlockFetcherIterator: Getting 200 non-empty blocks out of 200 blocks
15/12/16 18:05:40 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:05:40 INFO ShuffleBlockFetcherIterator: Getting 4 non-empty blocks out of 4 blocks
15/12/16 18:05:40 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:05:59 INFO ShuffleBlockFetcherIterator: Getting 200 non-empty blocks out of 200 blocks
15/12/16 18:05:59 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:05:59 INFO ShuffleBlockFetcherIterator: Getting 4 non-empty blocks out of 4 blocks
15/12/16 18:05:59 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:06:13 INFO ShuffleBlockFetcherIterator: Getting 200 non-empty blocks out of 200 blocks
15/12/16 18:06:13 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:06:13 INFO ShuffleBlockFetcherIterator: Getting 4 non-empty blocks out of 4 blocks
15/12/16 18:06:13 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:06:33 INFO ShuffleBlockFetcherIterator: Getting 200 non-empty blocks out of 200 blocks
15/12/16 18:06:33 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:06:33 INFO ShuffleBlockFetcherIterator: Getting 4 non-empty blocks out of 4 blocks
15/12/16 18:06:33 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:06:49 INFO ShuffleBlockFetcherIterator: Getting 200 non-empty blocks out of 200 blocks
15/12/16 18:06:49 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:06:49 INFO ShuffleBlockFetcherIterator: Getting 4 non-empty blocks out of 4 blocks
15/12/16 18:06:49 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:07:14 INFO ShuffleBlockFetcherIterator: Getting 200 non-empty blocks out of 200 blocks
15/12/16 18:07:14 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
15/12/16 18:07:14 INFO ShuffleBlockFetcherIterator: Getting 4 non-empty blocks out of 4 blocks
15/12/16 18:07:14 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:07:33 INFO ShuffleBlockFetcherIterator: Getting 200 non-empty blocks out of 200 blocks
15/12/16 18:07:33 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
15/12/16 18:07:33 INFO ShuffleBlockFetcherIterator: Getting 4 non-empty blocks out of 4 blocks
15/12/16 18:07:33 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:07:46 INFO ShuffleBlockFetcherIterator: Getting 200 non-empty blocks out of 200 blocks
15/12/16 18:07:46 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
15/12/16 18:07:47 INFO ShuffleBlockFetcherIterator: Getting 4 non-empty blocks out of 4 blocks
15/12/16 18:07:47 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:07:58 INFO ShuffleBlockFetcherIterator: Getting 200 non-empty blocks out of 200 blocks
15/12/16 18:07:58 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
15/12/16 18:07:58 INFO ShuffleBlockFetcherIterator: Getting 4 non-empty blocks out of 4 blocks
15/12/16 18:07:58 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms

2 answers:

Answer 0 (score: 0)

ShuffleBlockFetcherIterator is a Scala iterator that fetches multiple shuffle blocks (also known as shuffle map outputs) from local and remote BlockManagers.

It lets the caller iterate over a sequence of blocks as (BlockId, InputStream) pairs, so shuffle blocks can be processed in a pipelined fashion as they are received.

For performance, you need to tune your operations, your configuration, or both.
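On the configuration side, one hedged example: the repeating "Getting 200 non-empty blocks out of 200 blocks" lines match Spark SQL's default shuffle partition count (spark.sql.shuffle.partitions, which defaults to 200). Below is a minimal sketch of lowering it for a small dataset, shown with the SparkSession API from Spark 2.x (on the 1.x releases current when this was asked, the equivalent call is sqlContext.setConf); the value 8 is purely illustrative, not a recommendation for this specific job:

import org.apache.spark.sql.SparkSession

// Build a session with fewer shuffle partitions than the default of 200.
// 8 is an illustrative value chosen for a small dataset.
val spark = SparkSession.builder()
  .appName("shuffle-tuning-sketch")
  .config("spark.sql.shuffle.partitions", "8")
  .getOrCreate()

// The same setting can also be changed at runtime:
spark.conf.set("spark.sql.shuffle.partitions", "8")

With fewer shuffle partitions, each stage fetches fewer (but larger) blocks, which can reduce per-block overhead when the data is small.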

Answer 1 (score: 0)

FYI, Spark is probably doing more than just fetching blocks; it may simply have logging disabled for everything else. If you haven't already, go to the Spark History Server and look at the SQL tab for your query. This is most likely a symptom of shuffling and processing too much data. Try reducing the amount of data being shuffled, breaking it up into smaller chunks, or getting a bigger cluster.

"My dataset has 1 partition, so I'm not sure why it is doing so much shuffling."

Also keep in mind that "partition" is an overloaded word in Spark. Even if you have 1 table partition, you may still have many Spark data partitions (slices) while the data is being processed, as the sketch below shows.
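A minimal sketch of that distinction, assuming a hypothetical single-partition Parquet table at /data/one_partition_table with a column named key (both names are illustrative):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("partition-count-sketch").getOrCreate()
import spark.implicits._

// Even a "1 partition" table can be read as several Spark partitions,
// since input splits follow the file/block layout, not the table layout.
val df = spark.read.parquet("/data/one_partition_table") // hypothetical path
println(df.rdd.getNumPartitions)

// Any wide operation (groupBy, join, ...) then shuffles the data into
// spark.sql.shuffle.partitions pieces: 200 by default, matching the log above.
val grouped = df.groupBy($"key").count() // "key" is a hypothetical column
println(grouped.rdd.getNumPartitions)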
