`foreach` on a parallelized collection never starts

Date: 2014-03-28 21:06:07

Tags: multithreading mongodb scala collections parallel-processing

I have a Mongo database containing jobs that I want to process in parallel; I thought I'd experiment with parallel collections to handle the threading transparently (not that the thread-pool approach would be all that difficult). I came up with this code:

def run(stopSignal: SynchronizedQueue[Any]) = {
  val queue = new Iterator[Job] {
    private var prevId = new ObjectId("000000000000000000000000")

    def hasNext = stopSignal.isEmpty

    @tailrec
    def next = {
      val job = Job
        .where(_.status eqs Pending)
        // this works because the IDs start with a timestamp part
        .where(_._id gt prevId)
        .orderAsc(_.regTime)
        .get()
      job match {
        case Some(job) =>
          prevId = job.id
          println(s"next() => ${job.id}")
          job
        case None if hasNext =>
          Thread.sleep(500) // TODO: use a tailable cursor instead
          next
        case None =>
          throw new InterruptedException
      }
    }
  }

  try {
    queue.toStream.par.foreach { job =>
      println(s"processing ${job.id}...")
      processOne(job)
      println(s"processing complete: ${job.id}")
    }
  } catch { case _: InterruptedException => }
}

This produces:

next() => 53335f7bef867e6f0805abdb
next() => 53335fc6ef867e6f0805abe2
next() => 53335ffcef867e6f0805abe6
next() => 53336005ef867e6f0805abe7
next() => 53336008ef867e6f0805abe8
next() => 5333600cef867e6f0805abe9

But the processing never starts; that is, the function passed to `foreach` is never called. If I remove the `.par` call, it works fine (but sequentially, of course).

Exactly which abstraction is leaking here? How can I work around it? Or should I give up on parallel collections and go back to the simpler thread-pool approach?

2 Answers:

Answer 0 (score: 3)

The `par` method first drains the stream's elements into a `ParSeq`. So when you call `queue.toStream.par`, it traverses the entire stream (calling the underlying iterator's `hasNext` and `next` methods until the iterator has no next element). Only after all jobs have been retrieved does it start calling `processOne`.

For example:

scala> (1 to 100).iterator.toStream
res7: scala.collection.immutable.Stream[Int] = Stream(1, ?)

scala> (1 to 100).iterator.toStream.par
res8: scala.collection.parallel.immutable.ParSeq[Int] = ParVector(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100)

The `par` method is not lazy.

If what you want is simply parallel execution (it already is parallel, just not lazy):
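The snippet the answer was leading into appears to be missing; a minimal sketch of one way to keep the pull lazy while still processing concurrently, assuming the question's `Iterator` of jobs and a `processOne`-style callback, is to take jobs in small batches and fan each batch out to `Future`s (the helper name `processInBatches` and the batch size are illustrative, not from the original answer):

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

// Process an iterator's elements concurrently, one small batch at a time:
// only `batchSize` elements are pulled ahead of the workers, so a long
// (or endless) job iterator is never drained up front the way
// `.toStream.par` drains it.
def processInBatches[A](jobs: Iterator[A], batchSize: Int)(process: A => Unit)
                       (implicit ec: ExecutionContext): Unit =
  jobs.grouped(batchSize).foreach { batch =>
    val inFlight = batch.map(job => Future(process(job)))
    // Wait for the whole batch before pulling the next one.
    Await.result(Future.sequence(inFlight), Duration.Inf)
  }
```

This trades a little parallelism at batch boundaries for bounded read-ahead, which is usually the right trade for a job queue.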

Answer 1 (score: 0)

I ended up going with the thread-pool approach; I'm keeping @jilen's answer as the accepted one, since it answered my question, but I'm also posting my solution:

I took the `BlockingExecutor` snippet from http://www.javacodegeeks.com/2013/11/throttling-task-submission-with-a-blockingexecutor-2.html (adapted from Java Concurrency in Practice), converted it to Scala, and then used it directly, bypassing any Scala/`Future` wrappers:
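The linked `BlockingExecutor` is essentially a fixed thread pool whose `submit` blocks the producer on a semaphore once `poolSize + queueSize` tasks are in flight. A minimal Scala sketch of that idea (this is my reconstruction of the pattern, not the article's exact code):

```scala
import java.util.concurrent.{Executors, Semaphore, TimeUnit}

// A fixed-size pool whose `submit` blocks the calling thread once
// `poolSize + queueSize` tasks are in flight, so a fast producer
// (like the job iterator) cannot queue work without bound.
class BlockingExecutor(poolSize: Int, queueSize: Int) {
  private val semaphore = new Semaphore(poolSize + queueSize)
  private val pool = Executors.newFixedThreadPool(poolSize)

  def submit(task: Runnable): Unit = {
    semaphore.acquire() // blocks while pool + queue are full
    try {
      pool.execute(new Runnable {
        def run(): Unit =
          try task.run()
          finally semaphore.release() // free the slot when the task finishes
      })
    } catch {
      case e: Throwable =>
        semaphore.release() // the task was never started; free the slot
        throw e
    }
  }

  def shutdown(): Unit = {
    pool.shutdown()
    pool.awaitTermination(1, TimeUnit.MINUTES)
  }
}
```

With this in place, the usage below throttles the job iterator to at most `poolSize + queueSize` outstanding jobs.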

// 2 processing; 2 in the queue; 4 total
val executor = new BlockingExecutor(poolSize = 2, queueSize = 2)

try {
  queue.foreach { job =>
    executor.submit(new Runnable {
      def run = {
        println(s"processing ${job.id}...")
        processOne(job)
        println(s"processing complete: ${job.id}")
      }
    })
  }
} catch { case _: InterruptedException => }
executor.shutdown