My application requires multiple threads that fetch data from various HDFS nodes. For that I use a thread executor pool and fork threads. Forking:
val pathSuffixList = fileStatuses.getOrElse("FileStatus", List[Any]()).asInstanceOf[List[Map[String, Any]]]
pathSuffixList.foreach(block => {
ConsumptionExecutor.execute(new Consumption(webHdfsUri,block))
})
My Consumption class:
import java.net.URL
import org.apache.avro.file.DataFileStream
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}

class Consumption(webHdfsUri: String, block: Map[String, Any]) extends Runnable {
  override def run(): Unit = {
    val uriSplit = webHdfsUri.split("\\?")
    val fileOpenUri = uriSplit(0) + "/" + block.getOrElse("pathSuffix", "").toString + "?op=OPEN"
    val inputStream = new URL(fileOpenUri).openStream()
    // Read generic Avro records rather than Void, so next() returns usable data
    val datumReader = new GenericDatumReader[GenericRecord]()
    val dataStreamReader = new DataFileStream[GenericRecord](inputStream, datumReader)
    // val schema = dataStreamReader.getSchema()
    val dataIterator = dataStreamReader.iterator()
    while (dataIterator.hasNext) {
      println(" data : " + dataIterator.next())
    }
  }
}
ConsumptionExecutor:
import java.util.concurrent.atomic.AtomicLong
import java.util.concurrent.{ExecutorService, Executors, ThreadFactory, ThreadPoolExecutor}

object ConsumptionExecutor {
  val counter: AtomicLong = new AtomicLong()
  val executionContext: ExecutorService = Executors.newCachedThreadPool(new ThreadFactory {
    def newThread(r: Runnable): Thread = {
      val thread: Thread = new Thread(r)
      thread.setName("ConsumptionExecutor-" + counter.incrementAndGet())
      thread
    }
  })
  executionContext.asInstanceOf[ThreadPoolExecutor].setMaximumPoolSize(200)

  def execute(trigger: Runnable): Unit = {
    executionContext.execute(trigger)
  }
}
But I would like to use Akka Streams / Akka Actors instead, so that I don't need to give a fixed thread pool size and Akka handles everything. I am quite new to Akka and to the concepts of streams and actors. Could someone give me any pointers, in the form of sample code, to fit my use case? Thanks in advance!
Answer (score: 1)
One idea would be to create an instance of (a subclass of) ActorPublisher for each HDFS node you are reading from, and then Merge them together as the Sources of a single flow.
Something like this pseudocode, where the details of the ActorPublisher sources are omitted:
val g = PartialFlowGraph { implicit b =>
  import FlowGraphImplicits._
  val in1 = actorSource1
  val in2 = actorSource2
  // etc.
  val out = UndefinedSink[T]
  val merge = Merge[T]
  in1 ~> merge ~> out
  in2 ~> merge
  // etc.
}
This could be improved to handle a collection of actor sources by iterating over them and adding an edge to merge for each one, but this gives the idea.
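Note that PartialFlowGraph and FlowGraphImplicits come from the early akka-stream 1.0 milestone API. As a rough sketch of the same merge idea against the later akka-stream 2.x API (assuming akka-stream is on the classpath; `blockSource` and the path suffixes are hypothetical stand-ins for the real WebHDFS/Avro read):

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Merge, Sink, Source}

object HdfsMergeSketch extends App {
  // In Akka 2.6 an implicit ActorSystem is enough to materialize streams;
  // on 2.5 you would also need an implicit ActorMaterializer.
  implicit val system: ActorSystem = ActorSystem("hdfs-consumption")

  // One Source per HDFS block; in the real code each element would be
  // an Avro record read from the opened WebHDFS stream.
  def blockSource(pathSuffix: String): Source[String, _] =
    Source.single(pathSuffix) // placeholder for the actual read

  val pathSuffixes = List("block-0", "block-1", "block-2") // hypothetical
  val sources = pathSuffixes.map(blockSource)

  // Source.combine fans all per-block sources into one merged stream;
  // Akka's dispatcher manages the threads, no fixed pool size needed.
  val merged: Source[String, _] =
    Source.combine(sources(0), sources(1), sources.drop(2): _*)(Merge(_))

  merged.runWith(Sink.foreach(record => println(" data : " + record)))
}
```

The advantage over the thread-per-block design is backpressure: a slow consumer slows the reads instead of piling up threads.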