Spark Cassandra connector connection error: no more hosts to try

Posted: 2015-04-01 09:06:15

Tags: cassandra apache-spark datastax

I am running into a problem with the DataStax spark-cassandra-connector. I use the code below to test our Spark-to-Cassandra connection, and it throws an exception after about half an hour. I think there is some connection issue; can anyone help? I am stuck.

    // Build the Spark configuration and context, then wrap the context with the
    // connector's Java helper so Cassandra tables can be read as RDDs.
    SparkConf conf = new SparkConf(true)
            .setMaster("local")
            .set("spark.cassandra.connection.host",
                    Config.CASSANDRA_CONTACT_POINT)
            .setAppName(Config.CASSANDRA_DB_NAME)
            .set("spark.executor.memory",
                    Config.Spark_Executor_Memory);
    SparkContext javaSparkContext = new SparkContext(conf);
    SparkContextJavaFunctions functions = CassandraJavaUtil.javaFunctions(javaSparkContext);

    // Repeatedly read the table and count its rows to exercise the connection.
    for (;;) {
        JavaRDD<ObjectHandler> obj = functions.cassandraTable(Config.CASSANDRA_DB_NAME,
                "my_users", ObjectHandler.class);
        System.out.println("#####" + obj.count() + "#####");
    }

Error:

java.lang.OutOfMemoryError: Java heap space
at org.jboss.netty.buffer.HeapChannelBuffer.slice(HeapChannelBuffer.java:201)
at org.jboss.netty.buffer.AbstractChannelBuffer.readSlice(AbstractChannelBuffer.java:323)
at com.datastax.driver.core.CBUtil.readValue(CBUtil.java:247)
at com.datastax.driver.core.Responses$Result$Rows$1.decode(Responses.java:395)
at com.datastax.driver.core.Responses$Result$Rows$1.decode(Responses.java:383)
at com.datastax.driver.core.Responses$Result$2.decode(Responses.java:201)
at com.datastax.driver.core.Responses$Result$2.decode(Responses.java:198)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:182)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
19:11:12.311 DEBUG [New I/O worker #1612][com.datastax.driver.core.Connection] Defuncting connection to /192.168.1.26:9042
com.datastax.driver.core.TransportException: [/192.168.1.26:9042] Unexpected exception triggered (java.lang.OutOfMemoryError: Java heap space)
    at com.datastax.driver.core.Connection$Dispatcher.exceptionCaught(Connection.java:614)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:60)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.exceptionCaught(FrameDecoder.java:377)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:525)
    at org.jboss.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:48)
    at org.jboss.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:658)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:566)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at org.jboss.netty.buffer.HeapChannelBuffer.slice(HeapChannelBuffer.java:201)
    at org.jboss.netty.buffer.AbstractChannelBuffer.readSlice(AbstractChannelBuffer.java:323)
    at com.datastax.driver.core.CBUtil.readValue(CBUtil.java:247)
    at com.datastax.driver.core.Responses$Result$Rows$1.decode(Responses.java:395)
    at com.datastax.driver.core.Responses$Result$Rows$1.decode(Responses.java:383)
    at com.datastax.driver.core.Responses$Result$2.decode(Responses.java:201)
    at com.datastax.driver.core.Responses$Result$2.decode(Responses.java:198)
    at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:182)
    at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    ... 3 more
19:11:13.549 DEBUG [New I/O worker #1612][com.datastax.driver.core.Connection] [/192.168.1.26:9042-1] closing connection
19:11:12.311 DEBUG [main][com.datastax.driver.core.ControlConnection] [Control connection] error on /192.168.1.26:9042 connection, no more host to try
com.datastax.driver.core.ConnectionException: [/192.168.1.26:9042] Operation timed out
    at com.datastax.driver.core.DefaultResultSetFuture.onTimeout(DefaultResultSetFuture.java:138)
    at com.datastax.driver.core.Connection$ResponseHandler$1.run(Connection.java:763)
    at org.jboss.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:546)
    at org.jboss.netty.util.HashedWheelTimer$Worker.notifyExpiredTimeouts(HashedWheelTimer.java:446)
    at org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:395)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at java.lang.Thread.run(Thread.java:722)
19:11:13.551 DEBUG [main][com.datastax.driver.core.Cluster] Shutting down
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /192.168.1.26:9042 (com.datastax.driver.core.ConnectionException: [/192.168.1.26:9042] Operation timed out))
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:195)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1143)
    at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:313)
    at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:166)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$4.apply(CassandraConnector.scala:151)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$4.apply(CassandraConnector.scala:151)
    at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:36)
    at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:61)
    at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:72)
    at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:97)
    at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:108)
    at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:131)
    at com.datastax.spark.connector.rdd.CassandraRDD.tableDef$lzycompute(CassandraRDD.scala:206)
    at com.datastax.spark.connector.rdd.CassandraRDD.tableDef(CassandraRDD.scala:205)
    at com.datastax.spark.connector.rdd.CassandraRDD.<init>(CassandraRDD.scala:212)
    at com.datastax.spark.connector.SparkContextFunctions.cassandraTable(SparkContextFunctions.scala:48)
    at com.datastax.spark.connector.SparkContextJavaFunctions.cassandraTable(SparkContextJavaFunctions.java:47)
    at com.datastax.spark.connector.SparkContextJavaFunctions.cassandraTable(SparkContextJavaFunctions.java:89)
    at com.datastax.spark.connector.SparkContextJavaFunctions.cassandraTable(SparkContextJavaFunctions.java:140)
    at com.shephertz.app42.paas.spark.SegmentationWorker.main(SegmentationWorker.java:52)

2 Answers:

Answer 0 (Score: 1)

It looks like you are running out of heap space:

java.lang.OutOfMemoryError: Java heap space

The java-driver (what the spark-connector uses to talk to Cassandra) defuncts a connection when an OutOfMemoryError is thrown while processing a request. When a connection is defuncted, its host is marked down.

The NoHostAvailableException is likely raised because all hosts were marked down after their connections were defuncted, probably because of the OutOfMemoryError.

Do you know why you are getting an OutOfMemoryError? What is your heap size? What are you doing that would put a lot of objects on your JVM heap? Could you have a memory leak?
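
For example, here is a minimal diagnostic sketch, reusing the `functions`, `Config`, and `ObjectHandler` names from the snippet in the question (the iteration bound and log format are just illustrative): log the configured heap limit once, then the used heap after every read. A steadily climbing number would point at objects being retained across iterations rather than a heap that is simply too small.

    // Hypothetical diagnostic loop, assuming the same connector setup as in the question.
    Runtime rt = Runtime.getRuntime();
    System.out.println("max heap MB = " + rt.maxMemory() / (1024 * 1024));

    for (int i = 0; i < 100; i++) {
        JavaRDD<ObjectHandler> obj = functions.cassandraTable(Config.CASSANDRA_DB_NAME,
                "my_users", ObjectHandler.class);
        System.out.println("count = " + obj.count());
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        System.out.println("iteration " + i + ": used heap MB = " + usedMb);
    }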

Answer 1 (Score: 0)

Your problem may lie in how your JVM is configured. If the settings are not tuned correctly, garbage collection can cause problems. If you are using Cassandra > 2.0, see DataStax's "Tuning Java Resources".

How Cassandra uses memory, from the documentation:

  

With a Java-based system like Cassandra, you can typically allocate about 8 GB of memory on the heap before garbage collection pause times start to become a problem. Modern machines have much more memory than that, and Cassandra can make use of the additional memory as page cache when files on disk are accessed. Allocating more than 8 GB of heap is problematic because of the amount of Cassandra metadata about the data on disk: this metadata resides in memory and is proportional to the total data, and some components grow in proportion to the size of total memory.

     

In Cassandra 1.2 and later, the Bloom filters and compression offset maps that store this metadata reside off-heap, greatly increasing the capacity per node of data that Cassandra can handle efficiently. In Cassandra 2.0, the partition summary also resides off-heap.

Please post your JVM options for further help.
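
If it helps, here is a small standalone sketch (the class name is just illustrative) that uses the standard `ManagementFactory` runtime bean to print the options the driver JVM was actually started with, which is exactly the information requested above:

    import java.lang.management.ManagementFactory;

    public class PrintJvmOptions {
        public static void main(String[] args) {
            // JVM flags such as -Xmx, -Xms and any GC settings appear here.
            for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
                System.out.println(arg);
            }
            System.out.println("max heap bytes = " + Runtime.getRuntime().maxMemory());
        }
    }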
