DataStax connection exception when using beeline or the hive2 JDBC driver (Tableau)

Date: 2016-04-19 19:18:46

Tags: tableau datastax datastax-enterprise spark-cassandra-connector

I installed DataStax Enterprise 2.8 on my development VM (CentOS 7). The installation went smoothly and the single-node cluster runs fine, but when I try to connect to the cluster with beeline or the hive2 JDBC driver I get the errors shown below. My main goal is to connect Tableau using the DataStax Enterprise driver or the Spark SQL driver.

The observed error is:

ERROR 2016-04-14 17:57:56,915 org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) ~[libthrift-0.9.3.jar:0.9.3]
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269) ~[libthrift-0.9.3.jar:0.9.3]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_99]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_99]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_99]
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
        at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232) ~[libthrift-0.9.3.jar:0.9.3]
        at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184) ~[libthrift-0.9.3.jar:0.9.3]
        at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125) ~[libthrift-0.9.3.jar:0.9.3]
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) ~[libthrift-0.9.3.jar:0.9.3]
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) ~[libthrift-0.9.3.jar:0.9.3]
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) ~[libthrift-0.9.3.jar:0.9.3]
        ... 4 common frames omitted
ERROR 2016-04-14 17:58:59,140 org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend: Application has been killed. Reason: Master removed our application:

My cassandra.yaml configuration:

cluster_name: 'Cluster1'
num_tokens: 256
hinted_handoff_enabled: true
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
permissions_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
disk_failure_policy: stop
commit_failure_policy: stop
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.33.1.124"
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 10.33.1.124
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: 10.33.1.124
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
column_index_size_in_kb: 64
batch_size_warn_threshold_in_kb: 64
compaction_throughput_mb_per_sec: 16
compaction_large_partition_warning_threshold_mb: 100
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
cross_node_timeout: false
endpoint_snitch: com.datastax.bdp.snitch.DseSimpleSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
    internode_encryption: none
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra
    truststore: resources/dse/conf/.truststore
    truststore_password: cassandra
client_encryption_options:
    enabled: false
    optional: false
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra
internode_compression: dc
inter_dc_tcp_nodelay: false
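For reference, the listeners this configuration sets up are the native transport on 9042 and thrift RPC on 9160, both bound to 10.33.1.124; the hive2 JDBC URL used below targets port 10000, which does not appear in cassandra.yaml at all. A quick way to see which of these ports are actually listening on the node (a sketch only, assuming the ss utility from iproute is available on the CentOS 7 VM):

ss -tlnp | grep -E ':(9042|9160|10000)'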

     

When connecting with beeline, I get this error:

dse beeline
Beeline version 0.12.0.11 by Apache Hive
beeline> !connect jdbc:hive2://10.33.1.124:10000
scan complete in 10ms
Connecting to jdbc:hive2://10.33.1.124:10000
Enter username for jdbc:hive2://10.33.1.124:10000: cassandra
Enter password for jdbc:hive2://10.33.1.124:10000: *********
Error: Invalid URL: jdbc:hive2://10.33.1.124:10000 (state=08S01,code=0)
0: jdbc:hive2://10.33.1.124:10000> !connect jdbc:hive2://10.33.1.124:10000
Connecting to jdbc:hive2://10.33.1.124:10000
Enter username for jdbc:hive2://10.33.1.124:10000:
Enter password for jdbc:hive2://10.33.1.124:10000:
Error: Invalid URL: jdbc:hive2://10.33.1.124:10000 (state=08S01,code=0)
1: jdbc:hive2://10.33.1.124:10000>
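The same connection can also be retried non-interactively; -u, -n and -p are standard beeline options, the password placeholder is only illustrative, and this assumes dse beeline forwards its arguments to the underlying beeline client:

dse beeline -u jdbc:hive2://10.33.1.124:10000 -n cassandra -p <password>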

I see a similar error when connecting through Tableau.

1 answer:

Answer 0 (score: 2)

The JDBC driver connects to the Spark SQL Thrift server. If you haven't started it, you won't be able to connect to it.

dse spark-sql-thriftserver
/Users/russellspitzer/dse/bin/dse:
usage: dse spark-sql-thriftserver <command> [Spark SQL Thriftserver Options]

Available commands:
  start                             Start Spark SQL Thriftserver
  stop                              Stops Spark SQL Thriftserver
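A minimal sequence for applying this fix, sketched against the node address and the port-10000 URL from the question (start and stop are the subcommands listed in the usage output above; the server may take a moment before it accepts connections):

# start the Spark SQL Thrift server on the DSE node
dse spark-sql-thriftserver start

# once it is listening on port 10000, the earlier beeline connect should succeed
dse beeline
beeline> !connect jdbc:hive2://10.33.1.124:10000

Tableau can then be pointed at the same host and port (10.33.1.124, 10000) through its Spark SQL connector.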