ClickHouse replicas/servers cannot connect to each other when setting up a 3-node circular ClickHouse cluster with ZooKeeper

Time: 2019-07-06 12:45:00

Tags: apache-zookeeper clickhouse

Data is not being replicated to every ClickHouse replica. The clickhouse-server log shows:

DB::StorageReplicatedMergeTree::queueTask(): Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = Host not found: ip-172-1-140-243 (version 19.9.2.4)
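This exception indicates that the replica cannot resolve the hostname ip-172-1-140-243 (the interserver name announced by that node) into an IP address. A quick way to confirm this from the node that logs the error (plain resolver checks, nothing ClickHouse-specific):

getent hosts ip-172-1-140-243   # prints nothing if the name cannot be resolved
ping -c 1 ip-172-1-140-243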

I have 3 different machines, each with both ClickHouse and ZooKeeper installed on it. I am trying to set up a 3-node ClickHouse cluster with ZooKeeper, following the configuration steps from https://blog.uiza.io/replicated-and-distributed-on-clickhouse-part-2/. I have created the tables and replicas on all ClickHouse instances and verified them in ZooKeeper; the directories for all replicas were created in ZooKeeper. /etc/metrica.xml, zoo.cfg and /etc/clickhouse-server/config.xml were created on all instances. All 3 files from one instance are provided below.

/etc/metrica.xml

<?xml version="1.0"?>
<yandex>
<clickhouse_remote_servers>
    <perftest_3shards_1replicas>
        <shard>
             <internal_replication>true</internal_replication>
            <replica>
                <default_database>dwh01</default_database>
                <host>172.1.34.199</host>
                <port>9000</port>
            </replica>
            <replica>
                <default_database>dwh01</default_database>
                <host>172.1.73.156</host>
                <port>9000</port>
            </replica>
        </shard>

         <shard>
             <internal_replication>true</internal_replication>
            <replica>
                <default_database>dwh02</default_database>
                <host>172.1.73.156</host>
                <port>9000</port>
            </replica>
            <replica>
                <default_database>dwh02</default_database>
                <host>172.1.140.243</host>
                <port>9000</port>
            </replica>
        </shard>

        <shard>
             <internal_replication>true</internal_replication>
            <replica>
                <default_database>dwh03</default_database>
                <host>172.1.140.243</host>
                <port>9000</port>
            </replica>
            <replica>
                <default_database>dwh03</default_database>
                <host>172.1.34.199</host>
                <port>9000</port>
            </replica>
        </shard>
    </perftest_3shards_1replicas>
</clickhouse_remote_servers>


<zookeeper-servers>
  <node index="1">
    <host>172.1.34.199</host>
    <port>2181</port>
  </node>
 <node index="2">
    <host>172.1.73.156</host>
    <port>2181</port>
  </node>
 <node index="3">
    <host>172.1.140.243</host>
    <port>2181</port>
  </node> 
</zookeeper-servers>

<macros replace="replace">
  <cluster>OLAPLab</cluster>
  <dwhshard00>01</dwhshard00>
  <dwhshard01>03</dwhshard01>
  <dwhreplica00>01</dwhreplica00>
  <dwhreplica01>02</dwhreplica01>
  <shard>01</shard>
  <replica>node1</replica>
</macros>
<interserver_http_host>ip-172-1-34-199</interserver_http_host>
</yandex>
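
For reference, the <macros> above only take effect where they are referenced from the replicated table definitions, and the cluster name is what the tables and queries refer to. The sketch below shows the usual pattern; the database, table, columns and ZooKeeper path are placeholders, not the actual objects from this setup. The SELECT is a quick way to confirm on each node that the remote_servers section was loaded at all:

CREATE TABLE dwh01.events ON CLUSTER perftest_3shards_1replicas
(
    event_date Date,
    event_id   UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{cluster}/{shard}/events', '{replica}')
ORDER BY event_id;

SELECT cluster, shard_num, replica_num, host_name, host_address
FROM system.clusters
WHERE cluster = 'perftest_3shards_1replicas';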

/etc/clickhouse-server/config.xml

Only this line was added; the rest of the config is the default:
<listen_host>::</listen_host>

/usr/lib/zookeeper/conf/zoo.cfg

maxClientCnxns=50
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper
# the port at which the clients will connect
clientPort=2181
# the directory where the transaction logs are stored.
dataLogDir=/var/lib/zookeeper
server.1=172.1.34.199:2888:3888
server.2=172.1.73.156:2888:3888
server.3=172.1.140.243:2888:3888
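
One note on the server.N list, mentioned only because it is easy to miss and may already be in place here: each ZooKeeper host also needs a myid file under dataDir whose content is its own number from that list, for example on 172.1.34.199:

echo 1 > /var/lib/zookeeper/myid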

/etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost6 localhost6.localdomain6
127.0.0.1    ip-172-1-34-199
127.0.0.1    172.1.34.199

Changes to the replicated data should propagate to all replicas, so that the data is present on all instances.

2 answers:

Answer 0 (score: 0)

Are the ClickHouse and ZooKeeper ports accessible between the machines?

For example, can you run wget http://172.1.140.243:9000? The same check applies to the ZooKeeper port as well.
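A few plain connectivity checks in that spirit, run from the node that reports the error against each of the other hosts (nc is just one option; telnet or curl would work equally well):

nc -vz 172.1.34.199 9000   # ClickHouse native TCP port
nc -vz 172.1.34.199 9009   # interserver HTTP port used for replication
nc -vz 172.1.34.199 2181   # ZooKeeper client port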

Answer 1 (score: 0)

It looks like you need to add the following line to the hosts file on both nodes (172.1.34.199 and 172.1.73.156):

172.1.140.243 ip-172-1-140-243
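
In other words, every node's /etc/hosts should be able to resolve the interserver hostnames of all three machines, something along these lines (the name ip-172-1-73-156 is an assumption inferred from the naming pattern of the other two hosts; use whatever hostname that machine actually reports):

172.1.34.199    ip-172-1-34-199
172.1.73.156    ip-172-1-73-156
172.1.140.243   ip-172-1-140-243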