Elasticsearch: how do I change cluster health from yellow to green?

Date: 2015-03-22 09:33:55

Tags: elasticsearch

I have a cluster with one node (running locally), and its health is yellow. I have now added a second node, but no shards are being allocated to it, so the cluster health is still yellow. I cannot get the status to change to green, unlike in this guide: health cluster example

So how can I change the health status to green?

My cluster (screenshot omitted). Cluster health:

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'  
{
 "cluster_name" : "astrung",
 "status" : "yellow",
 "timed_out" : false,
 "number_of_nodes" : 2,
 "number_of_data_nodes" : 2,
 "active_primary_shards" : 22,
 "active_shards" : 22,
 "relocating_shards" : 0,
 "initializing_shards" : 2,
 "unassigned_shards" : 20
}

Shard status:

curl -XGET 'http://localhost:9200/_cat/shards?v'
 index          shard prirep state        docs  store ip          node 
 _river         0     p      STARTED         2  8.1kb 192.168.1.3 One  
 _river         0     r      UNASSIGNED                                
 megacorp       4     p      STARTED         1  3.4kb 192.168.1.3 One  
 megacorp       4     r      UNASSIGNED                                
 megacorp       0     p      STARTED         2  6.1kb 192.168.1.3 One  
 megacorp       0     r      UNASSIGNED                                
 megacorp       3     p      STARTED         1  2.2kb 192.168.1.3 One  
 megacorp       3     r      UNASSIGNED                                
 megacorp       1     p      STARTED         0   115b 192.168.1.3 One  
 megacorp       1     r      UNASSIGNED                                
 megacorp       2     p      STARTED         1  2.2kb 192.168.1.3 One  
 megacorp       2     r      UNASSIGNED                                
 mybucket       2     p      STARTED         1  2.1kb 192.168.1.3 One  
 mybucket       2     r      UNASSIGNED                                
 mybucket       0     p      STARTED         0   115b 192.168.1.3 One  
 mybucket       0     r      UNASSIGNED                                
 mybucket       3     p      STARTED         2  5.4kb 192.168.1.3 One  
 mybucket       3     r      UNASSIGNED                                
 mybucket       1     p      STARTED         1  2.2kb 192.168.1.3 One  
 mybucket       1     r      UNASSIGNED                                
 mybucket       4     p      STARTED         1  2.5kb 192.168.1.3 One  
 mybucket       4     r      UNASSIGNED                                
 .kibana        0     r      INITIALIZING             192.168.1.3 Two  
 .kibana        0     p      STARTED         2  8.9kb 192.168.1.3 One  
 .marvel-kibana 2     p      STARTED         0   115b 192.168.1.3 One  
 .marvel-kibana 2     r      UNASSIGNED                                
 .marvel-kibana 0     r      INITIALIZING             192.168.1.3 Two  
 .marvel-kibana 0     p      STARTED         1  2.9kb 192.168.1.3 One  
 .marvel-kibana 3     p      STARTED         0   115b 192.168.1.3 One  
 .marvel-kibana 3     r      UNASSIGNED                                
 .marvel-kibana 1     p      STARTED         0   115b 192.168.1.3 One  
 .marvel-kibana 1     r      UNASSIGNED                                
 .marvel-kibana 4     p      STARTED         0   115b 192.168.1.3 One  
 .marvel-kibana 4     r      UNASSIGNED                                
 user_ids       4     p      STARTED        11    5kb 192.168.1.3 One  
 user_ids       4     r      UNASSIGNED                                
 user_ids       0     p      STARTED         7 25.1kb 192.168.1.3 One  
 user_ids       0     r      UNASSIGNED                                
 user_ids       3     p      STARTED        11  4.9kb 192.168.1.3 One  
 user_ids       3     r      UNASSIGNED                                
 user_ids       1     p      STARTED         8 28.7kb 192.168.1.3 One  
 user_ids       1     r      UNASSIGNED                                
 user_ids       2     p      STARTED        11  8.5kb 192.168.1.3 One  
 user_ids       2     r      UNASSIGNED                              

3 Answers:

Answer 0 (score: 24)

I suggest you update the number of replicas for all indices to 0, and then update it back to 1. Let me know if this works! Here is a curl command to get you started:

curl -XPUT 'http://localhost:9200/_settings' -d '
{
    "index" : {
        "number_of_replicas" : 0
    }
}'
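After the replica count has been dropped to 0, the second step this answer describes is to set it back to 1, which prompts Elasticsearch to allocate the new replica copies across both nodes. A sketch of that follow-up call against the same cluster (requires Elasticsearch running on localhost:9200):

```shell
# Restore one replica per shard for all indices; Elasticsearch will
# attempt to allocate the replicas, this time on the second node too.
curl -XPUT 'http://localhost:9200/_settings' -d '
{
    "index" : {
        "number_of_replicas" : 1
    }
}'
```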

Answer 1 (score: 1)

Recovery can sometimes take a long time, but given the number and size of your documents, it should not be taking this long.

It looks like your nodes are having trouble connecting to each other. Check your firewall rules and make sure each node can reach ports 9200 and 9300 on the other.
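A quick way to check this is to probe the two ports from each node. This is a sketch; the peer address here is hypothetical, so substitute the actual IP of the other node:

```shell
# Run this on each node, pointing at the OTHER node's address.
# 9200 is the HTTP port, 9300 is the internal transport port that
# the nodes use to form a cluster.
OTHER_NODE=192.168.1.4   # hypothetical peer address -- replace it

curl -s "http://$OTHER_NODE:9200/" && echo "port 9200 reachable"
nc -z -w 5 "$OTHER_NODE" 9300 && echo "port 9300 reachable"
```

If 9200 responds but 9300 is blocked, the nodes can be queried individually yet will never join into one cluster, which matches shards staying UNASSIGNED.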

Answer 2 (score: 0)

As @mohitt said above, update number_of_replicas to zero (only suitable for local development; be careful doing this in production).

You can run the following in the Kibana Dev Tools console:

PUT _settings
{
    "index" : {
        "number_of_replicas" : 0
    }
}
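Once the cluster turns green, you may want to restore the replica count so you keep redundancy when a second node is available. A sketch of the corresponding request, in the same Dev Tools console syntax:

```
PUT _settings
{
    "index" : {
        "number_of_replicas" : 1
    }
}
```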