Kubernetes Multi-Master Setup

Date: 2018-06-06 18:05:07

Tags: docker kubernetes kubectl orchestration kubeadm

[SOLVED] Flannel did not work for me, so I switched to Weave Net. If you do not want to use Weave, provide the pod-network-cidr: "10.244.0.0/16" setting in config.yaml.

I want to set up multi-master Kubernetes and have tried many different approaches; even the latest one, described below, does not work. The problem is that the DNS and flannel network plugin pods refuse to start and end up in CrashLoopBackOff status every time. My approach is as follows.

First, I create an external etcd cluster with this command on every node (only the addresses change per node):

nohup etcd --name kube1 --initial-advertise-peer-urls http://192.168.100.110:2380 \
  --listen-peer-urls http://192.168.100.110:2380 \
  --listen-client-urls http://192.168.100.110:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.100.110:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380 \
  --initial-cluster-state new &
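
Before running kubeadm it is worth confirming that all three etcd members actually formed one cluster. A minimal check, assuming the same client URLs as above:

# Verify membership and health of the external etcd cluster
ETCDCTL_API=3 etcdctl \
  --endpoints=http://192.168.100.110:2379,http://192.168.100.108:2379,http://192.168.100.104:2379 \
  member list
ETCDCTL_API=3 etcdctl \
  --endpoints=http://192.168.100.110:2379,http://192.168.100.108:2379,http://192.168.100.104:2379 \
  endpoint health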

Then I created a config.yaml file for the kubeadm init command:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.100.110
etcd:
  endpoints:
  - "http://192.168.100.110:2379"
  - "http://192.168.100.108:2379"
  - "http://192.168.100.104:2379"
apiServerExtraArgs:
  apiserver-count: "3"
apiServerCertSANs:
- "192.168.100.110"
- "192.168.100.108"
- "192.168.100.104"
- "127.0.0.1"
token: "64bhyh.1vjuhruuayzgtykv"
tokenTTL: "0"

Start command: kubeadm init --config /root/config.yaml
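
After kubeadm init completes on the first master, the standard post-init steps (the ones kubeadm itself prints) set up kubectl access; shown here as a reminder, not as part of the original question:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config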

So now I copy /etc/kubernetes/pki and the config to the other nodes and start the other masters in the same way. But it does not work.
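
For reference, that copy step looks roughly like this (a sketch; the target addresses are the other masters from the etcd command above):

scp -r /etc/kubernetes/pki root@192.168.100.108:/etc/kubernetes/
scp /root/config.yaml root@192.168.100.108:/root/
# repeat for 192.168.100.104, then run kubeadm init --config /root/config.yaml on each master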

So what is the correct way to initialize a multi-master Kubernetes cluster, and why does my flannel network not start?

Status of the flannel pod:

Events:
  Type     Reason                 Age               From            Message
  ----     ------                 ----              ----            -------
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "run"
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "cni"
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "flannel-token-swdhl"
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "flannel-cfg"
  Normal   Pulling                8m                kubelet, kube2  pulling image "quay.io/coreos/flannel:v0.10.0-amd64"
  Normal   Pulled                 8m                kubelet, kube2  Successfully pulled image "quay.io/coreos/flannel:v0.10.0-amd64"
  Normal   Created                8m                kubelet, kube2  Created container
  Normal   Started                8m                kubelet, kube2  Started container
  Normal   Pulled                 8m (x4 over 8m)   kubelet, kube2  Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
  Normal   Created                8m (x4 over 8m)   kubelet, kube2  Created container
  Normal   Started                8m (x4 over 8m)   kubelet, kube2  Started container
  Warning  BackOff                3m (x23 over 8m)  kubelet, kube2  Back-off restarting failed container
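
The events only show the restart loop, not the reason the container exits; the container logs usually do. A minimal sketch (the pod name is whatever kubectl reports for the flannel DaemonSet, and the container name kube-flannel matches the stock kube-flannel.yml manifest):

kubectl -n kube-system get pods -o wide | grep flannel
kubectl -n kube-system logs <flannel-pod-name> -c kube-flannel --previous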

etcd version

etcd --version
etcd Version: 3.3.6
Git SHA: 932c3c01f
Go Version: go1.9.6
Go OS/Arch: linux/amd64

 kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Last lines of the nohup output from etcd:

2018-06-06 19:44:28.441304 I | etcdserver: name = kube1
2018-06-06 19:44:28.441327 I | etcdserver: data dir = kube1.etcd
2018-06-06 19:44:28.441331 I | etcdserver: member dir = kube1.etcd/member
2018-06-06 19:44:28.441334 I | etcdserver: heartbeat = 100ms
2018-06-06 19:44:28.441336 I | etcdserver: election = 1000ms
2018-06-06 19:44:28.441338 I | etcdserver: snapshot count = 100000
2018-06-06 19:44:28.441343 I | etcdserver: advertise client URLs = http://192.168.100.110:2379
2018-06-06 19:44:28.441346 I | etcdserver: initial advertise peer URLs = http://192.168.100.110:2380
2018-06-06 19:44:28.441352 I | etcdserver: initial cluster = kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380
2018-06-06 19:44:28.443825 I | etcdserver: starting member a4df4f699dd66909 in cluster 73f203cf831df407
2018-06-06 19:44:28.443843 I | raft: a4df4f699dd66909 became follower at term 0
2018-06-06 19:44:28.443848 I | raft: newRaft a4df4f699dd66909 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-06-06 19:44:28.443850 I | raft: a4df4f699dd66909 became follower at term 1
2018-06-06 19:44:28.447834 W | auth: simple token is not cryptographically signed
2018-06-06 19:44:28.448857 I | rafthttp: starting peer 9e0f381e79b9b9dc...
2018-06-06 19:44:28.448869 I | rafthttp: started HTTP pipelining with peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450791 I | rafthttp: started peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450803 I | rafthttp: added peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450809 I | rafthttp: starting peer fc9c29e972d01e69...
2018-06-06 19:44:28.450816 I | rafthttp: started HTTP pipelining with peer fc9c29e972d01e69
2018-06-06 19:44:28.453543 I | rafthttp: started peer fc9c29e972d01e69
2018-06-06 19:44:28.453559 I | rafthttp: added peer fc9c29e972d01e69
2018-06-06 19:44:28.453570 I | etcdserver: starting server... [version: 3.3.6, cluster version: to_be_decided]
2018-06-06 19:44:28.455414 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455431 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455445 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream MsgApp v2 reader)
2018-06-06 19:44:28.455578 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream Message reader)
2018-06-06 19:44:28.455697 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)
2018-06-06 19:44:28.455704 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)

2 Answers:

Answer 0 (score: 0)

If you don't have any hosting preference and you can create the cluster on AWS, this can be done easily with kops.

https://github.com/kubernetes/kops

With kops you can easily configure an autoscaling group for the masters and specify the number of masters and nodes the cluster should have.
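
As an illustration only (the state-store bucket, cluster name, and zones below are placeholders, not from the original answer), a multi-master kops invocation looks roughly like this:

export KOPS_STATE_STORE=s3://my-kops-state-store   # hypothetical S3 bucket for kops state
kops create cluster \
  --name=k8s.example.com \
  --zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --master-count=3 \
  --node-count=3 \
  --yes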

Answer 1 (score: 0)

Flannel would not cooperate, so I switched to Weave Net. If you do not want to use Weave, provide the pod-network-cidr: "10.244.0.0/16" setting in config.yaml.
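
Two hedged sketches of what that means in practice: in a v1alpha1 MasterConfiguration the pod CIDR goes under networking.podSubnet (the config-file equivalent of the --pod-network-cidr flag), and the second command is the standard Weave Net install from its documentation at the time:

# Option A: keep flannel and declare the pod CIDR in config.yaml
networking:
  podSubnet: "10.244.0.0/16"

# Option B: install Weave Net instead of flannel
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"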