Kubernetes pod distribution across nodes

Time: 2016-12-15 08:47:27

Tags: kubernetes

Is there any way to make Kubernetes distribute pods as evenly as possible? I have resource requests on all deployments, as well as global requests and an HPA. All nodes are the same.

I just had a situation where my ASG scaled down a node and a service became completely unavailable, because all 4 pods were on the same node that was scaled down.

I would like to maintain a situation where each deployment must spread its containers across at least 2 nodes.

2 answers:

Answer 0 (score: 9)

It sounds like what you want is Inter-Pod Affinity and Pod Anti-affinity:

Inter-pod affinity and anti-affinity were introduced in Kubernetes 1.4. Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on labels on pods that are already running on the node, rather than based on labels on nodes. The rules are of the form "this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y." Y is expressed as a LabelSelector with an associated list of namespaces (or "all" namespaces); unlike nodes, because pods are namespaced (and therefore the labels on pods are implicitly namespaced), a label selector over pod labels must specify which namespaces the selector should apply to. Conceptually, X is a topology domain like node, rack, cloud provider zone, cloud provider region, etc. You express it using a topologyKey, which is the key for the node label that the system uses to denote such a topology domain; e.g., see the label keys listed in the section "Interlude: built-in node labels."

Anti-affinity can be used to ensure that you spread your pods across failure domains. You can state these rules as preferences, or as hard rules. In the latter case, a pod will simply fail to be scheduled if it cannot satisfy your constraint.
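As a rough sketch of how the two flavors look side by side in a pod spec (the pod name, image, and app: say label here are illustrative; Answer 1 below shows complete Deployments):

# Illustrative only: a pod with a hard rule and a soft rule against
# sharing a node with other pods labeled app=say.
apiVersion: v1
kind: Pod
metadata:
  name: say-example
  labels:
    app: say
spec:
  containers:
  - name: say-example
    image: nginx
  affinity:
    podAntiAffinity:
      # Hard rule: do not schedule onto a node already running an app=say pod.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - say
        topologyKey: kubernetes.io/hostname
      # Soft rule: avoid such nodes if possible, but schedule anyway if not.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - say
          topologyKey: kubernetes.io/hostname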

Answer 1 (score: 9)

Leveraging Anirudh's answer here, I am adding example code.

My initial Kubernetes yaml looked like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: say-deployment
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: say
    spec:
      containers:
      - name: say
        image: gcr.io/hazel-champion-200108/say
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: say-service
spec:
  selector:
    app: say
  ports:
    - protocol: TCP
      port: 8080
  type: LoadBalancer
  externalIPs:
    - 192.168.0.112

At this point, the Kubernetes scheduler somehow decided that all 6 replicas should be deployed on the same node.
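The resulting placement can be confirmed by listing the pods together with their nodes (the -l selector matches the app: say label from the manifest above):

kubectl get pods -l app=say -o wide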

Then I added requiredDuringSchedulingIgnoredDuringExecution to force the pods to be deployed on different nodes:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: say-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: say
    spec:
      containers:
      - name: say
        image: gcr.io/hazel-champion-200108/say
        ports:
        - containerPort: 8080
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - say
            topologyKey: "kubernetes.io/hostname"
---
kind: Service
apiVersion: v1
metadata:
  name: say-service
spec:
  selector:
    app: say
  ports:
    - protocol: TCP
      port: 8080
  type: LoadBalancer
  externalIPs:
    - 192.168.0.112

Now all the running pods are on different nodes. Since I have 3 nodes and 6 pods, the other 3 pods (6 minus 3) cannot run and remain Pending. This is because I made it a hard requirement with requiredDuringSchedulingIgnoredDuringExecution:

kubectl get pods -o wide 

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
say-deployment-8b46845d8-4zdw2   1/1       Running   0          24s       10.244.2.80   night
say-deployment-8b46845d8-699wg   0/1       Pending   0          24s       <none>        <none>
say-deployment-8b46845d8-7nvqp   1/1       Running   0          24s       10.244.1.72   gray
say-deployment-8b46845d8-bzw48   1/1       Running   0          24s       10.244.0.25   np3
say-deployment-8b46845d8-vwn8g   0/1       Pending   0          24s       <none>        <none>
say-deployment-8b46845d8-ws8lr   0/1       Pending   0          24s       <none>        <none>
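To see why a given pod stays Pending, describing it shows the scheduler's events; the pod name below is taken from the listing above, and the exact FailedScheduling wording varies by Kubernetes version:

kubectl describe pod say-deployment-8b46845d8-699wg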

Now, if I loosen this requirement by using preferredDuringSchedulingIgnoredDuringExecution instead:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: say-deployment
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: say
    spec:
      containers:
      - name: say
        image: gcr.io/hazel-champion-200108/say
        ports:
        - containerPort: 8080
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - say
              topologyKey: "kubernetes.io/hostname"
---
kind: Service
apiVersion: v1
metadata:
  name: say-service
spec:
  selector:
    app: say
  ports:
    - protocol: TCP
      port: 8080
  type: LoadBalancer
  externalIPs:
    - 192.168.0.112

This time all 6 pods get scheduled:

NAME                              READY     STATUS    RESTARTS   AGE       IP            NODE
say-deployment-57cf5fb49b-26nvl   1/1       Running   0          59s       10.244.2.81   night
say-deployment-57cf5fb49b-2wnsc   1/1       Running   0          59s       10.244.0.27   np3
say-deployment-57cf5fb49b-6v24l   1/1       Running   0          59s       10.244.1.73   gray
say-deployment-57cf5fb49b-cxkbz   1/1       Running   0          59s       10.244.0.26   np3
say-deployment-57cf5fb49b-dxpcf   1/1       Running   0          59s       10.244.1.75   gray
say-deployment-57cf5fb49b-vv98p   1/1       Running   0          59s       10.244.1.74   gray

The first 3 pods are spread across 3 different nodes, just as in the previous case. The remaining 3 (6 pods minus 3 nodes) are placed on various nodes according to the scheduler's internal considerations.
