Kubernetes pods are not spreading across different nodes

Asked: 2016-05-28 01:19:04

Tags: kubernetes google-kubernetes-engine

I have a Kubernetes cluster on GKE. I understood that Kubernetes would spread pods with the same labels across nodes, but that is not happening for me. Here are my node descriptions.
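For reference, node descriptions like the ones below can be pulled with (assuming kubectl is pointed at the cluster):

$ kubectl describe nodes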

Name:                   gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob
Conditions:
  Type          Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----          ------  -----------------                       ------------------                      ------                          -------
  OutOfDisk     False   Fri, 27 May 2016 21:11:17 -0400         Thu, 26 May 2016 22:16:27 -0400         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  Ready         True    Fri, 27 May 2016 21:11:17 -0400         Thu, 26 May 2016 22:17:02 -0400         KubeletReady                    kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Capacity:
 cpu:           2
 memory:        1848660Ki
 pods:          110
System Info:
 Machine ID:
 Kernel Version:                3.16.0-4-amd64
 OS Image:                      Debian GNU/Linux 7 (wheezy)
 Container Runtime Version:     docker://1.9.1
 Kubelet Version:               v1.2.4
 Kube-Proxy Version:            v1.2.4
Non-terminated Pods:            (2 in total)
  Namespace                     Name                                                                                    CPU Requests    CPU Limits  Memory Requests Memory Limits
  ---------                     ----                                                                                    ------------    ----------  --------------- -------------
  kube-system                   fluentd-cloud-logging-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob            80m (4%)        0 (0%)              200Mi (11%)     200Mi (11%)
  kube-system                   kube-proxy-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob                       20m (1%)        0 (0%)              0 (0%)          0 (0%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ------------  ----------      --------------- -------------
  100m (5%)     0 (0%)          200Mi (11%)     200Mi (11%)
No events.

Name:                   gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2
Conditions:
  Type          Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----          ------  -----------------                       ------------------                      ------                          -------
  OutOfDisk     False   Fri, 27 May 2016 21:11:17 -0400         Fri, 27 May 2016 18:16:38 -0400         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  Ready         True    Fri, 27 May 2016 21:11:17 -0400         Fri, 27 May 2016 18:17:12 -0400         KubeletReady                    kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Capacity:
 pods:          110
 cpu:           2
 memory:        1848660Ki
System Info:
 Machine ID:
 Kernel Version:                3.16.0-4-amd64
 OS Image:                      Debian GNU/Linux 7 (wheezy)
 Container Runtime Version:     docker://1.9.1
 Kubelet Version:               v1.2.4
 Kube-Proxy Version:            v1.2.4
Non-terminated Pods:            (10 in total)
  Namespace                     Name                                                                                    CPU Requests    CPU Limits  Memory Requests Memory Limits
  ---------                     ----                                                                                    ------------    ----------  --------------- -------------
  default                       pn-minions-deployment-prod-3923308490-axucq                                             100m (5%)       0 (0%)              0 (0%)          0 (0%)
  default                       pn-minions-deployment-prod-3923308490-mvn54                                             100m (5%)       0 (0%)              0 (0%)          0 (0%)
  default                       pn-minions-deployment-staging-2522417973-8cq5p                                          100m (5%)       0 (0%)              0 (0%)          0 (0%)
  default                       pn-minions-deployment-staging-2522417973-9yatt                                          100m (5%)       0 (0%)              0 (0%)          0 (0%)
  kube-system                   fluentd-cloud-logging-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2            80m (4%)        0 (0%)              200Mi (11%)     200Mi (11%)
  kube-system                   heapster-v1.0.2-1246684275-a8eab                                                        150m (7%)       150m (7%)   308Mi (17%)     308Mi (17%)
  kube-system                   kube-dns-v11-uzl1h                                                                      310m (15%)      310m (15%)  170Mi (9%)      920Mi (50%)
  kube-system                   kube-proxy-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2                       20m (1%)        0 (0%)              0 (0%)          0 (0%)
  kube-system                   kubernetes-dashboard-v1.0.1-3co2b                                                       100m (5%)       100m (5%)   50Mi (2%)       50Mi (2%)
  kube-system                   l7-lb-controller-v0.6.0-o5ojv                                                           110m (5%)       110m (5%)   70Mi (3%)       120Mi (6%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ------------  ----------      --------------- -------------
  1170m (58%)   670m (33%)      798Mi (44%)     1598Mi (88%)
No events.

Here are the deployment descriptions:

Name:                   pn-minions-deployment-prod
Namespace:              default
Labels:                 app=pn-minions,environment=production
Selector:               app=pn-minions,environment=production
Replicas:               2 updated | 2 total | 2 available | 0 unavailable
OldReplicaSets:         <none>
NewReplicaSet:          pn-minions-deployment-prod-3923308490 (2/2 replicas created)

Name:                   pn-minions-deployment-staging
Namespace:              default
Labels:                 app=pn-minions,environment=staging
Selector:               app=pn-minions,environment=staging
Replicas:               2 updated | 2 total | 2 available | 0 unavailable
OldReplicaSets:         <none>
NewReplicaSet:          pn-minions-deployment-staging-2522417973 (2/2 replicas created)

As you can see, all four pods are on the same node. Do I need to do something extra to get them spread out?
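A quick way to check pod placement is the wide output of kubectl get pods, which adds a NODE column:

$ kubectl get pods -o wide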

1 Answer:

Answer 0 (score: 1)

By default, pods run with no CPU or memory limits. This means that any pod in the system can consume as much CPU and memory as is available on the node it runs on. See http://kubernetes.io/docs/admin/limitrange/
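For instance, here is a minimal LimitRange sketch that gives every container in a namespace a default request and limit when it declares none; the name and values are illustrative, not taken from the cluster above:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-resource-limits
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:     # applied when a container sets no request
        cpu: 100m
        memory: 128Mi
      default:            # applied when a container sets no limit
        cpu: 200m
        memory: 256Mi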

When you don't specify CPU requests or limits, Kubernetes has no idea how much CPU each pod needs, so the scheduler may happily pack all of the pods onto a single node.

Here is an example Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: quay.io/naveensrinivasan/jenkins:0.4
          ports:
            - containerPort: 8080
          resources:
            limits:
                cpu: "400m"
#          volumeMounts:
#            - mountPath: /var/jenkins_home
#              name: jenkins-volume
#      volumes:
#         - name: jenkins-volume
#           awsElasticBlockStore:
#            volumeID: vol-29c4b99f
#            fsType: ext4
      imagePullSecrets:
         - name: registrypullsecret
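
Note that the example sets only a CPU limit. When only a limit is given, Kubernetes defaults the container's request to that limit, so the scheduler reserves 400m of CPU for each replica when choosing a node.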

Here is the output of kubectl describe po | grep Node after creating the deployment:

$ kubectl describe po | grep Node
Node:       ip-172-20-0-26.us-west-2.compute.internal/172.20.0.26
Node:       ip-172-20-0-29.us-west-2.compute.internal/172.20.0.29
Node:       ip-172-20-0-27.us-west-2.compute.internal/172.20.0.27
Node:       ip-172-20-0-29.us-west-2.compute.internal/172.20.0.29

The four pods are now spread across multiple nodes (three distinct nodes here; two replicas share one of them). Placement is driven by the pods' CPU limits relative to the capacity available in the cluster. You can increase or decrease replicas to see how they get distributed across nodes.
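For example, assuming the Deployment above is named jenkins:

$ kubectl scale deployment jenkins --replicas=8

and then re-run kubectl describe po | grep Node to see where the new replicas land.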

None of this is specific to GKE or AWS.