kubernetes: waiting for first consumer to be created before binding

Date: 2019-04-25 21:10:07

Tags: kubernetes

I have been trying to run Kafka/ZooKeeper on Kubernetes. Using a Helm chart I can install ZooKeeper on the cluster, but the ZK pods are stuck in Pending state. When I run describe on one of the pods, the reason for the scheduling failure is "didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate." However, when I run describe on the PVC, all I get is "waiting for first consumer to be created before binding". I tried rebuilding the whole cluster, but the result is the same. I was using https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/ as a guide.

Can someone point me in the right direction here?

kubectl get pods -n zoo-keeper

kubectl get pods -n zoo-keeper
NAME                         READY   STATUS    RESTARTS   AGE
zoo-keeper-zk-0              0/1     Pending   0          20m
zoo-keeper-zk-1              0/1     Pending   0          20m
zoo-keeper-zk-2              0/1     Pending   0          20m

kubectl get sc

kubectl get sc
NAME            PROVISIONER                    AGE
local-storage   kubernetes.io/no-provisioner   25m

kubectl describe sc

kubectl describe  sc
Name:            local-storage
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}

Provisioner:           kubernetes.io/no-provisioner
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
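
Rendered as YAML, the last-applied-configuration annotation above corresponds to this manifest:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer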

kubectl describe pod foob-zookeeper-0 -n zoo-keeper

ubuntu@kmaster:~$ kubectl describe pod foob-zookeeper-0 -n zoo-keeper
Name:               foob-zookeeper-0
Namespace:          zoo-keeper
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=foob-zookeeper
                    app.kubernetes.io/instance=data-coord
                    app.kubernetes.io/managed-by=Tiller
                    app.kubernetes.io/name=foob-zookeeper
                    app.kubernetes.io/version=foob-zookeeper-9.1.0-15
                    controller-revision-hash=foob-zookeeper-5321f8ff5
                    release=data-coord
                    statefulset.kubernetes.io/pod-name=foob-zookeeper-0
Annotations:        foobar.com/product-name: zoo-keeper ZK
                    foobar.com/product-revision: ABC
Status:             Pending
IP:
Controlled By:      StatefulSet/foob-zookeeper
Containers:
  foob-zookeeper:
    Image:       repo.data.foobar.se/latest/zookeeper-3.4.10:1.6.0-15
    Ports:       2181/TCP, 2888/TCP, 3888/TCP, 10007/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP
    Limits:
      cpu:     2
      memory:  4Gi
    Requests:
      cpu:      1
      memory:   2Gi
    Liveness:   exec [zkOk.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  tcp-socket :2181 delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZK_REPLICAS:           3
      ZK_HEAP_SIZE:          1G
      ZK_TICK_TIME:          2000
      ZK_INIT_LIMIT:         10
      ZK_SYNC_LIMIT:         5
      ZK_MAX_CLIENT_CNXNS:   60
      ZK_SNAP_RETAIN_COUNT:  3
      ZK_PURGE_INTERVAL:     1
      ZK_LOG_LEVEL:          INFO
      ZK_CLIENT_PORT:        2181
      ZK_SERVER_PORT:        2888
      ZK_ELECTION_PORT:      3888
      JMXPORT:               10007
    Mounts:
      /var/lib/zookeeper from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nfcfx (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-foob-zookeeper-0
    ReadOnly:   false
  default-token-nfcfx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nfcfx
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  69s (x4 over 3m50s)  default-scheduler  0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.

kubectl get pv

ubuntu@kmaster:~$ kubectl get  pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
local-pv   50Gi       RWO            Retain           Available           local-storage            10m
ubuntu@kmaster:~$

kubectl get pvc local-claim

ubuntu@kmaster:~$ kubectl get  pvc local-claim
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
local-claim   Pending                                      local-storage   8m9s
ubuntu@kmaster:~$

kubectl describe pvc local-claim

ubuntu@kmaster:~$ kubectl describe pvc local-claim
Name:          local-claim
Namespace:     default
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Events:
  Type       Reason                Age                    From                         Message
  ----       ------                ----                   ----                         -------
  Normal     WaitForFirstConsumer  2m3s (x26 over 7m51s)  persistentvolume-controller  waiting for first consumer to be created before binding
Mounted By:  <none>
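
(As I understand it, with volumeBindingMode: WaitForFirstConsumer this event is normal until a pod that mounts the claim is actually scheduled; note also Mounted By: <none>, i.e. no pod references this PVC.)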

My PV file:

cat create-pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/kafka-mount
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kmaster
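
Note that for a local PV, Kubernetes does not create the backing directory: /mnt/kafka-mount must already exist on the node, and the kubernetes.io/hostname value must match the node's actual label (kubectl get nodes --show-labels will show it).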

cat pvc.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 50Gi
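
For reference, the datadir-foob-zookeeper-0 claim mounted by the pod is not created from a file like the one above: a StatefulSet generates one PVC per replica from its volumeClaimTemplates, named <template>-<pod>. A minimal sketch of what the chart's StatefulSet presumably contains (the serviceName and labels here are guesses, not taken from the actual chart):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foob-zookeeper
  namespace: zoo-keeper
spec:
  serviceName: foob-zookeeper   # assumed headless service name
  replicas: 3
  selector:
    matchLabels:
      app: foob-zookeeper
  template:
    metadata:
      labels:
        app: foob-zookeeper
    spec:
      containers:
      - name: foob-zookeeper
        image: repo.data.foobar.se/latest/zookeeper-3.4.10:1.6.0-15
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
  - metadata:
      name: datadir             # yields PVCs datadir-foob-zookeeper-0, -1, -2
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: local-storage
      resources:
        requests:
          storage: 50Gi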

1 Answer:

Answer 0: (score: 3)

It looks like you created the PV on the master node. By default, the master node is marked unschedulable for ordinary pods by means of a so-called taint. To be able to run some service on the master node you have two options:

1) Add a toleration to the service so that it is allowed to run on the master node:

tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master

You can even specify that a given service runs only on the master node:

nodeSelector:
  node-role.kubernetes.io/master: ""
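
Both snippets belong under the pod spec (spec.template.spec in a Deployment or StatefulSet). A minimal standalone example, with a placeholder name and image:

apiVersion: v1
kind: Pod
metadata:
  name: on-master-example        # placeholder name
spec:
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  nodeSelector:
    node-role.kubernetes.io/master: ""
  containers:
  - name: app
    image: busybox               # placeholder image
    command: ["sleep", "3600"]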

2) You can remove the taint from the master node so that any pod can run on it. You should be aware that this is dangerous because it can make your cluster very unstable.

kubectl taint nodes --all node-role.kubernetes.io/master-
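
The trailing - after the taint key tells kubectl to remove the taint rather than add it, and --all applies the change to every node in the cluster.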

Read more about taints and tolerations here: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/