Why does a Kubernetes pod sometimes get CrashLoopBackOff?

Asked: 2017-11-13 08:59:21

Tags: ruby-on-rails vagrant kubernetes centos7 rancher

Using a Kubernetes cluster with 3 hosts (1 master and 2 nodes).

Kubernetes version: 1.7

Deploying a Rails application to the Kubernetes cluster.

Here is the deployment.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: server
  labels:
    app: server
spec:
  ports:
    - port: 80
  selector:
    app: server
    tier: backend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: server
  labels:
    app: server
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: server
        tier: backend
    spec:
      containers:
      - image: 192.168.33.13/myapp/server
        name: server
        ports:
        - containerPort: 3000
          name: server
        imagePullPolicy: Always
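
A side note on the Service above: it does not set targetPort, and a Service's targetPort defaults to the port value (80 here), while the container listens on 3000. For the Service to actually reach the pods, the ports section would need something like:

  ports:
    - port: 80
      targetPort: 3000   # route Service port 80 to containerPort 3000

(This is unrelated to the CrashLoopBackOff itself, but it would prevent traffic from reaching the pods once they run.)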

Deploy it:

$ kubectl create -f deployment.yaml
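
One way to watch the rollout while the pods come up (the name server matches the Deployment above):

$ kubectl rollout status deployment/server
$ kubectl get pods -w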

Then check the pod status:

$ kubectl get pods
NAME                                                         READY     STATUS             RESTARTS   AGE
server-962161505-kw3jf                                       0/1       CrashLoopBackOff   6          9m
server-962161505-lxcfb                                       0/1       CrashLoopBackOff   6          9m
server-962161505-mbnkn                                       0/1       CrashLoopBackOff   6          9m

At first the status was Completed, but it soon turned into CrashLoopBackOff. Is there anything wrong with the configuration yaml file?
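
A standard first step for diagnosing why a container keeps exiting is to check the logs of the current and the previous (crashed) container instance, using one of the pod names from the listing above:

$ kubectl logs server-962161505-kw3jf
$ kubectl logs server-962161505-kw3jf --previous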

(By the way, I don't want to run the entrypoint.sh script here; instead, I use a job.yaml file to invoke a k8s Job that does it.)
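
For reference, a minimal sketch of what that job.yaml could look like, assuming the Job reuses the same image and that entrypoint.sh is reachable from the image's working directory (the Job name and the script path are assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: server-setup                          # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: setup
        image: 192.168.33.13/myapp/server
        command: ["/bin/sh", "entrypoint.sh"]  # assumed script location
      restartPolicy: Never                    # a Job's pods must use Never or OnFailure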

EDIT

Result of kubectl describe pod server-962161505-kw3jf:

Name:           server-962161505-kw3jf
Namespace:      default
Node:           node1/192.168.33.11
Start Time:     Mon, 13 Nov 2017 17:45:47 +0900
Labels:         app=server
                pod-template-hash=962161505
                tier=backend
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"server-962161505","uid":"0acadda6-c84f-11e7-84b8-02178ad2db9a","...
Status:         Running
IP:             10.42.254.104
Created By:     ReplicaSet/server-962161505
Controlled By:  ReplicaSet/server-962161505
Containers:
  server:
    Container ID:   docker://29eca3d9a20c60c83314101b036d742c5868c3bf25a39f28c5e4208bcdbfcede
    Image:          192.168.33.13/myapp/server
    Image ID:       docker-pullable://192.168.33.13/myapp/server@sha256:0e056e3ff5b1f1084e0946bc4211d33c6f48bc06dba7e07340c1609bbd5513d6
    Port:           3000/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 14 Nov 2017 10:13:12 +0900
      Finished:     Tue, 14 Nov 2017 10:13:13 +0900
    Ready:          False
    Restart Count:  26
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-csjqn (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-csjqn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-csjqn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                 From            Message
  ----     ------                 ----                ----            -------
  Normal   SuccessfulMountVolume  22m                 kubelet, node1  MountVolume.SetUp succeeded for volume "default-token-csjqn"
  Normal   SandboxChanged         22m                 kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Warning  Failed                 20m (x3 over 21m)   kubelet, node1  Failed to pull image "192.168.33.13/myapp/server": rpc error: code = 2 desc = Error response from daemon: {"message":"Get http://192.168.33.13/v2/: dial tcp 192.168.33.13:80: getsockopt: connection refused"}
  Normal   BackOff                20m (x5 over 21m)   kubelet, node1  Back-off pulling image "192.168.33.13/myapp/server"
  Normal   Pulling                4m (x7 over 21m)    kubelet, node1  pulling image "192.168.33.13/myapp/server"
  Normal   Pulled                 4m (x4 over 20m)    kubelet, node1  Successfully pulled image "192.168.33.13/myapp/server"
  Normal   Created                4m (x4 over 20m)    kubelet, node1  Created container
  Normal   Started                4m (x4 over 20m)    kubelet, node1  Started container
  Warning  FailedSync             10s (x99 over 21m)  kubelet, node1  Error syncing pod
  Warning  BackOff                10s (x91 over 20m)  kubelet, node1  Back-off restarting failed container
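
Note the Last State above: the container terminates with Reason Completed and Exit Code 0 one second after it starts, so the image's default command runs to completion instead of staying in the foreground. Because a Deployment's pods use restartPolicy: Always, the kubelet keeps restarting the finished container with a growing back-off, which is exactly the CrashLoopBackOff shown. (The earlier Failed/BackOff pull events were a separate, transient registry problem; the later Pulled events show the image now pulls fine.) One possible fix is to give the container an explicit foreground command; the exact command below is an assumption about how the image is built:

    spec:
      containers:
      - image: 192.168.33.13/myapp/server
        name: server
        # Assumed: a standard Rails start command that stays in the foreground.
        command: ["bundle", "exec", "rails", "server", "-b", "0.0.0.0", "-p", "3000"]
        ports:
        - containerPort: 3000
          name: server
        imagePullPolicy: Always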
