Problems mounting an EBS volume into a Pod in a Kubernetes cluster

Date: 2018-11-25 18:59:12

Tags: amazon-web-services kubernetes

The cluster I am using was bootstrapped with kubeadm and is deployed on AWS.

sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:51:33Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

I am trying to configure a Pod that mounts a persistent volume (I am leaving PVs and PVCs out of it for now). This is the manifest I am using:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb-aws
spec:
  volumes:
  - name: mongodb-data
    awsElasticBlockStore:
      volumeID: vol-xxxxxx
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
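
I create the Pod the usual way (the file name is just what I called the manifest locally):

kubectl apply -f mongodb-aws.yaml
kubectl get pod mongodb-aws -w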

At first, I was getting this error in the pod's logs:

"mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-xxxx does not exist"

After some research, I found out that I have to set up a cloud provider, which is what I have been trying to do for the last 10 hours. I have tested a lot of suggestions, but none of them worked. I tried tagging all the resources used by the cluster, as mentioned in https://github.com/kubernetes/kubernetes/issues/53538#issuecomment-345942305 (a sketch of those tagging commands is below), and I also tried the official solution for running the in-tree cloud provider with kubeadm: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/
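
For reference, the tagging from that issue boils down to something like this with the AWS CLI; this is a minimal sketch with placeholder resource IDs, assuming the legacy KubernetesCluster tag key (newer versions of the provider look for kubernetes.io/cluster/<name> instead):

aws ec2 create-tags \
  --resources i-0xxxxxxxxxxxxxxxx vol-xxxxxx subnet-xxxxxx sg-xxxxxx \
  --tags Key=KubernetesCluster,Value=kubernetes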

The kubeadm_config.yml file:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha3
kubernetesVersion: v1.12.0
apiServerExtraArgs:
  cloud-provider: "aws"
  cloud-config: "/etc/kubernetes/cloud.conf"
apiServerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"
controllerManagerExtraArgs:
  cloud-provider: "aws"
  cloud-config: "/etc/kubernetes/cloud.conf"
controllerManagerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"

In /etc/kubernetes/cloud.conf I put:

[Global] 
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes
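
As an aside, the in-tree AWS provider also accepts a Zone key in [Global]; when it is absent, the provider falls back to querying the EC2 metadata service (as the controller-manager logs further down confirm). A fuller sketch, with a hypothetical availability zone:

[Global]
Zone=eu-west-1a
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes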

After running kubeadm init --config kubeadm_config.yml, I get the following error:

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster


The control plane is not created.
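
For completeness, this is how I inspected the kubelet while it was failing; kubeadm writes the extra kubelet flags (including cloud-provider) to a file on the node, with paths per a default kubeadm 1.12 install:

sudo systemctl status kubelet
sudo journalctl -xeu kubelet | tail -n 50
cat /var/lib/kubelet/kubeadm-flags.env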

When I remove this block:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"

from kubeadm_config.yml and run kubeadm init --config kubeadm_config.yml again, the Kubernetes master initializes successfully, but when I execute kubectl get pods --all-namespaces, I get:

NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
kube-system   etcd-ip-172-31-31-160                      1/1     Running            0          11m
kube-system   kube-apiserver-ip-172-31-31-160            1/1     Running            0          11m
kube-system   kube-controller-manager-ip-172-31-31-160   0/1     CrashLoopBackOff   6          11m
kube-system   kube-scheduler-ip-172-31-31-160            1/1     Running            0          10m

The controller manager is not running, even though the --cloud-provider=aws command-line flag is present for both the apiserver (in /etc/kubernetes/manifests/kube-apiserver.yaml) and the controller manager (in /etc/kubernetes/manifests/kube-controller-manager.yaml), as the check below shows.
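
A quick check against the standard kubeadm manifest paths:

grep cloud-provider /etc/kubernetes/manifests/kube-apiserver.yaml \
    /etc/kubernetes/manifests/kube-controller-manager.yaml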

When I run sudo kubectl logs kube-controller-manager-ip-172-31-13-85 -n kube-system, I get:

Flag --address has been deprecated, see --bind-address instead.
I1126 11:27:35.006433       1 serving.go:293] Generated self-signed cert (/var/run/kubernetes/kube-controller-manager.crt, /var/run/kubernetes/kube-controller-manager.key)
I1126 11:27:35.811493       1 controllermanager.go:143] Version: v1.12.0
I1126 11:27:35.812091       1 secure_serving.go:116] Serving securely on [::]:10257
I1126 11:27:35.812605       1 deprecated_insecure_serving.go:50] Serving insecurely on 127.0.0.1:10252
I1126 11:27:35.812760       1 leaderelection.go:187] attempting to acquire leader lease  kube-system/kube-controller-manager...
I1126 11:27:53.260484       1 leaderelection.go:196] successfully acquired lease kube-system/kube-controller-manager
I1126 11:27:53.261474       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"b0da1291-f16d-11e8-baeb-02a38a37cfd6", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-172-31-13-85_4603714e-f16e-11e8-8d9d-02a38a37cfd6 became leader
I1126 11:27:53.290493       1 aws.go:1042] Building AWS cloudprovider
I1126 11:27:53.290642       1 aws.go:1004] Zone not specified in configuration file; querying AWS metadata service
F1126 11:27:53.296760       1 controllermanager.go:192] error building controller context: cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-0b063e2a3c9797398: "error listing AWS instances: \"NoCredentialProviders: no valid providers in chain. Deprecated.\\n\\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors\""

One thing I have not tried is downgrading kubeadm (just to be able to use the older MasterConfiguration manifest format).
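
Presumably that last fatal line means the controller manager has no AWS credentials at all; with the in-tree provider these would normally come from an IAM instance profile attached to the EC2 node (with permissions such as ec2:DescribeInstances). For reference, attaching a profile looks roughly like this (the role and profile names are hypothetical; the instance ID is the one from the log):

aws iam add-role-to-instance-profile --instance-profile-name k8s-master-profile --role-name k8s-master-role
aws ec2 associate-iam-instance-profile --instance-id i-0b063e2a3c9797398 --iam-instance-profile Name=k8s-master-profile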

If you need more information, feel free to ask.

0 Answers
