Restarting a Kubernetes PetSet clears the persistent volumes

Date: 2017-03-01 19:10:17

Tags: kubernetes apache-zookeeper glusterfs

I am running a 3-instance ZooKeeper PetSet that uses GlusterFS persistent volumes. The first time the PetSet starts, everything works fine.

One of my requirements is that if the PetSet is killed, the pods keep using the same persistent volumes after I restart it.

The problem I am facing is that after restarting the PetSet, the original data in the persistent volumes is wiped. How can I solve this other than manually copying the files out of the volume beforehand? I tried persistentVolumeReclaimPolicy Retain and Delete, and the volumes were cleared either way. Thanks.
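
From the docs, persistentVolumeReclaimPolicy should only kick in after a claim is deleted and the volume is released: Retain keeps the backing data for manual cleanup, Delete removes the backing storage. Neither setting should touch data while the claim is still bound, which made me suspect something inside the pod was doing the wiping. For reference, this is how I switched the policy on an existing PV (illustrative command; the PV name is from my manifests below):

# Flip an existing PV's reclaim policy to Retain so a released
# volume keeps its data for manual recovery.
kubectl patch pv glusterfsvol-zookeeper-0 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'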

The configuration files are below.

PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-0
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-0
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-1
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-2
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-2
    namespace: default

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
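
To verify that the pre-created claims actually bound to the volumes they were pre-bound to via claimRef, I checked each pair like this (illustrative check; expected output shown as a comment):

# Each PV carries a claimRef naming its PVC, so every pair should
# report as Bound to its counterpart once both objects exist.
kubectl get pv glusterfsvol-zookeeper-0 \
  -o jsonpath='{.status.phase} {.spec.claimRef.name}'
# expected output: Bound glusterfsvol-zookeeper-0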

PetSet

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: zookeeper
spec:
  serviceName: "zookeeper"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: zookeeper
        securityContext:
          privileged: true
          capabilities:
            add:
              - IPC_LOCK
        image: kuanghaochina/zookeeper-3.5.2-alpine-jdk:latest
        imagePullPolicy: Always
        ports:
          - containerPort: 2888
            name: peer
          - containerPort: 3888
            name: leader-election
          - containerPort: 2181
            name: client
        env:
        - name: ZOOKEEPER_LOG_LEVEL
          value: INFO
        volumeMounts:
        - name: glusterfsvol
          mountPath: /opt/zookeeper/data
          subPath: data
        - name: glusterfsvol
          mountPath: /opt/zookeeper/dataLog
          subPath: dataLog
  volumeClaimTemplates:
  - metadata:
      name: glusterfsvol
    spec:
      accessModes: 
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
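
Note that a PetSet derives each pod's claim name from the volumeClaimTemplate name plus the pod name, so pod zookeeper-0 looks for a claim named glusterfsvol-zookeeper-0; that is why the manually created PVCs above line up with the pre-bound PVs. A quick way to see which PV a given pod's claim resolved to (illustrative command):

# Print the name of the PV backing the claim used by pod zookeeper-0.
kubectl get pvc glusterfsvol-zookeeper-0 -o jsonpath='{.spec.volumeName}'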

1 Answer:

Answer 0 (score: 0)

The cause I found is that I was using zkServer-initialize.sh to force-initialize ZooKeeper, and inside that script it cleans out the dataDir.
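
For anyone hitting the same symptom: the wipe comes from the initialization script itself, not from Kubernetes or the reclaim policy. The relevant logic in zkServer-initialize.sh is roughly the following (a paraphrased sketch of the 3.5.x script, not the verbatim source):

# Paraphrased: when the data directory already exists and --force is
# passed, the script deletes it before re-initializing.
if [ -d "$ZOO_DATADIR" ]; then
  if [ "x$FORCE" = "x1" ]; then
    echo "Force enabled, data dir will be re-initialized"
    rm -rf "$ZOO_DATADIR"
  else
    echo "Data dir already exists, aborting" >&2
    exit 1
  fi
fi

One way to keep the data across restarts is to guard the initialization in the container entrypoint so it only runs on first boot. A minimal sketch, assuming ZOO_DATADIR and MYID are set by the image:

# Initialize only when the volume has never been populated; on later
# restarts reuse the existing dataDir instead of re-creating it.
if [ ! -f "$ZOO_DATADIR/myid" ]; then
  zkServer-initialize.sh --force --myid="$MYID"
fi
exec zkServer.sh start-foreground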