How do I browse a folder in Kubernetes?

Asked: 2019-06-05 07:34:38

Tags: kubernetes hyperledger-fabric

I am trying to bring up Fabric on Kubernetes, and the peer pod goes into CrashLoopBackOff. After some searching, I see this in the logs:

2019-06-05 07:30:19.216 UTC [main] main -> ERRO 001 Cannot run peer because error when setting up MSP from directory /etc/hyperledger/fabric/msp: err Could not load a valid signer certificate from directory /etc/hyperledger/fabric/msp/signcerts, err stat /etc/hyperledger/fabric/msp/signcerts: no such file or directory

How can I check whether I mounted the correct folder? I would like to get into the crashed container to verify that my msp folder exists.
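
For reference, this is roughly how the diagnostics in the edits below can be collected (standard kubectl; the pod name comes from kubectl get pods -n org1, and --previous reads the logs of the container's last crashed run):

kubectl get pods -n org1
kubectl logs peer1-org1-7b9cf7fbd4-74b7q -c peer1-org1 -n org1 --previous
kubectl describe pod peer1-org1-7b9cf7fbd4-74b7q -n org1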

Thanks for your help!

Edit 1: kubectl describe pod peer1-org1

Name:               peer1-org1-7b9cf7fbd4-74b7q
Namespace:          org1
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.0.2.15
Start Time:         Wed, 05 Jun 2019 17:48:21 +0900
Labels:             app=hyperledger
                    org=org1
                    peer-id=peer1
                    pod-template-hash=7b9cf7fbd4
                    role=peer
Annotations:        <none>
Status:             Running
IP:                 172.17.0.9
Controlled By:      ReplicaSet/peer1-org1-7b9cf7fbd4
Containers:
  couchdb:
    Container ID:   docker://7b5e80103491476843d365dc234316ae55a92d66f2ea009cf9162583a76907fb
    Image:          hyperledger/fabric-couchdb:x86_64-1.0.0
    Image ID:       docker-pullable://hyperledger/fabric-couchdb@sha256:e89b0f95f6ff674fd043795090dd65a11d727ec005d925545cf0b4fc48aa221d
    Port:           5984/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 05 Jun 2019 17:49:49 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sjp8t (ro)
  peer1-org1:
    Container ID:  docker://95e743dceafbd78f7e29476302ac86d7eb48f97c9a50db3d174dc6684511c97b
    Image:         hyperledger/fabric-peer:x86_64-1.0.0
    Image ID:      docker-pullable://hyperledger/fabric-peer@sha256:b7c1c2a6b356996c3dbe2b9554055cd2b63194cd7a492a83de2dbabf7f7e3c65
    Ports:         7051/TCP, 7052/TCP, 7053/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      peer
    Args:
      node
      start
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 05 Jun 2019 17:50:58 +0900
      Finished:     Wed, 05 Jun 2019 17:50:58 +0900
    Ready:          False
    Restart Count:  3
    Environment:
      CORE_LEDGER_STATE_STATEDATABASE:                 CouchDB
      CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS:  localhost:5984
      CORE_VM_ENDPOINT:                                unix:///host/var/run/docker.sock
      CORE_LOGGING_LEVEL:                              DEBUG
      CORE_PEER_TLS_ENABLED:                           false
      CORE_PEER_GOSSIP_USELEADERELECTION:              true
      CORE_PEER_GOSSIP_ORGLEADER:                      false
      CORE_PEER_PROFILE_ENABLED:                       true
      CORE_PEER_TLS_CERT_FILE:                         /etc/hyperledger/fabric/tls/server.crt
      CORE_PEER_TLS_KEY_FILE:                          /etc/hyperledger/fabric/tls/server.key
      CORE_PEER_TLS_ROOTCERT_FILE:                     /etc/hyperledger/fabric/tls/ca.crt
      CORE_PEER_ID:                                    peer1.org1
      CORE_PEER_ADDRESS:                               peer1.org1:7051
      CORE_PEER_GOSSIP_EXTERNALENDPOINT:               peer1.org1:7051
      CORE_PEER_LOCALMSPID:                            Org1MSP
    Mounts:
      /etc/hyperledger/fabric/msp from certificate (rw,path="peers/peer1.org1/msp")
      /etc/hyperledger/fabric/tls from certificate (rw,path="peers/peer1.org1/tls")
      /host/var/run/ from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sjp8t (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  certificate:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  org1-pv
    ReadOnly:   false
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run
    HostPathType:  
  default-token-sjp8t:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sjp8t
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  2m58s              default-scheduler  Successfully assigned org1/peer1-org1-7b9cf7fbd4-74b7q to minikube
  Normal   Pulling    2m55s              kubelet, minikube  Pulling image "hyperledger/fabric-couchdb:x86_64-1.0.0"
  Normal   Pulled     90s                kubelet, minikube  Successfully pulled image "hyperledger/fabric-couchdb:x86_64-1.0.0"
  Normal   Created    90s                kubelet, minikube  Created container couchdb
  Normal   Started    90s                kubelet, minikube  Started container couchdb
  Normal   Pulling    90s                kubelet, minikube  Pulling image "hyperledger/fabric-peer:x86_64-1.0.0"
  Normal   Pulled     71s                kubelet, minikube  Successfully pulled image "hyperledger/fabric-peer:x86_64-1.0.0"
  Normal   Created    21s (x4 over 70s)  kubelet, minikube  Created container peer1-org1
  Normal   Started    21s (x4 over 70s)  kubelet, minikube  Started container peer1-org1
  Normal   Pulled     21s (x3 over 69s)  kubelet, minikube  Container image "hyperledger/fabric-peer:x86_64-1.0.0" already present on machine
  Warning  BackOff    5s (x6 over 68s)   kubelet, minikube  Back-off restarting failed container

Edit 2:

kubectl get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                        STORAGECLASS   REASON   AGE
org1-artifacts-pv                          500Mi      RWX            Retain           Available                                                        39m
org1-pv                                    500Mi      RWX            Retain           Available                                                        39m
org2-artifacts-pv                          500Mi      RWX            Retain           Available                                                        39m
org2-pv                                    500Mi      RWX            Retain           Available                                                        39m
orgorderer1-pv                             500Mi      RWX            Retain           Available                                                        39m
pvc-aa87a86f-876e-11e9-99ef-080027f6ce3c   10Mi       RWX            Delete           Bound       orgorderer1/orgorderer1-pv   standard                39m
pvc-aadb69ff-876e-11e9-99ef-080027f6ce3c   10Mi       RWX            Delete           Bound       org2/org2-pv                 standard                39m
pvc-ab2e4d8e-876e-11e9-99ef-080027f6ce3c   10Mi       RWX            Delete           Bound       org2/org2-artifacts-pv       standard                39m
pvc-abb04335-876e-11e9-99ef-080027f6ce3c   10Mi       RWX            Delete           Bound       org1/org1-pv                 standard                39m
pvc-abfaaf76-876e-11e9-99ef-080027f6ce3c   10Mi       RWX            Delete           Bound       org1/org1-artifacts-pv       standard                39m

kubectl get pvc

NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
org1-artifacts-pv   Bound    pvc-abfaaf76-876e-11e9-99ef-080027f6ce3c   10Mi       RWX            standard       40m
org1-pv             Bound    pvc-abb04335-876e-11e9-99ef-080027f6ce3c   10Mi       RWX            standard       40m

Edit 3: org1-cli.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
    name: org1-artifacts-pv
spec:
    capacity:
       storage: 500Mi
    accessModes:
       - ReadWriteMany
    hostPath:
      path: "/opt/share/channel-artifacts"
    # nfs: 
    #   path: /opt/share/channel-artifacts
    #   server: localhost #change to your nfs server ip here
---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    namespace: org1
    name: org1-artifacts-pv
spec:
   accessModes:
     - ReadWriteMany
   resources:
      requests:
        storage: 10Mi

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
   namespace: org1
   name: cli
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
       app: cli
    spec:
      containers:
        - name: cli
          image:  hyperledger/fabric-tools:x86_64-1.0.0
          env:
          - name: CORE_PEER_TLS_ENABLED
            value: "false"
          #- name: CORE_PEER_TLS_CERT_FILE
          #  value: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1/peers/peer0.org1/tls/server.crt
          #- name: CORE_PEER_TLS_KEY_FILE
          #  value: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1/peers/peer0.org1/tls/server.key
          #- name: CORE_PEER_TLS_ROOTCERT_FILE
          #  value: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1/peers/peer0.org1/tls/ca.crt
          - name: CORE_VM_ENDPOINT
            value: unix:///host/var/run/docker.sock
          - name: GOPATH
            value: /opt/gopath
          - name: CORE_LOGGING_LEVEL
            value: DEBUG
          - name: CORE_PEER_ID
            value: cli
          - name: CORE_PEER_ADDRESS
            value: peer0.org1:7051
          - name: CORE_PEER_LOCALMSPID
            value: Org1MSP
          - name: CORE_PEER_MSPCONFIGPATH
            value: /etc/hyperledger/fabric/msp
          workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
          volumeMounts:
          # - mountPath: /opt/gopath/src/github.com/hyperledger/fabric/peer
          #   name: certificate
          #   subPath: scripts
           - mountPath: /host/var/run/
             name: run
          # - mountPath: /opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
          #   name: certificate
          #   subPath: chaincode
           - mountPath: /etc/hyperledger/fabric/msp
             name: certificate
             subPath: users/Admin@org1/msp
           - mountPath: /opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
             name: artifacts
      volumes:
        - name: certificate
          persistentVolumeClaim:
              claimName: org1-pv
        - name: artifacts
          persistentVolumeClaim:
              claimName: org1-artifacts-pv
        - name: run
          hostPath:
            path: /var/run 
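
Note that this cli deployment mounts the same org1-pv claim at /etc/hyperledger/fabric/msp (subPath users/Admin@org1/msp), so once the cli pod is running it can be used to browse the volume directly. A minimal sketch, assuming the pod starts cleanly (<cli-pod-name> is whatever kubectl reports):

kubectl get pods -n org1 -l app=cli
kubectl exec -it <cli-pod-name> -n org1 -- /bin/bash
# inside the container:
ls -la /etc/hyperledger/fabric/msp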

org1-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
    name: org1

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: org1-pv
spec:
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /opt/share/crypto-config/peerOrganizations/org1

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 namespace: org1
 name: org1-pv
spec:
 accessModes:
   - ReadWriteMany
 resources:
   requests:
     storage: 10Mi

---
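
Since org1-pv is a hostPath volume, the peer's msp mount (subPath peers/peer1.org1/msp) resolves to a directory inside the minikube VM, not on the local machine. A quick sanity check — a sketch, assuming the crypto material was meant to be provisioned under /opt/share inside the VM:

minikube ssh
# inside the VM: this is the path the peer's failing msp mount resolves to
ls -la /opt/share/crypto-config/peerOrganizations/org1/peers/peer1.org1/msp/signcerts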

Edit 4: peer1-org1.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: org1
  name: peer1-org1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
       app: hyperledger
       role: peer
       peer-id: peer1
       org: org1
    spec:
      containers:
      - name: couchdb
        image: hyperledger/fabric-couchdb:x86_64-1.0.0
        ports:
         - containerPort: 5984


      - name: peer1-org1 
        image: hyperledger/fabric-peer:x86_64-1.0.0
        env:
        - name: CORE_LEDGER_STATE_STATEDATABASE
          value: "CouchDB"
        - name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
          value: "localhost:5984"
        - name: CORE_VM_ENDPOINT
          value: "unix:///host/var/run/docker.sock"
        - name: CORE_LOGGING_LEVEL
          value: "DEBUG"
        - name: CORE_PEER_TLS_ENABLED
          value: "false"
        - name: CORE_PEER_GOSSIP_USELEADERELECTION
          value: "true"
        - name: CORE_PEER_GOSSIP_ORGLEADER
          value: "false" 
        - name: CORE_PEER_PROFILE_ENABLED
          value: "true"
        - name: CORE_PEER_TLS_CERT_FILE
          value: "/etc/hyperledger/fabric/tls/server.crt" 
        - name: CORE_PEER_TLS_KEY_FILE
          value: "/etc/hyperledger/fabric/tls/server.key"
        - name: CORE_PEER_TLS_ROOTCERT_FILE
          value: "/etc/hyperledger/fabric/tls/ca.crt"
        - name: CORE_PEER_ID
          value: peer1.org1
        - name: CORE_PEER_ADDRESS
          value: peer1.org1:7051
        - name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
          value: peer1.org1:7051
        - name: CORE_PEER_LOCALMSPID
          value: Org1MSP
        workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        ports:
         - containerPort: 7051
         - containerPort: 7052
         - containerPort: 7053
        command: ["peer"]
        args: ["node","start"]
        volumeMounts:
         #- mountPath: /opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts 
         #  name: certificate
         #  subPath: channel-artifacts
         - mountPath: /etc/hyperledger/fabric/msp 
           name: certificate
           #subPath: crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp
           subPath: peers/peer1.org1/msp
         - mountPath: /etc/hyperledger/fabric/tls
           name: certificate
           #subPath: crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/
           subPath: peers/peer1.org1/tls
         - mountPath: /host/var/run/
           name: run
      volumes:
       - name: certificate
         persistentVolumeClaim:
             claimName: org1-pv
       - name: run
         hostPath:
           path: /run
       

---
apiVersion: v1
kind: Service
metadata:
   namespace: org1
   name: peer1
spec:
 selector:
   app: hyperledger
   role: peer
   peer-id: peer1
   org: org1
 type: NodePort
 ports:
   - name: external-listen-endpoint
     protocol: TCP
     port: 7051
     targetPort: 7051
     nodePort: 30003

   - name: chaincode-listen
     protocol: TCP
     port: 7052
     targetPort: 7052
     nodePort: 30004

---
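
For reference, the error singles out signcerts because a Fabric 1.0 peer expects the directory mounted at /etc/hyperledger/fabric/msp to follow the cryptogen-generated MSP layout, roughly:

msp/
  admincerts/   # certs of the org admins
  cacerts/      # root CA certs
  keystore/     # the peer's private key
  signcerts/    # the peer's signing certificate (the file the error says is missing)

If peers/peer1.org1/msp under the volume lacks these subdirectories, the peer fails exactly as in the log above.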

2 Answers:

Answer 0 (score: 0):

You can run kubectl edit pod <podname> -n <namespace> and change the command section to sleep 1000000000; the pod will then restart and you can get inside it and see what is going on. Or just delete the deployment, edit the yaml to remove the peer start command, redeploy the yaml, and look at how the directories are laid out.
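
A minimal sketch of the second approach: in the peer1-org1 deployment yaml from the question, swap the peer start command for a long sleep so the container stays alive (assuming the fabric-peer image ships a shell):

        command: ["sh", "-c", "sleep 1000000000"]
        # command: ["peer"]          # original start command, disabled for debugging
        # args: ["node","start"]

Then redeploy and look around:

kubectl apply -f peer1-org1.yaml
kubectl exec -it <new-peer-pod-name> -c peer1-org1 -n org1 -- ls -la /etc/hyperledger/fabric/msp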

Answer 1 (score: 0):

After some more searching, I tried mounting the volume into an nginx pod, following the Kubernetes PVC sample, and changed the pod's claimName to the PVC I had created. From there I exec'ed into a bash shell and browsed my files, which let me verify whether the correct folder was mounted.
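
A sketch of such a debug pod (pod and container names here are illustrative; the claimName is the PVC from the question):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
  namespace: org1
spec:
  containers:
    - name: inspector
      image: nginx
      volumeMounts:
        - mountPath: /data        # browse the claim's contents here
          name: certificate
  volumes:
    - name: certificate
      persistentVolumeClaim:
        claimName: org1-pv

kubectl exec -it pvc-inspector -n org1 -- ls -la /data/peers/peer1.org1/msp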
