2016-08-17

I am trying to mount an NFS volume into my pod, but without success: Kubernetes fails to mount the volume and times out.

I have an NFS export running on a server, and when I connect to it from another running server,

sudo mount -t nfs -o proto=tcp,port=2049 10.0.0.4:/export /mnt

works fine.

Another thing worth mentioning: when I remove the volume from the deployment and the pod is running, I can log into it and successfully telnet to 10.0.0.4 on ports 111 and 2049. So there really does not seem to be any communication problem.

Also:

showmount -e 10.0.0.4 
Export list for 10.0.0.4: 
/export/drive 10.0.0.0/16 
/export  10.0.0.0/16 
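
For reference, the matching server-side configuration would live in /etc/exports on 10.0.0.4. This is a hypothetical reconstruction consistent with the showmount output above; the export options shown are assumptions, not taken from the question:

```text
# /etc/exports on 10.0.0.4 -- sketch; options (rw,sync,no_subtree_check) are assumed
/export       10.0.0.0/16(rw,sync,no_subtree_check)
/export/drive 10.0.0.0/16(rw,sync,no_subtree_check)
```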

So I can assume there are no network or configuration problems between the server and the client (I am on Amazon, and the server I tested from is in the same security group as the k8s minions).

PS: the server is a plain Ubuntu machine with a 50GB disk.

Kubernetes v1.3.4

So I started by creating my PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.4
    path: "/export"

My PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi

Here is what kubectl describe shows for them:

Name:           nfs
Labels:         <none>
Status:         Bound
Claim:          default/nfs-claim
Reclaim Policy: Retain
Access Modes:   RWX
Capacity:       50Gi
Message:
Source:
    Type:     NFS (an NFS mount that lasts the lifetime of a pod)
    Server:   10.0.0.4
    Path:     /export
    ReadOnly: false
No events.

Name:         nfs-claim
Namespace:    default
Status:       Bound
Volume:       nfs
Labels:       <none>
Capacity:     0
Access Modes:
No events.

The pod Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      name: mypod
      labels:
        # Important: these labels need to match the selector above; the API server enforces this constraint
        name: mypod
    spec:
      containers:
      - name: abcd
        image: irrelevant to the question
        ports:
        - containerPort: 80
        env:
        - name: hello
          value: world
        volumeMounts:
        - mountPath: "/mnt"
          name: nfs
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs-claim
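
As a quick way to isolate whether the PV/PVC layer is the problem, the NFS share can also be referenced directly in the pod spec. This is only a sketch reusing the server and path from the question:

```yaml
# Pod-spec fragment: mount the NFS export directly, bypassing PV/PVC
volumes:
- name: nfs
  nfs:
    server: 10.0.0.4
    path: /export
```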

When I deploy my pod, I get the following:

Volumes:
  nfs:
    Type:      PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: nfs-claim
    ReadOnly:  false
  default-token-6pd57:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-6pd57
QoS Tier: BestEffort
    Events: 
     FirstSeen LastSeen Count From       SubobjectPath Type  Reason  Message 
     --------- -------- ----- ----       ------------- -------- ------  ------- 
     13m  13m  1 {default-scheduler }       Normal  Scheduled Successfully assigned xxx-2140451452-hjeki to ip-10-0-0-157.us-west-2.compute.internal 
     11m  7s  6 {kubelet ip-10-0-0-157.us-west-2.compute.internal}   Warning  FailedMount Unable to mount volumes for pod "xxx-2140451452-hjeki_default(93ca148d-6475-11e6-9c49-065c8a90faf1)": timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs] 
     11m  7s  6 {kubelet ip-10-0-0-157.us-west-2.compute.internal}   Warning  FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs] 

I have tried everything I know and everything I can think of. What am I missing or doing wrong here?

Answer

1

I tested Kubernetes versions 1.3.4 and 1.3.5, and the NFS mount did not work for me in either. I then switched back to 1.2.5, and that version gave me more detailed information (kubectl describe pod ...). It turned out that 'nfs-common' was missing from the hyperkube image. After I added nfs-common to all container instances based on the hyperkube image, on both the master and the worker nodes, the NFS share started working normally (the mount succeeded). I tested this in practice and it solved my problem.
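
One minimal way to do this is to build a derived image. The sketch below is an assumption for illustration; the base image name and tag must match whatever hyperkube image your nodes actually run:

```text
# Hypothetical Dockerfile: extend the hyperkube image with the NFS client tools
FROM gcr.io/google_containers/hyperkube-amd64:v1.3.4
RUN apt-get update && \
    apt-get install -y nfs-common && \
    rm -rf /var/lib/apt/lists/*
```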


I can see an issue is already open for this, so hopefully it gets fixed officially: https://github.com/kubernetes/kubernetes/issues/30310 – dejwsz


Actually, the fix was already applied to the hyperkube image in the 'master' branch (see its Dockerfile definition) – dejwsz