2016-06-12 

I am unable to deploy Kubernetes with DNS on a local Ubuntu cluster (a single node). I suspect it may be related to flannel, but I am not sure, and more importantly I do not understand why the error mentions coreos when I am deploying on Ubuntu. I had to change a few values in config-default.sh under cluster/ubuntu just to get this far, but I cannot get past the error below, and in the end Kubernetes fails to come up with DNS.

Below is my error trace. I cannot tell which of the lines in it is the reason kube-up.sh fails to deploy:

Error: 100: Key not found (/coreos.com) [1] 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 
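For reference, `/coreos.com` is simply the etcd key prefix that flannel uses for its network config on any distro, which is why it shows up on Ubuntu too. A sketch of inspecting and seeding that key by hand (assuming etcdctl 2.x on the master, talking to etcd at its default endpoint):

```shell
# Sketch: inspect/seed flannel's network config key in etcd (etcdctl 2.x syntax).
FLANNEL_NET="172.16.0.0/16"
CONFIG="{\"Network\":\"${FLANNEL_NET}\", \"Backend\": {\"Type\": \"vxlan\"}}"
if command -v etcdctl >/dev/null 2>&1; then
  # On a fresh cluster the get prints "Error: 100: Key not found (/coreos.com)";
  # kube-up's reconfDocker.sh writes the key right after, so that error by itself
  # is expected on first boot.
  etcdctl get /coreos.com/network/config || \
    etcdctl set /coreos.com/network/config "${CONFIG}"
else
  echo "etcdctl not found; would write: ${CONFIG}"
fi
```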

ERROR TRACE 

$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh   # ran this command in the terminal 
... Starting cluster using provider: ubuntu 
... calling verify-prereqs 
... calling kube-up 
~/kubernetes/cluster/ubuntu ~/kubernetes/cluster 
Prepare flannel 0.5.0 release ... 
% Total % Received % Xferd Average Speed Time Time Time Current 
Dload Upload Total Spent Left Speed 
100 608 0 608 0 0 102 0 --:--:-- 0:00:05 --:--:-- 138 
100 2757k 100 2757k 0 0 194k 0 0:00:14 0:00:14 --:--:-- 739k 
Prepare etcd 2.2.0 release ... 
% Total % Received % Xferd Average Speed Time Time Time Current 
Dload Upload Total Spent Left Speed 
100 606 0 606 0 0 101 0 --:--:-- 0:00:05 --:--:-- 175 
100 7183k 100 7183k 0 0 468k 0 0:00:15 0:00:15 --:--:-- 1871k 
Prepare kubernetes 1.2.4 release ... 
Done! All your binaries locate in kubernetes/cluster/ubuntu/binaries directory 
~/kubernetes/cluster 

Deploying master and node on machine 192.168.245.244 
make-ca-cert.sh 100% 4028 3.9KB/s 00:00 
easy-rsa.tar.gz 100% 42KB 42.4KB/s 00:00 
config-default.sh 100% 5419 5.3KB/s 00:00 
util.sh 100% 29KB 28.6KB/s 00:00 
kubelet.conf 100% 644 0.6KB/s 00:00 
kube-proxy.conf 100% 684 0.7KB/s 00:00 
kubelet 100% 2158 2.1KB/s 00:00 
kube-proxy 100% 2233 2.2KB/s 00:00 
kube-scheduler.conf 100% 674 0.7KB/s 00:00 
etcd.conf 100% 709 0.7KB/s 00:00 
kube-controller-manager.conf 100% 744 0.7KB/s 00:00 
kube-apiserver.conf 100% 674 0.7KB/s 00:00 
kube-apiserver 100% 2358 2.3KB/s 00:00 
kube-scheduler 100% 2360 2.3KB/s 00:00 
kube-controller-manager 100% 2672 2.6KB/s 00:00 
etcd 100% 2073 2.0KB/s 00:00 
reconfDocker.sh 100% 2094 2.0KB/s 00:00 
kube-apiserver 100% 58MB 58.2MB/s 00:00 
kube-scheduler 100% 42MB 42.0MB/s 00:00 
kube-controller-manager 100% 52MB 51.8MB/s 00:00 
etcdctl 100% 12MB 12.3MB/s 00:00 
etcd 100% 14MB 13.8MB/s 00:00 
flanneld 100% 11MB 10.8MB/s 00:00 
kubelet 100% 60MB 60.3MB/s 00:01 
kube-proxy 100% 35MB 34.8MB/s 00:00 
flanneld 100% 11MB 10.8MB/s 00:00 
flanneld.conf 100% 577 0.6KB/s 00:00 
flanneld 100% 2121 2.1KB/s 00:00 
flanneld.conf 100% 568 0.6KB/s 00:00 
flanneld 100% 2131 2.1KB/s 00:00 

[sudo] password to start master: // I entered my password manually 
etcd start/running, process 100639 
Error: 100: Key not found (/coreos.com) [1] 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 
docker stop/waiting 
docker start/running, process 101035 
Connection to 192.168.245.244 closed. 
Validating master 
Validating [email protected] 
Using master 192.168.245.244 
cluster "ubuntu" set. 
user "ubuntu" set. 
context "ubuntu" set. 
switched to context "ubuntu". 
Wrote config for ubuntu to /home/kant/.kube/config 
... calling validate-cluster 
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying. 
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying. 
... (the same line repeats until the retry limit is reached) ... 

Here is the error trace after I set the DEBUG flag to true in config-default.sh:

$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh 
... Starting cluster using provider: ubuntu 
... calling verify-prereqs 
... calling kube-up 
~/kubernetes/cluster/ubuntu ~/kubernetes/cluster 
Prepare flannel 0.5.5 release ... 
Prepare etcd 2.3.1 release ... 
Prepare kubernetes 1.2.4 release ... 
Done! All your binaries locate in kubernetes/cluster/ubuntu/binaries directory 
~/kubernetes/cluster 

Deploying master and node on machine 192.168.245.237 
make-ca-cert.sh                     100% 4028  3.9KB/s 00:00  
easy-rsa.tar.gz                     100% 42KB 42.4KB/s 00:00  
config-default.sh                    100% 5474  5.4KB/s 00:00  
util.sh                       100% 29KB 28.6KB/s 00:00  
kubelet.conf                     100% 644  0.6KB/s 00:00  
kube-proxy.conf                     100% 684  0.7KB/s 00:00  
kubelet                       100% 2158  2.1KB/s 00:00  
kube-proxy                      100% 2233  2.2KB/s 00:00  
kube-scheduler.conf                    100% 674  0.7KB/s 00:00  
etcd.conf                      100% 709  0.7KB/s 00:00  
kube-controller-manager.conf                 100% 744  0.7KB/s 00:00  
kube-apiserver.conf                    100% 674  0.7KB/s 00:00  
kube-apiserver                     100% 2358  2.3KB/s 00:00  
kube-scheduler                     100% 2360  2.3KB/s 00:00  
kube-controller-manager                   100% 2672  2.6KB/s 00:00  
etcd                       100% 2073  2.0KB/s 00:00  
reconfDocker.sh                     100% 2094  2.0KB/s 00:00  
kube-apiserver                     100% 58MB 58.2MB/s 00:01  
kube-scheduler                     100% 42MB 42.0MB/s 00:00  
kube-controller-manager                   100% 52MB 51.8MB/s 00:00  
etcdctl                       100% 14MB 13.7MB/s 00:00  
etcd                       100% 16MB 15.9MB/s 00:00  
flanneld                      100% 16MB 15.8MB/s 00:00  
kubelet                       100% 60MB 60.3MB/s 00:01  
kube-proxy                      100% 35MB 34.8MB/s 00:00  
flanneld                      100% 16MB 15.8MB/s 00:00  
flanneld.conf                     100% 577  0.6KB/s 00:00  
flanneld                      100% 2121  2.1KB/s 00:00  
flanneld.conf                     100% 568  0.6KB/s 00:00  
flanneld                      100% 2131  2.1KB/s 00:00  
+ source /home/kant/kube/util.sh 
++ set -e 
++ SSH_OPTS='-oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oLogLevel=ERROR' 
++ MASTER= 
++ MASTER_IP= 
++ NODE_IPS= 
+ setClusterInfo 
+ NODE_IPS= 
+ local ii=0 
+ create-etcd-opts 192.168.245.237 
+ cat 
+ create-kube-apiserver-opts 192.168.3.0/24 NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota 30000-32767 192.168.245.237 
+ cat 
+ create-kube-controller-manager-opts 192.168.245.237 
+ cat 
+ create-kube-scheduler-opts 
+ cat 
+ create-kubelet-opts 192.168.245.237 192.168.245.237 192.168.3.10 cluster.local '' '' 
+ '[' -n '' ']' 
+ cni_opts= 
+ cat 
+ create-kube-proxy-opts 192.168.245.237 192.168.245.237 '' 
+ cat 
+ create-flanneld-opts 127.0.0.1 192.168.245.237 
+ cat 
+ FLANNEL_OTHER_NET_CONFIG= 
+ sudo -E -p '[sudo] password to start master: ' -- /bin/bash -ce ' 
     set -x 
     cp ~/kube/default/* /etc/default/ 
     cp ~/kube/init_conf/* /etc/init/ 
     cp ~/kube/init_scripts/* /etc/init.d/ 

     groupadd -f -r kube-cert 
     DEBUG=true ~/kube/make-ca-cert.sh "192.168.245.237" "IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local" 
     mkdir -p /opt/bin/ 
     cp ~/kube/master/* /opt/bin/ 
     cp ~/kube/minion/* /opt/bin/ 

     service etcd start 
     if true; then FLANNEL_NET="172.16.0.0/16" KUBE_CONFIG_FILE="./../cluster/../cluster/ubuntu/config-default.sh" DOCKER_OPTS="" ~/kube/reconfDocker.sh ai; fi 
     ' 
[sudo] password to start master: 
+ cp /home/kant/kube/default/etcd /home/kant/kube/default/flanneld /home/kant/kube/default/kube-apiserver /home/kant/kube/default/kube-controller-manager /home/kant/kube/default/kubelet /home/kant/kube/default/kube-proxy /home/kant/kube/default/kube-scheduler /etc/default/ 
+ cp /home/kant/kube/init_conf/etcd.conf /home/kant/kube/init_conf/flanneld.conf /home/kant/kube/init_conf/kube-apiserver.conf /home/kant/kube/init_conf/kube-controller-manager.conf /home/kant/kube/init_conf/kubelet.conf /home/kant/kube/init_conf/kube-proxy.conf /home/kant/kube/init_conf/kube-scheduler.conf /etc/init/ 
+ cp /home/kant/kube/init_scripts/etcd /home/kant/kube/init_scripts/flanneld /home/kant/kube/init_scripts/kube-apiserver /home/kant/kube/init_scripts/kube-controller-manager /home/kant/kube/init_scripts/kubelet /home/kant/kube/init_scripts/kube-proxy /home/kant/kube/init_scripts/kube-scheduler /etc/init.d/ 
+ groupadd -f -r kube-cert 
+ DEBUG=true 
+ /home/kant/kube/make-ca-cert.sh 192.168.245.237 IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local 
+ cert_ip=192.168.245.237 
+ extra_sans=IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local 
+ cert_dir=/srv/kubernetes 
+ cert_group=kube-cert 
+ mkdir -p /srv/kubernetes 
+ use_cn=false 
+ '[' 192.168.245.237 == _use_gce_external_ip_ ']' 
+ '[' 192.168.245.237 == _use_aws_external_ip_ ']' 
+ sans=IP:192.168.245.237 
+ [[ -n IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local ]] 
+ sans=IP:192.168.245.237,IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local 
++ mktemp -d -t kubernetes_cacert.XXXXXX 
+ tmpdir=/tmp/kubernetes_cacert.YAN8Jg 
+ trap 'rm -rf "${tmpdir}"' EXIT 
+ cd /tmp/kubernetes_cacert.YAN8Jg 
+ '[' -f /home/kant/kube/easy-rsa.tar.gz ']' 
+ ln -s /home/kant/kube/easy-rsa.tar.gz . 
+ tar xzf easy-rsa.tar.gz 
+ cd easy-rsa-master/easyrsa3 
+ ./easyrsa init-pki 
++ date +%s 
+ ./easyrsa --batch [email protected] build-ca nopass 
+ '[' false = true ']' 
+ ./easyrsa --subject-alt-name=IP:192.168.245.237,IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local build-server-full kubernetes-master nopass 
+ cp -p pki/issued/kubernetes-master.crt /srv/kubernetes/server.cert 
+ cp -p pki/private/kubernetes-master.key /srv/kubernetes/server.key 
+ ./easyrsa build-client-full kubecfg nopass 
+ cp -p pki/ca.crt /srv/kubernetes/ca.crt 
+ cp -p pki/issued/kubecfg.crt /srv/kubernetes/kubecfg.crt 
+ cp -p pki/private/kubecfg.key /srv/kubernetes/kubecfg.key 
+ chgrp kube-cert /srv/kubernetes/server.key /srv/kubernetes/server.cert /srv/kubernetes/ca.crt 
+ chmod 660 /srv/kubernetes/server.key /srv/kubernetes/server.cert /srv/kubernetes/ca.crt 
+ rm -rf /tmp/kubernetes_cacert.YAN8Jg 
+ mkdir -p /opt/bin/ 
+ cp /home/kant/kube/master/etcd /home/kant/kube/master/etcdctl /home/kant/kube/master/flanneld /home/kant/kube/master/kube-apiserver /home/kant/kube/master/kube-controller-manager /home/kant/kube/master/kube-scheduler /opt/bin/ 
+ cp /home/kant/kube/minion/flanneld /home/kant/kube/minion/kubelet /home/kant/kube/minion/kube-proxy /opt/bin/ 
+ service etcd start 
etcd start/running, process 74611 
+ true 
+ FLANNEL_NET=172.16.0.0/16 
+ KUBE_CONFIG_FILE=./../cluster/../cluster/ubuntu/config-default.sh 
+ DOCKER_OPTS= 
+ /home/kant/kube/reconfDocker.sh ai 
Error: 100: Key not found (/coreos.com) [1] 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 
docker stop/waiting 
docker start/running, process 75022 
Connection to 192.168.245.237 closed. 
Validating master 
Validating [email protected] 
Using master 192.168.245.237 
cluster "ubuntu" set. 
user "ubuntu" set. 
context "ubuntu" set. 
switched to context "ubuntu". 
Wrote config for ubuntu to /home/kant/.kube/config 
... calling validate-cluster 
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying. 
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying. 
... (the same line repeats until the retry limit is reached) ... 

Answer

It looks like you have an incorrect configuration in config-default.sh. If you want to deploy a local cluster on a single node (acting as both master and worker), you should use the following values in config-default.sh:

export NUM_NODES=${NUM_NODES:-1} 

roles=${roles:-"ai"} 

Here NUM_NODES is the number of nodes, and roles defines what each node does ("ai" makes the single node act as both master and worker).
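A sketch of the relevant cluster/ubuntu/config-default.sh values for a single machine that is both master and node, with the IP, service range, and flannel network taken from the trace above (the ssh user is assumed from the paths in the log):

```shell
# cluster/ubuntu/config-default.sh -- single-node sketch (master + worker on one box)
export nodes=${nodes:-"kant@192.168.245.237"}      # user@ip of the one machine
export roles=${roles:-"ai"}                        # "a"=master, "i"=node, "ai"=both
export NUM_NODES=${NUM_NODES:-1}                   # must match the entry count in $nodes
export SERVICE_CLUSTER_IP_RANGE=${SERVICE_CLUSTER_IP_RANGE:-192.168.3.0/24}
export FLANNEL_NET=${FLANNEL_NET:-172.16.0.0/16}   # must not overlap the ranges above
```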


Hi, I had already set exactly those values. This time I also turned on the debug flag and added more information to my question. I have been trying to solve this for a week but cannot get it right. I suspect it has something to do with flannel, but I am not 100% sure, since I am also new to Kubernetes. – user1870400
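One way to confirm or rule out flannel on the node is to check whether flanneld actually wrote its subnet file, which is what reconfDocker.sh feeds to Docker. A sketch, with the file path assumed from flannel's documented default:

```shell
# Sketch: verify flanneld handed a subnet to Docker on this node.
SUBNET_ENV=${SUBNET_ENV:-/run/flannel/subnet.env}
if [ -f "$SUBNET_ENV" ]; then
  . "$SUBNET_ENV"    # defines FLANNEL_SUBNET and FLANNEL_MTU
  echo "flannel subnet: ${FLANNEL_SUBNET} (mtu ${FLANNEL_MTU})"
else
  echo "flanneld has not written ${SUBNET_ENV}; check 'service flanneld status'"
fi
```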


Oh, I ran into the same problem the first time I deployed a Kubernetes cluster, but it worked once I changed the value of NUM_NODES to 1. I don't know why you are still getting the error. – luanbuingoc
