Install a Kubernetes Cluster with kubeadm
Install v1.13.0-beta.2
Install the required components
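A sketch of the host preparation, assuming CentOS 7 hosts with direct access to the upstream package repo; adapt package names and the repo to your distro:

# Upstream Kubernetes yum repo (assumption: CentOS 7, x86_64)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

# kubeadm preflight checks require swap off and bridged traffic visible to iptables
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system

# Docker as the container runtime, plus the kubeadm toolchain;
# a beta kubeadm may need to be fetched as a release binary instead of from the repo
yum install -y docker kubelet kubeadm kubectl
systemctl enable --now docker kubelet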
The init configuration (kubeadm.yaml)
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master212
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "${VIP}"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    serverCertSANs:
    - "${VIP}"
    extraArgs:
      cipher-suites: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kubernetesVersion: v1.13.0-beta.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
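${VIP} in the two documents above is a placeholder for the load-balanced control-plane address; kubeadm does not expand shell variables, so the file has to be rendered before use. One way, using envsubst from gettext (the .tpl filename is just an illustration):

export VIP=192.168.1.210                         # example address only, substitute your own
envsubst '${VIP}' < kubeadm.yaml.tpl > kubeadm.yaml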
Pull the images
kubeadm config images pull --config kubeadm.yaml
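If the images need to be mirrored into a private registry first, the exact list can be printed without pulling anything:

kubeadm config images list --config kubeadm.yaml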
Run kubeadm init
[root@localhost ~]# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.13.0-beta.2
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master212 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 ${VIP}]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master212 localhost] and IPs [xxx 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master212 localhost] and IPs [xxx 127.0.0.1 ::1 ${VIP}]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.002802 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master212" as an annotation
[mark-control-plane] Marking the node master212 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master212 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1jvhzl.37osma939vn5q1uh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join ${VIP}:6443 --token 1jvhzl.37osma939vn5q1uh --discovery-token-ca-cert-hash sha256:d39872f0d591e1b01ff3590d2a42030bc47b25baaf2748d4862636e8e0accbe9
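The bootstrap token above carries the 24h TTL set in the InitConfiguration. If it has expired by the time another node joins, a fresh join command can be printed on the first master:

kubeadm token create --print-join-command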
After kubeadm init finishes, the pod status looks like this (the CoreDNS pods stay in ContainerCreating until a pod network add-on is installed):
[root@localhost ~]# kubectl get pod -n kube-system
NAME                                READY   STATUS              RESTARTS   AGE
coredns-576cbf47c7-d6nnc            0/1     ContainerCreating   0          101s
coredns-576cbf47c7-g9zgf            0/1     ContainerCreating   0          101s
etcd-master212                      1/1     Running             0          53s
kube-apiserver-master212            1/1     Running             0          47s
kube-controller-manager-master212   1/1     Running             0          60s
kube-proxy-wqdb2                    1/1     Running             0          101s
kube-scheduler-master212            1/1     Running             0          46s
Install flannel
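The podSubnet of 10.244.0.0/16 chosen in the ClusterConfiguration matches flannel's default Network, so the upstream manifest works unmodified (URL as published in the coreos/flannel repo at the time of writing; verify before use):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -O flannel.yml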
kubectl apply -f flannel.yml
[root@localhost ~]# kubectl get pod -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-d6nnc            1/1     Running   0          4m11s
coredns-576cbf47c7-g9zgf            1/1     Running   0          4m11s
etcd-master212                      1/1     Running   0          3m23s
kube-apiserver-master212            1/1     Running   0          3m17s
kube-controller-manager-master212   1/1     Running   0          3m30s
kube-flannel-ds-amd64-7lldk         1/1     Running   0          48s
kube-proxy-wqdb2                    1/1     Running   0          4m11s
kube-scheduler-master212            1/1     Running   0          3m16s
Add an additional master
On the new node (master213), write a JoinConfiguration. Note that unsafeSkipCAVerification: true disables CA pinning; passing the discovery token CA cert hash, as in the join command printed above, is the safer option.
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: ${VIP}:6443
    token: 1jvhzl.37osma939vn5q1uh
    unsafeSkipCAVerification: true
  timeout: 5m0s
  tlsBootstrapToken: 1jvhzl.37osma939vn5q1uh
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 0.0.0.0
    bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master213
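kubeadm v1.13 does not distribute control-plane certificates for you, so before joining, the shared PKI material has to be copied from the first master to the new node, per the v1.13 HA documentation. A sketch, run on master212 and assuming root SSH to the new node:

HOST=master213    # assumption: resolvable hostname of the joining master
ssh root@${HOST} "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/ca.{crt,key}             root@${HOST}:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.{key,pub}             root@${HOST}:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.{crt,key} root@${HOST}:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.{crt,key}        root@${HOST}:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf                   root@${HOST}:/etc/kubernetes/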
kubeadm join --experimental-control-plane --config kubeadm.yaml
[root@master213 ~]# kubeadm join --experimental-control-plane --config kubeadm.yaml
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.06
[discovery] Trying to connect to API Server "${VIP}:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://${VIP}:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "${VIP}:6443"
[discovery] Successfully established connection with API Server "${VIP}:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[join] Running pre-flight checks before initializing the new control plane instance
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.06
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master213 localhost] and IPs [xxx 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master213 localhost] and IPs [xxx 127.0.0.1 ::1 ${VIP}]
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master213 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 xxx ${VIP}]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Checking Etcd cluster health
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master213" as an annotation
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master213 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master213 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Master label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
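To confirm that the stacked etcd cluster now has two members, etcdctl can be run inside the etcd pod on the first master (a sketch; the pod name and cert paths are the kubeadm defaults shown earlier, and the flags assume the etcd v3 API shipped with this release):

kubectl -n kube-system exec etcd-master212 -- sh -c \
  "ETCDCTL_API=3 etcdctl member list \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/peer.crt \
    --key=/etc/kubernetes/pki/etcd/peer.key"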