Deploying a Production-Ready Kubernetes v1.13.5 Cluster with kubeadm (No VPN Required)

[TOC]

Environment Preparation

  • Host preparation (the hostnames used in this walkthrough do not resolve via DNS; see the /etc/hosts sketch at the end of this section)

| Role | IP | Spec |
|------|----|------|
| k8s master | 10.10.40.54 (application network) / 172.16.130.55 (cluster network) | 4 Core 8 GB |
| k8s work node | 10.10.40.95 (application network) / 172.16.130.82 (cluster network) | 4 Core 8 GB |

  • System environment

[root@10-10-40-54 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
[root@10-10-40-54 ~]# uname -a
Linux 10-10-40-54 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@10-10-40-54 ~]# free -h
              total used free shared buff/cache available
Mem: 7.6G 141M 7.4G 8.5M 153M 7.3G
Swap: 2.0G 0B 2.0G
[root@10-10-40-54 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 13
Model name: QEMU Virtual CPU version 2.5+
Stepping: 3
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
NUMA node0 CPU(s): 0-3
Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology pni cx16 x2apic hypervisor lahf_lm
[root@10-10-40-54 ~]#
[root@10-10-40-54 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:7a:e9:1b:ab:00 brd ff:ff:ff:ff:ff:ff
    inet 10.10.40.54/24 brd 10.10.40.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f87a:e9ff:fe1b:ab00/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:39:4d:ef:80:01 brd ff:ff:ff:ff:ff:ff
    inet 172.16.130.55/24 brd 172.16.130.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::f839:4dff:feef:8001/64 scope link
       valid_lft forever preferred_lft forever
[root@10-10-40-54 ~]#
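
Note that the hostnames in this walkthrough (10-10-40-54, 10-10-40-95) do not resolve via DNS, which is why kubeadm later prints "[WARNING Hostname]" during preflight. Optionally, you can silence this by mapping the hostnames in /etc/hosts on every node; the entries below are only a sketch based on the application-network IPs in the table above.

# Optional: static hostname entries for all nodes (adjust to your own hosts)
cat <<EOF >> /etc/hosts
10.10.40.54 10-10-40-54
10.10.40.95 10-10-40-95
EOF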

Pre-installation Checks

  • Make sure MAC addresses and product_uuid do not collide across nodes (a cross-node check is sketched below)

Kubernetes uses these values to tell nodes apart; duplicates can make the deployment fail (github.com/kubernetes/…)

Check the MAC address

ip a

Check the product_uuid

cat /sys/class/dmi/id/product_uuid
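
To compare the values across nodes from a single machine, here is a minimal sketch over SSH (it assumes root SSH access to both hosts listed in the table above):

# Print every node's NIC MAC addresses and product_uuid side by side
for node in 10.10.40.54 10.10.40.95; do
  echo "== ${node} =="
  ssh root@${node} "ip link | grep link/ether; cat /sys/class/dmi/id/product_uuid"
done
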
  • Make sure swap is disabled on every node (a one-line fstab edit is sketched at the end of this item)

The kubelet will not start if swap is enabled.

# Turn swap off temporarily
swapoff -a

# Edit /etc/fstab and comment out the swap line to make this permanent
[root@10-10-40-54 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed Jun 13 11:42:55 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=2fb6a9ac-835a-49a5-9d0a-7d6c7a1ba349 /boot xfs defaults 0 0
# /dev/mapper/rhel-swap swap swap defaults 0 0
[root@10-10-40-54 ~]#

# After disabling swap, verify with swapon -s; no output means swap is off
[root@10-10-40-54 ~]# swapon -s
[root@10-10-40-54 ~]#
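
Instead of editing /etc/fstab by hand, the swap entry can also be commented out with a single sed command. This is a sketch that assumes the swap line contains a whitespace-delimited swap field, as in the fstab shown above:

# Comment out any active swap entry in /etc/fstab (original saved as /etc/fstab.bak)
sed -r -i.bak '/\sswap\s/s/^([^#])/#\1/' /etc/fstab
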
  • Disable SELinux

This lets containers access the host filesystem.

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Check the SELinux state and make sure it is Permissive (violations are only logged, not enforced)
[root@10-10-40-54 ~]# getenforce
Permissive
[root@10-10-40-54 ~]#
  • Enable bridge-nf-call-iptables

This makes packets crossing the Linux bridge go through iptables rules first. Reference: news.ycombinator.com/item?id=164…

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
  • Load the br_netfilter kernel module (a persistence sketch follows the check below)
modprobe br_netfilter

# Check whether the module is loaded; seeing it in the output means it is
[root@10-10-40-54 ~]# lsmod | grep br_netfilter
br_netfilter 22209 0
bridge 136173 1 br_netfilter
[root@10-10-40-54 ~]#
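
modprobe only loads the module for the current boot. To make it survive a reboot and to confirm the sysctls above took effect, a small sketch (the modules-load.d mechanism is standard on systemd-based RHEL 7):

# Load br_netfilter automatically at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# Verify both bridge sysctls now report 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables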

Installation and Deployment

Install the container runtime (Docker)

Run the following on every node; a quick verification is sketched after the block.

# Install Docker CE
## Set up the repository
### Install required packages.
yum install -y yum-utils device-mapper-persistent-data lvm2 git

### Add Docker repository.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.
yum install -y docker-ce-18.06.2.ce

## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl enable docker.service
systemctl restart docker
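
A quick sanity check (sketch) that the pinned Docker version is running and that the cgroup driver matches the cgroupfs value set in daemon.json above:

# Confirm the Docker server version and the active cgroup driver
docker version --format '{{.Server.Version}}'
docker info 2>/dev/null | grep -i 'cgroup driver'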

Install kubeadm / kubelet / kubectl

  • Add the Aliyun Kubernetes YUM repository

The official docs recommend the Google YUM repository, which is not reachable from mainland China without a VPN.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
#baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

yum -y install epel-release
yum clean all
yum makecache
  • Install kubeadm / kubelet / kubectl

Available versions can be listed with yum search kubeadm --show-duplicates; here we install v1.13.5 directly.

yum install -y kubelet-1.13.5-0.x86_64 kubectl-1.13.5-0.x86_64 kubeadm-1.13.5-0.x86_64

  • Verify the kubeadm installation
[root@10-10-40-54 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:24:33Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
[root@10-10-40-54 ~]#

  • Enable the kubelet

At this point the cluster has not been initialized yet, so the kubelet will fail to start and keep restarting; this is expected and can be ignored (a quick way to confirm is sketched below).

systemctl enable --now kubelet
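
If you want to confirm that the restarts really are the expected "not yet initialized" failures and nothing else, the unit status and recent logs can be checked like this (sketch):

# kubelet stays in an activating/auto-restart loop until kubeadm init writes its config
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20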

Pull the Kubernetes component Docker images

Again, no VPN is needed: the images come from an Aliyun mirror and are then retagged to the k8s.gcr.io names kubeadm expects.

REGISTRY=registry.cn-hangzhou.aliyuncs.com/google_containers
VERSION=v1.13.5

## Pull the images
docker pull ${REGISTRY}/kube-apiserver-amd64:${VERSION}
docker pull ${REGISTRY}/kube-controller-manager-amd64:${VERSION}
docker pull ${REGISTRY}/kube-scheduler-amd64:${VERSION}
docker pull ${REGISTRY}/kube-proxy-amd64:${VERSION}
docker pull ${REGISTRY}/etcd-amd64:3.2.18
docker pull ${REGISTRY}/pause-amd64:3.1
docker pull ${REGISTRY}/coredns:1.1.3
docker pull ${REGISTRY}/pause:3.1

## Retag the images to the k8s.gcr.io names
docker tag ${REGISTRY}/kube-apiserver-amd64:${VERSION} k8s.gcr.io/kube-apiserver-amd64:${VERSION}
docker tag ${REGISTRY}/kube-scheduler-amd64:${VERSION} k8s.gcr.io/kube-scheduler-amd64:${VERSION}
docker tag ${REGISTRY}/kube-controller-manager-amd64:${VERSION} k8s.gcr.io/kube-controller-manager-amd64:${VERSION}
docker tag ${REGISTRY}/kube-proxy-amd64:${VERSION} k8s.gcr.io/kube-proxy-amd64:${VERSION}
docker tag ${REGISTRY}/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag ${REGISTRY}/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag ${REGISTRY}/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker tag ${REGISTRY}/pause:3.1 k8s.gcr.io/pause:3.1

How do you know which images are needed? (A loop-driven variant of the pull-and-retag step is sketched below.)

 kubeadm config images list
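
The pull-and-retag steps above can also be driven directly by that list, so nothing is missed when versions change. The loop below is a sketch; it assumes the Aliyun mirror publishes every image under the same name and tag that kubeadm prints for k8s.gcr.io.

REGISTRY=registry.cn-hangzhou.aliyuncs.com/google_containers

# For each image kubeadm expects (k8s.gcr.io/<name>:<tag>), pull the mirrored
# copy and retag it back to the k8s.gcr.io name.
for image in $(kubeadm config images list --kubernetes-version v1.13.5); do
  name=${image#k8s.gcr.io/}
  docker pull ${REGISTRY}/${name}
  docker tag ${REGISTRY}/${name} ${image}
done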

Kubernetes master initialization configuration

  • Adjust the cluster initialization configuration and save the following as kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    advertise-address: 172.16.130.55  # IP the API server advertises; with multiple NICs the default is the IP on the NIC behind the default gateway
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""  # For an HA deployment, set this to the load balancer endpoint
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:  # extraArgs belongs under local in the v1beta1 schema
      listen-client-urls: https://172.16.130.55:2379
      advertise-client-urls: https://172.16.130.55:2379
      listen-peer-urls: https://172.16.130.55:2380
      initial-advertise-peer-urls: https://172.16.130.55:2380
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # Be sure to point this at a reachable mirror, otherwise images are pulled from k8s.gcr.io by default
kind: ClusterConfiguration
kubernetesVersion: v1.13.5  # The Kubernetes version to deploy
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"  # Pod CIDR; must match the Flannel network deployed later
  serviceSubnet: 10.96.0.0/12
scheduler: {}

You can also dump a default configuration first and then edit it yourself

kubeadm config print init-defaults > kubeadm-config.yaml
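
Since kubeadm-config.yaml already points imageRepository at the Aliyun mirror, you can alternatively let kubeadm pre-pull everything from that mirror itself, without any manual retagging (sketch):

# Pre-pull all control-plane images from the repository configured in kubeadm-config.yaml
kubeadm config images pull --config kubeadm-config.yaml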

Kubernetes master initialization (kubeadm init)

This is the final, decisive step.

kubeadm init --config kubeadm-config.yaml

Sample output

[root@10-10-40-54 ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.13.5
[preflight] Running pre-flight checks
 [WARNING Hostname]: hostname "10-10-40-54" could not be reached
 [WARNING Hostname]: hostname "10-10-40-54": lookup 10-10-40-54: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [10-10-40-54 localhost] and IPs [10.10.40.54 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [10-10-40-54 localhost] and IPs [10.10.40.54 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [10-10-40-54 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.40.54]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.002435 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "10-10-40-54" as an annotation
[mark-control-plane] Marking the node 10-10-40-54 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node 10-10-40-54 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xkdkxz.7om906dh5efmkujl
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.10.40.54:6443 --token xkdkxz.7om906dh5efmkujl --discovery-token-ca-cert-hash sha256:52335ece8b859d761e569e0d84a1801b503c018c6e1bd08a5bb7f39cd49ca056

[root@10-10-40-54 ~]#

If init succeeds, the output ends with the kubeadm join command used to add worker nodes.

  • Set up the kubectl configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Check the deployment status
[root@10-10-40-54 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-89cc84847-cmrkw 0/1 Pending 0 108s <none> <none> <none> <none>
kube-system coredns-89cc84847-k2nqs 0/1 Pending 0 108s <none> <none> <none> <none>
kube-system etcd-10-10-40-54 1/1 Running 0 45s 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-apiserver-10-10-40-54 1/1 Running 0 54s 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-controller-manager-10-10-40-54 1/1 Running 0 51s 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-proxy-jbqkc 1/1 Running 0 108s 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-scheduler-10-10-40-54 1/1 Running 0 45s 10.10.40.54 10-10-40-54 <none> <none>
[root@10-10-40-54 ~]#

At this point all Pods except coredns should be Running; coredns stays Pending because the network plugin has not been deployed yet.

Add a work node (kubeadm join)

Run kubeadm join on the work node

[root@10-10-40-95 ~]# kubeadm join 10.10.40.54:6443 --token hog5db.zh5p9z4xi5kvf1g7 --discovery-token-ca-cert-hash sha256:c9c8d056467c345651d1cb6d23fac08beb4ed72ea37e923cd826af12314b9ff0
[preflight] Running pre-flight checks
 [WARNING Hostname]: hostname "10-10-40-95" could not be reached
 [WARNING Hostname]: hostname "10-10-40-95": lookup 10-10-40-95: no such host
[discovery] Trying to connect to API Server "10.10.40.54:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.10.40.54:6443"
[discovery] Requesting info from "https://10.10.40.54:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.10.40.54:6443"
[discovery] Successfully established connection with API Server "10.10.40.54:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "10-10-40-95" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@10-10-40-95 ~]#
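
The bootstrap token printed by kubeadm init expires after 24 hours by default. If the original join command no longer works, a fresh one can be generated on the master (sketch):

# Print a new kubeadm join command with a freshly created token
kubeadm token create --print-join-command

# Inspect existing tokens and their TTLs
kubeadm token list
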
  • View the joined nodes
[root@10-10-40-54 ~]# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
10-10-40-54 NotReady master 7h51m v1.13.5 10.10.40.54 <none> Red Hat Enterprise Linux Server 7.4 (Maipo) 3.10.0-693.el7.x86_64 docker://18.6.2
10-10-40-95 NotReady <none> 7h48m v1.13.5 10.10.40.95 <none> Red Hat Enterprise Linux Server 7.4 (Maipo) 3.10.0-693.el7.x86_64 docker://18.6.2
[root@10-10-40-54 ~]#
  • Deploy the network plugin (Flannel). Flannel configuration:

You can also wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml and modify it yourself. If the nodes have multiple NICs, the key parameter is the interface Flannel binds to (--iface below).

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1 # Network interface Flannel uses for inter-node traffic; adjust to your cluster NIC
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Deploy the Flannel network plugin

[root@10-10-40-54 ~]# kubectl create -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
[root@10-10-40-54 ~]#
  • Check that the Flannel Pods are running; at this point the installation is complete (a node readiness check is sketched after the output)
[root@10-10-40-54 ~]# kubectl get pod -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-89cc84847-cmrkw 1/1 Running 0 8h 10.244.1.2 10-10-40-95 <none> <none>
kube-system coredns-89cc84847-k2nqs 1/1 Running 0 8h 10.244.1.4 10-10-40-95 <none> <none>
kube-system etcd-10-10-40-54 1/1 Running 0 8h 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-apiserver-10-10-40-54 1/1 Running 0 8h 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-controller-manager-10-10-40-54 1/1 Running 0 8h 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-flannel-ds-amd64-69fjw 1/1 Running 0 2m37s 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-flannel-ds-amd64-8789j 1/1 Running 0 2m37s 10.10.40.95 10-10-40-95 <none> <none>
kube-system kube-proxy-jbqkc 1/1 Running 0 8h 10.10.40.54 10-10-40-54 <none> <none>
kube-system kube-proxy-rv7hs 1/1 Running 0 8h 10.10.40.95 10-10-40-95 <none> <none>
kube-system kube-scheduler-10-10-40-54 1/1 Running 0 8h 10.10.40.54 10-10-40-54 <none> <none>
[root@10-10-40-54 ~]#
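
With Flannel running, the nodes that were NotReady earlier should switch to Ready shortly afterwards; a quick check (sketch):

# Both nodes should now report Ready instead of NotReady
kubectl get node -o wide
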
  • Create a busybox Pod to test that the cluster works. Save the following spec as busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

Create the busybox Pod

[root@10-10-40-54 ~]# kubectl create -f busybox.yaml
pod/busybox created
[root@10-10-40-54 ~]# kubectl get pod -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 0/1 ContainerCreating 0 17s <none> 10-10-40-95 <none> <none>
busybox 1/1 Running 0 17s 10.244.1.5 10-10-40-95 <none> <none>

The Pod runs normally, so the Kubernetes cluster setup is complete.
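
As a final spot check, cluster DNS (CoreDNS) can be exercised from inside the busybox Pod. This is a sketch; note that some newer busybox images ship an nslookup with unreliable output, so pinning a tag such as busybox:1.28 is sometimes preferred.

# Resolve the kubernetes service through cluster DNS from the busybox Pod
kubectl exec busybox -- nslookup kubernetes.default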

FAQ

kubeadm init failed; how do I re-run it?

Run kubeadm reset to wipe the previous attempt, then run kubeadm init again.

[root@10-10-40-54 ~]# kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

[root@10-10-40-54 ~]#

Finally

To be continued. Comments and corrections are welcome.