Unless otherwise noted, every step in this installation is performed as root on the cluster nodes.
Install containerd on each node.
Choose one master node in the cluster and run the kubeadm initialization there.
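The exact initialization command is not recorded here. A plausible sketch, inferring the Pod CIDR from the Flannel config below and the Service CIDR and cluster domain from the DNS test output later in this document (all values must be adjusted to your own network plan), would be:

```shell
# Sketch only: addresses, CIDRs and the cluster domain are inferred from
# outputs later in this document and must match your own network plan.
kubeadm init \
  --apiserver-advertise-address=192.168.32.71 \
  --kubernetes-version=v1.28.2 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.233.0.0/18 \
  --service-dns-domain=mars-k8s.local

# Then configure kubectl for root, as kubeadm prints on success:
mkdir -p "$HOME/.kube"
cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
```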
kubectl get cs
Output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy ok
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
k8s-node1 NotReady control-plane 58s v1.28.2
k8s-node1 is NotReady because the network (CNI) plugin has not been installed yet.
kubectl cluster-info
Output:
Kubernetes control plane is running at https://192.168.32.71:6443
CoreDNS is running at https://192.168.32.71:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6554b8b87f-tdtf8 0/1 Pending 0 108s
coredns-6554b8b87f-vpzg9 0/1 Pending 0 108s
etcd-k8s-node1 1/1 Running 0 2m2s
kube-apiserver-k8s-node1 1/1 Running 0 2m2s
kube-controller-manager-k8s-node1 1/1 Running 0 2m2s
kube-proxy-vfwbx 1/1 Running 0 108s
kube-scheduler-k8s-node1 1/1 Running 0 2m2s
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 1545/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 1545/etcd
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 1752/kube-proxy
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1665/kubelet
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 1506/kube-scheduler
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 1565/kube-controlle
tcp 0 0 192.168.32.71:2380 0.0.0.0:* LISTEN 1545/etcd
tcp 0 0 192.168.32.71:2379 0.0.0.0:* LISTEN 1545/etcd
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 531/systemd-resolve
tcp 0 0 127.0.0.1:34677 0.0.0.0:* LISTEN 710/containerd
tcp 0 0 127.0.0.54:53 0.0.0.0:* LISTEN 531/systemd-resolve
tcp6 0 0 :::10256 :::* LISTEN 1752/kube-proxy
tcp6 0 0 :::22 :::* LISTEN 1/init
tcp6 0 0 :::10250 :::* LISTEN 1665/kubelet
tcp6 0 0 :::6443 :::* LISTEN 1544/kube-apiserver
To troubleshoot any component that fails to come up, follow the system logs in real time:
journalctl -f
A Container Network Interface (CNI) plugin providing a Pod network must be deployed so that Pods can communicate with each other; run this on the master node. Calico apparently does not get along with the Apple M1, so Flannel was chosen as the network component.
According to the Flannel GitHub repository, flannel v0.25.1 is suitable for Kubernetes 1.17+.
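If the manifest is not already on hand, it can be downloaded from the Flannel release page (the URL below follows the project's usual release-asset layout; verify it against the repository before use):

```shell
# Assumed release-asset URL; confirm it on the flannel-io/flannel releases page.
wget https://github.com/flannel-io/flannel/releases/download/v0.25.1/kube-flannel.yml -P config/
```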
scp config/kube-flannel.yml root@k8s-node1:/root/script
Note: the Pod network CIDR 10.244.0.0/16 configured in kube-flannel.yml as shown below must match the Pod network planned at cluster initialization (the --pod-network-cidr passed to kubeadm init), not the Service network.
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kubectl apply -f /root/script/kube-flannel.yml
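The extra nodes that appear in the listing below were joined beforehand with kubeadm join; the token and CA hash are placeholders printed by kubeadm init, and a second control-plane node such as k8s-node2 additionally needs the certificate key produced by kubeadm init --upload-certs:

```shell
# On each worker node (placeholders come from the kubeadm init output):
kubeadm join 192.168.32.71:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# On an additional control-plane node, also pass:
#   --control-plane --certificate-key <key>
```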
Verify the network:
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready control-plane 14m v1.28.2
k8s-node2 Ready control-plane 8m32s v1.28.2
k8s-node3 Ready <none> 3m7s v1.28.2
k8s-node4 Ready <none> 2m57s v1.28.2
After installing the Flannel network plugin, the node status has changed to Ready.
kubectl get po -n kube-system
Output:
NAME READY STATUS RESTARTS AGE
coredns-6554b8b87f-5j7rx 1/1 Running 0 3m45s
coredns-6554b8b87f-fkhgm 1/1 Running 0 3m45s
etcd-k8s-node1 1/1 Running 2 4m
etcd-k8s-node2 1/1 Running 0 110s
kube-apiserver-k8s-node1 1/1 Running 0 3m59s
kube-apiserver-k8s-node2 1/1 Running 1 110s
kube-controller-manager-k8s-node1 1/1 Running 1 (100s ago) 3m59s
kube-controller-manager-k8s-node2 1/1 Running 0 96s
kube-proxy-8gb7h 1/1 Running 0 3m45s
kube-proxy-d9n5m 1/1 Running 0 35s
kube-proxy-rb5rz 1/1 Running 0 40s
kube-proxy-svm8p 1/1 Running 0 111s
kube-scheduler-k8s-node1 1/1 Running 1 (95s ago) 4m
kube-scheduler-k8s-node2 1/1 Running 1 110s
kubectl get pod -n kube-flannel
Output:
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-2d67c 1/1 Running 0 42s
kube-flannel-ds-5l7fw 1/1 Running 0 42s
kube-flannel-ds-dmnss 1/1 Running 0 42s
kube-flannel-ds-h9tsl 1/1 Running 0 42s
ifconfig | grep flannel
Output:
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
ip addr show flannel.1
Output:
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 9a:39:96:87:be:95 brd ff:ff:ff:ff:ff:ff
inet 10.244.0.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::9839:96ff:fe87:be95/64 scope link
valid_lft forever preferred_lft forever
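As an extra sanity check, flannel writes the subnet it leased for the node to /run/flannel/subnet.env; the network recorded there should match the Network value from kube-flannel.yml:

```shell
cat /run/flannel/subnet.env
# Expect FLANNEL_NETWORK=10.244.0.0/16 plus a per-node FLANNEL_SUBNET
```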
Create a temporary Pod to test DNS and connectivity:
kubectl run -it --rm dns-test --image=busybox:1.28.4-glibc -- sh
# check the resolver configuration
/ # cat /etc/resolv.conf
# name resolution
/ # nslookup baidu.com
/ # nslookup kubernetes.default
# test external connectivity
/ # ping www.baidu.com
# test in-cluster connectivity
/ # ping 192.168.32.74
Output:
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
search default.svc.mars-k8s.local svc.mars-k8s.local mars-k8s.local
nameserver 10.233.0.10
options ndots:5
/ # nslookup baidu.com
Server: 10.233.0.10
Address 1: 10.233.0.10 kube-dns.kube-system.svc.mars-k8s.local
Name: baidu.com
Address 1: 110.242.68.66
Address 2: 39.156.66.10
/ # nslookup kubernetes.default
Server: 10.233.0.10
Address 1: 10.233.0.10 kube-dns.kube-system.svc.mars-k8s.local
Name: kubernetes.default
Address 1: 10.233.0.1 kubernetes.default.svc.mars-k8s.local
/ #
Helm is the package manager for Kubernetes, and later steps will use Helm to install common Kubernetes components. First, install Helm on the master node k8s-node1.
wget https://get.helm.sh/helm-v3.13.3-linux-arm64.tar.gz -P ~/software
tar -zxvf ~/software/helm-v3.13.3-linux-arm64.tar.gz -C ~/software
install -m 755 ~/software/linux-arm64/helm /usr/local/bin/helm
helm version
helm help
helm list
helm list -A
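A quick way to confirm Helm works end to end is to add a chart repository and search it (bitnami is only an example here; use whatever repositories your later components actually need):

```shell
# Example repository; substitute the repos required by your components.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami
```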
Congratulations, at this point the cluster installation is essentially complete. Next you can run a smoke test against the cluster; see the cluster testing document.