Contents

1. Change the yum mirror source

2. Stop and disable the firewall

3. Turn off swap

4. Create the /etc/sysctl.d/k8s.conf file

5. Install Docker (skipped here, already installed)

6. Edit the Docker configuration file

7. Install kubelet, kubeadm, and kubectl

8. google_containers setup (alternatively, wait for kubeadm init to fail and docker pull the version it reports)

9. Initialize the master node (worker nodes can skip ahead to step 11)

10. After initialization, run the config commands shown in the log

11. Install the node (cluster mode only; a single-machine setup does not need the remaining steps)

12. Join the node to the master

13. Verify the cluster and components from the master node


1. Change the yum mirror source

# Install wget
$ yum -y install wget
# Back up the stock repo file
$ mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Download the Aliyun repo file
$ wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Rebuild the yum cache
$ yum makecache

2. Stop and disable the firewall

$ systemctl stop firewalld
$ systemctl disable firewalld

# setenforce configures SELinux; setenforce 0 switches it to permissive mode (enforcement off)
$ setenforce 0
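
Note that setenforce 0 only lasts until the next reboot. To keep SELinux permissive across reboots, an extra step (not part of the original walkthrough) is to change its config file; a sed one-liner like the following should work on a stock CentOS 7:

$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config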

3. Turn off swap

$ swapoff -a

# Comment out the swap entry so it is not mounted automatically
$ vim /etc/fstab
>>
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

# Use free -m to confirm swap is off: the Swap row should show 0
$ free -m
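
If you prefer not to edit /etc/fstab by hand in vim, a sed one-liner can comment out the swap entry instead. This is a sketch that assumes a typical CentOS fstab where only the swap mount line contains the word "swap"; check the file afterwards:

$ sed -ri 's/.*swap.*/#&/' /etc/fstab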

4. Create the /etc/sysctl.d/k8s.conf file

$ cd /etc/sysctl.d
$ vim k8s.conf
>>
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0

# Load the bridge netfilter module and apply the settings
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
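
To confirm the module is loaded and the settings took effect, you can read them back. Note that modprobe does not survive a reboot; persisting the module via modules-load.d (an addition beyond the original steps) is worth considering:

$ lsmod | grep br_netfilter
$ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# Optional: keep the module loaded across reboots
$ echo br_netfilter > /etc/modules-load.d/br_netfilter.conf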

5. Install Docker (skipped here, already installed)

6. Edit the Docker configuration file

$ vim /etc/docker/daemon.json
>>
{ 
  "registry-mirrors": ["https://obww7jh1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# Restart docker
$ systemctl daemon-reload
$ systemctl restart docker

The exec-opts entry sets Docker's cgroup driver to systemd, which is what kubelet expects; a mismatch between the two cgroup drivers is a common cause of kubelet failures.
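
You can check that Docker actually picked up the new driver; the output should include a line like Cgroup Driver: systemd:

$ docker info | grep -i cgroup
>>
Cgroup Driver: systemd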

7. Install kubelet, kubeadm, and kubectl

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

$ yum install -y kubelet kubeadm kubectl
$ systemctl enable --now kubelet
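
To verify what was installed (this walkthrough appears to target v1.22.x; if you need a specific version, yum accepts pinned packages such as kubelet-1.22.0):

$ kubeadm version -o short
$ kubelet --version
$ kubectl version --client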

8. google_containers setup (alternatively, wait for kubeadm init to fail and docker pull the version it reports)

Run the following before kubeadm init. The CoreDNS image on Docker Hub was renamed to coredns/coredns, so the tag kubeadm expects may be missing from the mirror; pull it and retag it manually:

$ docker pull coredns/coredns:1.8.4
$ docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4
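
To see exactly which images and tags your kubeadm release expects, and to confirm the retag worked, the following checks may help:

$ kubeadm config images list --image-repository=registry.aliyuncs.com/google_containers
$ docker images | grep coredns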

9. Initialize the master node (worker nodes can skip ahead to step 11)

# If init fails complaining about swap, go back to step 3
$ kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
# If anything goes wrong during initialization, reset with the following and retry
$ kubeadm reset
$ rm -rf /var/lib/cni/
$ rm -f $HOME/.kube/config
$ systemctl daemon-reload && systemctl restart kubelet
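
Optionally, you can pre-pull all required images before (re)running init, so a slow or failing pull does not interrupt initialization:

$ kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers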

10. After initialization, run the config commands shown in the log

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.3.69.72:6443 --token va5ioe.yovvkc002zpz2txv \
--discovery-token-ca-cert-hash sha256:90fc7d96264c7fa6e6e816de44ebf4ae36034d0cef42453d1a01e199502eaac5

Follow those instructions:

$ HOME=/root
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

You also need to deploy a pod network to the cluster; flannel is chosen here.

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
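
After applying flannel, watch the kube-system pods until they are all Running; the master should then report Ready:

$ kubectl get pods -n kube-system
$ kubectl get nodes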

11. Install the node (cluster mode only; a single-machine setup does not need the remaining steps)

Repeat steps 1 through 8 to install Docker, kubelet, kubeadm, kubectl, and so on; do not run kubeadm init on the node.

Strictly speaking, the cluster only needs to be managed from the master, and a node cannot drive the cluster with kubectl out of the box.

If you do want kubectl to work on a node, simply give it the same $HOME/.kube/config that the master uses:

# On the master, copy admin.conf to the node. scp is used here; uploading the file with any client works just as well
$ scp /etc/kubernetes/admin.conf root@10.11.90.5:/root/

# On the node
$ HOME=/root
$ mkdir -p $HOME/.kube
$ sudo cp -i $HOME/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
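
Alternatively, as the init log itself suggests, you can point kubectl at the copied file without installing it as the default config:

$ export KUBECONFIG=$HOME/admin.conf
$ kubectl get nodes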

12. Join the node to the master

# On the master, list the kubeadm tokens
$ kubeadm token list

>>ts699z.y6lmkkn7td2e2lt3

# If kubeadm token list prints nothing, the token has expired
# By default a kubeadm token is valid for 24 hours; create a new one once it lapses
$ kubeadm token create
# You can also create a token that never expires, at an obvious security cost
$ kubeadm token create --ttl 0



# On the master, compute the discovery-token-ca-cert-hash
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

>>90fc7d96264c7fa6e6e816de44ebf4ae36034d0cef42453d1a01e199502eaac5

# On the node, run the join command (substitute your own master address, token, and hash)
$ kubeadm join 10.11.90.5:6443 --token ts699z.y6lmkkn7td2e2lt3 --discovery-token-ca-cert-hash sha256:42e4caa1339f6929e7b844a24668d6874383e6688f057905fca375aabb9946e3
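
Instead of assembling the token and hash by hand, kubeadm can print the complete join command for you (run this on the master):

$ kubeadm token create --print-join-command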

13. Verify the cluster and components from the master node

$ kubectl get nodes
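
A few more commands are handy for checking cluster health (kubectl get cs is deprecated in recent releases but still prints component status):

$ kubectl cluster-info
$ kubectl get pods -n kube-system -o wide
$ kubectl get cs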

Deleting and re-adding a node

# List the nodes and pick the one to delete (run on the master)
$ kubectl get node
NAME        STATUS     ROLES                  AGE   VERSION
ebgmaster   Ready      control-plane,master   37d   v1.22.0
ebgnode1    Ready      <none>                 37d   v1.22.0
ebgnode2    NotReady   <none>                 37d   v1.22.0
# Delete the node
$ kubectl delete nodes ebgnode2

Log in to the deleted node's host and wipe its local cluster state:

$ kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0924 17:09:23.935058   92357 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
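
Following the hints in the reset output, the leftover state can be cleaned up manually; which of these apply depends on your setup (the ipvsadm line is only relevant if IPVS was used):

$ rm -rf /etc/cni/net.d
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
$ ipvsadm --clear
$ rm -f $HOME/.kube/config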