Kubernetes 1.28 Installation
| Software | Version |
| --- | --- |
| docker | latest |
| containerd | 1.6.6 |
| kubernetes | 1.28.1 |
| calico | 3.25 |
| Node | IP | OS | Role | CPU | Memory | Disk |
| --- | --- | --- | --- | --- | --- | --- |
| node1 | 10.80.10.1 | centos7.9 | k8s-master | 4 cores | 8GB | 20GB |
| node2 | 10.80.10.2 | centos7.9 | k8s-node | 4 cores | 8GB | 20GB |
| node3 | 10.80.10.3 | centos7.9 | k8s-node | 4 cores | 8GB | 20GB |
On node1, node2, and node3:

Set the hostnames:

```shell
# node1
# hostnamectl set-hostname k8s-master1 && bash
# node2
# hostnamectl set-hostname k8s-node1 && bash
# node3
# hostnamectl set-hostname k8s-node2 && bash
```
Add hosts entries:

```shell
# vim /etc/hosts
10.80.10.1 k8s-master1
10.80.10.2 k8s-node1
10.80.10.3 k8s-node2
```
Configure passwordless SSH:

```shell
# ssh-keygen
(press Enter three times to accept the defaults)
# for i in k8s-master1 k8s-node1 k8s-node2; do ssh-copy-id ${i}; done
```
Disable swap:

```shell
# swapoff -a
# sed -ri 's/.*swap.*/#&/' /etc/fstab
```
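The `sed` expression relies on `&`, which re-inserts the whole matched line, so any fstab line mentioning swap gets commented out rather than deleted. A quick self-contained check on a throwaway copy (not the real fstab):

```shell
# Demo of the fstab edit: '&' in the replacement re-inserts the matched
# line, so the swap entry is commented out, not removed.
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.demo
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
# prints the original line prefixed with '#'
```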
Load the bridge netfilter module and kernel parameters:

```shell
# modprobe br_netfilter
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl -p /etc/sysctl.d/k8s.conf
# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
```
Add the Docker and Kubernetes repositories:

```shell
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
# yum makecache fast
```
Install containerd:

```shell
# yum install -y containerd.io-1.6.6
# mkdir -p /etc/containerd
# containerd config default > /etc/containerd/config.toml
```
Edit the containerd config file:

```shell
# vim /etc/containerd/config.toml
# line 125, change:
SystemdCgroup = true
# line 61, change:
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
```
Start containerd and enable it at boot:

```shell
# systemctl start containerd
# systemctl enable containerd
# systemctl status containerd
```
Configure crictl:

```shell
# cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# systemctl restart containerd
# systemctl status containerd
```
Install Docker:

```shell
# yum install -y docker-ce
```
Start Docker and enable it at boot:

```shell
# systemctl start docker
# systemctl enable docker
# systemctl status docker
```
Configure containerd registry mirrors:

```shell
# vim /etc/containerd/config.toml
# line 145, change:
config_path = "/etc/containerd/certs.d"
```
Each mirror needs its own `[host."…"]` table header (a comma-joined header is not valid TOML):

```shell
# mkdir -p /etc/containerd/certs.d/docker.io
# vim /etc/containerd/certs.d/docker.io/hosts.toml
[host."https://vh3bm52y.mirror.aliyuncs.com"]
  capabilities = ["pull"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]
```
```shell
# systemctl restart containerd
# systemctl status containerd
```
Configure Docker registry mirrors:

```shell
# vim /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://pmn1o05g.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://dockerhub.azk8s.cn",
    "http://hub-mirror.c.163.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```
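A stray comma or quote in daemon.json will keep Docker from starting, so it is worth validating the JSON before the restart. A sketch against a throwaway copy (any JSON validator works; `python3 -m json.tool` is commonly available):

```shell
# Validate daemon.json syntax before restarting Docker: json.tool exits
# non-zero on malformed JSON, so typos are caught before Docker fails to start.
cat > /tmp/daemon.json.demo << 'EOF'
{
  "registry-mirrors": ["https://pmn1o05g.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool /tmp/daemon.json.demo > /dev/null && echo "valid JSON"
```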
```shell
# systemctl daemon-reload
# systemctl restart docker
# systemctl status docker
```
Install the Kubernetes packages:

```shell
# yum install -y kubelet-1.28.1 kubeadm-1.28.1 kubectl-1.28.1
```
- kubeadm: a tool for bootstrapping (initializing) the Kubernetes cluster.
- kubelet: installed on every node in the cluster; it starts and manages pods. In a kubeadm install, both the control-plane and worker components run as pods, so any node that runs pods needs the kubelet.
- kubectl: the CLI for deploying and managing applications, inspecting resources of every kind, and creating, deleting, and updating components.
Enable kubelet at boot:

```shell
# systemctl enable kubelet
```
On node1:

Import the k8s images with ctr (the export command shows how the tarball was packaged):

```shell
# ctr -n=k8s.io images import k8s_1.28.1.tar.gz
# ctr -n k8s.io images export k8s_1.28.1.tar.gz \
    registry.aliyuncs.com/google_containers/coredns:v1.10.1 \
    registry.aliyuncs.com/google_containers/etcd:3.5.9-0 \
    registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.1 \
    registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.1 \
    registry.aliyuncs.com/google_containers/kube-proxy:v1.28.1 \
    registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.1 \
    registry.aliyuncs.com/google_containers/pause:3.7 \
    registry.aliyuncs.com/google_containers/pause:3.9
```
Initialize the cluster with kubeadm:

```shell
# cd /root/
# kubeadm config print init-defaults > kubeadm.yaml
# vim kubeadm.yaml
# line 12, change:
advertiseAddress: 10.80.10.1
# line 15, change:
criSocket: unix:///run/containerd/containerd.sock
# line 17, change:
name: k8s-master1
# line 30, change:
imageRepository: registry.aliyuncs.com/google_containers
# line 32, change:
kubernetesVersion: 1.28.1
# line 36, add:
podSubnet: 10.244.0.0/16
# at the end of the file, add:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```
```shell
# kubeadm init --config kubeadm.yaml --ignore-preflight-errors=SystemVerification
```
- `--image-repository registry.aliyuncs.com/google_containers`: manually points kubeadm at a domestic (Aliyun) mirror.
```shell
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
```shell
# kubectl get nodes
NAME          STATUS     ROLES           AGE     VERSION
k8s-master1   NotReady   control-plane   6m21s   v1.28.1
```
Generate a token for joining additional master/node nodes:

```shell
# kubeadm token create --print-join-command
kubeadm join 10.80.10.1:6443 --token wlpv2k.kshfc6owxef6cogq --discovery-token-ca-cert-hash sha256:78c8d9d29bf86561e1d63ea3e94f0eccfc58639782c131ba1da33f9b054d5aa8
```
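If the hash is ever needed without re-running the print command, it can be recomputed: `--discovery-token-ca-cert-hash` is the SHA-256 digest of the cluster CA's DER-encoded public key (on the master, the CA cert is `/etc/kubernetes/pki/ca.crt`). A sketch that uses a throwaway self-signed certificate so it runs anywhere:

```shell
# The join hash is sha256 over the CA cert's DER-encoded public key.
# Demo with a throwaway self-signed cert; on a real master, point the
# second command at /etc/kubernetes/pki/ca.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex
```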
On node2 and node3:

Join the worker nodes:

```shell
# kubeadm join 10.80.10.1:6443 --token wlpv2k.kshfc6owxef6cogq --discovery-token-ca-cert-hash sha256:78c8d9d29bf86561e1d63ea3e94f0eccfc58639782c131ba1da33f9b054d5aa8 --ignore-preflight-errors=SystemVerification
```
On node1:

Label the worker nodes:

```shell
# kubectl label nodes k8s-node1 node-role.kubernetes.io/work=work
# kubectl label nodes k8s-node2 node-role.kubernetes.io/work=work
# kubectl get nodes
NAME          STATUS     ROLES           AGE     VERSION
k8s-master1   NotReady   control-plane   7m22s   v1.28.1
k8s-node1     NotReady   work            28s     v1.28.1
k8s-node2     NotReady   work            28s     v1.28.1
```
Import the calico images with ctr (the export command shows how the tarball was packaged):

```shell
# ctr -n=k8s.io images import calico_3.25.tar.gz
# ctr -n k8s.io images export calico_3.25.tar.gz \
    docker.io/calico/cni:v3.25.0 \
    docker.io/calico/kube-controllers:v3.25.0 \
    docker.io/calico/node:v3.25.0
```
Install the Calico plugin:

Version compatibility: https://docs.tigera.io/calico/3.25/getting-started/kubernetes/requirements

```shell
# cd /usr/local/src/
# wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml --no-check-certificate
# vim calico.yaml
# line 4568, add:
- name: IP_AUTODETECTION_METHOD
  value: interface=ens33
```
```shell
# kubectl apply -f calico.yaml
# kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
k8s-master1   Ready    control-plane   10m     v1.28.1
k8s-node1     Ready    work            3m22s   v1.28.1
k8s-node2     Ready    work            3m22s   v1.28.1
# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-658d97c59c-mgkfr   1/1     Running   0          45s
calico-node-24jtg                          1/1     Running   0          45s
calico-node-8nm4j                          1/1     Running   0          45s
calico-node-h26xg                          1/1     Running   0          45s
coredns-66f779496c-k4j57                   1/1     Running   0          10m
coredns-66f779496c-nm8xj                   1/1     Running   0          10m
etcd-k8s-master1                           1/1     Running   0          10m
kube-apiserver-k8s-master1                 1/1     Running   0          10m
kube-controller-manager-k8s-master1        1/1     Running   0          10m
kube-proxy-26vcp                           1/1     Running   0          3m33s
kube-proxy-8765p                           1/1     Running   0          3m33s
kube-proxy-tqd6b                           1/1     Running   0          10m
kube-scheduler-k8s-master1                 1/1     Running   0          10m
```
Create a pod to test CoreDNS:

```shell
# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
# ping www.baidu.com -c 3
# nslookup kubernetes.default.svc.cluster.local
# exit
```