Official docs:

Installing kubeadm

Creating a cluster with kubeadm

References:

Installing Kubernetes 1.11 with kubeadm

Kubernetes kubectl run command explained

k8s API reference

1. Install Docker

curl -sSL https://get.docker.com | sh
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://dic5s40p.mirror.aliyuncs.com"]
}
EOF

Using registry mirrors inside China:


# vi /etc/docker/daemon.json 

{
  "registry-mirrors": [
    "https://hub-mirror.c.163.com",
    "https://mirror.baidubce.com",
    "https://dic5s40p.mirror.aliyuncs.com"
  ]
}
# systemctl restart docker.service

Then check that the change took effect:

# docker info | tail -10
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://hub-mirror.c.163.com/
  https://mirror.baidubce.com/
  https://dic5s40p.mirror.aliyuncs.com/
 Live Restore Enabled: false

Deprecated method: add the following to /etc/default/docker:

DOCKER_OPTS="--registry-mirror=https://dic5s40p.mirror.aliyuncs.com"

Then run:

systemctl restart docker

2. Install kubeadm, kubelet, and kubectl

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

#!/bin/bash
set -e
apt-get -y install apt-transport-https ca-certificates curl software-properties-common
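# GPG key from the Aliyun mirror, package repo from the USTC mirror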
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
add-apt-repository \
"deb http://mirrors.ustc.edu.cn/kubernetes/apt \
kubernetes-xenial \
main"
apt-get update
apt-get install -y kubelet=1.17.17-00 kubeadm=1.17.17-00 kubectl=1.17.17-00
systemctl enable kubelet && systemctl start kubelet
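
The official install guide also recommends holding these packages, so that a routine apt upgrade does not move them out of sync with the cluster:

apt-mark hold kubelet kubeadm kubectl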

3. Cgroup configuration

Cgroup driver setup

Debian 11 uses cgroup v2 by default, but it is not managed through systemd, and kubeadm requires that cgroup v2 be managed by systemd, so we switch back to cgroup v1. kubelet likewise defaults to the cgroupfs driver; switching it to systemd would require extra config file changes, which is a hassle:

# add to the kernel boot parameters
systemd.unified_cgroup_hierarchy=false systemd.legacy_systemd_cgroup_controller=false
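
To persist these flags across reboots, a minimal sketch assuming the machine boots via GRUB (adjust for your bootloader):

# /etc/default/grub: append both flags to the existing GRUB_CMDLINE_LINUX value
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=false systemd.legacy_systemd_cgroup_controller=false"

# regenerate grub.cfg, then reboot for the change to take effect
update-grub
reboot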

4. Deploy a k8s cluster without internet access

Pull the required images in advance

kubeadm config images list
kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
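
To get the images onto the air-gapped nodes, a rough sketch assuming Docker is the container runtime (the grep pattern assumes the Aliyun mirror's repository naming):

# on a machine with internet access, after the pull above
docker save -o k8s-images.tar $(docker images --format '{{.Repository}}:{{.Tag}}' | grep google_containers)
# copy k8s-images.tar to each offline node, then
docker load -i k8s-images.tar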

5. Initialize the master node

kubeadm init --pod-network-cidr=10.17.0.0/16 --service-cidr=10.18.200.0/24 --kubernetes-version=v1.18.5 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers

On success it will print something like

kubeadm join 192.168.100.12:6443 --token yskexa.twu83wmh7n64oczk \
    --discovery-token-ca-cert-hash sha256:d6dcfecc04d8452875155de28dc229eb4f7842eb55e8f998cade89cc625a679e

6. Set up kubectl access

rm -rf $HOME/.kube
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
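
A quick sanity check that kubectl can now reach the API server:

kubectl get nodes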

7. Install the pod network

wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
# do not change anything; the pod IP range is detected automatically
kubectl create -f calico.yaml
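
To confirm Calico came up before joining workers (the label below is what the stock calico.yaml manifest uses):

kubectl -n kube-system get pods -l k8s-app=calico-node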

Deploy a macvlan network through a meta-plugin (Calico still has to be deployed first, but this supports making the macvlan interface the pod's default interface):

spidernet-io

Why choose this plugin? Because DaoCloud appears to use the same approach:

Deploying macvlan at DaoCloud

In a VPC environment macvlan may not pass traffic, so you can switch to ipvlan instead; the configuration is basically identical, except that ipvlan's mode is l2/l3/l3s:

Configuring ipvlan

Define the macvlan networks (the native IPAM does not support managing IP allocation across nodes, so a corresponding network must be created for each node):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
   name: macvlan-overlay-big1
   namespace: kube-system
spec:
   config: |-
      {
          "cniVersion": "0.3.1",
          "name": "macvlan-overlay-big1",
          "plugins": [
              {
                  "type": "macvlan",
                  "master": "eth0",
                  "mode": "bridge",
                  "ipam": {
                      "type": "host-local",
                      "subnet": "172.16.252.0/22",
                      "rangeStart": "172.16.253.102",
                      "rangeEnd": "172.16.253.151",
                      "routes": [
                          { "dst": "0.0.0.0/0" }
                      ],
                      "gateway": "172.16.252.1"
                  }
              },{
                  "type": "router",
                  "service_hijack_subnet": ["10.18.0.0/16"],
                  "overlay_hijack_subnet": ["10.17.0.0/16"],
                  "additional_hijack_subnet": [],
                  "migrate_route": -1,
                  "rp_filter": {
                      "set_host": true,
                      "value": 0
                  },
                  "overlay_interface": "eth0",
                  "skip_call": false
              }
          ]
      }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
   name: macvlan-standalone-big1
   namespace: kube-system
spec:
   config: |-
      {
          "cniVersion": "0.3.1",
          "name": "macvlan-standalone-big1",
          "plugins": [
              {
                  "type": "macvlan",
                  "master": "eth0",
                  "mode": "bridge",
                  "ipam": {
                      "type": "host-local",
                      "subnet": "172.16.252.0/22",
                      "rangeStart": "172.16.253.2",
                      "rangeEnd": "172.16.253.51",
                      "routes": [
                          { "dst": "0.0.0.0/0" }
                      ],
                      "gateway": "172.16.252.1"
                  }
              },{
                  "type": "veth",
                  "service_hijack_subnet": ["10.18.0.0/16"],
                  "overlay_hijack_subnet": ["10.17.0.0/16"],
                  "additional_hijack_subnet": [],
                  "migrate_route": -1,
                  "rp_filter": {
                      "set_host": true,
                      "value": 0
                  },
                  "skip_call": false
              }
          ]
      }

Define the ipvlan network (the native CNI does not support pinning a pod's IP either, but this can be achieved by creating a network that contains only a single IP):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
   name: ipvlan-standalone-251
   namespace: kube-system
spec:
   config: |-
      {
          "cniVersion": "0.3.1",
          "name": "ipvlan-standalone-251",
          "plugins": [
              {
                  "type": "ipvlan",
                  "master": "eth0",
                  "mode": "l3",
                  "ipam": {
                      "type": "host-local",
                      "subnet": "172.16.252.0/22",
                      "rangeStart": "172.16.254.251",
                      "rangeEnd": "172.16.254.251",
                      "routes": [
                          { "dst": "0.0.0.0/0" }
                      ],
                      "gateway": "172.16.252.1"
                  }
              },{
                  "type": "veth",
                  "service_hijack_subnet": ["10.18.0.0/16"],
                  "overlay_hijack_subnet": ["10.17.0.0/16"],
                  "additional_hijack_subnet": [],
                  "migrate_route": -1,
                  "rp_filter": {
                      "set_host": true,
                      "value": 0
                  },
                  "skip_call": false
              }
          ]
      }
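
For reference, a pod selects one of these networks through the standard NetworkAttachmentDefinition annotation; a minimal sketch, assuming a Multus-style meta-plugin is serving these CRs (pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
   name: macvlan-test
   annotations:
      k8s.v1.cni.cncf.io/networks: kube-system/macvlan-standalone-big1
spec:
   containers:
   - name: test
     image: busybox
     command: ["sleep", "3600"]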

8. Add a worker node

On the worker node, run the command returned by a successful kubeadm init, i.e.

kubeadm join 192.168.100.12:6443 --token 7u1jah.da6w4tilh0j5097w \
    --discovery-token-ca-cert-hash sha256:bcd0ce4354f2e8b794b830d7a14389b6a06e46e225486ece8218424a1744583f

Note: a token is only valid for 24 hours. List the currently usable tokens with

kubeadm token list

If the list is empty, create a new token with

kubeadm token create
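
Or create a token and print the complete join command in one step:

kubeadm token create --print-join-command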

If you have forgotten the cert hash as well, recover it with

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'

Set the worker node's role:

kubectl label node deb11-vhu1-big2 kubernetes.io/role=worker

9. Remove a worker node

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
# run the following three commands on the worker node
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm -C # only if ipvs is in use
kubectl delete node <node name>

10. Remove the master node

# run on the master node
kubeadm reset

11. Enable/disable DNS

kubectl -n kube-system scale --replicas=0 deployment/coredns
kubectl -n kube-system scale --replicas=1 deployment/coredns

12. Allow the master node to schedule pods as well

kubectl taint nodes --all node-role.kubernetes.io/master-
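
Note: on newer Kubernetes releases the taint is named node-role.kubernetes.io/control-plane rather than ...master, so the equivalent command there would be:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-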