k8s Cluster Deployment (HTTPS)

master

api-server

# Create the token used by node kubelets on their first access (TLS bootstrap)
# Generate a random token
$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
538d66be23b7d8e87ca8e0cf7b4191ae
$ echo 538d66be23b7d8e87ca8e0cf7b4191ae,kubelet-bootstrap,10001,system:kubelet-bootstrap > token.csv

# Environment variable file
$ cat kube-apiserver-env

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=0 \
--etcd-servers=http://172.16.100.92:2379 \
--bind-address=172.16.100.92 \
--secure-port=6443 \
--advertise-address=172.16.100.92 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/soyuan/k8s/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/soyuan/k8s/ssl/server.pem \
--tls-private-key-file=/soyuan/k8s/ssl/server-key.pem \
--client-ca-file=/soyuan/k8s/ssl/ca.pem \
--service-account-key-file=/soyuan/k8s/ssl/ca-key.pem "


# systemd service unit file
$ cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/soyuan/k8s/cfg/kube-apiserver-env
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
Type=notify
LimitNOFILE=65536


[Install]
WantedBy=multi-user.target


# Start the service
$ systemctl daemon-reload
$ systemctl start kube-apiserver
$ systemctl enable kube-apiserver

--logtostderr  log to stderr
--v  log verbosity level
--etcd-servers  etcd cluster endpoints
--bind-address  listen address
--secure-port  HTTPS secure port
--advertise-address  address advertised to the rest of the cluster
--allow-privileged  allow privileged containers
--service-cluster-ip-range  virtual IP range for Services
--enable-admission-plugins  admission control plugins
--authorization-mode  authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth  enables TLS bootstrapping, covered below
--token-auth-file  token file
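Before the token from `token.csv` is wired into the apiserver, it can be sanity-checked: 16 random bytes rendered as hex should give exactly 32 characters, and each `token.csv` row should carry 4 comma-separated fields (token, user, uid, group). A small optional sketch (the `/tmp` path is illustrative):

```shell
# Regenerate a token the same way as above and verify its shape.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "token length: ${#TOKEN}"                          # expect 32
echo "${TOKEN},kubelet-bootstrap,10001,system:kubelet-bootstrap" > /tmp/token.csv
echo "fields: $(awk -F, '{print NF}' /tmp/token.csv)"   # expect 4
```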

scheduler

# Environment variable file
$ cat kube-scheduler-env

KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=0 \
--master=127.0.0.1:8080 \
--leader-elect"

# systemd service unit file
$ cat /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/soyuan/k8s/cfg/kube-scheduler-env
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


# Start the service
$ systemctl daemon-reload
$ systemctl start kube-scheduler
$ systemctl enable kube-scheduler

--master  address of the local apiserver
--leader-elect  leader election when multiple instances of this component run (HA)

controller-manager

# Environment variable file
$ cat kube-controller-manager-env

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=0 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/soyuan/k8s/ssl/ca.pem \
--cluster-signing-key-file=/soyuan/k8s/ssl/ca-key.pem \
--root-ca-file=/soyuan/k8s/ssl/ca.pem \
--service-account-private-key-file=/soyuan/k8s/ssl/ca-key.pem"

# systemd service unit file
$ cat /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=controller manager
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/soyuan/k8s/cfg/kube-controller-manager-env
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

# Start the service
$ systemctl daemon-reload
$ systemctl start kube-controller-manager
$ systemctl enable kube-controller-manager
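With all three master components started, it is worth confirming the apiserver actually answers before moving on to the nodes. A hedged sketch that polls the `/healthz` endpoint (the address and port come from the apiserver flags above; `kubectl get componentstatuses` is the usual one-shot check):

```shell
#!/bin/sh
# Poll an HTTPS /healthz endpoint until it answers "ok" or retries run out.
wait_for_healthz() {
  url=$1
  retries=${2:-30}
  i=0
  while [ "$i" -lt "$retries" ]; do
    # -k: the serving cert may not be trusted by this host's CA bundle
    if [ "$(curl -ks --max-time 2 "$url" 2>/dev/null)" = "ok" ]; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "unreachable"
  return 1
}

wait_for_healthz "https://172.16.100.92:6443/healthz" 3 || echo "apiserver not ready yet"
# kubectl get componentstatuses   # scheduler / controller-manager / etcd at a glance
```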

Node

Once the master's apiserver enables TLS authentication, a node's kubelet must present a valid CA-signed certificate to communicate with the apiserver before it can join the cluster. Signing a certificate for every node by hand becomes tedious as the cluster grows, so Kubernetes provides TLS Bootstrapping: the kubelet authenticates as a low-privilege user and requests a certificate from the apiserver automatically, and the apiserver signs the kubelet's certificate dynamically.

On first access the kubelet authenticates with the token, then requests its certificate automatically.

Bind the kubelet-bootstrap user to the system cluster role

$ kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Create the kubeconfig file

# Create the kubelet bootstrapping kubeconfig; BOOTSTRAP_TOKEN must match the token generated for the apiserver above
$ BOOTSTRAP_TOKEN=538d66be23b7d8e87ca8e0cf7b4191ae
$ KUBE_APISERVER="https://172.16.100.92:6443"

# Set cluster parameters
$ kubectl config set-cluster kubernetes \
--certificate-authority=/soyuan/k8s/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
$ kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

# Set context parameters
$ kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

# Set the default context
$ kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
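A quick way to confirm the file came out right: with `--embed-certs=true` the CA ends up inlined as `certificate-authority-data`, and the bootstrap token should appear under the user entry. A hedged check (the field names are standard kubeconfig keys):

```shell
# On a correctly generated bootstrap.kubeconfig both checks report success.
grep -q 'certificate-authority-data:' bootstrap.kubeconfig 2>/dev/null \
  && echo "CA embedded" || echo "CA missing"
grep -q 'token:' bootstrap.kubeconfig 2>/dev/null \
  && echo "token set" || echo "token missing"
```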

Create the kube-proxy kubeconfig file

$ kubectl config set-cluster kubernetes \
--certificate-authority=/soyuan/k8s/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

$ kubectl config set-credentials kube-proxy \
--client-certificate=/soyuan/k8s/ssl/kube-proxy.pem \
--client-key=/soyuan/k8s/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

$ kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

$ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
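Both kubeconfig files are generated on the master but consumed on the nodes: the kubelet and kube-proxy configurations below expect them under /opt/k8s/cfg/. A hedged sketch of the copy step (NODE_IP is a placeholder for your node's address):

```shell
# Verify both files exist before shipping them to a node.
for f in bootstrap.kubeconfig kube-proxy.kubeconfig; do
  [ -f "$f" ] || echo "missing: $f"
done
# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@NODE_IP:/opt/k8s/cfg/
```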

Deploy the kubelet component

# Environment variable file
$ cat kubelet-env

KUBELET_OPTS="--logtostderr=true \
--v=0 \
--hostname-override=172.16.100.63 \
--kubeconfig=/opt/k8s/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/k8s/cfg/bootstrap.kubeconfig \
--config=/opt/k8s/cfg/kubelet.config \
--cert-dir=/opt/k8s/ssl "

# kubelet.config
$ cat kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.16.100.63
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

# systemd service unit file
$ cat /usr/lib/systemd/system/kubelet.service


[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/k8s/cfg/kubelet-env
ExecStart=/usr/local/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target


# Start kubelet
$ systemctl daemon-reload
$ systemctl start kubelet
$ systemctl enable kubelet

# Do not forget to create the kubelet-bootstrap cluster role binding on the master; without it, kubelet fails with the error below
cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope

# On the master, approve the node joining the cluster
# After starting, the node has not joined yet; its CSR must be approved manually

$ kubectl get csr

NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-B_510xY-Sd9or1Yu3kTLt6bnpCdbU0CS6nPFGNfp5eo 3m58s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending


$ kubectl certificate approve node-csr-B_510xY-Sd9or1Yu3kTLt6bnpCdbU0CS6nPFGNfp5eo
$ kubectl get node

--hostname-override  hostname shown in the cluster
--kubeconfig  path to the kubeconfig file (generated automatically on first bootstrap)
--bootstrap-kubeconfig  the bootstrap.kubeconfig file generated earlier
--cert-dir  directory where issued certificates are stored
--pod-infra-container-image  image that manages the Pod network (the pause container)
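When several nodes bootstrap at once, copying each node-csr-... name into `kubectl certificate approve` by hand gets tedious. A hedged convenience sketch for the master (`-o name` yields resource/name strings that `certificate approve` accepts, and `xargs -r` runs nothing when the list is empty):

```shell
# Approve every CSR currently known to the apiserver in one pass.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get csr -o name 2>/dev/null | xargs -r -n1 kubectl certificate approve || true
else
  echo "kubectl not found on PATH"
fi
```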

Deploy the kube-proxy component

# Environment variable file
$ cat kube-proxy-env

KUBE_PROXY_OPTS="--logtostderr=true \
--v=0 \
--hostname-override=172.16.100.63 \
--cluster-cidr=10.254.0.0/24 \
--masquerade-all=true \
--kubeconfig=/opt/k8s/cfg/kube-proxy.kubeconfig"

# systemd service unit file
$ cat /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=/opt/k8s/cfg/kube-proxy-env
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


# Start the service
$ systemctl daemon-reload
$ systemctl start kube-proxy
$ systemctl enable kube-proxy
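Whether kube-proxy has actually picked up Service rules can be checked from its iptables chains. A hedged check (root is required to read the rules, and the chain layout varies by proxy mode):

```shell
# kube-proxy in iptables mode installs KUBE-* chains (KUBE-SERVICES etc.).
if iptables-save 2>/dev/null | grep -q 'KUBE-'; then
  echo "KUBE chains present"
else
  echo "no KUBE chains found (proxy not running, or no permission)"
fi
```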

Check cluster status

$ kubectl get node


Author: 赵培胜
Published: 2021-11-24
Source: https://zhaops-hub.github.io/2021/11/24/k8s/k8s 集群部署(https)/