Building a Kubernetes Cluster from Binaries
1 Preparation
1.1 Servers or VMs
OS: CentOS 7
Three or more nodes, with the following addresses and roles:
master:
192.168.2.20 kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node1:
192.168.2.21 kubelet, kube-proxy, docker, flannel, etcd
node2:
192.168.2.22 kubelet, kube-proxy, docker, flannel, etcd
1.2 Set hostnames (run on all nodes)
master:
hostnamectl set-hostname master
node1:
hostnamectl set-hostname node1
node2:
hostnamectl set-hostname node2
If the command fails, edit /etc/hostname directly; the change takes effect after a reboot.
hostname # verify the change
1.3 Disable the firewall (run on all nodes)
systemctl stop firewalld
systemctl disable firewalld
Since these steps are identical on every node, you can use your terminal's send-to-all-sessions feature.
1.4 Disable SELinux (run on all nodes)
setenforce 0 # disable temporarily
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # disable permanently
1.5 Disable swap (run on all nodes)
swapoff -a # disable temporarily; swap is turned off mainly for performance reasons
sed -ri 's/.*swap.*/#&/' /etc/fstab # disable permanently
free # verify that swap is off
1.6 Synchronize time (run on all nodes)
yum install ntpdate -y
ntpdate time.windows.com
1.7 Pass bridged IPv4 traffic to iptables chains (run on all nodes)
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# load the configuration
sysctl --system
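On CentOS 7 these kernel keys only exist once the br_netfilter module is loaded; if sysctl --system complains that the keys are missing, load the module first and make it persistent across reboots:
modprobe br_netfilter
lsmod | grep br_netfilter # verify the module is loaded
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf # load on boot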
1.8 Map hostnames to IPs (run on master)
cat >> /etc/hosts << EOF
192.168.2.20 master
192.168.2.21 node1
192.168.2.22 node2
EOF
2 cfssl certificates (run on master)
2.1 Download the tools
# install wget
yum -y install wget
# download the cfssl tools:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# make them executable
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
# move the binaries into the PATH
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
If wget fails certificate verification, add the --no-check-certificate flag.
2.2 Create the configuration files
Create the following three files:
ca-config.json
ca-csr.json
server-csr.json
mkdir ssl # create a directory for the configuration files
cd ssl
ca-config.json
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
ca-csr.json
cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
server-csr.json
cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.2.20",
    "192.168.2.21",
    "192.168.2.22"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
The hosts field must list the IPs of all etcd cluster members: 192.168.2.20, 192.168.2.21, 192.168.2.22.
2.3 Generate the certificates
# generate the CA, then the server certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
>>>
2021/09/25 23:56:21 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
ls *pem
ca-key.pem ca.pem server-key.pem server.pem
# four .pem certificate files
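Optionally inspect the server certificate to confirm its SANs cover all three etcd IPs:
cfssl-certinfo -cert server.pem # the "sans" field should list 192.168.2.20-22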
3 Deploy etcd
3.1 Download the binary package (run on master)
# binary package download URL (the status output in 3.9 shows 3.4.14, so use that release):
https://github.com/etcd-io/etcd/releases/download/v3.4.14/etcd-v3.4.14-linux-amd64.tar.gz
3.2 Unpack the binary package (run on master)
The deployment steps are identical on all three etcd nodes; the only difference is that each node's configuration file must use its own IP. So we configure everything on master, copy the files to the node machines, and then adjust only the name and IPs there.
# unpack the binary package:
mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.14-linux-amd64.tar.gz
mv etcd-v3.4.14-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
/opt/etcd/bin holds the executables etcd and etcdctl
/opt/etcd/cfg holds the configuration file etcd
/opt/etcd/ssl holds the certificates ca-key.pem, ca.pem, server-key.pem, server.pem
3.3 Create the etcd configuration file (run on master)
# create /opt/etcd/cfg/etcd:
cat > /opt/etcd/cfg/etcd << EOF
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.20:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.20:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.20:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.20:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.20:2380,etcd02=https://192.168.2.21:2380,etcd03=https://192.168.2.22:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
ETCD_NAME: node name (etcd01 on master, etcd02 on node1, etcd03 on node2)
ETCD_DATA_DIR: data directory (no change needed)
ETCD_LISTEN_PEER_URLS: cluster peer listen address (set to the local IP)
ETCD_LISTEN_CLIENT_URLS: client listen address (set to the local IP)
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address (set to the local IP)
ETCD_ADVERTISE_CLIENT_URLS: advertised client address (set to the local IP)
ETCD_INITIAL_CLUSTER: cluster member addresses (must correspond to the node names above)
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a brand-new cluster, "existing" to join an existing one. This cluster is newly created, so use "new".
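Instead of retyping the whole file on node1 and node2 (done explicitly in 3.7 below), the copied file can be adapted with sed; a minimal sketch for node1, assuming the file was copied unchanged from master. Note that ETCD_INITIAL_CLUSTER must keep all three members intact, so it is excluded from the IP substitution:
sed -i \
  -e '/^ETCD_NAME/s/etcd01/etcd02/' \
  -e '/^ETCD_INITIAL_CLUSTER=/!s/192\.168\.2\.20/192.168.2.21/' \
  /opt/etcd/cfg/etcd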
3.4 Configure etcd.service (run on master)
Identical on all three nodes.
# manage etcd with systemd (so it can be started with systemctl instead of a long absolute-path command):
rm -f /usr/lib/systemd/system/etcd.service
vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--enable-v2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
The trailing \ is a line-continuation character and must not be omitted.
3.5 Copy the certificates (run on master)
# copy the certificates generated earlier to the paths referenced in the systemd unit above:
cp ~/ssl/ca*pem ~/ssl/server*pem /opt/etcd/ssl/
3.6 Copy the files to the node machines (run on master)
Two sets of files need copying.
The files under /opt/etcd/:
# the certificates were generated on master, so they must be copied to every node
scp -r /opt/etcd/ root@192.168.2.21:/opt/ # send to node1
scp -r /opt/etcd/ root@192.168.2.22:/opt/ # send to node2
The /usr/lib/systemd/system/etcd.service file:
scp /usr/lib/systemd/system/etcd.service root@192.168.2.21:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.2.22:/usr/lib/systemd/system/
After the transfer, check that the files exist on the node machines.
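A quick way to check is to list the files over ssh, for example:
ssh root@192.168.2.21 "ls /opt/etcd/bin /opt/etcd/cfg /opt/etcd/ssl /usr/lib/systemd/system/etcd.service"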
3.7 Update the name and IPs in the node configuration files
Run on node1:
cat > /opt/etcd/cfg/etcd << EOF
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.21:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.21:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.21:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.21:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.20:2380,etcd02=https://192.168.2.21:2380,etcd03=https://192.168.2.22:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Only ETCD_NAME and the IPs change.
Run on node2:
cat > /opt/etcd/cfg/etcd << EOF
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.22:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.22:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.22:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.22:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.20:2380,etcd02=https://192.168.2.21:2380,etcd03=https://192.168.2.22:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
3.8 Start etcd and enable it on boot (run on all nodes):
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
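systemctl start etcd blocks until a quorum forms, so start etcd on all three nodes at roughly the same time; the first node will appear to hang until a second member joins. If startup fails, inspect the logs:
journalctl -u etcd -f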
3.9 Check the etcd cluster status:
# copy etcdctl into the PATH
cp /opt/etcd/bin/etcdctl /bin
# use the etcdctl v3 API
export ETCDCTL_API=3
/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints='https://192.168.2.20:2379,https://192.168.2.21:2379,https://192.168.2.22:2379' endpoint status --write-out=table
>>>
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.2.20:2379 | 3ae8ea54109f8cad | 3.4.14 | 20 kB | true | false | 61 | 16 | 16 | |
| https://192.168.2.21:2379 | cb408b517db6d952 | 3.4.14 | 20 kB | false | false | 61 | 16 | 16 | |
| https://192.168.2.22:2379 | eca9a4bbb09d42e4 | 3.4.14 | 20 kB | false | false | 61 | 16 | 16 | |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints='https://192.168.2.20:2379,https://192.168.2.21:2379,https://192.168.2.22:2379' endpoint health
# check cluster health with the etcdctl v2 API
ETCDCTL_API=2 /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.2.20:2379,https://192.168.2.21:2379,https://192.168.2.22:2379" cluster-health
# reconfigure the flannel subnet: first delete the old key
/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.2.20:2379,https://192.168.2.21:2379,https://192.168.2.22:2379" del /coreos.com/network/config
# write the new configuration
ETCDCTL_API=2 /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.2.20:2379,https://192.168.2.21:2379,https://192.168.2.22:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
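Reading the key back confirms the subnet configuration was stored:
ETCDCTL_API=2 /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.2.20:2379,https://192.168.2.21:2379,https://192.168.2.22:2379" get /coreos.com/network/config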
4 Install Docker on the worker nodes
4.1 Install Docker on node1 and node2
# install dependencies
yum install -y yum-utils
# add the Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# the upstream repository can be slow; the domestic mirror below is faster
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# install Docker
yum install -y docker-ce docker-ce-cli containerd.io
4.2 Manage Docker with systemd
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
# start Docker and enable it on boot
systemctl start docker
systemctl enable docker
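The kubelet-config.yml in section 8 below sets cgroupDriver: cgroupfs, so it is worth confirming Docker reports the same cgroup driver; a mismatch prevents kubelet from starting:
docker info | grep -i "cgroup driver" # expect: Cgroup Driver: cgroupfs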
5 kube-apiserver (run on master)
5.1 Create the configuration files kube-apiserver needs
mkdir ~/k8s && cd ~/k8s
# this directory will hold ca-config.json, ca-csr.json, and server-csr.json
ca-config.json:
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
ca-csr.json:
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
server-csr.json:
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.2.20",
    "192.168.2.21",
    "192.168.2.22",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
5.2 Generate the certificates
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
ls server*pem
# copy the certificates (run after the mkdir in 5.3 below has created /opt/kubernetes/ssl)
cp ~/k8s/ca*pem ~/k8s/server*pem /opt/kubernetes/ssl/
5.3 Deploy kube-apiserver
Download kubernetes-server-linux-amd64.tar.gz, which contains all the required components:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183
Unpack it:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
# copy the binaries
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
5.4 Configure kube-apiserver.conf
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.2.20:2379,https://192.168.2.21:2379,https://192.168.2.22:2379 \\
--bind-address=192.168.2.20 \\
--secure-port=6443 \\
--advertise-address=192.168.2.20 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
Of the two backslashes above, the first escapes the second so that the heredoc (EOF) writes a literal \ plus newline into the file, preserving the line breaks.
--logtostderr: log to stderr (false = log to files)
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission-control plugins
--authorization-mode: authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelet
--tls-xxx-file: apiserver HTTPS certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit-log settings
5.5 Configure token.csv
cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
# optionally generate your own random token and substitute it above:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
a266c491e557753ef5d43e467c600d10
# point kubectl at the apiserver's local insecure port:
cat >> /etc/profile << EOF
export KUBERNETES_MASTER="127.0.0.1:8080"
EOF
source /etc/profile # reload the environment
5.6 Configure kube-apiserver.service
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
5.7 Start kube-apiserver and enable it on boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl status kube-apiserver
>>>
kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-09-27 01:25:36 EDT; 48s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 10945 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
└─10945 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.2.20:2379.
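Before moving on, you can confirm the apiserver is answering; a quick sanity check assuming the default local insecure port is active (the KUBERNETES_MASTER setting above relies on it):
curl http://127.0.0.1:8080/healthz # should print: ok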
5.8 Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
6 Deploy kube-controller-manager (run on master)
6.1 Configure kube-controller-manager.conf
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
6.2 Manage kube-controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
6.3 Start kube-controller-manager and enable it on boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
systemctl status kube-controller-manager
>>>
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-09-27 01:33:06 EDT; 137ms ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 11011 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─11011 /opt/kubernetes/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect=true --master=12...
Sep 27 01:33:06 master systemd[1]: Started Kubernetes Controller Manager.
7 Deploy kube-scheduler (run on master)
7.1 Configure kube-scheduler.conf
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect --master=127.0.0.1:8080 --bind-address=127.0.0.1"
EOF
7.2 Configure kube-scheduler.service
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
7.3 Start kube-scheduler and enable it on boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler
>>>
● kube-scheduler.service - Kubernetes Scheduler
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-09-27 01:36:04 EDT; 123ms ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 11065 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─11065 /opt/kubernetes/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect --master=127.0.0.1:8080 -...
Sep 27 01:36:04 master systemd[1]: Started Kubernetes Scheduler.
7.4 Check the cluster status
All master components are now deployed; check their status:
[root@master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
8 Deploy the worker-node components
We deploy the node components on master as well; tainting it later keeps workloads off it, so this does no harm.
8.1 Copy the files
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
# copy the required files from master
scp -r root@192.168.2.20:/root/kubernetes/server/bin/kubelet /opt/kubernetes/bin
scp -r root@192.168.2.20:/root/kubernetes/server/bin/kube-proxy /opt/kubernetes/bin
scp -r root@192.168.2.20:/root/kubernetes/server/bin/kubectl /usr/bin/
scp -r root@192.168.2.20:/opt/kubernetes/ssl /opt/kubernetes
8.2 Configure kubelet.conf
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=node1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory for generated kubelet certificates
--pod-infra-container-image: image for the pod infrastructure (pause) container
8.3 Configure kubelet-config.yml
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.2.21
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
8.4 Set the variables for bootstrap.kubeconfig
KUBE_APISERVER="https://192.168.2.20:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match token.csv
8.5 Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials 'kubelet-bootstrap' --token=${TOKEN} --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user='kubelet-bootstrap' --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# move it into place
mv bootstrap.kubeconfig /opt/kubernetes/cfg
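You can sanity-check the generated file; the cluster CA is embedded and the token sits in the user section:
kubectl config view --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig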
8.6 Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
8.7 Start kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
>>>
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-09-27 02:26:01 EDT; 1min 18s ago
Main PID: 26821 (kubelet)
CGroup: /system.slice/kubelet.service
└─26821 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=m1 --network-plugin=cni ..
Sep 27 02:26:01 node1 systemd[1]: Started Kubernetes Kubelet.
8.8 Approve the join request (run on master)
# list kubelet certificate requests
[root@master bin]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-JZLQ2x99c69eRpOjkzoxoDYi3NGF0t5PXEu8AQhkr8M 3m5s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# approve the request
[root@master bin]# kubectl certificate approve node-csr-JZLQ2x99c69eRpOjkzoxoDYi3NGF0t5PXEu8AQhkr8M
certificatesigningrequest.certificates.k8s.io/node-csr-JZLQ2x99c69eRpOjkzoxoDYi3NGF0t5PXEu8AQhkr8M approved
# list the nodes
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady <none> 24m v1.18.2
The node stays NotReady until the network plugin is deployed in section 10.
9 Deploy kube-proxy
9.1 Configure kube-proxy.conf
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
9.2 Configure kube-proxy-config.yml
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node1
clusterCIDR: 10.0.0.0/24
EOF
9.3 Generate the kube-proxy certificate (run on master)
# cfssl is installed on master, so generate the certificate there and send it to the nodes
cd /root/k8s
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# copy to the nodes
scp -r /root/k8s root@192.168.2.21:/opt/ # node1
scp -r /root/k8s root@192.168.2.22:/opt/ # node2
9.4 Generate the kubeconfig (run on the nodes)
cd /opt/k8s
# KUBE_APISERVER must be set as in 8.4: KUBE_APISERVER="https://192.168.2.20:6443"
kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=./kube-proxy.pem --client-key=./kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
9.5 Configure kube-proxy.service (run on the nodes)
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
9.6 Start kube-proxy (run on the nodes)
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
systemctl status kube-proxy
>>>
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-09-27 03:19:53 EDT; 137ms ago
Main PID: 31089 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
└─31089 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --config=/opt/kubernetes/cfg/kube-proxy-c...
Sep 27 03:19:53 node1 systemd[1]: Started Kubernetes Proxy.
10 Deploy the network plugin
Calico is a pure layer-3 data-center networking solution and currently the mainstream network choice for Kubernetes.
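The calico.yaml manifest is not shipped with the cluster, so fetch it first; the URL below is indicative, and you should pin a Calico release compatible with your Kubernetes version:
wget https://docs.projectcalico.org/manifests/calico.yaml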
# install Calico
kubectl apply -f calico.yaml
# list all pods in the kube-system namespace
kubectl get pods -n kube-system
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 28m v1.18.2
k8s-node1 Ready <none> 13m v1.18.2
k8s-node2 Ready <none> 14m v1.18.2
10.1 Authorize the apiserver to access kubelet
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
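With the binding applied, the apiserver can reach the kubelet API, which is what kubectl logs and kubectl exec depend on. A quick check (substitute any pod name from kubectl get pods -n kube-system):
kubectl logs -n kube-system <calico-pod-name>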
11 Adding Worker Nodes
11.1 Copy the deployed node files to the new nodes
On master, copy the worker-node files to the other two nodes.
# copy the working directory
scp -r /opt/kubernetes root@192.168.2.21:/opt/
# copy the systemd unit files
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.2.21:/usr/lib/systemd/system
# copy the CA certificate
scp /opt/kubernetes/ssl/ca.pem root@192.168.2.21:/opt/kubernetes/ssl
# repeat for node2
scp -r /opt/kubernetes root@192.168.2.22:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.2.22:/usr/lib/systemd/system
scp /opt/kubernetes/ssl/ca.pem root@192.168.2.22:/opt/kubernetes/ssl
11.2 Delete the kubelet certificate and kubeconfig
These files are generated automatically when the certificate request is approved and differ per node, so they must be removed.
# run the following on both node1 and node2
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
11.3 Change the hostname overrides
On each machine, edit the two configuration files and change the hostname.
# node1: edit the kubelet configuration
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1
# or replace it directly with sed:
# sed -i 's/k8s-master1/k8s-node1/g' /opt/kubernetes/cfg/kubelet.conf
# edit the kube-proxy configuration
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1
# or replace it directly with sed:
# sed -i 's/k8s-master1/k8s-node1/g' /opt/kubernetes/cfg/kube-proxy-config.yml
Run on node2:
# sed -i 's/k8s-master1/k8s-node2/g' /opt/kubernetes/cfg/kubelet.conf
# sed -i 's/k8s-master1/k8s-node2/g' /opt/kubernetes/cfg/kube-proxy-config.yml
11.4 Start the services and enable them on boot
systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy
systemctl status kubelet kube-proxy
>>>
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-28 06:14:10 EDT; 40min ago
Main PID: 3279 (kube-proxy)
Tasks: 8
Memory: 12.3M
CGroup: /system.slice/kube-proxy.service
└─3279 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-...
Sep 28 06:14:10 k8s-master1 systemd[1]: Started Kubernetes Proxy.
Hint: Some lines were ellipsized, use -l to show in full.
11.5 On master, approve the new nodes' kubelet certificate requests
# first list the certificate requests:
kubectl get csr
Requests appear once kubelet has started on each node, i.e. after the previous steps have been completed on both machines; then approve them.
# approve node1
kubectl certificate approve node-csr-MYWfZXy8o2n8Gb6dZ-fVKw1qVFkmgEkSFdg7m1VL56w
# approve node2
kubectl certificate approve node-csr-XAp5v8JU6qlAkDwzXPiN0DkJ9xlqsq4KbMekUFqBwKM
11.6 Check the node status
[root@k8s-master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 45m v1.18.2
k8s-node1 Ready <none> 30m v1.18.2
k8s-node2 Ready <none> 31m v1.18.2
12 Test the Kubernetes cluster
Create a pod in the cluster and verify that it runs:
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
Access URL: http://NodeIP:Port
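The NodePort is allocated from the 30000-32767 range configured on the apiserver; look it up and access it via any node IP (the port and ClusterIP below are illustrative):
kubectl get svc nginx
# NAME    TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
# nginx   NodePort   10.0.0.x     <none>        80:30000/TCP   1m
curl http://192.168.2.21:30000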